| id | title | abstract | authors | published_date | link | markdown |
|---|---|---|---|---|---|---|
2309.09338 | Performance of the Pre-Trained Large Language Model GPT-4 on Automated
Short Answer Grading | Automated Short Answer Grading (ASAG) has been an active area of
machine-learning research for over a decade. It promises to let educators grade
and give feedback on free-form responses in large-enrollment courses in spite
of limited availability of human graders. Over the years, carefully trained
models have achieved increasingly higher levels of performance. More recently,
pre-trained Large Language Models (LLMs) emerged as a commodity, and an
intriguing question is how a general-purpose tool without additional training
compares to specialized models. We studied the performance of GPT-4 on the
standard benchmark 2-way and 3-way datasets SciEntsBank and Beetle, where in
addition to the standard task of grading the alignment of the student answer
with a reference answer, we also investigated withholding the reference answer.
We found that overall, the performance of the pre-trained general-purpose GPT-4
LLM is comparable to hand-engineered models, but worse than pre-trained LLMs
that had specialized training. | Gerd Kortemeyer | 2023-09-17T18:04:34Z | http://arxiv.org/abs/2309.09338v1 | # Performance of the Pre-Trained Large Language Model GPT-4 on Automated Short Answer Grading
###### Abstract
Automated Short Answer Grading (ASAG) has been an active area of machine-learning research for over a decade. It promises to let educators grade and give feedback on free-form responses in large-enrollment courses in spite of limited availability of human graders. Over the years, carefully trained models have achieved increasingly higher levels of performance. More recently, pre-trained Large Language Models (LLMs) emerged as a commodity, and an intriguing question is how a general-purpose tool without additional training compares to specialized models. We studied the performance of GPT-4 on the standard benchmark 2-way and 3-way datasets SciEntsBank and Beetle, where in addition to the standard task of grading the alignment of the student answer with a reference answer, we also investigated withholding the reference answer. We found that overall, the performance of the pre-trained general-purpose GPT-4 LLM is comparable to hand-engineered models, but worse than pre-trained LLMs that had specialized training.
## I Introduction
Providing meaningful feedback to learners is one of the most important tasks of instructors [1], yet it can also become one of the most work-intensive or even tedious tasks. Particularly for large-enrollment courses, lack of grading personnel can limit this feedback to automatically gradable closed-answer formats such as multiple-choice or numerical inputs. This limitation might be overcome by using Artificial Intelligence (AI) solutions [2]; it is therefore not surprising that when it comes to the use of AI in higher education, assessment and evaluation are the most prominent topics [3], and acceptance of this technology for education is increasing based on its perceived usefulness [4]. In particular, studies on Automated Short Answer Grading (ASAG) [5; 6] are highly relevant for educators to extend the limits of what can be assessed at large scales.
It is impossible to do justice to the spectrum of sophisticated ASAG methods in this short study; Burrows, Gurevych, and Stein provide an excellent overview up to 2015 [5]; Haller, Aldea, Seifert, and Strisciuglio look at later developments up to 2022 [6]. The latter survey notes a particular shift in recent years as models are moving from hand-engineered features to representation-learning approaches, which draw their initial training data from large text corpora [6] ("pre-trained"). However, most models used for ASAG still have in common that they are explicitly trained or fine-tuned for particular grading tasks, and datasets used in competitions such as SemEval [7] thus include training and testing items. By contrast, recently publicly released Large Language Models (LLMs) such as GPT-4 [8] and Bard [9] have not only been pre-trained from large text corpora, but subsequently extensively fine-tuned following general instead of task-specific strategies. Their users are neither expected nor actually able to further train or fine-tune the model, and an intriguing question is how these out-of-the-box general-purpose tools perform compared to specially trained or fine-tuned models.
In this study, GPT-4 is prompted to grade the items from two standard datasets, SciEntsBank and Beetle [7], which allows comparison of precision, recall, and F1-score (or weighted F1-score in case of 3-way items) to legacy and state-of-the-art ASAG models. SciEntsBank covers general science questions for 3rd to 6th grade, while Beetle covers questions and student answers from a tutorial system for basic electricity and electronics.
The standard judgment method is to compare the student answer to a reference answer, but in addition, we also investigated whether GPT-4 can adequately grade the student answers based on the question alone. For the latter task, the model would need to draw on its own pre-training from its text corpus to independently judge the correctness of the student answer.
## II Methodology
The SciEntsBank and Beetle datasets [7] were downloaded from Kaggle [10]. They included both training and test data. The training data were discarded, while the test data comprised the 504 items and \(14,186\) student answers and their reference grading that were used for this study. As no training took place, the distinction between unseen answers (UA), unseen questions (UQ), and unseen domains (UD) that the dataset provided was dropped for this study, since all items were "unseen."
Each item in the datasets contains a question, a reference answer, and student answers including their reference grade. The items came in two versions:
* a 2-way version, where each student answer is either _correct_ if it is a complete and correct paraphrase of the reference answer or _incorrect_ otherwise, and
* a 3-way version, where an additional judgment of _contradictory_ replaces some of the _incorrect_ labels if the student answer explicitly contradicts the reference answer.
The XML-coded items were translated into prompts for the GPT-4 API; see Figure 1 for an example. Each item was graded with and without providing a reference answer. The definitions of the judgment criteria for grading were taken from SemEval-2013 [7].
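Neither the exact prompt wording nor the API parameters are reproduced above, so the following is only a minimal sketch of how an XML item of the kind shown in Figure 1 might be turned into a grading prompt and sent to the GPT-4 chat-completions API. The prompt text, helper names, and XML field handling are illustrative assumptions, not the author's actual code.

```python
# Hypothetical sketch: turn one SciEntsBank/Beetle item into grading prompts
# and query the OpenAI chat-completions API. Prompt wording and parameters
# are assumptions for illustration, not the paper's actual prompts.
import xml.etree.ElementTree as ET
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def build_prompt(question, student_answer, reference_answer=None, three_way=False):
    labels = "correct, contradictory, or incorrect" if three_way else "correct or incorrect"
    lines = [f"Question: {question}"]
    if reference_answer is not None:            # withheld in the no-reference scenario
        lines.append(f"Reference answer: {reference_answer}")
    lines.append(f"Student answer: {student_answer}")
    lines.append(f"Grade the student answer as {labels}. Reply with a single label.")
    return "\n".join(lines)

def grade_item(xml_path, with_reference=True, three_way=False):
    root = ET.parse(xml_path).getroot()
    question = root.find("questionText").text
    reference = root.find(".//referenceAnswer[@id]").text if with_reference else None
    grades = {}
    for ans in root.findall(".//studentAnswer[@id]"):
        prompt = build_prompt(question, ans.text, reference, three_way)
        resp = client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": prompt}],
        )
        grades[ans.get("id")] = resp.choices[0].message.content.strip().lower()
    return grades
```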
For 6 of the 504 items, errors occurred during evaluation, which led to 58 of the \(28,372\) student statements receiving no or invalid grades. The invalid grades were _unclear_, _creative_, _epoch_, _accurate_, and _correc_ [sic]. These missing or invalid student grades were counted as neither positives nor negatives.
Subsequently, the precision, recall, and (weighted) F1-score were calculated:
* **Precision:** Out of all the _correct_ grades given by a model, how many were actually correct?
* **Recall (or Sensitivity):** Out of all the actually correct student answers, how many were graded as _correct_?
* **F1-score:** Harmonic mean of precision and recall; a way to balance the trade-off between precision and recall.
In the 3-way scenario, the above characteristics are correspondingly calculated for the classes _contradictory_ and _incorrect_, and a weighted average is calculated for these class F1-scores to form the weighted F1-score (w-F1).
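For concreteness, these metrics can be computed directly from lists of reference and predicted labels, for instance with scikit-learn; the snippet below is a generic illustration with made-up labels, not the paper's evaluation script.

```python
# Generic sketch of the evaluation metrics; labels here are invented examples.
# Missing or invalid model grades would simply be filtered out of both lists first.
from sklearn.metrics import precision_recall_fscore_support, f1_score

y_true = ["correct", "incorrect", "correct", "contradictory", "incorrect"]
y_pred = ["correct", "correct",   "correct", "incorrect",     "incorrect"]

# Precision/recall/F1 for the "correct" class, as in the 2-way task.
p, r, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, labels=["correct"], average=None, zero_division=0
)
print(f"precision={p[0]:.3f}  recall={r[0]:.3f}  F1={f1[0]:.3f}")

# 3-way task: per-class F1 averaged with class-frequency weights (w-F1).
w_f1 = f1_score(y_true, y_pred, average="weighted", zero_division=0)
print(f"weighted F1 = {w_f1:.3f}")
```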
## III Results
### Precision, Recall, and F1-Scores
Table 1 shows the precision, recall, and F1-scores for SciEntsBank and Beetle for the 2-way and 3-way items, as well as for the scenario where the reference answer was withheld. For the 3-way scenario, the individual-class results and the weighted F1-score (w-F1) are provided.
Looking at the precision and recall, an outlier is the recall on _contradictory_ in the 3-way Beetle dataset: a large number of student answers that were labelled as _contradictory_ were not recognized as such, but simply as _incorrect_ (as evidenced by the low precision on _incorrect_).
GPT-4 generally performs better on SciEntsBank than on Beetle. For SciEntsBank, the model showed its highest performance on the 2-way task (F1=0.744), followed closely by the no-reference scenario (F1=0.731), with the 3-way scenario in last place (w-F1=0.729). For Beetle, the no-reference scenario showed the highest performance (F1=0.651), followed by the 2-way (F1=0.611) and 3-way (w-F1=0.516) scenarios. In other words, for Beetle, providing a reference answer lowered its performance on correctly judging the student answers.
Figure 1: Original XML-code of a 3-way item and the generated prompts for its evaluation with and without providing a reference answer.
### Comparison to Specialized ASAG Models
Table 2 shows a comparison of specifically trained models versus the out-of-the-box GPT-4 model. Had GPT-4 been around at the time of the SemEval-2013 competition [7], it would have won the competition for 3-way SciEntsBank, and it would have outperformed all but one of the competing models in the unseen questions (UQ) category. In these specifically trained models, performance strongly depends on what was "seen" and what was "unseen."
Newer systems perform better, in particular those of the BERT [11] LLM family. These models are pre-trained and then specifically trained for SciEntsBank and Beetle using, for example, PyTorch [12]. Unfortunately, for the highly successful roberta-large model [13], the performance was not separately reported for the different "unseen" categories, and no 3-way grading was performed.
Overall, the performance of the pre-trained general-purpose GPT-4 LLM is comparable to hand-engineered models, but worse than pre-trained LLMs that had specialized training.
## IV Limitations
Since GPT is a probabilistic model, running it again, possibly at a different temperature, is likely to yield different results. However, due to the already large amount of computing required for one run, and in light of the high statistics gained from over 500 items, only one run was considered here. Also, different prompts from the ones shown in Fig. 1 may result in higher or lower performance.
OpenAI, the company behind GPT, does not release information about what constituted the text corpus used for training. Though unlikely, since the datasets are only available as ZIP-files and in XML-format, there is still a possibility that SciEntsBank and Beetle had been used for training. When asked about SciEntsBank, though, the model stated that it was not familiar with a dataset or source by that name; GPT-4 performed better on SciEntsBank than on Beetle, for which it stated that it is a known dataset in the domain of natural language processing and educational research. The model, however, demonstrated ignorance when asked about any specific details regarding Johnny, his father, and the windows in the scenario quoted in Fig. 1, making it unlikely that it had seen the text before.
## V Discussion
The last five years saw the strong emergence of Deep-Learning-based models for ASAG. These models generally exhibit higher performance than hand-engineered models, but still strongly depend on training, which may be pre-training or task-specific. LLMs usually come pre-trained, but the extent of that pre-training greatly varies: while details on GPT-4's text corpus are proprietary, it can be assumed that it was trained and fine-tuned with orders of magnitude more data than, for example, BERT [11]. However, as this study shows, the difference in pre-training can be more than made up by the BERT-family's openness to additional task-specific training by the user.
At least for the grade-school educational content covered by the datasets in this study, GPT-4 performs ASAG at a performance level comparable to hand-engineered systems from five years ago. It does so even without the need for providing reference answers. There are strong indications that this ability would extend to higher education, for example university-level physics content [19], and that automated grading of open-ended assessment content is possible beyond short answers [20]. In addition, a general-purpose LLM can give more tailored feedback than simple _correct/incorrect_ judgments, which has high potential for learning from short answer grading [21].
A problem with general-purpose tools like GPT-4 [8] and Bard [9] is that they run in the cloud. When it comes to grade-relevant student data, the question of data security and privacy cannot be ignored, which may limit the applicability of this approach to ASAG. An alternative model that might also not need additional training, but which could be locally installed, is Llama 2 [22]. However, preliminary studies by the author indicate that Llama 2 generally does not perform as well as GPT-4.
## VI Conclusion
The performance of the general-purpose Large Language Model GPT-4 on Automated Short Answer Grading does not reach that of specifically trained Deep-Learning models, but it is comparable to that of earlier hand-engineered ASAG models. A clear advantage of GPT-4 is that it does not need to be specifically trained for the task and can be used "out-of-the-box," which has the potential to turn it into a commodity for educators. In addition to not needing additional training, GPT-4 can also perform ASAG without the need for providing reference answers, at least at the grade-school level covered by the datasets used in this study and likely at the introductory higher-education level.
## Acknowledgements
The author would like to thank Julia Chatain for her help in connecting to the GPT API.
## Appendix A Availability of data and material
The benchmark datasets SciEntsBank and Beetle [7] are available from Kaggle [10]. Code and calculated data are made available as supplemental material alongside this paper from [https://www.polybox.ethz.ch/index.php/s/mByV0od7uscm3VV](https://www.polybox.ethz.ch/index.php/s/mByV0od7uscm3VV) (the file readme.txt in the downloadable package explains the code and data files).
## Funding
Not applicable.
|
2309.11577 | Segregation on small rubble bodies due to impact-induced seismic shaking | We present a framework to study regolith segregation on rubble-pile asteroids
(self-gravitating granular aggregates) due to seismic shaking induced by
impacts sustained during their lifetimes. We first relate the amplitude and
frequency of surface vibrations to the location and severity of an impact, and
the rubble body's geometry and bulk properties. For clarity, the body is taken
to be an ellipsoid with size and spin close to that of Itokawa, although other
asteroids are also easily incorporated. We then model the body's collisional
history stochastically given the variability in the impact activity on an
asteroid. Finally, we utilize discrete element simulations to investigate the
regolith's response to impacts. In these simulations, in any sample collisional
history, every time an impact occurs, a bin filled with a grain mixture and
located at the region of interest on the asteroid is vibrated at that impact's
associated amplitude and frequency. Utilizing this framework we find that
impact-driven seismicity is sufficient to drive size segregation on small
rubble-piles, but the segregation quality depends on several aspects, e.g.
total impact energy supplied, placement of the region of interest, bulk wave
speed, and seismic diffusivity. | Sohanjit Ghosh, Ishan Sharma, Deepak Dhingra | 2023-09-20T18:29:26Z | http://arxiv.org/abs/2309.11577v1 | # Segregation on small rubble bodies due to impact-induced seismic shaking
###### Abstract
We present a framework to study regolith segregation on rubble-pile asteroids - self-gravitating granular aggregates - due to seismic shaking induced by impacts sustained during their lifetimes. We first relate the amplitude and frequency of surface vibrations to the location and severity of an impact, and the rubble body's geometry and bulk properties. For clarity, the body is taken to be an ellipsoid with size and spin close to that of Itokawa, although other asteroids are also easily incorporated. We then model the body's collisional history stochastically given the variability in the impact activity on an asteroid. Finally, we utilize discrete element simulations to investigate the regolith's response to impacts. In these simulations, in any sample collisional history, every time an impact occurs, a bin filled with a grain mixture and located at the region of interest on the asteroid is vibrated at that impact's associated amplitude and frequency. Utilizing this framework we find that impact-driven seismicity is sufficient to drive size segregation on small rubble-piles, but the segregation quality depends on several aspects, e.g. total impact energy supplied, placement of the region of interest, bulk wave speed and seismic diffusivity.
Keywords: Impact, Seismic shaking, Segregation, Granular materials, Rubble-pile asteroids, Itokawa
## 1 Introduction
Several small bodies in the solar system, e.g. Itokawa, Bennu, and Ryugu, display a prominently granular surface; see Fig. 1. High-resolution images at different illumination angles indicate common features between them [1, 2, 3]. For example, large meter-sized boulders dominate the surface, against the expectation that thermal fragmentation and micrometeorite impacts should have ground boulders into fine regolith [4]. This suggests a dynamically active surface, where we may observe segregation of the regolith, even though gravity is several orders of magnitude lower than the Earth's; indeed Itokawa's surface in Fig. 1(_left_) shows significant segregation between fines in the Muses Sea and surrounding boulder-rich regions. The main aim here is to investigate if regolith segregation on small rubble asteroids could result from impact-induced seismic shaking.
Segregation of different-sized grains in a regolith may be achieved only after the latter is sufficiently mobilized. This, in turn, may occur for a variety of reasons, which we now discuss. The Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect due to solar radiation torque [5] may alter the spin of a small body significantly during its lifetime. This may then facilitate large-scale resurfacing, be it through fission or global landslides [6, 7]. Tidal encounters with terrestrial planets too may induce global regolith migration [8]. Impacts are also capable of initiating granular landslides. For example, due to Itokawa's small size, even centimeter-sized impactors may lead to global seismic shaking [9] and subsequent grain motion; this was investigated in the context of Bennu by [10]. Finally, non-gravitational forces, such as those due to the interaction of grains and solar wind plasma, produce electrostatic forces that may sometimes overcome gravity and cause regolith motion [11, 12].
All of the above processes act independently and over different timescales. Moreover, each process affects individual asteroids to varying degrees. Figure 2 represents potential routes to granular segregation on an asteroid's surface through a flowchart, in which the primary agents are shaded in proportion to their importance for an asteroid such as Itokawa, and this may change for other bodies. We estimate that impact-induced seismic shaking is most relevant for a body like Itokawa, because tidal effects, YORP, and electrostatic forces are not strong enough to initiate grain motion in this case. We now discuss the reasons for this claim in the following few paragraphs.
Numerical simulations by [13] of a close encounter with Earth of the near-Earth asteroid (NEA) Apophis, which has a size similar to Itokawa's, show that Apophis' surface is minimally affected even at its expected closest distance of 20000 miles. On the other hand, Itokawa's closest approach to Earth is much greater at 42 million miles [1]. At the same time, Itokawa is believed to have originated in the main belt, where it would not have been tidally affected by the giant planets. Together this suggests the implausibility of resurfacing on Itokawa by tidal interactions, which do not appear to have been strong enough to mobilize grains. Having said this, we recognize that the possibility and extent of Itokawa's close encounters with Mars have not yet been studied.
Radiation torque or YORP is a slow process that modifies the spin of asteroids over long periods of time. For Itokawa, [6] claim that YORP may have decelerated it from 6.5 h to its current spin state of 12.132 h in 50 - 90 thousand years. At the same time, [8] report that surface material failure is generally initiated by YORP-driven spin _up_, leading us to contend that YORP is unlikely to have resurfaced a slow spinner like Itokawa. Separately, we note that YORP is extremely sensitive to topography, with minor changes causing major deviations, even changes in spin direction [10]. Including YORP in any resurfacing model thus requires closely following the surface's evolution.
Figure 1: Surfaces of _(left)_ Itokawa as seen by the Hayabusa spacecraft, _(middle)_ Bennu near the equatorial region as captured by OSIRIS-REx, and _(right)_ Ryugu as observed by Hayabusa2 (credits: ISAS-JAXA/NASA Goddard).

The possibility of electrostatic interactions of surface grains with charged plasma being able to levitate the finer-sized entities and transport them elsewhere has been reported by [11, 12]. There is not much knowledge about the extent and distribution of charged grains on Itokawa, or the plasma field in its vicinity. There is also no direct observational evidence of dust levitation on Itokawa. Moreover, it is expected that at leading order the redistribution of grains due to electrostatic interactions will be agnostic to the asteroid's topography, which would then fail to explain the concentration of finer grains at topographic lows, such as Muses C on Itokawa in Fig. 1 (_left_). Due to these aspects, we neglect the role of electrostatic levitation on segregation.
In contrast to the above processes that are either infrequent and/or not energetic enough and/or not well understood, impacts on a typical small asteroid are frequent, supply sufficient energy, and impact cratering is a fairly mature subject [14]. Indeed, impacts are an attractive mechanism for resurfacing small bodies whose size allows seismic energy concentration to remain high for a long time after the impact [15]. A typical NEA experiences a large number of impacts during its lifetime, e.g. we estimate in Sec. 3(b) that Itokawa experienced about 150,000 small and big impacts over 10 MYr. This is the typical lifetime of an NEA [16] which, for Itokawa, includes the conjunction of its head and body [17] and its insertion into a near-Earth orbit from the main belt [18]. Finally, due to their low gravity, even gentle seismic activity is able to mobilize surface material on a small asteroid [9]. For these reasons, we highlight impacts as the most important primary mechanism in Fig. 2 and focus here on regolith segregation due to impact-induced seismic shaking.
Impacts lead to local shaking and may also induce landslides. On bodies with granular regolith, the former may lead to segregation through the Brazil-nut effect (BNE) [19], while kinetic sieving in landslides [20] separates small and large grains. In terrestrial conditions, BNE has been investigated at length [21], but less so in low-gravity environments, where questions remain about both its presence and efficacy. Employing discrete element (DE) simulations [22], BNE was shown by [23, 24] to occur even when surface gravity was \(10^{-4}g_{{}_{\oplus}}\), where \(g_{{}_{\oplus}}\) is the gravitational acceleration on the Earth's surface. However, their simulations were greatly simplified in terms of the number and size distribution of grains, and the manner in which energy was supplied to them. This work was improved upon by [25] who surveyed the dependence of BNE upon the grains' mechanical properties, e.g. inter-grain friction and restitution coefficient. All of these studies utilized an arbitrary amplitude and frequency of shaking, unrelated to impact-induced seismic activity. Grain separation due to impact-induced landslides on small bodies has been explored even less, although [26] probed this possibility in a two-dimensional setting.
The above discussion motivates the present work, wherein we first formulate a framework to model impacts over a small rubble-pile asteroid over its lifetime, incorporate the induced seismic activity, and then link this to the segregation of surface regolith due to localized seismic shaking through a BNE-like process. This framework is then applied to an _ellipsoidal_ NEA with an aspect ratio similar to that of Itokawa to investigate possible resurfacing outcomes. Although local seismic shaking may in some instances - depending upon the availability of the seismic energy and the presence of gradients in the effective surface gravity - release enough regolith to cause a landslide, we will ignore this possibility here. These may be included after appraising the outcomes of the present exercise.
We will describe the collision history of our ellipsoidal rubble pile NEA stochastically. To this end, we take its spin state and age similar to that of Itokawa and follow the methodology of [27] to determine a size distribution of impacts experienced by the asteroid over its lifetime. From this, we estimate the number of impacts that are substantial enough to initiate global seismic shaking but are not so energetic that they disrupt the asteroid. These impacts are further distinguished between small and large impacts and are then allowed to occur at random time intervals and at random locations over the asteroid's surface, thereby creating several possible collisional histories.
Figure 2: Different routes by which regolith on small bodies may be segregated. Primary mechanisms are shaded in proportion to their relative importance in mobilizing regolith on an Itokawa-like body.

For each sample collisional history, we model the transmission of seismic energy within the asteroid's volume for every impact in the manner of [28]. Seismic properties like quality factor, seismic diffusivity, and seismic frequency that govern this transmission are often not well-determined for rubble-pile bodies. Here, we improve upon the estimates of [29] by accounting for seismic energy transfer through the rubble-pile asteroid's granular and, therefore, highly dissipative interior. The post-impact seismic energy distribution over the asteroid's surface is then employed to drive local seismic shaking that is studied through DE simulations. In the latter, surface regolith - modeled as a granular mixture - is vibrated at an amplitude and frequency that is related to the impact and associated seismic activity; cf. Sec. 4(d). When performed for every impact in a sample collisional history we obtain one possible outcome for impact-driven resurfacing of the rubble-pile asteroid. This process is then repeated for several collisional histories. Our simulations indicate that the final outcome of impact-induced seismic shaking on the surface of any small body is a complex process that depends on several factors, a few of which are addressed in this work. Firstly, we demonstrate that it would suffice to model only the large impacts out of all the impacts capable of causing impact-induced seismic shaking to obtain segregation in a granular mixture. Next, we study how the degree of segregation varies at different locations of a non-spherical target body due to the different amounts of seismic energy received from impacts over its collisional history. We also study the outcome of a granular system subjected to seismic shaking for different initial grain configurations and different collisional histories and comment on the sensitivity. Finally, in this work, we demonstrate how the seismic properties that govern energy transmission in a small body affect the process of segregation.
The rest of the paper is organized as follows. Section 2 models seismic energy transmission from the point of impact to different surface locations of an ellipsoidal NEA. The seismic properties that govern this transmission of energy are discussed in detail in the context of small rubble bodies. Section 3 discusses the stochastic modeling of an asteroid's impact history. In Sec. 4 we couple seismic energy received at a location to DE simulations of vertically vibrated surface grains at that location. Regolith segregation over the NEA's lifetime for several collisional histories is then collated and discussed in Sec. 5, before we conclude in Sec. 6.
## 2 Impact-induced seismic activity
Impacts occur on planetary bodies frequently. Unlike on the Earth, which is shielded by its atmosphere, all objects that cross the orbits of small bodies, such as asteroids, reach the surface, thereby modifying it significantly over the body's lifetime. In this section, we discuss the distribution of impact energy across the target body and related seismic activity. For this we will extend the methodology of [28] to a rubble asteroid, i.e. an asteroid that is composed of grains held together largely by self-gravity. The asteroid is taken to be a triaxial ellipsoid rotating about its shortest principal axis, as shown in Fig. 3a. We limit ourselves to studying the impact-driven resurfacing of an asteroid, so that impactors are assumed to not be large enough to disrupt the asteroid or alter its shape significantly.
### Seismic energy transmission
Out of the total kinetic energy deposited by an impactor, only a small fraction, typically much less than 1%, called _seismic energy_, is propagated as waves through the body [14]. The energy in these _seismic waves_ subsequently induces seismic shaking that may mobilize regolith. Following [28], the transmission of seismic energy through the impacted body is modeled as a diffusive process, analogous to heat conduction:
\[\frac{\partial\epsilon_{s}}{\partial t}=K_{s}\nabla^{2}\epsilon_{s}-\frac{2 \pi f}{Q}\epsilon_{s}, \tag{2.1}\]
where \(\epsilon_{s}\) is the _normalised seismic energy density_, \(t\) is time, \(K_{s}\) is the _seismic diffusivity_, \(f\) is the _seismic frequency_, and \(Q\) is the _seismic quality factor_. The diffusivity \(K_{s}\) regulates how fast the seismic energy diffuses through the medium. Further, seismic energy at a given location is dissipated during vibrations initiated there by seismic waves, and this temporal decay is controlled by the quality factor \(Q\), assuming that energy is lost mostly to viscous processes. This dissipation enters on the right-hand side of (2.1). The quality factor is defined [30] as the inverse of the fraction of energy lost per cycle in a viscously damped harmonic motion:
\[\frac{\text{Total energy stored in the oscillating body}}{\text{Energy lost per cycle}}=\frac{Q}{2\pi}=\frac{E}{\delta E}\implies\delta E=\frac{2\pi E}{Q}. \tag{2.2}\]
Solving (2.1) over a complex three-dimensional shape, even a triaxial ellipsoid, is analytically cumbersome. Thus, we replace the impacted asteroid by its best-fit cuboid with dimensions \(L\times W\times H\), and impose the following initial
and boundary conditions:
\[\epsilon_{s}(x,y,z,t=0)=\delta(x-x_{0},y-y_{0},z-z_{0}) \tag{2.3a}\]
and, for all \(t\),
\[\frac{\partial\epsilon_{s}}{\partial x}=0\text{ at }x=0,L;\ \frac{\partial\epsilon_{s}}{\partial y}=0\text{ at }y=0,W;\ \frac{\partial\epsilon_{s}}{\partial z}=0\text{ at }z=0,H; \tag{2.3b}\]
where the impact is taken to occur at the location \((x_{0},y_{0},z_{0})\) and \(\delta(\cdot)\) represents the delta distribution. The initial condition in (2.3a) assumes that the seismic energy imparted to the target body may be modeled as a delta distribution, because the target body is much larger than the impactor, as mentioned above. The boundary condition (2.3b) reflects the fact that seismic energy cannot flow across the body's free surface. The solution for the seismic energy density may now be obtained in closed form as
\[\epsilon_{s}(x,y,z,t)=e^{-2\pi(ft/Q)}\Bigg{(}1+2\sum_{n=1}^{\infty}\cos\frac{n\pi x_{0}}{L}\cos\frac{n\pi x}{L}e^{-K_{s}n^{2}\pi^{2}t/L^{2}}\Bigg{)}\Bigg{(}1+2\sum_{n=1}^{\infty}\cos\frac{n\pi y_{0}}{W}\cos\frac{n\pi y}{W}e^{-K_{s}n^{2}\pi^{2}t/W^{2}}\Bigg{)}\Bigg{(}1+2\sum_{n=1}^{\infty}\cos\frac{n\pi z_{0}}{H}\cos\frac{n\pi z}{H}e^{-K_{s}n^{2}\pi^{2}t/H^{2}}\Bigg{)}, \tag{2.4}\]
where \(n\) may be identified as the _wave number_.
Although approximate, the energy distribution in the best-fit cuboid of the asteroid provides a reasonable first estimate of how seismic energy is dispersed through the asteroid's volume. Figure 3b reports the temporal variation in the seismic energy received at the equatorial point \(X\) in Fig. 3a when an impact occurs 50m to its East. In Fig. 3b we employ a non-dimensional time \(\tau\) obtained by scaling time by the time it takes for a \(P\)-wave to travel 50m in basalt; this turns out to be 0.0167s, as the \(P\)-wave speed in basalt is 3 km-s\({}^{-1}\). From Fig. 3b, we observe that there is an initial rise or _build-up_ of seismic energy when the seismic wave reaches point \(X\). This is followed by a gradual decay due to both local dissipation and diffusion away. The efficiency of the latter two processes may be gauged from Fig. 3b by comparing the evolution for \(\epsilon_{s}\) when \(Q=1500\) with when \(Q\to\infty\), at which limit there is no viscous dissipation.
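A minimal numerical sketch of (2.4) is given below: it evaluates the truncated triple cosine series at a receiver point and reports the peak normalised energy density there. The cuboid dimensions, seismic parameters, truncation order, and time window are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def eps_s(x, y, z, t, x0, y0, z0, L, W, H, Ks, f, Q, nmax=200):
    """Normalised seismic energy density of (2.4) for an impact at (x0, y0, z0).

    Each cosine series is truncated at nmax terms, so very early times
    (t -> 0) are not resolved; all parameter values here are illustrative.
    """
    def series(u, u0, S):
        n = np.arange(1, nmax + 1)
        return 1.0 + 2.0 * np.sum(
            np.cos(n * np.pi * u0 / S) * np.cos(n * np.pi * u / S)
            * np.exp(-Ks * n**2 * np.pi**2 * t / S**2)
        )
    return (np.exp(-2.0 * np.pi * f * t / Q)
            * series(x, x0, L) * series(y, y0, W) * series(z, z0, H))

# Best-fit cuboid roughly the size of Itokawa and the seismic parameters
# discussed below (cf. table 1); all numbers are assumptions for this sketch.
L, W, H = 535.0, 294.0, 209.0              # m
Ks, f, Q = 2000.0, 0.0197, 1500.0          # m^2/s (= 0.002 km^2/s), Hz, -

# Impact on the surface 50 m to the east of the receiver ("point X").
x0, y0, z0 = L / 2 + 50.0, W / 2, H
xr, yr, zr = L / 2, W / 2, H

times = np.geomspace(0.01, 200.0, 2000)    # s
vals = np.array([eps_s(xr, yr, zr, t, x0, y0, z0, L, W, H, Ks, f, Q) for t in times])
print(f"peak eps_s = {vals.max():.1f} at t = {times[vals.argmax()]:.2f} s")
```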
Figure 3: (a) A schematic of the ellipsoidal NEA that we investigate. The dimensions, density, and spin of the asteroid are similar to that of Itokawa. A location \(X\) at the equator along with a local “patch” is indicated. (b) Variation of seismic energy density \(\epsilon_{s}\) with non-dimensional time \(\tau\) at point \(X\) due to an impact 50m away along the equator as shown in (a) for two values of the quality factor \(Q\). The seismic properties employed are discussed in Sec. 2(b).

We now relate the manner in which the surface shakes in response to the seismic energy available at that location. Local surface vibrations will vary in frequency and amplitude depending upon material properties such as stiffness and dissipation, as well as the available seismic energy and its spectral profile. This is a complex dependency that is compounded by the lack of detailed information about the body's properties. As a first step, we assume that the energy received through seismic waves at any surface location of the asteroid excites local harmonic vibrations at an amplitude \(A\) and _seismic frequency_ \(f=\omega/2\pi\). The energy density \(\epsilon_{d}\) stored in such a vibration is
\[\epsilon_{d}=\frac{1}{2}\rho_{a}\omega^{2}A^{2}=\frac{\rho_{a}a_{\max}^{2}}{8\pi^{2}f^{2}}, \tag{2.5}\]
where \(\rho_{a}\) is the local surface density and \(a_{\max}=A\omega^{2}\) is the maximum vibrational acceleration. Next, with \(\epsilon_{s}\) given by (2.4), we define \(\epsilon_{s}^{p}=\max_{t\in(0,\infty)}\epsilon_{s}(x,y,z,t)\) as the maximum energy density experienced post-impact at a given surface location \((x,y,z)\); this is indicated in Fig. 3. Assuming that most of the available seismic energy is transferred to surface vibrations, we equate \(\epsilon_{s}^{p}\) with \(\epsilon_{d}\) in (2.5) to obtain \(a_{\max}\) at a particular \((x,y,z)\). This is then non-dimensionalized by the local surface gravity \(g_{a}\) to yield the non-dimensional _peak surface acceleration_
\[\varGamma=\frac{a_{\max}}{g_{a}}=\frac{A\omega^{2}}{g_{a}}. \tag{2.6}\]
Note that \(\varGamma\) varies spatially because both \(g_{a}\) and, potentially, \(\omega\) and \(A\) vary over the asteroid's surface. The parameter \(\varGamma\) characterizes the intensity of the vibration, or _seismic shaking_, at point \(X\), and will be utilized extensively.
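A small helper corresponding to (2.5)-(2.6) is sketched below. It assumes the peak seismic energy density at the site has already been converted to a dimensional value (J m\({}^{-3}\)); how the normalised \(\epsilon_{s}^{p}\) is scaled to such a value is left open here, and the numbers in the example call are placeholders rather than results from the text.

```python
import numpy as np

def gamma_from_energy_density(eps_peak, rho_a, f, g_a):
    """Non-dimensional peak surface acceleration, eqs. (2.5)-(2.6).

    eps_peak : dimensional peak seismic energy density at the site [J m^-3]
    rho_a    : local surface (bulk) density [kg m^-3]
    f        : local seismic frequency [Hz]
    g_a      : local gravitational acceleration [m s^-2]
    """
    # Invert eps_d = rho_a * a_max^2 / (8 pi^2 f^2) for a_max, then scale by g_a.
    a_max = np.sqrt(8.0 * np.pi**2 * f**2 * eps_peak / rho_a)
    return a_max / g_a

# Placeholder call; the inputs are illustrative, not values from the paper.
print(gamma_from_energy_density(eps_peak=1e-4, rho_a=1900.0, f=0.0197, g_a=8.5e-5))
```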
### Selection of seismic parameters
The transport of the seismic energy \(\epsilon_{s}\) through the body depends upon its seismic properties; cf. (2.4). Table 1 summarizes the seismic properties that we employ for a rubble-pile asteroid. We now discuss these choices in detail.
We first consider the impact velocity \(v_{i}\). Typical velocities of near-Earth objects (NEOs) are about 10-15 km-s\({}^{-1}\), which is much higher than the 5 km-s\({}^{-1}\) of main-belt objects (MBOs) [31]. At the same time, NEAs have their aphelion in the main belt due to their highly eccentric orbits. Impacts between an NEO and an MBO would therefore be at about 10 km-s\({}^{-1}\), assuming orbital motion in the same direction. Thus, we take \(v_{i}=10\) km-s\({}^{-1}\) to represent impacts on our Itokawa-like asteroid.
The last term in (2.1) contains the quality factor \(Q\) defined in (2.2), for which we now identify an appropriate value. The size-frequency distribution (SFD) of the craters on Itokawa's surface was modeled by [29] through _Small Body Crater Terrain Evolution Model_ (SBCTEM) simulations. In these simulations, impactors struck a monolithic bedrock layer stochastically, and the evolution of the terrain was observed while varying its \(Q\) from 1000 to 2500. The simulations that best fit Itokawa's crater count and size distribution corresponded to \(Q=1500\). We employ this value for \(Q\) in our work.
The diffusion rate of seismic energy is governed by the diffusivity \(K_{s}\), with a low \(K_{s}\) indicating a less well-connected body. To estimate \(K_{s}\) we again consider the SBCTEM simulations of [29]. Varying \(K_{s}\) from \(0.001-0.250\) km\({}^{2}\)s\({}^{-1}\) it was found that \(K_{s}\) = 0.002 km\({}^{2}\)-s\({}^{-1}\) matched Itokawa's crater distribution best, and we will thus employ this value for \(K_{s}\). A low value for \(K_{s}\) is compatible with our supposition that Itokawa's interior is granular.
We now verify the consistency of the above estimate of \(K_{s}\) for a granular body. Following [28], and in analogy with three-dimensional thermal diffusion [32], we may define
\[K_{s}=v_{p}l_{s}/3, \tag{2.7}\]
where \(v_{p}\) is the mean seismic velocity and \(l_{s}\) is a 'mean free path', in that it is the average distance over which the seismic energy reduces to \(e^{-1}\) of its initial value [33, 34]: \(l_{s}\) is taken to be the average asteroid diameter. The low value of \(K_{s}=0.002\) km\({}^{2}\)-s\({}^{-1}\) estimated above leads from (2.7) to a \(P\)-wave speed of about 20 m-s\({}^{-1}\). Such a low value may not be unexpected for gently held granular aggregates, like rubble asteroids. Indeed, we recall [35] that in an isotropic and homogeneous solid the \(P\)-wave speed \(v_{p}\) is given by
\[v_{p}=\sqrt{Y\Big{/}\rho}\,, \tag{2.8}\]
where \(Y\) is the solid's Young's modulus and \(\rho\) is its density. A porous-granular medium [36, 37], such as our rubble asteroid, would have a low effective elastic modulus compared to a coherent medium due to the presence of internal voids. This would suggest a \(P\)-wave speed similar in magnitude to the low value obtained above from (2.7). However, \(P\)-wave speeds in self-gravitating rubble-piles are not known. Consequently, lunar [38] and Martian values [39, 40] are sometimes adopted, e.g. [41, 42]. Others, e.g. [28, 29], utilize the seismic frequency \(f\) for small bodies from seismic experiments conducted on the Earth and the Moon and relate it to \(v_{p}\), as is done further below. Here, we find the \(P\)-wave velocity in a granular aggregate from a consideration of its response as a discrete medium.
We saw in (2.8) that the \(P\)-wave speed in a linear elastic material is given in terms of its bulk stiffness. The effective stiffness at any point within a granular aggregate depends upon the local confining pressure [42, 43]. In a geophysical context, the confining pressure at a given depth may be estimated as the lithostatic pressure of the overburden. Thus, the confining pressure at a depth \(d\) below the surface is \(\rho gd\), where \(\rho\) is the bulk density and \(g\) is the surface gravitational acceleration, both averaged over the depth \(d\). In rubble asteroids, due to their small size, gravity is low, and the confining pressure is of the order of a few pascals. For example, considering the density of chondrite to be 3200 kg-m\({}^{-3}\), Itokawa's bulk porosity as 40% [44] and its averaged surface gravitational acceleration to be \(8.5\times 10^{-5}\) m-s\({}^{-2}\) [45], the confining pressure at a depth of 1m on Itokawa may be estimated as
\[p_{c}\approx\rho gd=(0.4\times 3200)\times 8.5\times 10^{-5}\times 1=0.108\,\text{Pa}. \tag{2.9}\]
We now estimate wave speeds in granular media at such low confining pressures.
Compression waves in randomly packed dry Ottawa sand were experimentally studied at low confining pressures by [36]. Fitting their experimental data provides an empirical relation for the elastic modulus \(Y\):
\[Y=B\,\frac{(e_{c}-e)^{2}}{1+e}\,p^{1/2}, \tag{2.10}\]
where \(e\) is the void ratio, and \(e_{c}\) and \(B\) are constants that depend upon the grain type. The above fit indicates that \(Y\) varies as \(p^{1/2}\), so that from (8) we find that the \(P\)-wave speed \(v_{p}\propto p^{1/4}\). This matches the theoretical estimate of [43] and is consistent with the \(v_{p}\propto p^{0.19}\) obtained by [46] through DE simulations. We note that we employ results obtained for Ottawa sand because it is siliceous and has a density comparable to chondritic grains found in the regolith of asteroids such as Itokawa, Ryugu and Bennu [17].
Now the experiments of [36] were performed at confining pressures of the order of kilopascals which, while low in a terrestrial context, are still much higher than those encountered on small bodies. Nevertheless, we extrapolate (2.10) and utilize the \(p^{1/4}\) variation of the \(P\)-wave speed with pressure to estimate \(v_{p}\) in a typical rubble asteroid. This is done in Fig. 4, which reports the \(P\)-wave speed as a function of the confining pressure. We find that \(v_{p}\) is about 8 m-s\({}^{-1}\) for a confining pressure such as that found on a rubble-pile asteroid like Itokawa. This is comparable to the value of 20 m-s\({}^{-1}\) estimated from (2.7) with \(K_{s}=0.002\) km\({}^{2}\)-s\({}^{-1}\). At the same time, utilizing the low \(P\)-wave speed of 8 m-s\({}^{-1}\) in (2.8) yields a very small effective elastic modulus \(Y\) of around 80 kPa for Itokawa. This estimate is comparable in scale with that of [29], whose SBCTEM simulations predicted a value between 10-20 kPa.
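The chain of estimates above can be summarised in a few lines; the laboratory reference point used for the \(p^{1/4}\) extrapolation below is an assumed stand-in chosen to land near the values quoted in the text, not a number taken from [36].

```python
# Order-of-magnitude chain: overburden pressure -> P-wave speed -> breathing-
# mode frequency. The reference point (p_ref, v_ref) is an assumed stand-in.
rho_bulk = 0.4 * 3200.0           # bulk density used in (2.9) [kg m^-3]
g_surf = 8.5e-5                   # surface gravity [m s^-2]
depth = 1.0                       # m
p_c = rho_bulk * g_surf * depth   # confining pressure, eq. (2.9)

p_ref, v_ref = 25.0e3, 175.0      # Pa, m/s  (assumed laboratory reference)
v_p = v_ref * (p_c / p_ref) ** 0.25   # v_p proportional to p^(1/4)

R_s = 162.0                       # m, equivalent-sphere radius
f0 = 0.41 * v_p / R_s             # breathing-mode frequency, eq. (2.12)

print(f"p_c ~ {p_c:.3f} Pa, v_p ~ {v_p:.1f} m/s, f0 ~ {f0:.4f} Hz")
```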
As already mentioned, not all of the impactor's kinetic energy is made available as seismic energy. Energy is consumed in mobilizing ejecta and in heating and plastically deforming both target and impactor bodies [28]. This is characterized through a seismic efficiency \(\eta\), which is the fraction of the impact energy that survives as seismic
energy \(E_{i}\); thus,
\[E_{i}=\eta E_{k}=\eta\frac{1}{2}m_{i}v_{i}^{2}, \tag{2.11}\]
where \(E_{k},m_{i}\), and \(v_{i}\) are, respectively, the kinetic energy, mass, and velocity of the impactor. In their SBCTEM simulations, [29] varied \(\eta\) from \(1\times 10^{-8}-1\times 10^{-6}\) and the simulations that best reproduced the SFD of craters on Itokawa were for \(\eta=1\times 10^{-7}\). This, therefore, is the value of \(\eta\) that we will employ.
The local seismic frequency \(f\) occurring in (2.5) and (2.6) is another poorly constrained parameter. This, in general, will depend upon the asteroid's material parameters, especially in the vicinity of the surface location of interest. These details are not available at present. We proceed here as follows. We first note that we may ignore energy transfer by surface waves because the local elastic modulus of a granular aggregate vanishes as we approach its surface where the confining pressure drops to zero. At the same time, shear waves are slower than compressional waves. Thus, we may limit ourselves to energy propagation through \(P\)-waves. Next, appealing to the small size of rubble asteroids, we assume that following an impact the asteroid shakes as a whole, primarily in its _first_ mode of vibration, which has frequency \(f_{0}\); thus, the local seismic frequency \(f=f_{0}\). We now estimate \(f_{0}\) for our rubble-pile asteroid.
We ignore the asteroid's ellipsoidal shape and consider the vibration of a sphere of equivalent size in its first mode - also called the _breathing_ mode - in which material moves radially. The expectation here is then that impacts preferentially excite radial displacements. The frequency of the breathing mode for a homogeneous linear-elastic sphere with Poisson's ratio \(\nu=1/4\) may be expressed as [41]:
\[f_{0}=0.41f_{char}, \tag{2.12}\]
where \(f_{char}\) is the sphere's characteristic frequency, defined as the inverse of the time taken by the \(P\)-wave to travel across a sphere of radius \(R_{s}\), so that \(f_{char}=v_{p}/R_{s}\). We will set \(R_{s}=162\)m, which is comparable to the mean radius of Itokawa [1] but, rather than employing \(v_{p}\) in linear-elastic materials, we now employ the value \(v_{p}=8\)m-s\({}^{-1}\) obtained above for a granular aggregate. Once \(f_{char}\) is known, we find \(f_{0}\) from (2.12) and, hence, the seismic shaking frequency \(f\) and circular frequency \(\omega\) of the rubble asteroid:
\[f=f_{0}=0.0197\;\text{Hz}\implies\omega=2\pi f=0.124\;\text{rad-s}^{-1}. \tag{2.13}\]
We now have estimates of the various parameters governing seismic shaking, and these are collated in table 1. We next describe how we model the collisional history of an NEA such as Itokawa.
## 3 Collisional history
To investigate texturing at patch \(X\) in Fig. 3(a) we need to quantify the total energy received there due to impacts experienced by the asteroid during its lifetime which, following [16], we take here to be around 10 MYr for NEAs. For this, we first need to estimate the number of impacts that the asteroid may have experienced as a function of the impactor diameter. To this end, we discretize the range of impactor sizes and segregate the impactor population into groups or 'bins'. We then follow the stochastic methodology of [27] to count the number of impacts, assuming that the impactor population remains steady with time. We acknowledge that the estimates here will vary for individual objects, as each asteroid has a different shape and size, and a unique dynamical evolution and collisional history.
We are interested in only those impacts that are energetic enough to cause resurfacing through global seismic shaking but do not lead to catastrophic disruption. The first step then is to identify the appropriate limits on the impactor sizes. Whenever required, we will take the seismic parameters listed in table 1.
### Limits on impactor size
Impactors range from microscopic dust particles to other asteroids. To estimate the minimum impactor diameter \(D_{i,\min}\) that may initiate global seismic shaking, we set \(\Gamma=1\) in (2.6) at the location farthest from the point of impact. For a spherical asteroid, this is located diametrically opposite to the impact point. In other words, an impactor of diameter \(D_{i,\min}\) will initiate accelerations equal to the gravitational acceleration at the asteroid's surface location most distant from where the impact took place. Thus, for impactors with sizes greater than \(D_{i,\min}\) we expect surface motion everywhere on the asteroid. Inserting \(\Gamma=1\) in (2.6) and employing the estimate for the frequency \(\omega\) and the amplitude \(A\) from, respectively, (2.13) and (2.5), and setting \(g=8.5\times 10^{-5}\) m-s\({}^{-2}\) to match that of Itokawa (cf. Sec. 4(a)), we find \(D_{i,\min}\approx 0.015\) m for a rubble asteroid with properties given in table 1. This result substantiates [47]'s claim that even centimeter-sized impactors may cause global seismic shaking on asteroids like Itokawa.
The mass loss due to impact ejecta increases with the impactor diameter. Mass loss occurs when the velocity of the ejecta exceeds the escape velocity. To determine the maximum impactor diameter \(D_{i,\max}\) that may cause global seismic shaking with mass loss not amounting to catastrophic disruption, we utilize the commonly invoked parameter \(Q^{*}\) [48, 49], which is the _threshold specific kinetic energy_ of the impactor that will precipitate greater than 50% mass loss. Due to the unavailability of \(Q^{*}\) for a typical asteroid, we take \(Q^{*}\) corresponding to a target composed of basalt [48], which has a density similar to that of chondrite. We recognize that for granular targets \(Q^{*}\) may be lower. To determine the impacts that are non-destructive, we utilize the scaling laws of [14, 48] and limit mass loss to within 1% by lowering the threshold specific energy to \(Q^{*}/1000\). Limiting mass loss to within 1% will avoid introducing complexities due to changes in spin and gravity field; these aspects may be introduced in a more elaborate model in the future. With this, for an asteroid of Itokawa's size, we find that the maximal impactor diameter \(D_{i,\max}\) is roughly 1.43m.
The impactor diameters \(D\) are thus constrained to lie in the range
\[D_{i,\min}=0.015\,\text{m}\leqslant D\leqslant 1.43\,\text{m}=D_{i,\max}. \tag{3.1}\]
### Stochastic collisional history
To describe the asteroid's collisional history stochastically we need to define the size distribution of the impactors, as well as the distributions in impact time and in the impact location over the asteroid's surface during its lifetime.
Consider first the size distribution. In Sec. 2(b), we considered a typical impact to be that between a rubble NEA and an MBO, since most of the NEAs originate in the main belt and spend the majority of their lifetime there before migrating to near-Earth orbits [50]. We employ the estimate of [51] to obtain the number of potential impactors over the 10 MYr lifetime of an NEA. Now, a power-law distribution to estimate the total number of MBOs having a diameter greater than \(d\) (in km), denoted by \(N_{>d}\), is provided by [51] as
\[N_{>d}=C_{d}d^{-2.2}, \tag{3.2}\]
where the constant \(C_{d}=2.8\times 10^{6}\). Only a fraction of the MBOs with sizes between \(D_{i,\min}\) and \(D_{i,\max}\) will collide with a given asteroid. We estimate this fraction as follows. Suppose the total volume of the main belt is \(W\) and \(N_{>d}\) objects of diameter greater than \(d\) exist there. This provides a number density of \(N_{>d}/W\). A target asteroid of mean radius \(R\) moving with a velocity \(U\) will sweep a volume of \(\pi R^{2}Ut\) in time \(t\). Hence, the number of objects \(N_{h}\) that will collide with this asteroid in time \(t\) is given by
\[N_{h}=\pi R^{2}UtN_{>d}/W=P_{i}R^{2}N_{>d}t, \tag{3.3}\]
where \(P_{i}=\pi U/W\) is the _intrinsic collisional probability_[52], which is \(2.18\times 10^{-18}\) km\({}^{-2}\) yr\({}^{-1}\) for collisions between NEOs and MBOs [51]. From (3.2) and (3.3) we then estimate the number of impacts experienced by an asteroid of Itokawa's size and age over its lifetime \(t\) of 10 MYr within the size range mentioned in (3.1) to be about 150,000.
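A sketch of the count implied by (3.2) and (3.3) is given below; the result is sensitive to the adopted minimum impactor size and target radius, so the printed numbers are indicative rather than a reproduction of table 2.

```python
# Expected number of impacts over an NEA lifetime from the MBO size
# distribution (3.2) and the intrinsic collision probability (3.3).
C_d = 2.8e6         # MBO population constant [51]
P_i = 2.18e-18      # intrinsic collision probability [km^-2 yr^-1]
R = 0.162           # target mean radius [km] (assumed)
t = 1.0e7           # lifetime [yr]

def n_larger_than(d_km):
    """Cumulative number of MBOs with diameter > d (d in km), eq. (3.2)."""
    return C_d * d_km**-2.2

def n_impacts(d_min_m, d_max_m):
    """Impacts by objects with d_min <= D <= d_max over time t, eq. (3.3)."""
    n = n_larger_than(d_min_m * 1e-3) - n_larger_than(d_max_m * 1e-3)
    return P_i * R**2 * n * t

print("bin 1 (0.015-0.1 m):", f"{n_impacts(0.015, 0.1):.3g}")
print("bin 2 (0.1-1.43 m): ", f"{n_impacts(0.1, 1.43):.3g}")
```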
We now divide the impactor population into two groups or 'simulation bins' as shown in table 2. _Small_ impactors in bin 1 have diameters in the range \(D_{i,\min}\leqslant D\leqslant D_{c}\) while _large_ impactors in bin 2 satisfy \(D_{c}\leqslant D\leqslant D_{i,\max}\), where the cut-off size \(D_{c}=0.1\) m is obtained by setting \(\Gamma\) at the point of impact to be 100, which corresponds to the modal value of a typical collisional history experienced by our rubble asteroid as seen later in table 4.
Impactor sizes will vary within a bin, and to describe this variation we need a size-frequency distribution. Recently, [27] employed a uniform random distribution of impactor sizes within a bin. Here we utilize an exponential random distribution [53], as we expect the impactor population to decrease with increasing impactor diameter. Figure 5 displays the size distribution we employ for impactors in bins 1 and 2.
The surface distribution of impacts is implemented as follows. We generate a uniform grid over the ellipsoidal asteroid's surface, as in Fig. 3(a), and then allocate impact sites randomly at the vertices. Similarly, impacts are assumed to have occurred randomly during the asteroid's lifetime, which is equivalent to taking the impactor flux constant. Finally, combining the size, spatial, and temporal distributions we arrive at the necessary stochastic description.
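One way to realise such a sample collisional history is sketched below: diameters are drawn from a truncated exponential distribution within each bin, impact times are uniform over the lifetime (constant flux), and impact sites are picked uniformly from the vertices of a surface grid. The exponential scale parameters, per-bin counts, and grid resolution are illustrative assumptions; the paper's estimated counts are far larger than the placeholders used here.

```python
import numpy as np

rng = np.random.default_rng(0)
LIFETIME_YR = 1.0e7

def sample_sizes(n, d_lo, d_hi, scale):
    """Truncated-exponential impactor diameters in [d_lo, d_hi] (m)."""
    d = d_lo + rng.exponential(scale, size=3 * n)
    return d[d <= d_hi][:n]          # simple rejection; assumes enough survive

def sample_history(n_small=1000, n_large=10, n_lat=20, n_lon=40):
    """One sample history: (time_yr, diameter_m, lat_idx, lon_idx) per impact."""
    sizes = np.concatenate([
        sample_sizes(n_small, 0.015, 0.1, scale=0.02),   # bin 1, scale assumed
        sample_sizes(n_large, 0.1, 1.43, scale=0.2),     # bin 2, scale assumed
    ])
    times = rng.uniform(0.0, LIFETIME_YR, size=sizes.size)   # constant flux
    lat = rng.integers(0, n_lat, size=sizes.size)            # surface-grid vertex
    lon = rng.integers(0, n_lon, size=sizes.size)
    order = np.argsort(times)
    return times[order], sizes[order], lat[order], lon[order]

t, D, i, j = sample_history()
print(f"{t.size} impacts; largest D = {D.max():.2f} m at t = {t[D.argmax()]:.3g} yr")
```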
### Seismic activity at a given point
We are now ready to quantify the seismic activity at patch \(X\) due to impacts. For this, we will focus on the patch's center, point \(X\). Seismic energy is transferred to point \(X\) post-impact, and this transfer is modeled through (2.4). Utilizing this for each impact in a sample collisional history we compute the seismic energy that is received at \(X\) during that particular sequence of impacts. The delivery of peak seismic energy density \(\varepsilon_{s}^{p}\) at patch \(X\) for three different collisional histories over a time span of \(10^{3}\) years is reported in Fig. 7. For each impact, we scale \(\varepsilon_{s}^{p}\) by
\(\varepsilon_{s0max}^{p}\) - the energy density of the largest impact in the collisional history containing the impact, as well as by \(\varepsilon_{s0}^{p}\) - the energy density at the point of impact. The impactor's size and distance \(\Delta x\) from \(X\) are indicated in Fig. 7.
Because none of the impacts in any of the three histories in Fig. 7 have \(\varepsilon_{s}^{p}/\varepsilon_{s0max}^{p}=1\), it is clear that the strongest impact in each history occurred outside the \(10^{3}\) year time window considered. We also find that large impacts, i.e. impacts from objects lying in bin 2 in table 2, are rare, with only one occurring over \(10^{3}\) years in collisional history 3. Comparing \(\varepsilon_{s}^{p}/\varepsilon_{s0max}^{p}\) across the three histories we find that the distribution of seismic energy density is different, with collision history 2 receiving the least fraction of the maximum \(\varepsilon_{s}\) deposited over the \(10^{3}\) year span. Furthermore, as expected, impactors closer to \(X\) and/or larger in size deliver greater \(\varepsilon_{s}^{p}\), as they input greater kinetic energy and/or lose less energy during seismic wave transmission to patch \(X\). Indeed, by considering \(\varepsilon_{s}^{p}/\varepsilon_{s0}^{p}\) in collisional history 1 in Fig. 7 we find that the closest impact to point \(X\) was only 28m away so that 94.87% of the seismic energy deposited at the impact location was received at \(X\). At the same time, in the same history, the farthest impact from \(X\) was 313m away, so only 50.55 % of the impact energy was made available at \(X\), with the rest getting distributed and dissipated in the asteroid's interior. Finally, we note that, although Fig. 7 shows different evolutions of \(\varepsilon_{s}^{p}\) over \(10^{3}\) years, the _total_ seismic energy delivered remains roughly the same for all samples.
We now investigate the non-dimensionalized ground acceleration \(\Gamma\) at patch \(X\) due to seismic shaking there employing the procedure discussed in Sec. 2. In Fig. 7(a) we plot \(\Gamma\) observed at \(X\) due to large impacts against the distance \(\Delta_{X}\) of the impact's location from \(X\). As expected, \(\Gamma\) at point \(X\) depends upon both \(\Delta_{X}\) and the impactor's diameter \(D_{i}\). As with \(\varepsilon_{s}^{p}\) in Fig. 7, impactors that are big in size and/or close to \(X\) produce a high \(\Gamma\). In Fig. 7(a), we observe a concentration of impactors lying within 200m-250m of point \(X\), which reflects the central location of patch \(X\) on the asteroid; see Fig. 2(a). Were \(X\) located at the end of the asteroid's long axis, \(\Delta_{X}\) on average would be
\begin{table}
\begin{tabular}{c c c} \hline \hline & Bin 1 & Bin 2 \\ \hline Diameter range (m) & 0.015-0.1 & 0.1-1.43 \\ Number of MBOs & 2.80 \(\times 10^{17}\) & 1.76 \(\times 10^{15}\) \\ Number of impacts & 148566 & 938 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Number of small (bin 1) and large (bin 2) MBOs, along with the number of expected impacts.

Figure 5: Exponential random distribution of impactors in bins 1 and 2.
much higher for a uniform random distribution of impacts on the surface; the consequences of this are discussed in Sec. 5(b).
Figure 2(b) demonstrates that, in time, seismic energy received at a location attenuates to zero asymptotically. In reality, due to the presence of rate-independent energy dissipation mechanisms, e.g. dry frictional sliding between grains, energy will die out in finite time. Such dissipation mechanisms are not present in (4), which assumes that the energy loss is due mainly to viscous effects. We nevertheless incorporate the finite-time dissipation of seismic energy in our simulations - cf. Sec. 4(b) - by introducing a _fall-off time_ \(t_{\text{f}}\), which is defined by the property that
\[E(t_{\text{f}})=10^{-4}E_{\max}\,, \tag{3}\]
where \(E_{\max}\) represents the peak seismic energy experienced at \(X\) due to an impact event. The fall-off time thus represents the time taken for the seismic energy to decay from its peak value to a small number. In Fig. 8(b) we display the variation of \(t_{\text{f}}\) with both the impactor's diameter and its distance \(\Delta_{X}\) from point \(X\). As expected, a large impactor close to \(X\) will have the longest \(t_{\text{f}}\), while a small impactor far away from \(X\) will have the shortest \(t_{\text{f}}\).
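A short routine for extracting \(t_{\text{f}}\) from a sampled energy history is sketched below; the rise-and-decay curve in the usage example is a placeholder and not the solution of the seismic diffusion model.

```python
import numpy as np

def fall_off_time(t, E, fraction=1e-4):
    """First time after the peak at which E(t) <= fraction * E_max, cf. (3)."""
    i_peak = int(np.argmax(E))
    below = np.nonzero(E[i_peak:] <= fraction * E[i_peak])[0]
    return np.inf if below.size == 0 else t[i_peak + below[0]]

# Usage with a placeholder rise-and-decay curve (illustrative only):
t = np.linspace(0.0, 300.0, 30001)                 # seconds
E = (t / 5.0) * np.exp(1.0 - t / 5.0)              # peaks at t = 5 s, then decays
t_f = fall_off_time(t, E)
```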
Figure 7: Scaled peak seismic energy densities \(\varepsilon_{s}^{p}/\varepsilon_{s0max}^{p}\) (_filled bar_) and \(\varepsilon_{s}^{p}/\varepsilon_{s0}^{p}\) (_empty bar_) received at patch \(X\) over \(10^{3}\) years for three sample collisional histories. Each bar represents an impact at an arbitrarily chosen surface location on the asteroid by an object randomly selected from either bin 1 or bin 2, with a star identifying the latter. The two values in the parentheses along the abscissa represent, respectively, impactor diameter and distance from point \(X\). The horizontal separation between two bars represents the time between two impacts.
## 4 Methodology
It is too computationally expensive to investigate regolith motion and segregation over the entire surface of an asteroid through discrete element (DE) simulations. Thus, we study regolith motion within a region that is large enough to display the segregated state that interests us - for example, around the Muses Sea region on Itokawa as shown in Fig. 1 - but is small enough to be computationally tractable. To this end, we work, as in the foregoing sections, with patch \(X\) on the equatorial region of the asteroid in Fig. 2(a). The patch has dimensions of 20m \(\times\) 20m along the \(x\) and \(y\) directions, respectively, and is taken to be 5m deep in the \(z\)-direction, consistent with the mean regolith bed depth estimated by the aforementioned SBCTEM simulations of [29].
### Conditions at patch \(X\)
To initialize our simulations we need the gravitational field and the grain size distribution at patch \(X\) in Fig. 2(a). Consider first the magnitude of the gravitational acceleration. We set the asteroid's density and rotation period to match Itokawa's values, viz. 1.9 g-cm\({}^{-3}\) and 12.132 h, respectively. Following the methodology of [54] for estimating a polyhedron's gravity field yields \(g_{A}\approx\) \(8.5\times 10^{-5}\) m-s\({}^{-2}\) as the gravitational acceleration at point \(X\). We take the gravitational field over patch \(X\) to be this constant value. The centrifugal acceleration at the assumed average spin rate of \(1.44\times 10^{-4}\) rad-s\({}^{-1}\) is approximately \(2.42\times 10^{-6}\) m-s\({}^{-2}\). This is about 3% of the gravitational acceleration at \(X\), so we ignore centrifugal effects henceforth.
We note that, because of the location of patch \(X\) on the asteroid in Fig. 2(a), gravity acts normally on the surface. This will not be so for less regular asteroid shapes or even at other latitudes on an ellipsoidal one. At the same time, the local slope - defined as the angle \(\theta\) between the gravity vector and surface normal - is less than \(5^{\circ}\) in the vicinity of the Muses C on Itokawa [45]. Thus, the vanishing of \(\theta\) in our model is representative and not atypical. In fact, when \(\theta\) is so much smaller than the cohesionless angle of repose of the regolith, which for typical geological materials lies between \(30^{\circ}-40^{\circ}\)[55], or even higher in low-gravity environments [56], a triggering mechanism, such as seismic shaking, will be required to mobilize the grains.
We turn next to the grain size distribution in patch \(X\). This is typically non-uniform on asteroids; see, e.g. Fig. 1. There are ongoing efforts to model boulder distribution on asteroids, and [57] provides one such model for Itokawa obtained by processing raw images procured by Hayabusa. The prediction of [57]'s model for the boulder distribution at Itokawa's eastern face helps set the grain size distribution at patch \(X\) in our simulations. We consider three categories of grain sizes in increasing order of their diameters, viz. _pebbles_ (diameter < 0.1m, labeled 'S' for _small_), _cobbles_ (diameter \(0.1\)m - \(1\)m, labeled 'M' for _medium_), and _boulders_ (diameter > 1m, labeled 'L' for _large_), following the terminology of [9]. In the subsequent sections we refer to cobbles and boulders as \(M\) and \(L\)_boulders_.
Figure 8: (a) Peak ground acceleration \(\varGamma\) at patch \(X\) as a function of the distance \(\Delta_{X}\) of the impact from \(X\) for large impactors, i.e. objects belonging to bin 2. (b) Fall-off time \(t_{\text{f}}\) of the seismic energy at point \(X\) as a function of \(\Delta_{X}\) for large impacts. Dashed lines represent lines of constant \(t_{\text{f}}\). Dots are colored as per the indicated scale.
Because of their large numbers, it is computationally infeasible to simulate all the smallest grains, also called _fines_. The total number of fines is, therefore, limited to 50000, and we fix their mean diameter to be 0.25 m. This allows us to achieve the desired regolith depth at patch \(X\) but keeps computational times reasonable. Table 3 displays the grain size distribution that we adopt, with grain sizes distributed uniformly and randomly within 10% of the given mean diameters. We ignore the presence of very large boulders, i.e. those with diameters greater than 10m; [58] estimates that there are very few such boulders on Itokawa.
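For reference, the grain population of table 3 can be generated as follows, with the stated \(\pm\)10% uniform spread about each mean diameter.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_grain_diameters():
    """Grain diameters for patch X following table 3: mean diameters of 0.25 m
    (pebbles/S), 0.75 m (cobbles/M) and 1.5 m (boulders/L), each drawn uniformly
    within +/-10% of its mean."""
    spec = [("S", 0.25, 50000), ("M", 0.75, 50), ("L", 1.5, 5)]
    diam = np.concatenate([mean * rng.uniform(0.9, 1.1, n) for _, mean, n in spec])
    labels = np.repeat([name for name, _, _ in spec], [n for _, _, n in spec])
    return diam, labels

diameters, labels = sample_grain_diameters()
```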
### Discrete element simulations
The motion of grains in patch \(X\) is followed through DE simulations performed on the open-source package Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [60]. In LAMMPS the grains are taken to be rigid and cohesionless spheres of the same density \(\rho=3200\) kg-m\({}^{-3}\) corresponding to that of typical LL chondrite, which is believed to be the composition of the regolith on Itokawa [17]. The collisions between grains, or between grains and the walls of the simulation box are inelastic and are described through interaction models in the normal and tangential directions. These models consist of a non-linear elastic spring placed in parallel to a linear viscous damper, together connected in series with a Coulomb frictional slider; here we suppress the slider in the normal direction.
The parameters of the elements in the interaction model are tuned to match the overall grain properties as follows. From Hertzian contact mechanics, the stiffnesses of the normal and tangential springs are obtained as, respectively, \(k_{n}=5\times 10^{4}\)kg-s\({}^{-2}\) and \(k_{t}=2k_{n}/7\)[25]. The viscosity of the damper in the normal direction \(\gamma_{n}\) is tuned by the normal coefficient of restitution \(e\), which is set to 0.5, yielding \(\gamma_{n}=493\)kg-s\({}^{-1}\)[61], while the viscosity in the tangential direction is set to \(\gamma_{t}=\gamma_{n}/2\), which is the default value adopted in LAMMPS. We note that \(\gamma_{n}\) and \(\gamma_{t}\) are obtained using a linear spring-dashpot interaction model, but it provides a reasonable first estimate to calibrate our simulations. The angle of friction of the Coulomb slider is taken to be \(\varPhi=35^{\circ}\), which corresponds to the typical angle of repose reported for grains on the asteroid Itokawa [1]. Lattice formation is prevented as we allow grain sizes to vary by 10% about a mean diameter, as noted above.
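The quoted contact parameters and the stated relations between them can be collected as below; converting the friction angle to a slider friction coefficient via \(\tan\varPhi\) is the usual convention and is an assumption of this sketch rather than a statement from the text.

```python
import math

# Contact-model parameters quoted in the text (units as stated there):
contact_params = {
    "k_n": 5.0e4,       # normal spring stiffness, kg s^-2
    "e": 0.5,           # normal coefficient of restitution used to tune gamma_n
    "gamma_n": 493.0,   # normal viscous damping, kg s^-1
    "phi_deg": 35.0,    # Coulomb friction angle, degrees
    "rho": 3200.0,      # grain density, kg m^-3 (LL chondrite)
}
contact_params["k_t"] = 2.0 * contact_params["k_n"] / 7.0       # k_t = 2 k_n / 7
contact_params["gamma_t"] = contact_params["gamma_n"] / 2.0     # gamma_t = gamma_n / 2
# Assumed conversion (usual convention): slider friction coefficient mu = tan(Phi).
contact_params["mu"] = math.tan(math.radians(contact_params["phi_deg"]))
```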
As mentioned at the section's beginning, we will simulate the dynamics of the grains occupying patch \(X\) which, as shown in Fig. 9, is modeled as a cuboidal 'simulation box' of dimension \(20\)m \(\times\) \(20\)m \(\times\) \(50\)m. Gravity \(g_{A}\approx 8.5\times 10^{-5}\) m-s\({}^{-2}\) acts along the negative \(z\)-direction. The bottom face in the \(z\)-direction is a fixed and impenetrable wall and represents either confined bulk grains or the bedrock on top of which the regolith lies. The top of the simulation box in the \(z\)-direction is open and unbounded. Finally, we impose periodic boundary conditions (PBCs) in the lateral \(x\) and \(y\) directions. The choice is prompted by both the gravity being nearly normal to the surface in the vicinity of patch \(X\) and our restricting seismic activity to cause shaking along \(z\) alone. Thus, patches abutting patch \(X\) may be assumed to respond similarly, at least to the leading order of approximation. Employing PBCs also compensates for the loss of grains that are launched from patch \(X\) to neighboring patches due to impacts, by replacing them with grains that are gained from neighboring patches through a similar process. Assumptions about patch \(X\) will have to be revisited at those locations of an asteroid that experience gravity and/or seismic shaking that are not normal to the local topography, and this will be the focus of subsequent work.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Type & Mean & Number Volume \% \\ & diameter (m) & & \\ \hline Pebbles / S fines & 0.25 & 50000 & 95.37 \\ Cobbles / M boulders & 0.75 & 50 & 2.57 \\ Boulders / L boulders & 1.5 & 5 & 2.06 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Grain distribution at patch \(X\) in simulations.
To prepare the initial conditions for the simulation, we randomly generate grains inside the simulation box as per the grain size distribution in table 3. These are then allowed to settle under gravity, which is temporarily set equal to the Earth's value for faster deposition. The settling time is kept long enough to allow the grains to attain equilibrium. The entire simulation box is then spun about its \(y\)-axis and then allowed to resettle to ensure a random initial configuration and to prevent any lingering artifacts from the grain generation process in LAMMPS.
We now investigate the evolution of this mixed collection of grains at patch \(X\) as the asteroid is subjected to collisions over its lifetime. To simulate this the base of the simulation box is made to shake sinusoidally after each impact in a sample collisional history. The walls of the simulation box are smooth and act only to confine the grains laterally. As already mentioned, we take seismic shaking to be normal to the topography, which is consistent with our restriction of seismic energy transfer to patch \(X\) through only compressional \(P\)-waves. This is a first step, as actual seismic shaking may also involve motion parallel to the surface, which will be incorporated in future work.
To simulate seismic shaking at patch \(X\) in the aftermath of the \(i^{\text{th}}\) impact we need to provide the associated shaking's duration, frequency and amplitude. The duration is set by the fall-off time \(t_{\text{f},i}\), defined by (11) in terms of the \(i^{\text{th}}\) impact's peak seismic energy \(E_{\max,i}\) received at patch \(X\). Figure 8(b) indicates that the longest fall-off time is about 120s. Then, as discussed at the end of Sec. 2(b), we set \(f_{i}=f_{0}\), the frequency of the first radial mode of vibration estimated in (13). Finally, the shaking amplitude is found as follows. The ground shaking due to the \(i^{\text{th}}\) impact is characterized through the non-dimensional surface acceleration \(\Gamma_{i}\) observed at the patch's center \(X\); cf. (4)-(6). Employing \(\Gamma_{i}\) as \(\Gamma\) in (6) and setting \(\omega_{i}=2\pi f_{i}\) we finally compute the amplitude \(A_{i}\) for the \(i^{\text{th}}\) impact.
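This assignment can be sketched as follows. It assumes the non-dimensionalization \(\Gamma=A\omega^{2}/g_{A}\), i.e. the peak ground acceleration scaled by the local gravitational acceleration, which reproduces the \(A_{E}/d\) values listed in table 4.

```python
import numpy as np

G_A = 8.5e-5   # gravitational acceleration at patch X, m s^-2
F_0 = 0.0197   # frequency of the first radial vibration mode, Hz (table 4)

def shaking_parameters(gamma_i, t_f_i, f_i=F_0, g_a=G_A):
    """Duration, frequency and amplitude of the base shaking for one impact,
    assuming Gamma = A * omega^2 / g_a with omega = 2 * pi * f."""
    omega = 2.0 * np.pi * f_i
    return {"duration_s": t_f_i,
            "frequency_hz": f_i,
            "amplitude_m": gamma_i * g_a / omega**2}

# e.g. Gamma_i = 101.35 yields an amplitude of about 0.56 m, i.e. roughly 2.25
# small-grain diameters, consistent with the Gamma_3 row of table 4:
params = shaking_parameters(gamma_i=101.35, t_f_i=120.0)
```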
The effect of any impact on the regolith at patch \(X\) may now be found by simulating in LAMMPS the shaking of the granular mixture kept in the simulation box - described above and shown in Fig. 9 - at the impact's associated amplitude and frequency, and over the corresponding fall-off time. Repeating this process for _every_ impact in _any_ sample collisional history of the asteroid then allows us to investigate impact-driven segregation at patch \(X\) over the asteroid's lifetime. Figure 10 summarizes the entire modeling process in a flowchart.
As mentioned, post-impact shaking at patch \(X\) may last at most 120s. There is, therefore, a long period of time between successive impacts during which the grains in patch \(X\) remain static. It is possible that grains during such periods may experience creeping flow [62] or be subjected to perturbations due to internal or external events, like degassing, quakes, or radiation torques. We do not take into account these possibilities here. This permits us to collapse the impact-driven grain evolution at patch \(X\) into a concatenated sequence of shaking events, whose chronology is set by the sample collisional history under consideration. Thus, shaking at patch \(X\) due to the \((i+1)^{\text{th}}\) impact is initiated as soon as the simulation associated with the \(i^{\text{th}}\) impact terminates. Figure 11(a) displays a schematic of how a collisional history would ideally be simulated; we discuss the figure further below.
### Energy equivalent simulations
Because of the large number of impacts (\(\sim\!150,000\)), simulating even one collisional history requires excessive computational time. We overcome this in three ways. In the first, we perform one _energy-equivalent_ (EE) simulation for each collisional history. The equivalence is established on the basis of the _total_ seismic energy received at patch \(X\) from all impacts in a sample collisional history of the asteroid. The seismic energy delivered to the center of patch \(X\) by an impact is the area under the associated seismic energy curve, such as the one in Fig. 2(b). The idea of EE simulations is further explained in Fig. 11. Figure 11(a) represents an ideal simulation in which each collision - small or large - is resolved. Collisions with bin 1 (small) and 2 (large) impactors are colored blue and red, respectively. Figure 11(b) is an EE simulation in which energy is delivered to patch \(X\) through an equivalent sinusoidal shaking with a _fixed_ amplitude \(A_{E}\) and a _constant_ frequency \(f_{E}=f=f_{0}\) for a given length of time \(\Delta t_{E}\). The energy transfer is thus represented as a straight line in Fig. 11(b), the area under which equals the total seismic energy received in Fig. 11(a). Sections of the area in Fig. 11(b) are colored differently to identify energy received from small and large impacts separately, with areas equal to the total areas under the corresponding curves in Fig. 11(a).
Figure 10: Modeling flowchart for segregation from impact-induced seismic shaking at patch \(X\) on a rubble asteroid.
To implement an EE simulation we require the duration \(\Delta t_{E}\) and amplitude \(A_{E}\) of the shaking, which we now estimate. The energy transfer rate per unit mass - i.e. the specific power - from the equivalent sinusoidal motion is
\[\zeta_{E}=4\pi^{2}A_{E}^{2}f_{E}^{3}, \tag{4.1a}\]
so that we must have
\[\zeta_{E}\Delta t_{E}=\sum_{i=1}^{N}\int_{0}^{t_{f,i}}\varepsilon_{s,i}\,\mathrm{d}t, \tag{4.1b}\]
where \(\varepsilon_{s,i}\) represents the seismic energy density at patch \(X\) following the \(i^{\text{th}}\) impact with fall-off time \(t_{f,i}\). Now, recall that the frequency \(f_{E}\) is fixed at that of the body's first mode of vibration, i.e. \(f_{0}\). We may then obtain \(A_{E}\) for any collisional history in terms of an associated equivalent scaled surface acceleration \(\varGamma_{\text{eq}}\) by utilizing definition (2.6); we discuss below how \(\varGamma_{\text{eq}}\) is estimated. Finally, we utilize (4.1a) to find the specific power \(\zeta_{E}\), and then (4.1b) to compute \(\Delta t_{E}\) from a given collisional history. All past work on segregation on asteroids due to shaking utilizes, in effect, EE simulations [23, 25, 63], although no effort is made to relate the shaking parameters to a geophysical process such as impacting, as done here.
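Given an equivalent amplitude \(A_{E}\) (obtained from \(\varGamma_{\text{eq}}\) as in the per-impact sketch above) and the total specific seismic energy received at patch \(X\), the duration \(\Delta t_{E}\) follows directly from (4.1a) and (4.1b).

```python
import numpy as np

SECONDS_PER_YEAR = 3.156e7

def ee_duration(a_e, total_specific_energy, f_e=0.0197):
    """Duration Delta t_E (in years) of an energy-equivalent shaking.

    a_e                   : equivalent shaking amplitude A_E in metres
    total_specific_energy : right-hand side of (4.1b), i.e. the time-integrated
                            seismic energy per unit mass received at patch X
    """
    zeta_e = 4.0 * np.pi**2 * a_e**2 * f_e**3              # specific power, (4.1a)
    return total_specific_energy / zeta_e / SECONDS_PER_YEAR   # (4.1b)
```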
We now return to estimating the equivalent surface acceleration \(\varGamma_{\text{eq}}\) for a collisional history, which was required in the preceding paragraph. Table 4 lists some choices of \(\varGamma_{\text{eq}}\) and the corresponding \(\Delta t_{E}\) and \(A_{E}\). The first choice in table 4 is the average of the peaks of the scaled surface accelerations \(\varGamma_{i}\) experienced at patch \(X\) due to all the impacts in a collisional history, i.e. \(\varGamma_{1}=\sum_{i=1}^{N}\varGamma_{i}/N\), where \(N\) is the total number of impacts. The second choice for \(\varGamma_{\text{eq}}\) in table 4 is \(\varGamma_{2}=\left(\max_{i}\varGamma_{i}+\min_{i}\varGamma_{i}\right)/2\), where \(i\) ranges over all impacts in the collisional history. The third choice \(\varGamma_{3}\) in table 4 is the average over ten collisional histories of the modal values of the peak accelerations \(\varGamma_{i}\) of each history. For reasons discussed in Sec. 5(a) we will set \(\varGamma_{\text{eq}}=\varGamma_{3}\), for which the amplitude \(A_{E}\) is about twice the smallest grain diameter.
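The three candidates can be computed from sampled peak accelerations as sketched below; extracting the modal value for \(\varGamma_{3}\) by histogram binning, and the bin width used, are assumptions of this sketch.

```python
import numpy as np

def gamma_eq_candidates(histories, bin_width=1.0):
    """Gamma_1, Gamma_2 (from the first history) and Gamma_3 (averaged over histories).

    histories : list of 1-D arrays, one per collisional history, holding the peak
                scaled surface accelerations Gamma_i of every impact."""
    g = np.asarray(histories[0], dtype=float)
    gamma_1 = g.mean()                               # average of the peaks
    gamma_2 = 0.5 * (g.max() + g.min())              # mid-range of the peaks
    modes = []
    for h in histories:                              # modal peak of each history
        h = np.asarray(h, dtype=float)
        nbins = max(int(np.ptp(h) / bin_width), 1)
        counts, edges = np.histogram(h, bins=nbins)
        i = int(np.argmax(counts))
        modes.append(0.5 * (edges[i] + edges[i + 1]))
    gamma_3 = float(np.mean(modes))
    return gamma_1, gamma_2, gamma_3
```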
Unfortunately, the influence of the stochasticity of the collisional history is lost when we perform an EE simulation. Furthermore, for any sample collisional history, only about 15% of the total seismic energy available is contributed by small impacts with objects from bin 1, despite their much greater numbers; cf. table 2. Figures 8(a) and 8(b) show that both the peak value of the surface acceleration and the time for which the shaking is sustained are
Figure 11: (a) Schematic showing the sequence of seismic energy curves representing energy transfer to the center of patch \(X\) in a collision history, with blue and red corresponding to small (bin 1) and large (bin 2) impacts, respectively. (b) _EE simulations_: energy transfer from both large and small impacts are smeared out. (c) _SEE simulations_: energy transfer from large impacts is modeled individually, while that from small impacts is smeared out and superimposed. (d) _LI simulations_: Same as SEE, but with small impacts ignored. See the main text for more details. The figure is not to scale.
higher for large impacts. This points to the need to incorporate the role of large impacts more carefully in simulations. To correct to an extent for the loss of stochasticity in EE simulations, and to better model large impacts, we pursue two other types of simulations, viz. _semi-energy-equivalent_ (SEE) and _large-impact_ (LI) simulations.
In SEE simulations we approximate the energy received by the small impacts, i.e. those from bin 1 impactors in table 2, in terms of an energy-equivalent sinusoidal shaking. However, each large impact from objects in bin 2 in table 2 is incorporated individually. The energy-equivalent simulations for the small impacts run in the background as shown in Fig. 11(c): a gentle continuous tapping providing energy equal to all small impacts in a collisional history - indicated by the blue rectangular box - is superimposed on seismic shaking from each large impact in that history. Although computationally more expensive, SEE simulations will help us understand the role and importance of the randomness of the impacts. Further, we will better appreciate the significance of fewer but larger impacts to segregation on an asteroid's surface relative to the greater number of small impacts.
Finally, in LI simulations we ignore small impacts altogether but consider each large impact in any collisional history individually.
## 5 Results and discussion
We now present the results of our discrete element (DE) simulations that investigate segregation at patch \(X\) due to impact-driven seismic shaking over the course of the past 10 Myr of a rubble NEO like Itokawa. Specifically, we consider the triaxial ellipsoid in Fig. 3a with dimensions 535m \(\times\) 294m \(\times\) 209m with patch \(X\) located as shown. For this, we bring together various aspects of the process described in the preceding sections and displayed in the flowchart in Fig. 10, viz. a stochastic collisional history, impact modeling on the asteroid, seismic energy transmission to patch \(X\), grain distribution at patch \(X\) and, finally, simulation strategy and parameters. The degree of segregation is quantified by counting the number of medium (M) and large (L) boulders that rise to the top relative to their total numbers. Complete segregation is said to have occurred when all M and L boulders come to the top of the mixture.
### Effect of small and large impacts
Consider first the energy-equivalent (EE) simulations. The choice of the equivalent surface acceleration \(\Gamma_{\text{eq}}\) in EE simulations plays a crucial role in determining whether or not segregation occurs. For example, when we set \(\Gamma_{\text{eq}}=\Gamma_{1}\) given in table 4, the shaking amplitude comes out to be much lower than the smallest grain diameter \(d\), as there are far more small impacts than large ones - cf. table 2 and Fig. 8a. On the other hand, when we set \(\Gamma_{\text{eq}}=\Gamma_{2}\), the shaking is too violent and grains fly away from the simulation box. However, for \(\Gamma_{\text{eq}}=\Gamma_{3}\), grains are displaced by an amount of the order of or greater than \(d\), and we observe that big grains rise up, imitating the Brazil-nut effect (BNE), similar to the images in Fig. 12a. This agrees with the findings of [25] that the BNE only occurs when the amplitude of vibration is at least of the order of \(d\) and confirms the sensitivity of segregation to the amplitude of seismic shaking.
We next perform semi-energy-equivalent (SEE) and large-impact (LI) simulations, and the final results of these for a sample collisional history are shown in Fig. 12a. As mentioned in table 3, initially there are 50 medium and 5 large boulders in the mixture. At the end of the seismic shaking process, the bar graph in Fig. 12b reports that both M and L boulders predominantly populate the upper layers (UL) at patch \(X\); the UL is taken to span the first 3m from the surface, i.e. about twice the large particle diameter. The nearly identical outcomes of SEE and LI simulations confirm that large impacts are the primary drivers of segregation. This is expected because, as discussed above, large impacts deliver most of the seismic energy to patch \(X\) and result in higher amplitudes of ground vibration that last longer as the fall-off time \(t_{\text{f}}\) is greater. Furthermore, the minor difference between the outputs of SEE and LI simulations suggests that the effect of small impacts on the segregation process is minimal and may be ignored, so we may limit ourselves to LI simulations. Nevertheless, we will here continue with SEE simulations to model impact-induced seismic shaking and grain segregation processes.
\begin{table}
\begin{tabular}{c c c c} \hline \(\Gamma_{\text{eq}}\) & \(\Delta t_{E}\) (years) & \(f_{E}=f_{0}\) (Hz) & \(A_{E}/d\) \\ \hline \(\Gamma_{1}=4.20\) & \(1.70\times 10^{4}\) & 0.0197 & 0.09 \\ \(\Gamma_{2}=330.02\) & 2.77 & 0.0197 & 7.32 \\ \(\Gamma_{3}=101.35\) & 26.15 & 0.0197 & 2.25 \\ \hline \end{tabular}
\end{table}
Table 4: Various choices of \(\Gamma_{\text{eq}}\) for EE simulations and the corresponding shaking time \(\Delta t_{E}\), frequency \(f_{E}\) and amplitude \(A_{E}\) in terms of small grain diameter.
### Effect of location of patch \(X\)
The location on the asteroid of patch \(X\) will affect the segregation there. This is because, over the asteroid's lifetime, the spatial distribution of impacts relative to patch \(X\) depends upon patch \(X\)'s location, and this, in turn, will modify the seismic energy received at \(X\). To investigate this we compute the total seismic energy received at patch \(X\) due to the _same_ collisional history for three different locations of patch \(X\) on the ellipsoidal asteroid in Fig. 2(a). These locations correspond to the extremes of the three principal axes, so that patch \(X\)'s center coincides with \(X_{1}:(535/2,0,0)\), \(X_{2}:(0,294/2,0)\) and \(X_{3}:(0,0,209/2)\).
As shown in table 5, for a given collisional history, the seismic energy received is greatest when the patch is more centrally located, i.e. at the end of the shortest principal axis (\(X_{3}\)), while the patch at the end of the longest principal axis (\(X_{1}\)) is delivered the least seismic energy. This is expected because seismic waves have to, on average, travel a smaller distance to reach \(X_{3}\) and, consequently, dissipate less. We may quantify the travel extent as the weighted average distance in a collisional history of the large impactors from a patch, which we define by \(d_{I}=\sum\Delta x_{i}D_{i}/\sum D_{i}\), where \(D_{i}\) is the diameter of the impactor that strikes at a distance \(\Delta x_{i}\) from the patch. We find that for \(X_{1}\), \(d_{I}=298\)m; at \(X_{2}\), \(d_{I}=218\)m; and for \(X_{3}\), \(d_{I}=190\)m. Correspondingly, the seismic energy received at central locations on the rubble asteroid due to impacts is higher than in peripheral areas. Consequently, in SEE simulations conducted for the same large impact history, and employing the _same_ equivalent scaled surface acceleration \(\Gamma_{3}\) for the superimposed small impacts, we observe that greater numbers of M and L boulders rise to the top when the patch is at the end of the shorter principal axis, i.e. at \(X_{3}\), as shown in Fig. 13.
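For reference, \(d_{I}\) is evaluated as:

```python
import numpy as np

def weighted_impact_distance(distances, diameters):
    """d_I = sum(dx_i * D_i) / sum(D_i) over the large impactors of one history."""
    distances = np.asarray(distances, dtype=float)
    diameters = np.asarray(diameters, dtype=float)
    return float(np.sum(distances * diameters) / np.sum(diameters))
```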
The above result has important implications for elongated asteroids, such as Itokawa, where we would then expect higher segregation at central locations than at the far ends. However, for more spherical asteroids like Bennu and Ryugu, we predict similar texturing everywhere over the surface.
### Effect of initial conditions and collisional history
We first investigate whether the final outcome is affected by the initial state of the grains in patch \(X\). For this, we carry out five SEE simulations for the _same_ collisional history, but with different initial mixture configurations. We find that in all these simulations, although the initial normalized vertical location of the center-of-mass (z-COM) for the M boulders differs across configurations, as shown in Fig. 14(a), the M and L boulders rise to the UL when seismically shaken; cf. Fig. 14(b). This confirms the repeatability of segregation and its independence from variability in initial conditions.
Figure 12: The final configuration at the end of impact-driven shaking for a sample collision history. Side view of the simulation box for (a) SEE simulations and LI simulations. Only medium (blue) and large (yellow) boulders are shown. (b) The distribution of medium and large boulders present in the upper layer in SEE (unhatched) and LI (hatched) simulations.
We next study how segregation in patch \(X\) is affected by stochasticity in the large impacts through SEE simulations. Figure 15(a) reports the outcome from ten different collision histories for the _same_ initial state of grains. From these simulations, we observe that the temporal evolution of the z-COM of the M boulders in Fig. 15(a) depends upon the history of large impacts. However, as reported in Fig. 15(b), the final segregated state does not change much: the UL is dominated by M and L boulders, although a few histories are segregated to a greater (histories 3 and 8) or lesser (history 2) extent. This result shows that grain segregation depends primarily upon the total seismic energy delivered by the large impacts, and is largely independent of their collisional history. Having said that, because the evolution of the boulders does vary with the details of the collisional history, it is possible that in the presence of other processes sensitive to surface features, such as YORP, the outcome could depend more significantly on stochasticity in the collisional history. Furthermore, robustness to the choice of distribution functions that enter into the stochastic description has yet to be confirmed.
Figure 14: Outcomes of five simulations with different initial grain configurations, but the same collisional history. (a) Temporal evolution of the vertical location of the center-of-mass (z-COM) of M boulders normalized by regolith depth, with \(T_{s}\) being the total simulation time. (b) The final number of M (blue) and L (yellow) boulders in the UL.
\begin{table}
\begin{tabular}{c c c} \hline & \(\int_{i}\epsilon_{s}\mathrm{d}t\) (J-s-kg\({}^{-1}\)) & \\ \hline \(X_{1}\) & \(X_{2}\) & \(X_{3}\) \\ \hline Bin 1 & \(5.31\times 10^{5}\) & \(6.13\times 10^{5}\) & \(6.35\times 10^{5}\) \\ Bin 2 & \(3.38\times 10^{6}\) & \(3.87\times 10^{6}\) & \(3.98\times 10^{6}\) \\ \hline \end{tabular}
\end{table}
Table 5: The seismic energy received at three different locations of patch \(X\) for the same collisional history.
### Effect of asteroid properties
We now perform SEE simulations to understand the effect of bulk properties of an asteroid on seismic energy propagation and, consequently, on segregation at patch \(X\). For this, we compare the present results with that of another asteroid, which is exactly the same in all respects, except that it has a seismic diffusivity \(K_{s}=0.5\) km\({}^{2}\) s\({}^{-1}\), equal to that of Eros [29]. This higher value of \(K_{s}\) corresponds to an interior that imitates a fractured monolith, rather than the much more dissipative rubble pile that we have focussed upon so far. When varying \(K_{s}\) we also need to modify \(v_{p}\) accordingly. The relationship between \(K_{s}\) and \(v_{p}\) for a medium that is not completely monolithic is presently not known so we rely on the analysis in Sec. 2(a) to estimate \(v_{p}\). In Sec. 2(a) we found that \(v_{p}\) is about 50% of the value obtained from (2.7); this then yields \(v_{p}=2.5\) km/s for \(K_{s}=0.5\) km\({}^{2}\) s\({}^{-1}\). This should be compared with \(v_{p}=8\) m-s\({}^{-1}\) computed for \(K_{s}=0.0002\) km\({}^{2}\) s\({}^{-1}\) in Sec. 2(a). When \(K_{s}\) is greater a larger amount of seismic energy is received at patch \(X\), and we expect that this significantly reduces the time taken for segregation, as well as improves its quality. This indeed is what we observe in Fig. 16 that compares the number of boulders in the UL.
Figure 16: The number of M (left, blue) and L (right, yellow) boulders in the UL of patch \(X\) at different simulation time-steps \(T_{s}\) for \(K_{s}=0.002\) km\({}^{2}\) s\({}^{-1}\) (dark shade) and \(K_{s}=0.5\) km\({}^{2}\) s\({}^{-1}\) (light shade).
Figure 15: Outcomes of ten simulations with different collisional histories, but same initial grain configurations. (a) Temporal evolution of the vertical location of the center-of-mass (z-COM) of M boulders normalized by regolith depth, with \(T_{s}\) being the total simulation time. (b) The final number of M (blue) and L (yellow) boulders in the UL.
Nearly all the M and L boulders occupy the UL when the interior is the less dissipative fractured monolith, whereas their population in the UL is reduced when the interior is rubble.
To further investigate how the asteroid's interior affects segregation, we performed SEE simulations with asteroid interiors that range from fractured monolith to granular. We characterize these two extremes by different P-wave velocities: \(v_{p}\!=\!1\)km-s\({}^{-1}\) for a fractured monolith and \(v_{p}\!=\!10\)m-s\({}^{-1}\) for a rubble interior. Additionally, we investigate an asteroid with an interior whose \(v_{p}\!=\!100\)m-s\({}^{-1}\), an intermediate value. The associated seismic frequencies estimated in Sec. 2 will also be different for the three cases. Other seismic parameters, the asteroid's size, its collisional history, and initial conditions are kept the same. For all three types of asteroids, Fig. 17a reports that most of the M and L boulders lie in the UL at the end of the collision-driven evolution. Further, the degree of segregation in a fractured-monolith asteroid is greater than in a rubble-pile asteroid. However, little difference is observed between asteroids with fractured-monolith and intermediate interiors. This is related to the fact that our asteroid was taken to be similar in size to Itokawa. Consequently, seismic energy from even the furthest impact was able to reach patch \(X\), notwithstanding the difference between a fractured-monolithic and an intermediate interior. This may not happen for a larger asteroid, in which the difference between these two interiors may be starker. This aspect leads us to consider briefly the effect of the asteroid's size on segregation.
Increasing the asteroid's size would require us to recalculate the number of potential impactors, their sizes, and the mean collisional lifetime, and then repeat the entire methodology outlined in this work. For example, following Sec. 3 we may estimate the minimum (\(D_{i,\min}\)) and maximum (\(D_{i,\max}\)) permissible impactor diameters as a function of the asteroid's size. As Fig. 17b reports, both \(D_{i,\min}\) and \(D_{i,\max}\) rise with the size of the asteroid. This, in turn, will affect the number of impacts that the asteroid will experience over its lifetime: while a larger asteroid will attract more impacts, the number of consequential impacts - i.e. those that are large enough to initiate seismic shaking - will decrease. Table 6 estimates these numbers for some asteroids. We find that there is a large variability in the number of consequential impacts. This, and the discussion in subsection 2 above, suggests that regolith motion driven by impact-induced seismic activity may need to be investigated with greater attention to the geometric details of an asteroid. These issues will be investigated in a future work utilizing the broad framework presented here.
## 6 Conclusion
In this work, we have presented a framework to model segregation on the surfaces of rubble asteroids over their lifetime due to impact-induced seismic shaking. For clarity of presentation, the model was developed and investigated in the context of an ellipsoidal rubble asteroid with the size and spin of Itokawa. However, the framework is easily adapted to any other asteroid.
Figure 17: (a) The number of medium (M, blue) and large (L, yellow) boulders in the UL for asteroids with interiors characterized by different \(P\)-wave velocities. (b) Variation of \(D_{i,\min}\) and \(D_{i,\max}\) for different target diameters.
For this, we first estimated the seismic properties relevant to rubble bodies, modeled the manner in which seismic energy spreads through their interiors following an impact, and, finally, related the seismic energy received at a point to the surface vibrations there. Thus, for any given impact on a rubble body, we were able to estimate the amplitude and frequency of surface vibrations induced at any other location on that body.
To follow the regolith's dynamical evolution over the lifetime of a near-Earth asteroid (NEA), we need to know the number, frequency, size, and location of impacts that the NEA experiences. This is done stochastically. To this end, we first created several possible collisional histories by estimating the number of impacts that an NEA such as Itokawa may undergo over \(10^{6}\) years and the size distribution of these impactors. Impacts could then take place at randomly distributed time intervals and surface locations.
Finally, we set up discrete element simulations to numerically investigate the response of regolith on a rubble body to impact-induced seismic vibrations during its lifetime. In these simulations, we vertically vibrated a bin filled with a mixture of different sized grains every time an impact took place in any sample collisional history. The results were then averaged over several collisional histories. The bin was located at the region of interest on the rubble body. The frequency and amplitude of the vibrations were related to the impact's magnitude and location, as mentioned above. The grain size distribution was taken as close as possible to the reported distribution for Itokawa at the bin's location.
We then employed our framework to investigate segregation on our rubble ellipsoidal asteroid. We also probed the effect of various asteroidal properties on the segregation process. We found that seismic activity due to impacts is sufficient to drive size segregation on small rubble asteroids. Medium and large boulders always rise to the top of the regolith. The end outcome is largely dependent upon the total energy supplied by large impact events in a collisional history, and not on the manner in which the impactors' size, frequency, and location were distributed. The degree of segregation, however, does vary with location on the asteroid. Most segregation is observed at centrally located regions and least at the most remote zones, i.e. segregation is best at the ends of the smallest principal axis of our ellipsoidal asteroid and worst at the ends of its longest axis. The quality of segregation is also affected by the wave speed and seismic diffusivity of the asteroid's interior, with low values of either leading to a smaller fraction of medium and large boulders rising to the surface. These observations were explained on the basis of the travel times of the seismic waves and their dissipation in the asteroid's bulk.
Our model presents a systematic way to model seismic shaking on rubble asteroids and relate it to seismic activity and surface texturing. Such a framework may be utilized to explain the presence of large boulders on the surface of asteroids like Itokawa. Other asteroids, such as Bennu and Ryugu, may also be modeled with minor changes in the framework to account for their different geometry, rotation, and other parameters. Having said that, smooth regions like Muses C on Itokawa will require further investigation. Were only size segregation processes of the kind studied here active, then such regions cannot exist. We believe that explaining such observations requires attention to the surface topography and the local gravity field, which need to be included within the present framework. This is the focus of ongoing work.
The data that support the findings of this study are available from the corresponding author.
We declare we have no competing interests.
Funding. S.G. would like to acknowledge the financial support during his M. Tech. from the Ministry for Education, Govt. of India, when this work was done.
The authors would like to thank the high-performance computing facility at the Indian Institute of Technology, Kanpur.
|
2309.07315 | Traveling Words: A Geometric Interpretation of Transformers | Transformers have significantly advanced the field of natural language
processing, but comprehending their internal mechanisms remains a challenge. In
this paper, we introduce a novel geometric perspective that elucidates the
inner mechanisms of transformer operations. Our primary contribution is
illustrating how layer normalization confines the latent features to a
hyper-sphere, subsequently enabling attention to mold the semantic
representation of words on this surface. This geometric viewpoint seamlessly
connects established properties such as iterative refinement and contextual
embeddings. We validate our insights by probing a pre-trained 124M parameter
GPT-2 model. Our findings reveal clear query-key attention patterns in early
layers and build upon prior observations regarding the subject-specific nature
of attention heads at deeper layers. Harnessing these geometric insights, we
present an intuitive understanding of transformers, depicting them as processes
that model the trajectory of word particles along the hyper-sphere. | Raul Molina | 2023-09-13T21:01:03Z | http://arxiv.org/abs/2309.07315v2 | # Traveling Words: A Geometric Interpretation of Transformers
## 1 Abstract
Transformers have significantly advanced the field of natural language processing, but comprehending their internal mechanisms remains a challenge. In this paper, we introduce a novel geometric perspective that elucidates the inner mechanisms of transformer operations. Our primary contribution is illustrating how layer normalization confines the latent features to a hyper-sphere, subsequently enabling attention to mold the semantic representation of words on this surface. This geometric viewpoint seamlessly connects established properties such as iterative refinement and contextual embeddings. We validate our insights by probing a pre-trained 124M parameter GPT-2 model. Our findings reveal clear query-key attention patterns in early layers and build upon prior observations regarding the subject-specific nature of attention heads at deeper layers. Harnessing these geometric insights, we present an intuitive understanding of transformers, depicting them as processes that model the trajectory of word particles along the hyper-sphere.
## 2 Introduction
The transformer architecture (Vaswani et al., 2017) has sparked a significant shift in Artificial Intelligence (AI). It is the central component behind some of the most advanced conversational AI systems (Brown et al., 2020, Thoppilan et al., 2022, Bai et al., 2022), and has been established as state-of-the-art for Natural Language Processing (NLP), Computer Vision (CV) and Robotics applications, and many others (OpenAI, 2023, Google, 2023, Chen et al., 2023, Zong et al., 2022, Driess et al., 2023).
Recent work on the interpretability of the transformer architecture has focused on analyzing weights in relation to the word embedding space used in its input and output layers Dar et al. (2022), Elhage et al. (2021), Geva et al. (2022), Brody et al. (2023), Windsor (2022), Millidge and Black (2022). Elhage et al.
[2021] introduces "Transformer Circuits", a theoretical framework that decomposes the transformer computation into two main components: a residual stream that carries information from input to output layers and attention/feed-forward updates that modify the information flowing in the residual stream. A key development from their work is the grouping of the \(W_{Q}W_{K}^{T}\) and \(W_{O}W_{V}^{T}\) matrices from the attention mechanism, representing low-rank approximations of virtual matrices \(W_{QK}\) and \(W_{OV}\), respectively. These virtual matrices define interactions between different words in the input sequence \(X\) within a canonical feature space \(E\) given by the word embedding matrix \(W_{E}\). The resulting values from these interactions are used to update the information carried throughout the residual stream. Geva et al. [2022] further decompose the operations within the Transformer, demonstrating that the updates from the feed-forward module can be decomposed into a linear combination of sub-updates given by the weight matrix of the feed-forward module's second layer \(W_{2}\). The matrix \(W_{2}\) also interacts within the canonical space \(E\) and allows the authors to measure the impact of each sub-update on the model's final prediction using the matrix \(W_{E}\) as a probe. Dar et al. [2022] incorporate these ideas to show that it is not only possible to interpret the outcomes of each Transformer operation in relation to the canonical space \(E\) but also the weights themselves, enabling them to do zero-shot model stitching by "translating" between the canonical spaces of different language models. Finally, Millidge and Black [2022] note that analysis on the singular vectors of the \(W_{OV}\) matrices provides better practical results when compared to analysis of its row and column weights.
A complimentary perspective to the line of work on Transformer Circuits comes from the geometric interpretation of layer normalization [Ba et al., 2016] by Brody et al. [2023]. The authors prove that layer normalization is equivalent to projecting features onto the hyperplane defined by the \(\overrightarrow{1}\) vector and then scaling the projection by \(\sqrt{d}\). They show that these properties are crucial for the attention mechanism to either attend to all keys equally or to avoid the problem of having "unselectable" keys (relevant keys within the convex hull of a set of non-relevant keys). Windsor [2022] provides further evidence of the representational power of layer normalization, visualizing the highly non-linear behavior that arises from this operation. The authors demonstrate that, when used as an activation function within a neural network, layer normalization is capable of solving complex classification tasks.
In this work, we connect these ideas under a single interpretation: word particles traveling around the surface of a hyper-sphere, completing a journey that goes from a previous word to the next and transforming their meaning throughout this process. An illustrated summary of this interpretation is given in Figure 1.
## 3 Transformers as a Composition of Geometric Primitives
In this section, we analyze each of the transformer's components from a geometric perspective, leveraging the interpretation of one component to analyze the next. We begin with the layer normalization function, for which we demonstrate that it constrains \(d\)-dimensional input features to lie within the surface of a \((d-1)\) dimensional hyper-sphere. Then we consider the role of the \(W_{QK}\) matrix in terms of geometric transformations on said hyper-sphere, and \(W_{VO}\) as a key-value mapping from the hyper-sphere back to \(\mathbb{R}^{d}\). Additionally, we review the key-value interpretation of the feed-forward module proposed by Geva et al. (2021). Finally, we discuss the role of the embedding matrix \(W_{E}\) on the transformer's output probabilities.
### Layer Normalization
In its original formulation (Ba et al., 2016), layer normalization is introduced in terms of the mean \(\mu\) and standard deviation \(\sigma\) of an input feature vector \(X\in\mathbb{R}^{d}\):
\[\text{LayerNorm}(X)=\frac{X-\mu}{\sigma} \tag{1}\]
Figure 1: Overview of the proposed geometric interpretation of Transformers. In it, the phrase “Traveling Words” is to be completed by a given transformer model. The input token “Traveling ” is embedded as a word particle using an embedding matrix \(W_{E}\) and projected onto a hyper-sphere using layer normalization. Each subsequent layer in the transformer determines the path that the particle will follow along the surface of the hyper-sphere, culminating on the region closest to the next token: “Words”.
Where both the mean and standard deviation are taken along the feature dimension \(d\) such that:
\[\mu =\frac{1}{d}\sum_{i}^{d}x_{i}\] \[\sigma =\sqrt{\frac{1}{d}\sum_{i}^{d}(x_{i}-\mu)^{2}}\]
Brody et al. (2023) note that the numerator in Equation 1 is itself an operation between \(X\) and the vector \(\boldsymbol{\mu}\) defined as:
\[\boldsymbol{\mu}=[\mu,\mu,\ldots,\mu]\in\mathbb{R}^{d}\]
The resulting vector \((X-\boldsymbol{\mu})\) is shown to be orthogonal to the \(\overrightarrow{1}\) vector, and thus layer normalization can be interpreted as a projection of \(X\) onto the hyperplane defined by the normal vector \(\overrightarrow{1}\). Brody et al. (2023) also show that the division by \(\sigma\) acts as a scaling factor that modifies the norm of \((X-\boldsymbol{\mu})\) to be \(\sqrt{d}\):
\[\sigma =\sqrt{\frac{1}{d}\sum_{i}^{d}(x_{i}-\mu)^{2}} \tag{2}\] \[=\frac{1}{\sqrt{d}}\sqrt{\sum_{i}^{d}(x_{i}-\mu)^{2}}\] \[=\frac{1}{\sqrt{d}}||X-\boldsymbol{\mu}||_{2}\]
We note that, if we consider unit-norm vector \(\frac{1}{\sqrt{d}}\overrightarrow{1}\) instead of \(\overrightarrow{1}\), it can be shown that \(\boldsymbol{\mu}\) is the projection of \(X\) onto \(\frac{1}{\sqrt{d}}\overrightarrow{1}\) (explaining why \(X-\boldsymbol{\mu}\) is
orthogonal to \(\overrightarrow{1}\)):
\[\begin{split}\operatorname{proj}(X,\frac{1}{\sqrt{d}}\overrightarrow{1 })&=\frac{1}{||\frac{1}{\sqrt{d}}\overrightarrow{1}||_{2}} \Bigg{(}\mathbf{x}\cdot\frac{1}{\sqrt{d}}\overrightarrow{1}\Bigg{)}\frac{1} {\sqrt{d}}\overrightarrow{1}\\ &=\Big{(}\frac{\mathbf{x}\cdot\overrightarrow{1}}{d}\Big{)} \overrightarrow{1}\\ &=\Big{(}\frac{1}{d}\sum_{i}^{d}x_{i}\Big{)}\overrightarrow{1} \\ &=\mu\overrightarrow{1}\\ &=\boldsymbol{\mu}\end{split} \tag{3}\]
From this result, it is straightforward to calculate the projection of \(X\) onto the hyperplane \(\mathcal{H}\) defined by \(\frac{1}{\sqrt{d}}\overrightarrow{1}\):
\[\begin{split}\operatorname{proj}_{\mathcal{H}}(X)&=X -\operatorname{proj}(X,\frac{1}{\sqrt{d}}\overrightarrow{1})\\ &=X-\boldsymbol{\mu}\end{split} \tag{4}\]
Finally, we can use the results from Equation 2 and Equation 4 to reformulate layer normalization in geometric terms:
\[\begin{split}\operatorname{LayerNorm}(X)&=\frac{X- \mu}{\sigma}\\ &=\frac{\operatorname{proj}_{\mathcal{H}}(X)}{\frac{1}{\sqrt{d}}|| \operatorname{proj}_{\mathcal{H}}(X)||_{2}}\\ &=\sqrt{d}\ \frac{\operatorname{proj}_{\mathcal{H}}(X)}{|| \operatorname{proj}_{\mathcal{H}}(X)||_{2}}\end{split} \tag{5}\]
Intuitively, layer normalization projects a vector \(X\in\mathbb{R}^{d}\) to the hyperplane \(\mathcal{H}\) perpendicular to \(\overrightarrow{1}\in\mathbb{R}^{d}\), and normalizes the projection such that it lies on the surface of a \(d-1\) dimensional hyper-sphere of radius \(\sqrt{d}\). A visualization of this process for \(d=3\) is shown in Figure 2. In practice, layer normalization includes two additional parameters: a scaling factor \(\gamma\) and a bias term \(\beta\). The parameter \(\gamma\) scales each coordinate axis of \(\mathbb{R}^{d}\) independently, transforming the hyper-sphere into a hyper-ellipsoid, and the bias term \(\beta\) shifts the center of said ellipsoid away from the origin. A 2D representation of the entire process is shown in Figure 3.
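The equivalence between Equation 1 and the geometric form in Equation 5 can be checked numerically; the following NumPy sketch (an illustration, not part of the original derivation) also applies the optional \(\gamma\) and \(\beta\) parameters.

```python
import numpy as np

def layer_norm_geometric(x, gamma=None, beta=None):
    """LayerNorm as in Equation 5: project x onto the hyperplane orthogonal to the
    all-ones vector, rescale the projection to radius sqrt(d), then apply the
    optional element-wise scale gamma and bias beta."""
    d = x.shape[-1]
    proj = x - x.mean(axis=-1, keepdims=True)            # projection onto H
    out = np.sqrt(d) * proj / np.linalg.norm(proj, axis=-1, keepdims=True)
    if gamma is not None:
        out = out * gamma
    if beta is not None:
        out = out + beta
    return out

# Sanity checks against the mean/std formulation of Equation 1:
x = np.random.randn(4, 8)
standard = (x - x.mean(-1, keepdims=True)) / x.std(-1, keepdims=True)
assert np.allclose(layer_norm_geometric(x), standard)
# Every normalized row lies on the hyper-sphere of radius sqrt(d):
assert np.allclose(np.linalg.norm(layer_norm_geometric(x), axis=-1), np.sqrt(8))
```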
Xiong et al. (2020) show that layer normalization should be applied within each block before the attention and feed-forward module updates, as well as after the final block and before prediction. Doing so scales down the gradient of the feed-forward module's weight parameters and keeps the magnitude of the hidden states bounded with respect to the depth of a given layer, which improves stability during training and removes the need for a warm-up stage.
### Multi-Head Self-Attention
In the previous section, we showed how the layer normalization approach given by Xiong et al. (2020) enforces data within each layer to be constrained on the surface of a hyper-sphere, potentially unique to each layer. However, thanks to the residual nature of transformers, all intermediate layer representations share the same vector space and thus are essentially projecting features onto the same hyper-sphere \(\mathcal{H}_{S}\). Furthermore, given that layer normalization is applied before the classification softmax, the model maximizes the dot-product similarity between a subset of points within \(\mathcal{H}_{S}\) and the word vectors in the embedding matrix \(W_{E}\in\mathbb{R}^{|V|\times d}\) (where \(|V|\) denotes the size of the vocabulary), establishing the meaning of points in \(\mathcal{H}_{S}\) in relation to the words associated with \(W_{E}\). To understand how the geometric intuition behind \(\mathcal{H}_{S}\) allows for interpretability, we will first revisit the self-attention module in transformers (Vaswani et al., 2017).
For a given input sequence \(X\in\mathbb{R}^{s\times d}\) of length \(s\), the self-attention mechanism is defined as follows:
\[\text{SelfAttention}(X,W_{Q},W_{K},W_{V})=\text{softmax}\Big{(}\frac{QK^{T}} {\sqrt{d}}\Big{)}V \tag{6}\]
Figure 2: Layer normalization visualized on 3D data. Left: Original feature space (from randomly sampled data), with each data point color-coded according to its position in space. Right: Feature space after layer normalization, note that all data points lie within the plane perpendicular to the \(\overrightarrow{1}\) vector.
where
\[\begin{split} Q&=XW_{Q}\\ K&=XW_{K}\\ V&=XW_{V}\end{split} \tag{7}\]
Such that \(W_{Q}\in\mathbb{R}^{d\times k}\), \(W_{K}\in\mathbb{R}^{d\times k}\) and \(W_{V}\in\mathbb{R}^{d\times v}\) are projection matrices from the original model dimension \(d\) to intermediate dimension \(k\) and value dimension \(v\), respectively. For multi-head attention, multiple projection matrices \(W_{Q}^{i}\), \(W_{K}^{i}\), \(W_{V}^{i}\) are considered, one for each head \(i\in[1,\dots,h]\) (with \(h\) being the number of heads). In this case, the value dimension \(v\) is commonly set equal to \(k\) and an extra projection matrix \(W_{O}\in\mathbb{R}^{hk\times d}\) is introduced to combine information from all heads as follows [23]:
\[\begin{split}\text{MultiHead}(X)&=\text{Concat}([ \text{head}_{1},\dots,\text{head}_{h}])W_{O}\\ \text{where head}_{i}&=\text{SelfAttention}(X,W_{Q }^{i},W_{K}^{i},W_{V}^{i})\end{split} \tag{8}\]
Given that the concatenation happens along the row dimension of each head, it is possible to re-write multi-head self-attention as follows:
\[\begin{split}\text{MultiHead}(\text{X})&=\sum_{i}^ {h}\text{SelfAttention}(X,W_{Q}^{i},W_{K}^{i},W_{V}^{i})W_{O}^{i}\\ \text{where }W_{O}&=\text{Concat}[W_{O}^{1},\dots,W_{O}^ {h}]\end{split} \tag{9}\]
Such that each \(W_{O}^{i}\in\mathbb{R}^{k\times d}\) denotes an element of the partition of matrix \(W_{O}\) along the row dimension. Combining Equation 7 and Equation 9 we obtain a single formula for multi-head self-attention:
\[\begin{split}\text{MultiHead}(\text{X})&=\sum_{i}^ {h}\text{softmax}\Bigg{(}\frac{XW_{Q}^{i}{W_{K}^{i}}^{T}X^{T}}{\sqrt{d}} \Bigg{)}XW_{V}^{i}W_{O}^{i}\\ &=\sum_{i}^{h}\text{softmax}\Bigg{(}\frac{XW_{QK}^{i}X^{T}}{\sqrt {d}}\Bigg{)}XW_{VO}^{i}\end{split} \tag{10}\]
Where \(W_{QK}^{i}\in\mathbb{R}^{d\times d}\) and \(W_{VO}^{i}\in\mathbb{R}^{d\times d}\) are low-rank virtual matrices obtained by grouping \(W_{Q}^{i}W_{K}^{iT}\) and \(W_{V}^{i}W_{O}^{i}\) respectively [11, 13].
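As a concrete check of Equation 10, the following NumPy sketch (with random, untrained weights and no bias terms) computes multi-head self-attention through the virtual matrices \(W_{QK}^{i}\) and \(W_{VO}^{i}\) and verifies that it matches the concatenation form of Equation 8.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_via_virtual_matrices(X, W_Q, W_K, W_V, W_O_parts):
    """Equation 10: sum over heads of softmax(X W_QK^i X^T / sqrt(d)) X W_VO^i,
    with W_QK^i = W_Q^i (W_K^i)^T and W_VO^i = W_V^i W_O^i."""
    s, d = X.shape
    out = np.zeros_like(X)
    for Wq, Wk, Wv, Wo in zip(W_Q, W_K, W_V, W_O_parts):
        W_QK = Wq @ Wk.T                                  # (d, d), rank <= k
        W_VO = Wv @ Wo                                    # (d, d), rank <= k
        out += softmax(X @ W_QK @ X.T / np.sqrt(d)) @ X @ W_VO
    return out

# Check against the concatenation form of Equation 8:
rng = np.random.default_rng(0)
s, d, k, h = 5, 16, 4, 4
X = rng.standard_normal((s, d))
W_Q = [rng.standard_normal((d, k)) for _ in range(h)]
W_K = [rng.standard_normal((d, k)) for _ in range(h)]
W_V = [rng.standard_normal((d, k)) for _ in range(h)]
W_O_parts = [rng.standard_normal((k, d)) for _ in range(h)]
heads = [softmax(X @ Wq @ (X @ Wk).T / np.sqrt(d)) @ (X @ Wv)
         for Wq, Wk, Wv in zip(W_Q, W_K, W_V)]
concat = np.concatenate(heads, axis=-1) @ np.concatenate(W_O_parts, axis=0)
assert np.allclose(multi_head_via_virtual_matrices(X, W_Q, W_K, W_V, W_O_parts), concat)
```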
### The \(W_{QK}\) Matrix
For any given head, the matrix \(W_{QK}^{i}\) is commonly interpreted as a bi-linear form \(f:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) that represents the relevance between keys and queries. However, it is also possible to consider \(W_{QK}^{i}\) as a linear transformation that
maps inputs to a query representation \(X^{i}_{q}\in\mathbb{R}^{s\times d}\) (similar to that considered in Brody et al. (2023)):
\[XW^{i}_{QK}=X^{i}_{q} \tag{11}\]
With the head's attention score matrix \(A^{i}\in[0,1]^{s\times s}\), for a given sequence length \(s\), obtained as:
\[A^{i}=\text{softmax}\Big{(}\frac{X^{i}_{q}X^{T}}{\sqrt{d}}\Big{)} \tag{12}\]
Alternatively, its transpose can be considered as a transformation that maps inputs to a key representation:
\[XW^{i}_{QK}{}^{T}=X^{i}_{k} \tag{13}\]
With the attention score matrix as follows:
\[A^{i}=\text{softmax}\Big{(}\frac{X(X^{i}_{k})^{T}}{\sqrt{d}}\Big{)} \tag{14}\]
This process is illustrated for normalized inputs in the bottom-right section of Figure 3. Essentially, the role of the \(W_{QK}\) matrix and the layer normalization parameters is to find a transformation of \(\mathcal{H}_{S}\) that, when superimposed on the original sphere, brings related terms closer together and keeps unrelated terms apart.
It is important to mention that for \(k<d\), the matrix \(W^{i}_{QK}\) cannot be inverted, as it will not have full rank. This implies (by the rank-nullity theorem) that for each head, there must be a set of \(d-k\) query vectors \(Q^{i}_{null}\subset\mathbb{R}^{d}\) that map to the zero vector and, as a consequence, attend to all keys equally. Conversely, there must also exist a set of \(d-k\) keys \(K^{i}_{null}\subset\mathbb{R}^{d}\) that are attended to by all queries equally, with a pre-softmax attention score of zero.
**Note on bias terms:** In case the projection given by Equation 7 contains bias terms \(\beta_{q}\), \(\beta_{k}\in\mathbb{R}^{1\times k}\), the attention score matrix from Equation 12 is calculated as follows:
\[A^{i}=\text{softmax}\Big{(}\frac{X^{i}_{q}X^{T}+XW^{i}_{Q}\beta^{T}_{k}+\beta_{q}{W^{i}_{K}}^{T}X^{T}+\beta_{q}\beta^{T}_{k}}{\sqrt{d}}\Big{)} \tag{15}\]
In the bias formulation, three new terms are introduced. The first, \(W^{i}_{Q}\beta^{T}_{k}\in\mathbb{R}^{d\times 1}\), can be thought of as a reference vector for queries, such that queries similar to it get higher attention scores. Given that the same "bias score" is broadcast along all the different keys of the same query, the network will ignore this term due to the shift-invariance of the softmax function. More interesting is the second term \(\beta_{q}{W^{i}_{K}}^{T}\in\mathbb{R}^{1\times d}\), which acts as a reference for keys. Given that its bias score is broadcast along queries, it results in higher attention scores (in all queries) for keys similar to the reference. Finally, the term \(\beta_{q}\beta^{T}_{k}\in\mathbb{R}\) acts as a global bias and, similar to \(W^{i}_{Q}\beta^{T}_{k}\), will be ignored by the network.
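The claim that the query-side bias term is ignored follows directly from the shift-invariance of softmax; the small NumPy sketch below (an illustration with random matrices, not code from the paper) makes this explicit.

```python
# Adding a per-query constant to the logits does not change the attention pattern,
# while the key-side bias term does.
import numpy as np

rng = np.random.default_rng(1)
s, d, k = 3, 6, 4
X = rng.normal(size=(s, d))
W_Q, W_K = rng.normal(size=(d, k)), rng.normal(size=(d, k))
beta_q, beta_k = rng.normal(size=(1, k)), rng.normal(size=(1, k))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

logits = (X @ W_Q) @ (X @ W_K).T / np.sqrt(d)
query_bias = (X @ W_Q) @ beta_k.T / np.sqrt(d)   # (s, 1): constant along the key axis
key_bias = beta_q @ (X @ W_K).T / np.sqrt(d)     # (1, s): varies across keys

assert np.allclose(softmax(logits), softmax(logits + query_bias))   # ignored
assert not np.allclose(softmax(logits), softmax(logits + key_bias)) # not ignored
```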
Figure 3: Visualization of the self-attention process for a single head. **Top Left**: Layer normalization projects the input features on the surface of the hyper-sphere \(\mathcal{H}_{S}\). **Top Right**: A scaling parameter \(\gamma\) is commonly applied after normalization; it transforms \(\mathcal{H}_{S}\) into a \((d-1)\)-dimensional ellipsoid. **Bottom Left**: A bias term \(\beta\) is also applied after normalization; it displaces the ellipsoid away from the origin. **Bottom Right**: The input features are transformed to a query representation (in red) using the \(W_{QK}\) matrix and compared against their previous representation to obtain the self-attention scores.
### The \(W_{VO}\) Matrix and the Residual Stream
To understand the role of the \(W_{VO}\) matrix within the transformer, we now consider the update step after the multi-head attention layer:
\[X_{l+1}=X_{l}+\text{MultiHead}(\text{LayerNorm}(X_{l})) \tag{16}\]
Note that by plugging in Equation 10 and Equation 12, the layer update can be re-written as:
\[X_{l+1}=X_{l}+\sum_{i}^{h}A^{i}X_{value}^{i} \tag{17}\]
where
\[X_{value}^{i}=\text{LayerNorm}(X_{l})W_{VO}^{i} \tag{18}\]
It can be seen that the multi-head attention mechanism consists of the sum of \(h\) individual updates, each one given by one of the attention heads. Within each head, all words in the sequence propose an update \(X_{value}^{i}\), and these are aggregated according to their attention scores \(A^{i}\). In Equation 18, the matrix \(W_{VO}^{i}\) acts as a map that takes the normalized inputs in \(\mathcal{H}_{S}\) (adjusting for scale and bias) and outputs a set of updates in the same space as \(W_{E}\); this process is visualized in Figure 4. Furthermore, we propose that the \(W_{VO}^{i}\) matrix is better understood as a second key-value store (Sukhbaatar et al., 2015; Geva et al., 2021) within the attention layer. To see why, consider its Singular Value Decomposition (SVD) (Millidge and Black, 2022):
\[W_{VO}^{i}=U\Sigma V^{T} \tag{19}\]
By substituting in Equation 18, we obtain:
\[X_{value}^{i}=(Q_{VO}{K_{VO}^{i}}^{T})V_{VO}^{i} \tag{20}\]
where
\[\begin{split}Q_{VO}&=\text{LayerNorm}(X_{l})\\ K_{VO}^{i}&=(U\Sigma)^{T}\\ V_{VO}^{i}&=V^{T}\end{split} \tag{21}\]
The left singular vectors, associated with the columns of \(U\Sigma\in\mathbb{R}^{d\times d}\), act as a library of "keys" \(K_{VO}^{i}\) against which the normalized features \(X_{l}\in\mathcal{H}_{S}\) are compared, while the corresponding right singular vectors, associated with the rows of \(V^{T}\in\mathbb{R}^{d\times d}\), act as the output values \(V_{VO}^{i}\) that define the direction in which to update the information in the residual stream for a given key. This interpretation is motivated by the results of Millidge and Black (2022), where it is shown
that the right singular vectors \(V^{T}\) of the \(W_{VO}\) matrix tend to have interpretable meanings when decoded using \(W_{E}\), with some of the transformer heads consistently representing a single topic in most of their singular vectors. We would also like to highlight that, similar to the \(W_{QK}\) matrix, the \(W_{VO}\) matrix has at least \(d-k\) singular values equal to zero. This means that multiple queries \(Q_{VO}\) will map to the zero vector and thus will not update the information in the residual stream, allowing the model to skip the update process if necessary.
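The following NumPy sketch (our own illustration with random low-rank matrices, not the paper's code) spells out the key-value reading of Equations 19-21 and checks that it reproduces the direct product with \(W_{VO}^{i}\).

```python
# Read W_VO via its SVD: queries = normalized inputs, keys = columns of U*Sigma,
# values = rows of V^T; the resulting update equals LayerNorm(X) @ W_VO.
import numpy as np

rng = np.random.default_rng(2)
s, d, k = 5, 8, 2
X_norm = rng.normal(size=(s, d))                            # stand-in for LayerNorm(X_l)
W_VO = rng.normal(size=(d, k)) @ rng.normal(size=(k, d))    # rank-k virtual matrix

U, S, Vt = np.linalg.svd(W_VO)          # W_VO = U @ diag(S) @ Vt
keys = (U * S).T                        # K_VO: one key per singular direction
values = Vt                             # V_VO: one update direction per key

scores = X_norm @ keys.T                # compare queries against keys
X_value = scores @ values               # aggregate value directions

assert np.allclose(X_value, X_norm @ W_VO)
print("near-zero singular values:", int(np.sum(S < 1e-10)))   # at least d - k of them
```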
**Note on bias terms:** If the value projection in Equation 7 contains a bias term \(\beta_{v}\in\mathbb{R}^{1\times k}\) and the output projection in Equation 8 contains a bias term \(\beta_{o}\in\mathbb{R}^{1\times d}\), the layer update in Equation 17 can be re-written as follows:
\[X_{l+1}=X_{l}+\beta_{o}+\sum_{i}^{h}\big{(}A^{i}X_{value}^{i}+\beta_{v}W_{O}^{i}\big{)} \tag{22}\]
Here, the term \(\beta_{v}W_{O}^{i}\in\mathbb{R}^{1\times d}\) is a bias on the update direction of head \(i\), while \(\beta_{o}\in\mathbb{R}^{1\times d}\) acts as a bias on the entire layer's update.
### The Feed Forward Module
We use the same interpretation for the feed-forward module as Geva et al. (2022, 2021). In it, the feed-forward module behaves similarly to the \(W_{VO}\) matrix in the sense that it also acts as a key-value store (Geva et al., 2021) that proposes directional updates for features in the residual stream (Geva et al., 2022). Similar to the previous section, we will begin by considering the update step after the feed-forward layer:
\[X_{l+1}=X_{l}+\text{FeedForward}(\text{LayerNorm}(X_{l})) \tag{23}\]
where
\[\text{FeedForward($X$)}=f(XW_{1})W_{2}+\beta_{f} \tag{24}\]
Geva et al. (2021) note that, for a hidden dimension \(d_{hidden}\), the feed-forward module weights \(W_{1}\in\mathbb{R}^{d\times d_{hidden}}\) and \(W_{2}\in\mathbb{R}^{d_{hidden}\times d}\) act as key and value matrices, plus a bias term \(\beta_{f}\in\mathbb{R}^{1\times d}\). They propose an alternative formulation of the feed-forward layer:
\[\text{FeedForward($X$)}=\sum_{i}^{d_{hidden}}f(Xk_{i})\cdot v_{i}+\beta_{f}= \sum_{i}^{d_{hidden}}m_{i}\cdot v_{i}+\beta_{f} \tag{25}\]
where
\[\begin{split} k_{i}&=W_{1}[:,i]\quad\in\mathbb{R}^{d\times 1}\\ v_{i}&=W_{2}[i,:]\quad\in\mathbb{R}^{1\times d} \end{split} \tag{26}\]
Here, \(W_{1}\) acts as a storage matrix for keys \(k_{i}\), \(W_{2}\) acts as a storage matrix for values \(v_{i}\), and the activation function \(f\) assigns a weight \(m_{i}\) to each value \(v_{i}\) depending on the input \(X\). In Geva et al. (2021), the top-n examples in the training dataset that resulted in the highest \(m_{i}\) coefficients showed interpretable patterns, such that each key \(k_{i}\) in a 16-layer transformer model trained on WikiText-103 (Merity et al., 2016) was associated with either a syntactic or a semantic pattern by human experts. For the values \(v_{i}\), their role is better understood in terms of the impact that they have on the residual stream (referred to as sub-updates):
\[X_{l+1}=X_{l}+\beta_{f}+\sum_{i}^{d_{hidden}}m_{i}\cdot v_{i} \tag{27}\]
Figure 4: Visualization of the residual update for a single attention head. For each normalized data point (in gray), there is a corresponding un-normalized data point in the residual stream (in blue). Data points in the residual stream are updated according to a given direction (in green) calculated from the self-attention scores and the update matrix \(W_{VO}\).
It can be seen that each value \(v_{i}\) modifies the residual stream independently, implying that these values share the same space as \(W_{E}\). Indeed, experiments from Geva et al. (2021, 2022) have shown that many values \(v_{i}\) are semantically meaningful and can be intervened upon for applications like zero-shot toxic language suppression.
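A compact NumPy sketch (our own illustration) of the key-value view of Equations 25-27 is given below: the feed-forward output equals the sum of hidden-unit sub-updates \(m_{i}\cdot v_{i}\).

```python
# Check that f(X W_1) W_2 equals the sum over hidden units of m_i * v_i.
import numpy as np

rng = np.random.default_rng(3)
s, d, d_hidden = 4, 6, 10
X = rng.normal(size=(s, d))
W_1 = rng.normal(size=(d, d_hidden))
W_2 = rng.normal(size=(d_hidden, d))
f = lambda z: np.maximum(z, 0.0)          # any pointwise activation

direct = f(X @ W_1) @ W_2

as_sum = np.zeros((s, d))
for i in range(d_hidden):
    k_i = W_1[:, i]                       # key: column of W_1
    v_i = W_2[i, :]                       # value: row of W_2
    m_i = f(X @ k_i)                      # per-token weight on this value
    as_sum += np.outer(m_i, v_i)

assert np.allclose(direct, as_sum)
```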
To conclude this subsection, we highlight that the attention and feed-forward modules behave very similarly (see Equation 22 and Equation 27), as both calculate relevance scores and aggregate sub-updates for the residual stream. However, the way the scores and updates are calculated is very different. The attention module relies primarily on dynamic context for its scores and values, while the feed-forward module relies on static representations.
### The Word Embedding Matrix \(W_{E}\) and Output Probabilities
Once all the attention and feed-forward updates have been applied, the output probabilities of the network can be obtained as follows (Xiong et al., 2020):
\[p(Y)=\text{softmax}\big{(}\text{LayerNorm}(X_{L})W_{E}^{T}\big{)} \tag{28}\]
In the case where layer normalization has no trainable parameters, Equation 28 can be interpreted as measuring the similarity between the final layer representation \(X_{L}\), when projected to \(\mathcal{H}_{S}\), and each of the embedding vectors in \(W_{E}\). Given that all vectors in the projection have the same norm \(\sqrt{d}\), the only relevant factor in deciding the output probability distribution \(p(y^{t})\in[0,1]^{|V|}\), at a given timestep \(t\), is the location of its corresponding vector \(x_{L}^{t}\) within \(\mathcal{H}_{S}\). This behavior is very similar to that described by the von Mises-Fisher distribution (Fisher, 1953), as both represent distributions parameterized by a reference vector within a hyper-sphere. Nonetheless, in the case of transformers, the support of the distribution is defined over a discrete set of vectors in \(\mathbb{R}^{d}\), instead of \(\mathcal{H}_{S}\) as a whole.
If the layer normalization includes scaling and bias parameters \(\gamma\) and \(\beta\), the output probabilities are calculated as follows:
\[p(Y)=\text{softmax}\big{(}\hat{X_{L}}\Gamma W_{E}^{T}+\beta W_{E}^{T}\big{)} \tag{29}\]
where \(\hat{X_{L}}\) is the projection of \(X_{L}\) to \(\mathcal{H}_{S}\) and \(\Gamma\) is a diagonal matrix such that \(\Gamma_{ii}=\gamma_{i}\). The effect of \(\Gamma\) on the representation is that of transforming \(\mathcal{H}_{S}\) into an ellipsoid (see Top Right section of Figure 3) while \(\beta W_{E}^{T}\) acts as a bias that assigns higher probability to certain tokens independent of the input.
In both cases (with and without bias and scale parameters), this perspective aligns with that of iterative refinement (Jastrzebski et al., 2017) discussed in nostalgebraist (2020), Elhage et al. (2021), Geva et al. (2022), Belrose et al. (2023), given that intermediate representations \(X_{l}\) can always be converted into output probabilities using Equation 28.
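As a toy illustration of Equation 28 (using random stand-ins rather than GPT-2 weights), the sketch below reads out a residual-stream state as a distribution over the vocabulary, which is exactly the operation underlying the iterative-refinement view.

```python
# Project a residual-stream state onto H_S and compare it against W_E (Eq. 28).
import numpy as np

rng = np.random.default_rng(4)
d, vocab = 16, 50
W_E = rng.normal(size=(vocab, d))                      # word embedding matrix

def layer_norm(x, eps=1e-5):
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)                    # lies (approximately) on H_S

def readout(x_l):
    logits = layer_norm(x_l) @ W_E.T                   # Eq. 28, no scale/bias parameters
    logits -= logits.max()
    p = np.exp(logits)
    return p / p.sum()

x_l = rng.normal(size=(d,))                            # residual-stream state at layer l
print("predicted token id:", readout(x_l).argmax())
```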
To conclude this section, we would like to highlight that, by considering the role of layer normalization and how it constrains the representation space, we can get a geometric intuition behind iterative refinement. We provide a visual interpretation of this concept in Figure 1.
## 4 Experiments
This section presents our experimental results. All experiments use pre-trained weights from the 124M parameter version of GPT-2 (Radford et al., 2019; Karpathy, 2023) unless stated otherwise. Code to replicate all experiments is available at: [https://github.com/santiagOm/traveling-words](https://github.com/santiagOm/traveling-words).
### Impact of Layer Normalization on the Word Embeddings
To measure the impact of layer normalization on the position of the embedding vectors in \(W_{E}\), we calculated both the \(\ell_{2}\) and cosine distances between the layer-normalized weights and the following settings:
* Original: The original word embeddings without any modification
* Centered: Original + centering around the mean \(\mathrm{E}[w_{e}]\)
* Scaled: Original divided by the average vector norm \(\mathrm{E}[||w_{e}||_{2}]\) and multiplied by \(\sqrt{d}\)
* Centered + Scaled: Original + centering + scaling
The results in Table 1 show that the mean cosine distance between the original word embeddings and the embeddings after normalization is close to zero, meaning that projection onto \(\mathcal{H}_{S}\) does not modify the orientation of the embedding vectors. The results also confirm this when centering is applied, as the cosine distance increases significantly when the original vectors are displaced from the origin towards the mean. On the other hand, it can be seen that the \(\ell_{2}\) distance is high for all settings except for when scaling is applied without centering. Given an average norm of \(\mathrm{E}[\|w_{e}\|_{2}]=3.959\) and \(\sqrt{d}=27.713\), we can conclude that the original word embeddings lie between the origin and \(\mathcal{H}_{S}\) rather than on its surface, with different embeddings having different norms.
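A hedged sketch of this comparison is shown below; it assumes the Hugging Face `transformers` GPT-2 small checkpoint, and the attribute names (`wte` for the embedding matrix) are our assumption about that implementation rather than code taken from the paper's repository.

```python
# Compare the parameter-free layer-normalized embeddings against the four settings.
import torch
from transformers import GPT2Model

model = GPT2Model.from_pretrained("gpt2")
W_E = model.wte.weight.detach()                     # (vocab, d) embedding matrix
d = W_E.shape[1]

# Parameter-free layer normalization: project each embedding onto H_S.
normed = (W_E - W_E.mean(-1, keepdim=True)) / W_E.std(-1, keepdim=True)

settings = {
    "Original": W_E,
    "Centered": W_E - W_E.mean(0, keepdim=True),
    "Scaled": W_E / W_E.norm(dim=-1).mean() * d ** 0.5,
    "Centered + Scaled": (W_E - W_E.mean(0, keepdim=True))
                         / W_E.norm(dim=-1).mean() * d ** 0.5,
}
for name, W in settings.items():
    l2 = (W - normed).norm(dim=-1).mean().item()
    cos = 1 - torch.nn.functional.cosine_similarity(W, normed, dim=-1).mean().item()
    print(f"{name:>18}: mean l2 = {l2:.3f}, mean cosine distance = {cos:.3f}")
```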
Variance in the norm of embedding vectors is likely a result of the use of the word embedding matrix as a classification layer later in the network (see Equation 29). To verify whether this is the case, we select the top and bottom 5 embedding vectors based on the four following criteria:
* Norm: The norm of the original embedding vector in \(W_{E}\)
* Scaled Norm: The norm of the embedding vector when scaled by the Layer Norm parameter \(\Gamma\)
* Norm + Bias: The norm of the original embedding vector plus the bias scores obtained from \(\beta W_{E}^{T}\)
* Scaled Norm + Bias: The sum between the Scaled Norm and the bias scores.
The sorted tokens in Table 2 show that considering only the norm of the embeddings is not enough, as tokens that are not commonly used (like 'SPONSORED' and 'soDeliveryDate') have the highest norms, while common words like 'for', 'an', 'on' and 'in' have the lowest norms. After considering the scaling parameter \(\Gamma\), we observe that punctuation symbols like the newline character or the comma ',' have the lowest norm, and that there is no clear pattern in the top tokens.
| **Setting** | **Mean \(\ell_{2}\) Distance** | **Mean Cosine Distance** |
| --- | --- | --- |
| Original | 23.747 (0.432) | \(<\)0.001 (\(<\)0.001) |
| Centered | 24.872 (0.432) | 0.150 (0.035) |
| Scaled by \(\sqrt{d}\) | 2.413 (1.862) | \(<\)0.001 (\(<\)0.001) |
| Centered + Scaled by \(\sqrt{d}\) | 14.591 (1.469) | 0.150 (0.035) |

Table 1: Distance between the normalized embeddings LayerNorm(\(W_{E}\)) and different transformations of the embedding matrix \(W_{E}\).
| **Position** | **Norm** | **Scaled Norm** | **Norm + Bias** | **Scaled Norm + Bias** |
| --- | --- | --- | --- | --- |
| Top 1 | SPONSORED | \xa9\xb6\xe6 | , | the |
| Top 2 | \x96\x9a | tremed | the | , |
| Top 3 | soDeliveryDate | \x96\x9a | . | and |
| Top 4 | enegger | senal | and | a |
| Top 5 | Reviewer | millenn | - | in |
| Bottom 5 | for | - | \xc0 | \x07 |
| Bottom 4 | an | ( | \x07 | \x0f |
| Bottom 3 | on | "\n" | \x10 | oreAndOnline |
| Bottom 2 | in | , | \x11 | \x06 |
| Bottom 1 | at | . | \xfe | \xc1 |

Table 2: Top 5 and Bottom 5 tokens from the word embedding matrix. Tokens were sorted according to the relevance of their corresponding embedding vectors under different measurement settings.
After considering bias, we see that the distribution of top tokens clearly shifts, with punctuation symbols and common words at the top and uncommon bytes at the bottom. Finally, note that when both scale and bias are considered, the top tokens are consistent with some of the most common words in the English language ('the', 'and', 'a' and 'in'), with the only exception being the comma character, which is also very common in natural language; the bottom tokens, in contrast, are related to uncommon bytes and an anomalous token.
### Probing Attention Heads with Normalized Representations of Common Nouns
Next, we use the interpretation from subsection 3.3 and 3.4 to probe the attention heads at layers 0, 5 and 11 of the GPT-2 model using as inputs the 100 most common nouns taken from the Corpus of Contemporary American English (COCA) [Davies, 2010]. First, we transform the embedding matrix \(W_{E}\) according to the normalization parameters specific to each layer (see Figure 3) and then multiply the normalized embeddings \(W_{E}^{norm}\) by either \(W_{QK}\) or \(W_{VO}\).
Then, we perform decoding steps specific to each matrix after multiplication (a schematic sketch follows the list):
* For \(W_{QK}\), we retrieve the top-k closest embedding vectors from \(W_{E}^{norm}\) based on dot product similarity.
* For \(W_{VO}\), we add the head-specific and layer-specific output biases (see Equation 22) to obtain the "update vectors". These update vectors are then added to the original embeddings from \(W_{E}\) and transformed according to the normalization parameters from the last layer; then, we retrieve the top-k closest embeddings from the original \(W_{E}\) based on dot product similarity.
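The schematic NumPy sketch below illustrates these two decoding steps with random placeholder weights; in the actual experiment the per-head matrices, biases, and normalization parameters would be extracted from GPT-2, and the final-layer normalization step is omitted here for brevity.

```python
# Probe a single head with placeholder W_QK^i / W_VO^i matrices.
import numpy as np

rng = np.random.default_rng(5)
vocab, d, k = 100, 16, 4
W_E = rng.normal(size=(vocab, d))
W_E_norm = (W_E - W_E.mean(-1, keepdims=True)) / W_E.std(-1, keepdims=True)
W_QK_i = rng.normal(size=(d, k)) @ rng.normal(size=(k, d))   # placeholder head matrices
W_VO_i = rng.normal(size=(d, k)) @ rng.normal(size=(k, d))

def top_tokens(queries, library, top_k=3):
    """Indices of the top-k library vectors by dot-product similarity."""
    return np.argsort(-(queries @ library.T), axis=-1)[:, :top_k]

noun_ids = np.array([1, 7, 42])          # stand-ins for the common-noun token ids

# W_QK probing: transform normalized embeddings, retrieve closest normalized keys.
qk_matches = top_tokens(W_E_norm[noun_ids] @ W_QK_i, W_E_norm)

# W_VO probing: compute the head's update, add it to the original embedding,
# and decode against the original embedding matrix.
vo_matches = top_tokens(W_E[noun_ids] + W_E_norm[noun_ids] @ W_VO_i, W_E)
print(qk_matches, vo_matches, sep="\n")
```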
#### 4.2.1 Query-Key Transformations
In Table 3, we present the results for the Query-Key transformations at layer 0 given the query inputs 'time', 'life' and 'world'. We note that some of the heads preserve the meaning of the query, as is the case for heads 1, 5 and 10, possibly looking for repetition, while others look for keys that precede it. Such precedence heads might help to disambiguate the meaning of the words, with examples like: 'Showtime' vs. 'spacetime', 'battery life' vs. 'wildlife' and 'underworld' vs. 'Westworld'. Other heads appear to be looking for contextual associations, as is the case for head 2, which seems to relate 'world' with dates and concepts from the First and Second World Wars. When looking at deeper layers (as shown in Tables A.1 and A.2), we were not able to identify any meaningful patterns in the query transformations, suggesting that these layers might look for more complex patterns.
#### 4.2.2 Key-Value Transformations
In Table 4, we present the results for the Key-Value transformations for the same three inputs. For most heads at layer 0, the meaning of the input key is kept as is. However, when the sum of all the heads is considered, we see a slight shift in the meaning of the words.
For heads at layer 5 (shown in Table A.3), we see that although most of the heads preserve the meaning of the input keys 'life' and 'world' (and around half of the heads for the input 'time'), the sum of all heads does change the word meaning dramatically, and without a clear output pattern. As our experiment is limited to testing a single input key at a time, it might be possible that updates in this layer rely more heavily on the composition between multiple keys, which we did not capture.
Finally, in the last layer (Table A.4), we see that most individual heads map to seemingly arbitrary values, with only a few preserving the meaning of the input key. However, when the sum of the heads is considered, the layer preserves the meaning of the input keys. To test the hypothesis that meaning-preserving heads dominated the layer update, we measured the norm of the output values for each head (before adding the layer-specific bias \(\beta_{o}\)). We found that, in most cases, these heads do not have higher norms. Instead, heads promoting common tokens like 'the', ',' and 'and' had the highest norms. These results suggest that contrary to our hypothesis, the heads at the last layer work together to preserve the meaning of the input keys and mitigate the network's bias towards common tokens.
| **Head** | **Query 'time' \(\rightarrow\) Keys** | **Query 'life' \(\rightarrow\) Keys** | **Query 'world' \(\rightarrow\) Keys** |
| --- | --- | --- | --- |
| 0 | Level, [?], offenders | battery, Battery, Battery | legraph, Vers, Malf |
| 1 | time, time, Time | Life, life | World, world, world |
| 2 | cinematic, Priest, priest | Notre, fetal, abortion | 1914, Churchill, 1916 |
| 3 | space, lunch, mid | augh, ertain, ough | under, Nether, Fort |
| 4 | soft, heavy, tool | Middle, Hans, Middle | ether, Unt, Know |
| 5 | time, time, Time | life, Life, Life | world, World, world |
| 6 | Rated, chirop, u | Fukushima, chirop, ulic | ipt, u, Meta |
| 7 | Show, bed, Movie | pro, wild, Wild | Disc, West, West |
| 8 | java, framework, watch | shark, sharks, Wild | edit, ”$./, movie |
| 9 | stones, pal, cards | Trojan, malware, Wi | Rogers, COUNTY, Rd |
| 10 | time, time, Time | life, life, Life | world, world, World |
| 11 | Wine, a, food | PHI, everal, Span | agus, true, ‘,’ |

Table 3: Transformation of Queries Across Transformer Heads at Layer 0
### Singular Value Decomposition of the \(W_{VO}\) Matrix
To verify whether the key-value interpretation of the \(W_{VO}\) matrix proposed in subsection 3.4 is correct, we probe each of its singular vectors (as proposed in Millidge and Black (2022)). For the left singular vectors \(U\) (scaled by \(\Sigma\)), we use the normalized embeddings \(W_{E}^{norm}\) as a probe, while for the right singular vectors \(V^{T}\), we use the original embeddings \(W_{E}\). Given that all singular values are constrained to be positive, we get two possible singular vector pairs corresponding to each singular value: \((u,v)\) and \((-u,-v)\). For ease of analysis, we choose the signed pair with its \(v\) component closest to any of the embeddings \(w_{e}\in W_{E}\), using the dot product similarity.
We did not observe any interpretable pattern for the attention heads at layer 0 and found only one interpretable head at layer 5 (head 10), which referred to terms in politics and chemistry. However, we found that most heads in layer 11 were interpretable (except for heads 5, 7 and 9) and present the results for all heads in Appendix B. An illustrative case of these patterns is head 3, where most of its singular vector mappings are related to jobs or industries. For example, 'Dairy' maps to 'USDA' (the United States Department of Agriculture), 'engine' to 'drivers', 'trading' to 'Sales' and so on. Similar patterns were present in other heads, listed as follows:
* Head 0: Formatting and punctuation symbols (end of text, new line, brackets and parenthesis)
* Head 1: Gender words
* Head 2: Proper Nouns (Places)
* Head 3: Jobs / Industries
* Head 4: Letters and Numbers
| **Head** | **Key 'time' \(\rightarrow\) Values** | **Key 'life' \(\rightarrow\) Values** | **Key 'world' \(\rightarrow\) Values** |
| --- | --- | --- | --- |
| 0 | time, Time, time | life, choice, seal | world, World, worlds |
| 1 | time, TIME, time | life, lihood, life | world, Goes, ship |
| 2 | time, [?], Minutes | life, Life, life | world, world, World |
| 3 | time, Time, theless | life, Life, life | world, World, worlds |
| 4 | time, time, Time | life, Life, Life | world, World, world |
| 5 | time, Time, Time | life, Life, Life | world, World, worlds |
| 6 | time, time, Time | life, life, Life | world, world, Feather |
| 7 | time, elless, times | life, Experience, Life | world, World, Abyss |
| 8 | time, iversary, melodies | life, challeng, conservancy | world, worlds, droid |
| 9 | time, time, recall | [?], local, Main | [?], world, local |
| 10 | equivalents, ligation, planes | life, ento, planner | world, ento, Tanzania |
| 11 | time, Time, Time | life, Life, +++ | world, World, Trials |
| Sum | time, etime, watch | Indigo, life, crew | world, Unleashed, World |

Table 4: Transformation of Keys Across Transformer Heads at Layer 0
* Head 6: Suffixes and Prefixes related to the ending and beginning of words
* Head 8: Punctuation symbols
* Head 10: Proper Nouns (First and Last names)
* Head 11: The identity function (input similar to the output)
We found that these patterns were consistent with those obtained in the "Key \(\rightarrow\) Value" results from Table A.4, implying that the subject-specific behavior of the singular vectors is reflected in the input-output transformations of the attention heads. These results complement previous work from Millidge and Black (2022), in which only the right singular vectors \(V^{T}\) were considered.
#### SVD of the \(W_{QK}\) Matrix
In additional experiments on the SVD of the \(W_{QK}\) matrix, we found that some singular vector pairs had clear associations. For example, in head 0 of layer 0, we found some associations related to programming languages ('self, class, =, import' \(\rightarrow\) 'Python') and digital cameras ('Video, 264, minutes' \(\rightarrow\) 'Nikon, lineup, shot, camera'), but we could not identify any specialization for the heads. Surprisingly, we did find that heads at the last layer had identifiable patterns in their left singular vectors (associated with the queries) consistent with those listed for the \(W_{VO}\) matrix (punctuation for head 0, gender for head 1, and so on), but no clear patterns were identified for the right singular vectors.
### Visualizing Iterative Refinement
Finally, we visualize how the information in the residual stream is updated (i.e., the iterative refinement process) by leveraging dimensionality reduction techniques, as shown in Figure 5. For this, we chose the test sentence 'To kill two birds with one stone', as the predictability of its last token, 'stone', given the previous context was high (it is correctly predicted by the model) and none of the words in the sentence repeat. To project the high-dimensional embeddings into 3D space, we used UMAP (McInnes et al., 2018) with Laplacian Eigenmap initialization (Belkin and Niyogi, 2001; Kobak and Linderman, 2021), and we fit the transform using the first 10,000 embedding vectors from \(W_{E}\) to accurately reflect proximity in the original embedding space. We show the original embedding tokens as reference (in blue) and plot the trajectory of the second-to-last token, 'one', as we process the entire sequence (with added positional embeddings) throughout the network. For each layer, we transform the latent representations in the residual stream using the normalization parameters from the final output layer before projecting with UMAP. It can be seen that the representation of the second-to-last token shifts from its original meaning ('one') towards the meaning of the next token ('stone'). Although the figure also shows the magnitude and direction of each update in the trajectory, it is important to mention that these quantities might have been modified by the dimensionality reduction process.
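A hedged sketch of the projection step is given below; it assumes the `umap-learn` package and uses random placeholders for \(W_{E}\) and for the token trajectory, which in the actual experiment would be extracted from GPT-2 and transformed with the final-layer normalization parameters.

```python
# Fit UMAP on the embedding matrix and project a token's residual-stream trajectory.
import numpy as np
import umap  # umap-learn

rng = np.random.default_rng(6)
W_E = rng.normal(size=(10_000, 64))      # placeholder for the first 10k embeddings
states = rng.normal(size=(13, 64))       # placeholder trajectory: one vector per block

reducer = umap.UMAP(n_components=3, init="spectral")   # Laplacian Eigenmap initialization
embedding_3d = reducer.fit_transform(W_E)              # reference cloud of embeddings
trajectory_3d = reducer.transform(states)              # projected token trajectory
print(embedding_3d.shape, trajectory_3d.shape)
```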
## 5 Conclusion
We have presented a new interpretation of transformer models based on the geometric intuition behind each component and how all these components come together as the transformation of the meaning of one input token to the next.
First, we showed how layer normalization can be better understood as a projection of latent features in \(\mathbb{R}^{d}\) onto a \((d-1)\)-dimensional hyper-sphere, and provided experimental evidence that the word embeddings learned by GPT-2 are distributed toward different directions of the hyper-sphere. We also showed that the parameters of the final normalization layer are crucial to obtaining high-scoring tokens consistent with high-frequency tokens in the English language.
Next, we discussed the role of the \(W_{QK}\) and \(W_{VO}\) matrices as transformations related to the hyper-sphere, with \(W_{QK}\) as an affine transformation that overlaps queries and keys, and \(W_{VO}\) as a key-value map between the hyper-sphere and the original embedding space. These intuitions were tested with probing experiments, showing promising results in understanding the role of query-key attention in earlier layers and extending the results from Millidge and Black (2022) on the subject-specific nature of the \(W_{VO}\) matrix in attention heads at deeper layers.
Figure 5: UMAP 3D projection of the phrase ‘To kill two birds with one stone’. The original word embeddings are in blue, the final latent representation for the second-to-last token (‘one’) in purple, and its trajectory in red, with each trajectory segment representing an update between transformer blocks. Note that the latent representation starts close to its corresponding embedding, ‘one’, and gets closer to that of the next token, ‘stone’, with each update.
Finally, we integrated these ideas and the impact of each component on the residual stream to provide visual evidence on how the iterative refinement process works within transformers.
|
2301.13542 | Theoretical aspects in penalty hyperparameters optimization | Learning processes are useful methodologies able to improve knowledge of real
phenomena. These are often dependent on hyperparameters, variables set before
the training process and regulating the learning procedure. Hyperparameters
optimization problem is an open issue in learning approaches since it can
strongly affect any real data analysis. They are usually selected using
Grid-Search or Cross Validation techniques. No automatic tuning procedure
exists especially if we focus on an unsupervised learning scenario. This study
aims to assess some theoretical considerations for tuning penalty
hyperparameters in optimization problems. It considers a bi-level formulation
tuning problem in an unsupervised context, by using Gradient-based methods.
Suitable conditions for the existence of a minimizer in an infinite-dimensional
Hilbert space are outlined, together with some theoretical results, applicable
in all those situations when it is unnecessary or not possible obtaining an
exact minimizer. An iterative algorithmic strategy is considered, equipped with
a stopping criterion via Ekeland's variational principle. | Flavia Esposito, Laura Selicato, Caterina Sportelli | 2023-01-31T10:35:39Z | http://arxiv.org/abs/2301.13542v1 | # Theoretical aspects in penalty hyperparameters optimization
###### Abstract
Learning processes are useful methodologies able to improve knowledge of real phenomena. These are often dependent on hyperparameters, variables set before the training process and regulating the learning procedure. Hyperparameters optimization problem is an open issue in learning approaches since it can strongly affect any real data analysis. They are usually selected using Grid-Search or Cross Validation techniques. No automatic tuning procedure exists especially if we focus on an unsupervised learning scenario.
This study aims to assess some theoretical considerations for tuning penalty hyperparameters in optimization problems. It considers a bi-level formulation tuning problem in an unsupervised context, by using Gradient-based methods. Suitable conditions for the existence of a minimizer in an infinite-dimensional Hilbert space are outlined, together with some theoretical results, applicable in all those situations when it is unnecessary or not possible obtaining an exact minimizer. An iterative algorithmic strategy is considered, equipped with a stopping criterion via Ekeland's variational principle.
**Keywords:** Hyperparameter optimization, learning approaches, existence
**Acknowledgments.** The authors would like to thank Prof. A. M. Candela and Prof. N. Del Buono from Universita degli Studi di Bari Aldo Moro for the deep discussions on the preliminary version of this manuscript. This work was supported by INDAM-GNCS.
**Mathematics Subject Classification:** 68Q32, 46N10, 90C46, 49J27, 90C48
## 1 Introduction
Training a Machine Learning (ML) algorithm well is essential to produce data-driven models that can be successfully applied in real-life applications. These processes often require users to specify several variables, namely hyperparameters, which must be set before the learning procedure starts. Hyperparameters govern the whole learning process and play a crucial role in guaranteeing good model performance. They are often specified manually, and the lack of an automatic tuning procedure makes the field of Hyperparameter Optimization (HPO) an ever-evolving topic. The literature offers various solutions for hyperparameter tuning, from Gradient-based to Black-Box or Bayesian approaches, besides some naive but commonly used methods such as Grid and Random search. A brief overview of existing methods can be found in [1]. Hyperparameters can be of different types (discrete, continuous, categorical), and in most cases, the number of their configurations to explore is infinite. This paves the way for a mathematical formalization of HPO in the ML context using abstract spaces, such as Hilbert spaces.
A supervised learning algorithm may be represented as a mapping that takes a configuration of hyperparameters and a dataset \(D\) and returns a hypothesis [2]:
\[\mathscr{A}:\Lambda\times D\rightarrow\mathscr{H};\hskip 28.452756pt\mathscr{A}( \lambda,D)=h,\quad\text{with}\quad D=\bigcup_{N\in\mathbb{N}}(X\times Y)^{N} \tag{1}\]
where \(D\) is the space of finite-dimensional datasets, representing a task, \(X\) and \(Y\) are the input and output spaces, \(\Lambda\) is a hyperparameter space, and \(\mathscr{H}\) is a hypothesis space. A quite standard requirement for the hypothesis set is to be a linear function space endowed with a suitable norm (more restrictively, one arising from an inner product): two requirements satisfied when \(\mathscr{H}\) is a Hilbert space of functions over \(X\)1. Assuming a Hilbert space structure on the hypothesis space has some advantages: (i) practical computations reduce to ordinary linear algebra operations, and (ii) self-duality; that is, for any \(x\in X\) a representative
of \(x\) can be found, i.e., \(\mathscr{R}_{x}\in\mathscr{H}\) exists such that
\[h(x)=\langle\mathscr{R}_{x},h\rangle\quad\text{ for all }h\in\mathscr{H}, \tag{2}\]
where \(\mathscr{R}_{x}\) is a suitable positive definite "kernel". This construction makes it possible to connect the abstract structure of \(\mathscr{H}\) with what its elements actually are, flipping the construction of the hypothesis set so that it starts from the kernel. Given a suitable positive function \(k\) on \(X\), \(\mathscr{H}\) can be defined as the minimal complete space of functions containing all \(\{k_{x}\}_{x\in X}\), equipped with the scalar product in (2). Thus, \(\mathscr{H}\) is determined in a unique way, and it is named the Reproducing Kernel Hilbert Space associated with the kernel \(k\).
Starting from this abstract scenario, one can examine HPO in supervised ML more closely. Formally, HPO can be formulated as the problem of minimizing the discrepancy between \(\mathscr{A}\), trained on a given training dataset \(D_{tr}\), and a validation dataset \(D_{val}\)[3], i.e., of finding the optimal \(\lambda^{*}\) such that
\[\lambda^{*}=\operatorname*{argmin}_{\lambda\in\Lambda}\mathscr{V}(\mathscr{A }(\lambda,D_{tr}),D_{val}),\quad\text{where}\quad\mathscr{V}:\mathscr{H}\times X \rightarrow\mathbb{R}. \tag{3}\]
In this study, we will address problem (3) through Gradient-based (GB) methods, by using a bi-level approach. Bi-level programming solves an outer optimization problem subject to the optimality of an inner optimization problem, and it can be adopted to formalize HPO for any learning algorithm [4; 5; 6; 7]. We will work on Hilbert spaces for solving HPO in unsupervised problems, considering the penalty coefficient as the hyperparameter. We already treated this aspect in the more specific case of the Nonnegative Matrix Factorization task, incurring some generalization problems and restrictions on the theorems' assumptions [8].
To overcome the difficulties in ensuring the theoretical assumptions when real data domains are considered, this work extends existence and uniqueness theorems for the solution of the hyperparameter bi-level problem to the more general framework of infinite-dimensional Hilbert spaces. The latter also allows the application of Ekeland's variational principle to state that, whenever a functional is not guaranteed to have a minimum, under suitable assumptions a "good" substitute can be found, namely the best one can get as an approximate minimum. One of the purposes of this paper is to use this theoretical tool as a stopping criterion for the update of the hyperparameters, as we will see later.
The outline of the paper is as follows. Section 2 introduces the classical bi-level formalization of HPO and some preliminary notions in a supervised context. Section 3 illustrates our proposal, an extension to the unsupervised context. A general framework addressing HPO in Hilbert spaces is also set up, and some general abstract tools are stated in Section 4. Section 5 presents a critical discussion and some practical considerations. Finally, Section 6 summarizes the obtained results and draws some conclusions.
## 2 Previous works and preliminaries
As briefly mentioned in the introduction, in a supervised learning scenario HPO can be addressed through a bi-level formulation. This approach looks for the hyperparameters \(\lambda\) such that the minimization of the regularized training error leads to the best performance of the trained data-driven model on a validation set. According to the ideas introduced in [9; 10], the best hyperparameters for a data learning task can be selected as the solution of the following problem:
\[\min\{J(\lambda):\lambda\in\Lambda\}, \tag{4}\] \[J(\lambda)=\inf\{\mathscr{E}(w_{\lambda},\lambda):w_{\lambda} \in\operatorname*{argmin}_{u\in\mathbb{R}^{r}}\mathscr{L}_{\lambda}(u)\}, \tag{5}\]
where \(w\in\mathbb{R}^{r}\) are \(r\) parameters, \(J:\Lambda\to\mathbb{R}\) is the so-called \(Response\ Function\) of the outer problem \(\mathscr{E}:\mathbb{R}^{r}\times\Lambda\to\mathbb{R}\), and, for every \(\lambda\in\Lambda\subset\mathbb{R}^{p}\), \(\mathscr{L}_{\lambda}:\mathbb{R}^{r}\to\mathbb{R}\) is the inner problem.
A reformulation of HPO as a bi-level optimization problem is also solved via some GB algorithms. In particular, in GB methods HPO is addressed with classical procedure for continuous optimization, in which the hyperparameter update is given by
\[\lambda_{t+1}=\lambda_{t}-\alpha\mathbf{h}_{t}(\lambda) \tag{6}\]
where \(\mathbf{h}_{t}\) is an approximation of the gradient of the function \(J\) and \(\alpha\) is a step size. It is known that the main challenge in this context is the computation of \(\mathbf{h}_{t}\), called the hypergradient. In several cases, a numerical approximation of the hypergradient can be calculated for real-valued hyperparameters, although few learning algorithms are differentiable in the classical sense.
There are two main strategies for computing the hypergradient: iterative differentiation [9; 11; 12] and implicit differentiation [13; 14]. The former requires calculating the exact gradient of an approximate objective. This is defined through the recursive application of an optimization dynamics that aims to replace and approximate the learning algorithm \(\mathscr{A}\); the latter involves the numerical application of the implicit function theorem to the solution mapping \(\mathscr{A}(D_{tr};\ \cdot)\), when it is expressible through an appropriate equation [2].
In this study, we follow the iterative strategy, so that problem in (4)-(5) can be addressed through a dynamical system type approach.
If the following hypotheses hold:
Hypothesis 1:
1. the set \(\Lambda\) is a compact subset of \(\mathbb{R}\);
2. the Error Function \(\mathscr{E}:\mathbb{R}^{r}\times\Lambda\to\mathbb{R}\) is jointly continuous;
3. the map \((w,\lambda)\to\mathscr{L}_{\lambda}(w)\) is jointly continuous, and the problem \(\operatorname*{argmin}\mathscr{L}_{\lambda}\) is a singleton for every \(\lambda\in\Lambda\);
4. the solution \(w_{\lambda}=\operatorname*{argmin}\mathscr{L}_{\lambda}\) remains bounded as \(\lambda\) varies in \(\Lambda\);
the problem in (4)-(5) becomes:
\[\min_{\lambda\in\Lambda}J(\lambda)=\mathscr{E}(w_{\lambda}^{*},\lambda),\quad w_{ \lambda}^{*}=\operatorname*{argmin}_{u}\mathscr{L}_{\lambda}(u). \tag{7}\]
It can be proved that the optimal solution \((w_{\lambda^{*}},\lambda^{*})\) of problem (7) exists [11]. The goal of HPO is to minimize the validation error of model \(g_{w}:X\to Y\), parameterized by a vector \(w\in\mathbb{R}^{r}\), with respect to hyperparameters \(\lambda\).
Considering penalty optimization problems in which the hyperparameter is the penalty coefficient \(\lambda\in\mathbb{R}_{+}\), the \(Inner\ Problem\) is the penalized empirical error, represented by \(\mathscr{L}\) and defined as:
\[\mathscr{L}_{\lambda}(w)=\sum_{(x,y)\in D_{tr}}\ell(g_{w}(x),y)+\lambda r(w), \tag{8}\]
where \(\ell\) is the loss function, \(D_{tr}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) is the training set, and \(r:\mathbb{R}^{r}\rightarrow\mathbb{R}\) is a penalty function. The \(Outer\ Problem\) is the generalization error of \(g_{w}\), represented by \(\mathscr{E}\):
\[\mathscr{E}(w,\lambda)=\sum_{(x,y)\in D_{val}}\ell(g_{w}(x),y), \tag{9}\]
where \(D_{val}=\{(\mathbf{x}_{i},y_{i})\}_{i=1}^{n}\) is the validation set. Note that \(\mathscr{E}\) does not explicitly depend on \(\lambda\).
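As a concrete toy example (our own, not taken from the paper), the following Python sketch instantiates the inner and outer objectives of (8)-(9) for a linear model with squared loss and a ridge penalty \(r(w)=\|w\|^{2}\), together with the resulting response function \(J\).

```python
# Inner and outer objectives of the bi-level HPO problem for a ridge-penalized
# linear model; `lam` plays the role of the penalty hyperparameter lambda.
import numpy as np

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(50, 5)), rng.normal(size=50)
X_val, y_val = rng.normal(size=(20, 5)), rng.normal(size=20)

def inner_loss(w, lam):
    """Penalized empirical error L_lambda(w) on the training set (Eq. 8)."""
    return np.sum((X_tr @ w - y_tr) ** 2) + lam * np.sum(w ** 2)

def inner_solution(lam):
    """Closed-form argmin of the ridge-penalized inner problem."""
    d = X_tr.shape[1]
    return np.linalg.solve(X_tr.T @ X_tr + lam * np.eye(d), X_tr.T @ y_tr)

def outer_loss(w):
    """Validation error E(w, lambda) (Eq. 9); it does not depend on lambda explicitly."""
    return np.sum((X_val @ w - y_val) ** 2)

def response(lam):
    """Response function J(lambda) of the bi-level problem (7)."""
    return outer_loss(inner_solution(lam))

print(response(0.1), response(1.0))
```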
This work will allow us to overcome some assumptions of Hypothesis 1 (such as compactness) that are difficult to satisfy in real data learning contexts, and also to use theoretical results such as Ekeland's variational principle, stated below, to improve iterative algorithms.
**Theorem 1** (Ekeland's variational principle [15]): _Let \((V,d)\) be a complete metric space and \(J:V\rightarrow\bar{\mathbb{R}}\) be a lower semi-continuous function which is bounded from below. Suppose that \(\varepsilon>0\) and \(\tilde{v}\in V\) exist such that_
\[J(\tilde{v})\leq\inf_{V}J+\varepsilon.\]
_Then, given any \(\rho>0\), \(v_{\rho}\in V\) exists such that_
\[J(v_{\rho})\leq J(\tilde{v}),\qquad d(v_{\rho},\tilde{v})\leq\frac{ \varepsilon}{\rho},\]
_and_
\[J(v_{\rho})<J(v)+\rho\,d(v_{\rho},v)\qquad\forall\;v\neq v_{\rho}.\]
## 3 Our Proposal
The bi-level HPO framework can be modified to include unsupervised learning paradigms, which are generally designed to detect useful latent structure embedded in the data. Tuning hyperparameters for unsupervised learning models is more complex than in the supervised case due to the lack of an output space, which would define the ground truth collected in a validation set.
This section describes a general framework to address HPO in Hilbert spaces for the unsupervised case and a corollary of the Ekeland's variational principle used to derive a useful stopping criterion for iterative algorithms solving this HPO.
Let \(X\in\mathbb{R}^{n\times m}\) be a data matrix. With reference to problem (4)-(5), where now \(J:\Lambda\to\mathbb{R}\) is a suitable functional and \(\Lambda\) is a Hilbert space equipped with the scalar product \((\cdot,\cdot)\), the outer problem is:
\[\mathscr{E}:\mathbb{R}^{r}\times\Lambda\to\mathbb{R}\qquad\mathscr{E}(w, \lambda)=\sum_{x\in X}\ell(g_{w}(x)), \tag{10}\]
and for every \(\lambda\in\Lambda\) the inner problem is:
\[\mathscr{L}:\mathbb{R}^{r}\to\mathbb{R}\qquad\mathscr{L}_{\lambda}(w)=\sum_{x \in X}\ell(g_{w}(x))+\mathscr{R}(\lambda,w), \tag{11}\]
where \(\mathscr{R}:\Lambda\times\mathbb{R}^{r}\to\mathbb{R}\) is a penalty function. We want to emphasize how this formulation differs from (8) regarding the function \(\mathscr{L}_{\lambda}\): each component of the parameter \(w\) is penalized independently, and all optimization is performed on the data matrix \(X\).
The bi-level problem associated with (10)-(11) can be solved with a dynamical-system approach in which the hypergradient is computed. Once the hypergradient is available, a gradient-based approach can be used to find the optimum \(\lambda^{*}\). Ekeland's variational principle can be used to construct an appropriate stopping criterion for iterative algorithms, with the aim of justifying and setting the hyperparameters related to the stopping criterion more appropriately. Roughly speaking, this variational principle asserts that, under assumptions of lower semi-continuity and boundedness from below, if a point \(\tilde{\lambda}\) is an "almost minimum point" for a function \(J\), then a small perturbation of \(J\) exists which attains its minimum at a point "near" \(\tilde{\lambda}\). A fruitful selection of \(\rho\) occurs when \(\rho=\sqrt{\varepsilon}\); such a choice allows us to reduce the number of hyperparameters to the precision error only, so we will use Theorem 1 in the following form.
**Corollary 1**: _Let \((\Lambda,d)\) be a complete metric space and \(J:\Lambda\to\bar{\mathbb{R}}\) be a lower semi-continuous function which is bounded from below. Suppose that \(\varepsilon>0\) and \(\tilde{\lambda}\in\Lambda\) exist such that_
\[J(\tilde{\lambda})\leq\inf_{\Lambda}J+\varepsilon.\]
_Then, \(\tilde{z}\in\Lambda\) exists such that_
\[J(\tilde{z})\leq J(\tilde{\lambda}),\qquad d(\tilde{z},\tilde{\lambda})\leq \sqrt{\varepsilon}.\]
_and_
\[J(\tilde{z})<J(\lambda)+\sqrt{\varepsilon}\,d(\tilde{z},\lambda)\quad\forall \;\lambda\neq\tilde{z}.\]
## 4 Main Abstract results
In this section, we weaken the assumptions discussed earlier and provide results related to the use of Ekeland's principle as a stopping criterion. We first mention an abstract result on the existence of a minimizer in Hilbert spaces, which has great importance and a wide range of applications in several fields. As just one example, the Riesz Representation Theorem, even if implicitly, makes use of the existence of a minimizer [16]. This is a widely relevant feature of Hilbert spaces, which makes them nicer than Banach spaces or other topological vector spaces. One can think, for example, that the whole Dirac bra-ket formalism of quantum mechanics relies on this identification.
### Abstract Existence Theorem
It is well known that each bounded sequence in a normed space \(\Lambda\) has a norm convergent subsequence if and only if it is a finite dimensional normed space. Thus, given a normed space \(\Lambda\), as the strong topology (i.e., the one induced by the norm) is too strong to provide any widely appropriate subsequential extraction procedure, one can consider other weak topologies joined with the linear structure of the space and look for subsequential extraction processes therein.
In Banach spaces, as well as in Hilbert spaces, the two most relevant weaker-than-norm topologies are the weak-star topology and the weak topology. While the former is defined on dual spaces, the latter is defined on every normed space. The notions of these topologies are not self-contained but fulfill a leading role in many features of Banach space theory. In this regard, here we state some results we will use shortly.
**Theorem 2**: _If \(\Lambda\) is a finite-dimensional space, the strong and weak topologies coincide. In particular, it follows that the weak topology is normable, and then clearly metrizable, too. If \(\Lambda\) is an infinite-dimensional space, the weak topology is strictly contained in the strong topology, namely open sets for the strong topology exist which are not open for the weak topology. Furthermore, the weak topology turns to be not metrizable in this case._
**Definition 1**: A functional \(J:\Lambda\to\bar{\mathbb{R}}\) with \(\Lambda\) topological space, is said to be lower semi-continuous on \(\Lambda\) if for each \(a\in\mathbb{R}\), the sublevel sets
\[J^{-1}(]-\infty,a])=\{\lambda\in\Lambda:J(\lambda)\leq a\}\]
are closed subsets of \(\Lambda\).
In the following we introduce a "generalized Weierstrass Theorem" which gives a criterion for the existence of a minimum of a functional defined on a Hilbert space. For this reason, the forthcoming results will be provided in the abstract framework of a Hilbert space although, in some cases, they apply in
the more general context of Banach spaces. Thus, throughout the remaining part of this section we denote by \(\Lambda\) any real infinite dimensional Hilbert space. In an infinite dimensional setting, the following definitions are strictly related to the different notions of weak and strong topology.
**Definition 2**: A functional \(J:\Lambda\to\bar{\mathbb{R}}\) is said to be strongly (weakly, respectively) lower semi-continuous if \(J\) is lower semi-continuous when \(\Lambda\) is equipped with the strong (weak, respectively) topology.
**Definition 3**: A functional \(J:\Lambda\to\bar{\mathbb{R}}\) is said to be strongly (weakly, respectively) sequentially lower semi-continuous if
\[\liminf_{n\to+\infty}J(\lambda_{n})\geq J(\lambda)\]
for any sequence \((\lambda_{n})_{n}\subset\Lambda\) such that \(\lambda_{n}\to\lambda\) (\(\lambda_{n}\rightharpoonup\lambda\), respectively).
We proceed by providing some useful results.
**Proposition 3**: _The following statements are equivalent:_
1. \(J:\Lambda\to\mathbb{R}\) _is sequentially weakly lower semi-continuous functional;_
2. _the epigraph of_ \(J\) _is weakly sequentially closed, where, by definition, it is_ \[\operatorname{epi}(J)=\{(\lambda,t)\in\operatorname{dom}(J)\times\mathbb{R}:J (\lambda)\leq t\}.\]
_Remark 1_: As a further consequence of the preliminary Theorem 2, we have that sequential weak lower semi-continuity and weak lower semi-continuity do not match if \(\Lambda\) is infinite dimensional since weak topology is not metrizable. However, the weaker concept of sequential weak lower semi-continuity meets our needs.
**Proposition 4**: _Let \(\mathscr{C}\subseteq\Lambda\) be a closed and convex subset. Then, \(\mathscr{C}\) is weakly sequentially closed, too._
Since a sequentially weakly closed set is also strongly closed, it follows that a sequentially weakly lower semi-continuous functional is also (strongly) lower semi-continuous. Instead, the converse holds under an additional assumption. In particular, Proposition 4 allows us to infer the following results.
**Proposition 5**: _If \(J:\Lambda\to\mathbb{R}\) is a strongly lower semi-continuous convex functional, then \(J\) is weakly sequentially lower semi-continuous, too._
_Proof_ Since \(J\) is lower semi-continuous, \(\operatorname{epi}(J)\) is closed. On the other hand, since \(J\) is convex, so is \(\operatorname{epi}(J)\), whence Proposition 4 ensures that \(\operatorname{epi}(J)\) is weakly sequentially closed, i.e., \(J\) is weakly sequentially lower semi-continuous.
Thus, we are able to state the main result of this section.
**Theorem 6**: _Let \(\mathscr{C}\subset\Lambda\) be a non-empty, closed, bounded and convex subset. Let \(J:\Lambda\to\mathbb{R}\) be a lower semi-continuous and convex functional. Thus \(J\) achieves its minimum in \(\mathscr{C}\), i.e., \(\bar{\lambda}\in\mathscr{C}\) exists such that \(J(\bar{\lambda})=\inf\limits_{\lambda\in\mathscr{C}}J(\lambda)\)._
Let \(m:=\inf\limits_{\lambda\in\mathscr{C}}J(\lambda)\); hence \((\lambda_{n})_{n}\subset\mathscr{C}\) exists such that
\[J(\lambda_{n})\to m\quad\text{ as }n\to+\infty. \tag{12}\]
Now, our boundedness assumption on \(\mathscr{C}\) implies that, up to subsequences, \(\lambda\in\Lambda\) exists such that \(\lambda_{n}\rightharpoonup\lambda\) as \(n\to+\infty\). Since \(\mathscr{C}\) is a closed and convex subset of \(\Lambda\), Proposition 4 applies, which guarantees that \(\lambda\in\mathscr{C}\).
Finally, from (12), Proposition 5 and Definition 3 we infer that \(J(\lambda)\leq m\); hence \(\bar{\lambda}:=\lambda\) gives the desired result.
If every closed and bounded subset in a metric space is compact, the space is said to have the Heine-Borel property. This property holds in every finite dimensional normed space but, in general, may not be true.
We observe that Theorem 6 still holds if the subset \(\mathscr{C}\) is not bounded, as long as we add an assumption on the functional \(J\). In fact, if we require \(J\) to be coercive2 (and at least one \(\bar{\lambda}\in\mathscr{C}\) exists such that \(J(\bar{\lambda})<+\infty\)), then any minimizer of \(J\) on \(\mathscr{C}\) necessarily lies in some closed ball of radius \(r>0\). Indeed, since \(J(\bar{\lambda})<+\infty\), any minimizer \(\lambda\) of \(J\) must have \(J(\lambda)\leq J(\bar{\lambda})\); furthermore, since \(J\) is coercive, a sufficiently large radius \(r>0\) exists such that \(J(\lambda)>J(\bar{\lambda})\) for all \(\lambda\in\mathscr{C}\) with \(\|\lambda\|>r\). Thus, any minimizer, if it exists, lies in the ball \(\{\lambda\in\mathscr{C}:\|\lambda\|\leq r\}\).
Footnote 2: We say that a functional \(J:H\to\mathbb{R}\) is coercive if \(J(u)\to\infty\) as \(\|u\|\to\infty,u\in H\).
In particular, Theorem 6 applies to the intersection between \(\mathscr{C}\) and a closed ball of suitable radius, since this intersection is closed, bounded and convex whenever \(\mathscr{C}\) is closed and convex.
Namely, the following result holds.
**Corollary 2**: _Let \(\mathscr{C}\subset\Lambda\) be a non-empty, closed and convex subset. Let \(J:\Lambda\to\mathbb{R}\) be a lower semi-continuous, convex and coercive functional. Thus \(J\) achieves its minimum, i.e., \(\bar{\lambda}\in\mathscr{C}\) exists such that \(J(\bar{\lambda})=\inf\limits_{\lambda\in\mathscr{C}}J(\lambda)\)._
Now we introduce a couple of results which are a direct consequence of Ekeland's variational principle. For the sake of completeness, here we provide them with all the details (see [17] for the original statements). Let \(\Lambda\) be a complete metric space and \(J:\Lambda\to\mathbb{R}\) be a lower semicontinuous response function on \(\Lambda\). Suppose that a point \(\lambda\in\Lambda\) exists such that \(J(\lambda)<+\infty\). Thus, the following results hold.
**Theorem 7** (Perturbation Result): _Let \(J_{\lambda}:\Lambda\to\bar{\mathbb{R}}\) be a lower semicontinuous function such that the inequality_
\[|J_{\lambda}(\gamma)-J(\gamma)|\leq\zeta(d(\gamma,\lambda))\quad\text{holds} \quad\forall\gamma\in\Lambda, \tag{13}\]
_where \(J_{\lambda}(\cdot)\) denote model function3, \(\zeta\) is some growth function4, and let \(\lambda^{+}\) be a minimizers of \(J_{\lambda}.\) If \(\lambda^{+}\) coincides with \(\lambda\), then \(|\nabla J(\lambda)|=0.\) On the other hand, if \(\lambda\) and \(\lambda^{+}\) are distinct, then a point \(\hat{\lambda}\in X\) exists which satisfies_
Footnote 3: By \(model\ function\) we mean the Taylor expansion of \(J\) at \(\lambda\), truncated at the first order.
Footnote 4: A differentiable univariate function \(\zeta:\mathbb{R}_{+}\to\mathbb{R}_{+}\) is called a growth function if it satisfies \(\zeta(0)=\zeta^{\prime}(0)=0\) and \(\zeta^{\prime}>0\) on \((0,+\infty).\) If in addition, equalities \(\lim\limits_{t\to 0}\zeta^{\prime}(t)=\lim\limits_{t\to 0}\zeta(t)/\zeta^{ \prime}(t)=0\) hold, we say that \(\zeta\) is a proper growth function.
1. \(d(\lambda^{+},\hat{\lambda})\leq 2\cdot\frac{\zeta(d(\lambda^{+},\lambda))}{ \zeta^{\prime}(d(\lambda^{+},\lambda))}\) _(point proximity)_
2. \(J(\hat{\lambda})\leq J(\lambda^{+})+\zeta(d(\lambda^{+},\lambda))\) _(value proximity)._
_Proof_ By Taylor's theorem it is simple to verify that \(|\nabla J_{\lambda}|(\lambda)=|\nabla J|(\lambda).\)
Now, if \(\lambda^{+}=\lambda\), then \(\lambda\) minimizes \(J_{\lambda}\) and therefore \(|\nabla J(\lambda)|=0.\) On the other hand, if \(\lambda^{+}\neq\lambda,\) from inequality (13) and the definition of \(\lambda^{+},\) it follows that
\[J(\gamma)\geq J_{\lambda}(\lambda^{+})-\zeta(d(\gamma,\lambda)).\]
Let us define the new function
\[G(\gamma):=J(\gamma)+\zeta(d(\gamma,\lambda)).\]
Thus, from assumption (13) and inequality \(\inf G\geq J_{\lambda}(\lambda^{+})\) we infer that
\[G(\lambda^{+})-\inf G\leq J(\lambda^{+})-J_{\lambda}(\lambda^{+})+\zeta(d( \lambda^{+},\lambda))\leq 2\zeta(d(\lambda^{+},\lambda)).\]
Whence, Theorem 1 applies and, having \(\varepsilon:=2\zeta(d(\lambda^{+},\lambda)),\) for all \(\rho>0\)\(\lambda_{\rho}\) exists such that
\[G(\lambda_{\rho})\leq G(\lambda^{+})\quad\text{ and }\quad d(\lambda^{+}, \lambda_{\rho})\leq\frac{\varepsilon}{\rho}.\]
The desired result follows simply by placing \(\rho=\zeta^{\prime}(d(\lambda^{+},\lambda))\) with \(\hat{\lambda}=\lambda_{\rho}.\)\(\square\)
An immediate consequence of Theorem 7 is the following subsequence convergence result.
**Corollary 3** (Subsequence convergence to stationary points): _Consider a sequence of points \(\lambda_{k}\) and closed functions \(J_{\lambda_{k}}:\Lambda\to\bar{\mathbb{R}}\) satisfying \(\lambda_{k+1}=\operatorname*{argmin}_{\gamma}J_{\lambda_{k}}(\gamma)\) and \(d(\lambda_{k+1},\lambda_{k})\to 0.\) Moreover suppose that the inequality_
\[|J_{\lambda_{k}}(\gamma)-J(\gamma)|\leq\zeta(d(\lambda_{k},\gamma))\quad \text{holds}\quad\forall k\in\mathbb{N}\quad\text{and}\quad\gamma\in\Lambda, \tag{14}\]
_where \(\zeta\) is a proper growth function. If \((\lambda^{*},J(\lambda^{*}))\) is a limit point of the sequence \((\lambda_{k},J(\lambda_{k}))\), then \(\lambda^{*}\) is stationary for \(J\)._
Two interesting consequences for convergence analysis follow from this. Suppose that the models are chosen in such a way that the step-sizes \(\|\lambda_{k+1}-\lambda_{k}\|\) tend to zero. This assumption is often enforced by ensuring that \(J(\lambda_{k+1})<J(\lambda_{k})\) by at least a multiple of \(\|\lambda_{k+1}-\lambda_{k}\|^{2}\) (sufficient decrease condition).
Then, assuming for simplicity that \(J\) is continuous on its domain, any limit point \(\lambda^{*}\) of the iterate sequence \(\lambda_{k}\) will be stationary for the problem (Corollary 3).
Thus, by choosing an error \(\varepsilon\), we can stop the update (6) of GB algorithms, in the context of bi-level HPO for the penalty hyperparameter, according to the pseudo-code in Algorithm 1.
```
Require: error \(\varepsilon\); starting points \(\lambda_{0}\), \(\lambda_{1}\)
Ensure: optimum \(\lambda^{*}\)
1: while \(\|\lambda_{t}-\lambda_{t-1}\|>\varepsilon\) do
2:    compute the hypergradient \(\mathbf{h}_{t}(\lambda_{t})\)
3:    update \(\lambda_{t+1}=\lambda_{t}-\alpha\mathbf{h}_{t}(\lambda_{t})\)
4:    \(t\gets t+1\)
5: end while
```
**Algorithm 1** Pseudo-code
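A minimal numerical sketch of this loop is given below. It assumes a user-supplied `hypergrad` callable standing in for the hypergradient \(\mathbf{h}_{t}(\lambda_{t})\) of update (6), whose exact form depends on the bi-level problem and is not reproduced here; the step size `alpha`, the tolerance `eps`, the `max_iter` safeguard, and the toy quadratic in the usage example are illustrative choices rather than values from the paper.

```python
import numpy as np

def gb_update_with_stopping(hypergrad, lam0, lam1, alpha=0.1, eps=1e-4, max_iter=1000):
    """Gradient-based hyperparameter update with the stopping rule of Algorithm 1.

    `hypergrad(lam)` is assumed to return an (approximate) hypergradient h_t(lam);
    its exact form depends on the bi-level problem and is not specified here.
    """
    lam_prev, lam = np.asarray(lam0, float), np.asarray(lam1, float)
    t = 1
    while np.linalg.norm(lam - lam_prev) > eps and t < max_iter:
        h = hypergrad(lam)                    # approximate hypergradient at the current iterate
        lam_prev, lam = lam, lam - alpha * h  # gradient-based update
        t += 1
    return lam

# toy usage: surrogate objective J(lam) = ||lam - 1||^2, so h(lam) = 2(lam - 1)
lam_star = gb_update_with_stopping(lambda lam: 2.0 * (lam - 1.0),
                                   lam0=np.zeros(3), lam1=0.5 * np.ones(3))
print(lam_star)  # close to [1, 1, 1]
```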
## 5 Discussion and practical considerations
We want to emphasize that moving to infinite-dimensional Hilbert spaces is not a mere abstract pretense; it is also important in some application contexts. For example, when Support Vector Machines (SVMs) are taken into consideration, the well-known "kernel trick" makes it possible to interpret a Gaussian kernel as an inner product in a feature space. This space is potentially infinite-dimensional, allowing one to read the SVM classifier as a linear function in the feature space [18]. Another example is provided by the problem of describing the possible states of a quantum system, in which the state of a free particle can be described as a vector residing in a complex separable Hilbert space [19].
Indeed, the strength of this article lies in its theory. Both the existence theorem and the stopping criterion allow us to build an approach based on solid mathematical foundations, useful for future extensions and generalizations to other problems as well. For example, infinite-dimensional Covariance Descriptors (CovDs) for classification are a fertile application arena for the extensions developed here. This is motivated by the fact that CovDs can be mapped to a Reproducing Kernel Hilbert Space (RKHS) via SPD-specific kernels [20].
## 6 Conclusions
In this paper, we studied the task of penalty HPO and we provided a mathematical formulation, based on Hilbert spaces, to address this issue in an unsupervised context.
Focusing on the bi-level formulation, we established relaxed theoretical results that weaken the hypotheses necessary for the existence of a solution.
Our approach differs from more standard techniques in that it reduces reliance on random or black-box strategies, providing a stronger mathematical generalization that remains suitable when an exact minimizer cannot be obtained.
We also propose to use Ekeland's principle as a stopping criterion, which fits well in the context of GB methods.
## Declarations
* **Funding:** The author F. E. was funded by REFIN Project, grant number 363BB1F4, Reference project idea UNIBA027 "Un modello numerico-matematico basato su metodologie di algebra lineare e multilineare per l'analisi di dati genomici". The author C. S. was partially supported by PRIN project "Qualitative and quantitative aspects of nonlinear PDEs" (2017JPCAPN_005) funded by Ministero dell'Istruzione, dell'Universita e della Ricerca.
* **Conflict of interest:** The authors have no relevant financial or non-financial interests to disclose.
* **Data availability:** Data sharing not applicable to this article as no datasets were generated or analysed during the current study.
|
2309.08026 | Determinants of successful mitigation in coupled social-climate dynamics | Understanding the impact of human behavior is crucial for successful
mitigation of climate change across the globe. To shed light onto this issue,
here we couple the forest dieback model with human behaviors. Using
evolutionary game theory, we build a time-delay system where forest growth is
impacted by both temperature and human mitigation choices, the latter being
informed by temperature forecasts. Simulations of the coupled system over 200
years show us the varying outcomes: forest dies out and no one is a mitigator,
forest dies out and everyone is a mitigator, or the forest survives and
everyone is a mitigator. There exist rare cases where no one is a mitigator and
yet the forest survives, but with a low coverage. We also find occasional
oscillations where the proportion of mitigators vary between 0 and 1. Our
results are based on simple models but have profound insights into determinants
of behavior changes desired in social-climate dynamics. | Longmei Shu, Feng Fu | 2023-09-14T20:57:16Z | http://arxiv.org/abs/2309.08026v1 | # Determinants of successful mitigation in coupled social-climate dynamics
###### Abstract
Understanding the impact of human behavior is crucial for successful mitigation of climate change across the globe. To shed light onto this issue, here we couple the forest dieback model with human behaviors. Using evolutionary game theory, we build a time-delay system where forest growth is impacted by both temperature and human mitigation choices, the latter being informed by temperature forecasts. Simulations of the coupled system over 200 years show us the varying outcomes: forest dies out and no one is a mitigator, forest dies out and everyone is a mitigator, or the forest survives and everyone is a mitigator. There exist rare cases where no one is a mitigator and yet the forest survives, but with a low coverage. We also find occasional oscillations where the proportion of mitigators vary between 0 and 1. Our results are based on simple models but have profound insights into determinants of behavior changes desired in social-climate dynamics.
keywords: social-climate models, evolutionary game theory, climate change +
Footnote †: journal: Proceedings of the Royal Society A
## 1 Introduction
Climate change, or global warming is one of many public goods games where social cooperation can play a very important role in averting potential catastrophic scenarios [1; 2; 3; 4]. Typically such collective-risk social dilemmas have complicated dynamics [5; 6; 7; 8]. Experiments with real people and money reward were also carried out to see how people would act in threshold climate games [9; 10]. Field experiments in multiple countries around the world [11] give us some insights on the climate problem as well.
Models increasingly help us understand the interactions between the carbon cycle, the climate system, human processes, and the impact of policies [12]. To ensure those policy decisions are robust to uncertainties, multiple scenarios are often laid out, ranging from carbon emission trajectories to socio-economic systems pathways. Influence in these models usually flows in one-direction, from socio-economic systems to the Earth system. Yet, two-way feedback mechanisms link climate and social processes: human behavior changes the climate, and the climate changes human opinions and consequently human behavior. Coupled human-environment models are already widely applied to study other systems such as fisheries and forests and the need for coupled social-climate models has been noted [13; 14; 15]. Such eco-evolutionary models where the environment changes depending on the actions of the players again adds to the rich dynamics of the system [16; 17].
Many social-climate models are complicated and we don't have a good intuition of the dynamics over time. Here we focus on a simple climate model, the forest dieback model, and couple it with a replicator equation that determines the percentage of mitigators in the population based on the warming over the past 10 years. So the social part of our model has a time-delay effect, and the growth rate of the forest depends on the percentage of mitigators in the population. We study the behavior of this coupled social-climate model under changing parameter values, the ambient temperature (or background temperature of the environment), and a sensitivity to warming. The sensitivity to warming decides how likely people are going to adopt mitigative behavior based on current warming. We solve our delay system over 200 years and see if the forest dies out or survives, and how much of the population mitigates climate change.
Since our model is relatively simple, we are able to analyze the equilibrium points well and see how they change when the parameters change. We run simulations over a wide range of parameters to see all possible rich dynamics of our model. Our findings, though derived from simple models, offer deep understanding into the factors influencing mitigation choices in the context of social-climate interactions.
## 2 Methods and Model
### Forest dieback model
We consider the climate and temperature for Amazonian forests. The vegetation coverage \(v\) is between 0 and 1, with 0 for bare soil and 1 for full
forest coverage. Then \(v\) satisfies the following equation
\[\frac{dv}{dt}=gv(1-v)-\gamma v. \tag{1}\]
Here \(\gamma=0.2\) is the disturbance rate [18] and \(g\) is the growth rate. The equilibrium points of the system have to satisfy
\[gv(1-v)=\gamma v,\quad v=0\text{ or }v=1-\frac{\gamma}{g}.\]
The fixed points under varying values of \(g\) are plotted in Figure 1. We can see that the vegetation coverage has a positive, robust, stable fixed point when \(g>0.2\).
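As a quick numerical illustration of this statement (not part of the original analysis), the sketch below evaluates the positive equilibrium \(v^{*}=1-\gamma/g\) over a range of growth rates, using \(\gamma=0.2\) as above; the grid of \(g\) values is an arbitrary choice.

```python
import numpy as np

gamma = 0.2                      # disturbance rate
g = np.linspace(0.05, 2.0, 200)  # growth rates to scan

# the positive equilibrium v* = 1 - gamma/g exists (and is stable) only when g > gamma
v_star = np.where(g > gamma, 1.0 - gamma / g, 0.0)

for gi, vi in zip(g[::40], v_star[::40]):
    print(f"g = {gi:.2f}  ->  v* = {vi:.3f}")
```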
In the forest dieback model \(g\) is given by
\[g=g_{0}\left[1-\left(\frac{T-T_{opt}}{\beta}\right)^{2}\right]. \tag{2}\]
Figure 1: Equilibrium under varying growth rate \(g\)
Here \(g_{0}=2\) is the maximum growth rate, and \(\beta=10\) is the half-width of the growth versus temperature curve. \(T_{opt}=28^{\circ}\)C represents the optimal temperature for plant growth.
\(T\) is the actual temperature and is given by
\[T=T_{v}+(1-v)a. \tag{3}\]
Here \(a=5\) is the difference between surface temperature of bare soil and forest and \(T_{v}\) is the temperature with full forest coverage. For convenience, we will call it the ambient temperature from here on.
Plugging (2) and (3) into (1), we find that the fixed points of the forest dieback model satisfy
\[v\left\{2(1-v)\left[1-\left(\frac{T_{v}+a(1-v)-T_{opt}}{\beta}\right)^{2} \right]-\gamma\right\}=0. \tag{4}\]
Plugging in all the parameter values, we get
\[v=0\text{ or }(1-v)[1-0.01(T_{v}-23-5v)^{2}]=0.1.\]
Equilibrium points for varying values of \(T_{v}\) are shown in Figure 2. We can see bifurcation [19; 5; 20] happens and we have two different fixed points when the ambient temperature \(T_{v}\) is between \(32.5^{\circ}\)C and \(34.7^{\circ}\)C. The green dashed line represents the unstable fixed points \(v=0\), the blue curve represents the stable and robust fixed points, and the red dashed curve represents fixed points in the shaded bifurcation region.
Figure 2: Equilibrium under varying ambient temperature
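A rough way to reproduce the equilibrium branches of Figure 2 numerically is to scan the fixed-point condition for sign changes, as in the sketch below. The grid resolution and the sample values of \(T_{v}\) are illustrative assumptions; a proper root finder could be substituted.

```python
import numpy as np

def positive_equilibria(Tv, n=20001):
    """Roots v in (0, 1] of (1 - v)[1 - 0.01*(Tv - 23 - 5v)^2] - 0.1 = 0,
    located by detecting sign changes on a fine grid."""
    v = np.linspace(0.0, 1.0, n)
    h = (1.0 - v) * (1.0 - 0.01 * (Tv - 23.0 - 5.0 * v) ** 2) - 0.1
    roots = []
    for i in range(n - 1):
        if h[i] == 0.0 or h[i] * h[i + 1] < 0.0:
            roots.append(0.5 * (v[i] + v[i + 1]))
    return roots

for Tv in [25.0, 31.0, 33.0, 34.0, 35.0]:
    print(Tv, positive_equilibria(Tv))   # one root at moderate Tv, two in the bifurcation region, none beyond
```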
### Human actions
We divide the population into two groups, mitigators (M) and nonmitigators (N). The proportion of mitigators is \(x\) and the proportion of nonmitigators is \(y\), so \(0\leq x,y\leq 1\) and \(x+y=1\). The payoffs of mitigators and nonmitigators are
\[E_{M} =-\alpha+\frac{1}{2}f(T_{f})+\delta x,\] \[E_{N} =-\frac{1}{2}f(T_{f})+\delta y.\]
Here \(\alpha=1\) is the cost of mitigation, and \(\delta=1\) is the strength of social norms. [21]
Let \(\bar{E}\) be the average payoff in the population, then the proportion of mitigators satisfies the following replicator equation.
\[\frac{dx}{dt} =x(E_{M}-\bar{E})\] \[=x(E_{M}-xE_{M}-yE_{N})\] \[=x(1-x)(E_{M}-E_{N})\] \[=x(1-x)[-\alpha+f(T_{f})+\delta(2x-1)]\]
So far we haven't explained what \(f(T_{f})\) is yet. This term corresponds to the cost of global warming and satisfies the following equation.
\[f(T_{f})=\frac{f_{max}}{1+e^{-w(T_{f}-T_{c})}} \tag{5}\]
Here \(f_{max}=5\) is the maximum warming cost, \(w=3\) is the non-linearity of warming cost, and \(T_{c}\) is the critical temperature [21]. \(T_{f}\) is the perceived temperature rise given by
\[T_{f}(t)=\frac{t_{f}}{t_{p}}[T(t)-T(t-t_{p})] \tag{6}\]
Here \(t_{p}=10\) is the number of previous years used for temperature projection, \(t_{f}=15\) is the number of years ahead for temperature projection.
Plugging (5) and (6) into the replicator equation, we get
\[\frac{dx}{dt}=x(1-x)\left[\delta(2x-1)-\alpha+\frac{f_{max}}{1+e^{-w[t_{f}/t_{ p}T(t)-t_{f}/t_{p}T(t-t_{p})-T_{c}]}}\right]. \tag{7}\]
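The following sketch implements the warming cost (5) and the replicator right-hand side (7) with the parameter values stated above (\(f_{max}=5\), \(w=3\), \(\alpha=\delta=1\)); the sample inputs in the final line are arbitrary and only meant to illustrate the sign of \(dx/dt\).

```python
import numpy as np

f_max, w, alpha, delta = 5.0, 3.0, 1.0, 1.0

def warming_cost(Tf, Tc):
    """Cost of global warming, Eq. (5)."""
    return f_max / (1.0 + np.exp(-w * (Tf - Tc)))

def replicator_rhs(x, Tf, Tc):
    """dx/dt for the proportion of mitigators, Eq. (7), given the perceived warming Tf."""
    return x * (1.0 - x) * (delta * (2.0 * x - 1.0) - alpha + warming_cost(Tf, Tc))

print(replicator_rhs(0.5, Tf=3.0, Tc=2.5))  # positive: mitigation spreads under strong perceived warming
```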
### Coupled social-climate model
In the previous section, we already considered how the temperature affects human actions. Now we will try to consider the effects of human actions on the climate as well. We will modify the growth rate given by (2) by multiplying it by a factor that depends on \(x\), the proportion of mitigators,
\[g=g_{0}\left[1-\left(\frac{T-T_{opt}}{\beta}\right)^{2}\right]\eta(x).\]
Obviously we want \(\eta(x)\) to increase when \(x\) increases, since the more mitigators there are, the better the forest grows. To make sure that the change in the proportion of mitigators has an observable effect in the forest coverage, we set \(\eta(x)=0.2+0.4x\) so that when \(x\) changes from 0 to 1, roughly \(g\) changes between 0.4 and 1.2, where the equilibrium forest coverage is sensitive to the growth rate, see Figure 1.
Plugging in the values of all parameters other than \(T_{v}\) and \(T_{c}\), the two equations we are coupling are
\[\begin{cases}\frac{dv}{dt}&=2[1-0.01(T_{v}-23-5v)^{2}](0.2+0.4x)v(1-v)-0.2v,\\ \\ \frac{dx}{dt}&=x(1-x)\left[2x-2+\frac{5}{1+e^{-3[-7.5v(t)+7.5v(t-10)-T_{c}]}} \right].\end{cases} \tag{8}\]
This is a system of autonomous nonlinear delay differential equations.
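For later reference, the sketch below writes out the right-hand side of system (8), treating the delayed coverage \(v(t-10)\) as an explicit argument; it is a plain transcription of (8) and makes no claim about how the authors' solver is organized.

```python
import numpy as np

def coupled_rhs(v, x, v_delay, Tv, Tc):
    """Right-hand side of system (8); v_delay stands for v(t - 10)."""
    growth = 2.0 * (1.0 - 0.01 * (Tv - 23.0 - 5.0 * v) ** 2) * (0.2 + 0.4 * x)
    dv = growth * v * (1.0 - v) - 0.2 * v
    warming_cost = 5.0 / (1.0 + np.exp(-3.0 * (-7.5 * v + 7.5 * v_delay - Tc)))
    dx = x * (1.0 - x) * (2.0 * x - 2.0 + warming_cost)
    return dv, dx
```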
## 3 Theoretical Analysis
We now turn to analyze the coupled social-climate model in Eq. 8. Let's look at the fixed points of this system. For \(dv/dt=0\) we get either \(v=0\) or
\[h(v,x,T_{v})=2(1-v)(0.2+0.4x)[1-0.01(T_{v}-23-5v)^{2}]-0.2=0. \tag{9}\]
\(h(v,x,T_{v})\) depends on the value of \(x\) smoothly. As we vary the value of \(x\) through \([0,1]\), we expect the curve defined by (9) for \(T_{v}\) and \(v\) to change smoothly. We show the curves for \(x=0\) and \(x=1\) in Figure 3. As one can see, the solution curve is well defined for \(v\) in \((0,1)\) for a given value of
\(T_{v}\), up to a certain point, beyond which bifurcation happens and one ambient temperature \(T_{v}\) corresponds to two different values of \(v\), so that a small change in \(T_{v}\) can result in a dramatic change in \(v\). For \(x=0\), bifurcation happens when \(T_{v}\) is between \(30^{\circ}\)C and \(30.2^{\circ}\)C. For \(x=1\), bifurcation happens when \(T_{v}\) is between \(32.1^{\circ}\)C and \(33.7^{\circ}\)C.
One can check that above the blue curve \(h(v,x,T_{v})<0\) and below the blue curve \(h(v,x,T_{v})>0\), so \(v=0\) is unstable and the fixed point determined by the blue curve is stable and robust.
To have a better understanding of our equation, we also show plots for the equilibrium points under different fixed ambient temperatures. We still expect the relationship curve to change smoothly as the ambient temperature changes. For ambient temperatures between \(18^{\circ}\)C and \(30^{\circ}\)C, the curve looks similar to the case where \(T_{v}=25^{\circ}\)C in figure 4. Below this curve, \(h(v,x,T_{v})>0\), and above the curve \(h(v,x,T_{v})<0\). As the ambient temperature lowers, this curve shifts down until it disappears from our phase space around \(14^{\circ}\)C. However, as the ambient temperature rises, the curve doesn't rise up and disappear. Instead we see a horizontal parabola-like curve that moves up and right until it disappears from the right side of our phase space around \(34^{\circ}\)C. To show this process, we plot the curve under ambient temperatures \(30^{\circ}\)C, \(31^{\circ}\)C, \(32^{\circ}\)C, and \(33^{\circ}\)C in Figure 4.
Figure 3: Equilibrium under varying ambient temperature and mitigation
As one can see, when the temperature is higher than \(30^{\circ}\)C, inside of the parabola-like curve \(dv/dt>0\), and outside of the parabola-like curve \(dv/dt<0\). This gives us a hysteresis loop and backward bifurcation. The blue curve represents stable fixed points and the red dashed curve represents unstable fixed points even though they are technically the same curve, i.e. they are determined by the same equation. Below a certain vegetation coverage, the fixed point is unstable and above a certain vegetation coverage, the fixed point is stable.
Notice that the green straight line \(v=0\) is sometimes dashed and sometimes solid: the solid parts correspond to stable fixed points and the dashed parts correspond to unstable fixed points.
Figure 4: Equilibrium under varying mitigation and ambient temperature
For \(dx/dt=0\) we get \(x=0\), \(x=1\), or
\[x^{*}(T_{c})=1-\frac{2.5}{1+e^{3T_{c}}}. \tag{10}\]
For \(x^{*}\) to be between 0 and 1, we need \(T_{c}>1/3\ln(1.5)\approx 0.135^{\circ}C\). And for the default value listed in [18], where \(T_{c}=2.5^{\circ}C\), we have \(x^{*}(2.5)\approx 0.9986\), very close to 1. Among the three fixed point values, \(x=0\) and \(x=1\) are stable and \(x^{*}(T_{c})\) is unstable. We show a graph of (10) in Figure 5. As one can see \(x^{*}\) approaches 1 as soon as the critical warming reaches around \(2^{\circ}\)C.
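A two-line check of (10) and of the quoted value \(x^{*}(2.5)\approx 0.9986\) (purely illustrative):

```python
import numpy as np

def x_star(Tc):
    """Interior equilibrium of the mitigator fraction, Eq. (10)."""
    return 1.0 - 2.5 / (1.0 + np.exp(3.0 * Tc))

print(x_star(2.5))                # approximately 0.99862, matching the value quoted in the text
print(x_star(np.log(1.5) / 3.0))  # approximately 0: threshold below which x* leaves (0, 1)
```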
We give a qualitative phase portrait of the system when (9) has a well defined solution for \(v\), which we will call \(v^{*}(x,T_{v})\). In Figure 6 (a), we have 6 fixed points and \(v^{*}\) is on a red curve that moves smoothly when \(T_{v}\) changes. When \(T_{v}\) decreases, the red curve moves down. For example, when \(T_{v}=15^{\circ}\)C, the left endpoint of the curve will move below 0, leaving us 5 fixed points instead of 6, see Figure 6 (b). As the curve moves further down we will have 4 fixed points, when the curve only intersects \(x=1\). When \(T_{v}\leq 13^{\circ}\)C, the curve will move out of the phase space, leaving us 3 fixed points, \((0,0),(1,0)\) and \((x^{*},0)\).
Figure 5: Equilibrium under varying critical warming
On the other hand, when \(T_{v}\) increases, instead of moving up, the red curve moves up and to the right, showing a horizontal parabola that opens to the right. We will run into a 5-fixed-point case similar to phase portrait 6 (b). When \(T_{v}=33^{\circ}\)C, we see a horizontal parabola which gives us 7 fixed points, see Figure 6 (c). When \(T_{v}\geq 34^{\circ}\)C the red curve moves out of our \([0,1]\times[0,1]\) phase space, leaving us with the 3 fixed points on the horizontal axis, \((0,0),(1,0)\) and \((x^{*},0)\).
Figure 6: Qualitative phase portrait
In phase portrait 6 (a), we can see that only two of the fixed points are stable, namely \((0,v^{*}(0,T_{v}))\) and \((1,v^{*}(1,T_{v}))\). All the other fixed points are unstable. When \(T_{v}\) decreases and the red curve moves down, fixed point \((0,v^{*}(0,T_{v}))\) disappears and \((0,0)\) becomes stable. This stays true until \((1,v^{*}(1,T_{v}))\) disappears and \((1,0)\) becomes stable.
Now when \(T_{v}\) increases, the red curve becomes a parabola, starting somewhere between \(0\) and \(x^{*}\) on the \(x-\)axis. It goes up and then right, still giving us 5 fixed points. Although the shape of the curve is different from phase portrait 6 (b), the stability analysis stays the same, so \((0,0)\) replaces \((0,v^{*}(0,T_{v}))\) as the stable fixed point. As the parabola moves to the right, we will have 6 fixed points again, where the parabola starts from somewhere between \(x^{*}\) and 1 on the \(x-\)axis and moves left and up, then right and up. Although the number of fixed points changes and the phase portrait looks different, the stable fixed points are the same as before, \((0,0)\) and \((1,v^{*}(1,T_{v}))\). As \(T_{v}\) increases further, we will reach phase portrait 6 (c) with 7 fixed points, and we will have one extra stable fixed point \((1,0)\). As the parabola moves further right, it will only intersect \(x=1\), we have 5 fixed points and the same three stable ones, \((0,0),(1,0)\) and \((1,v^{*}(1,T_{v}))\). Finally when the red curve disappears on the right, we only have the three fixed points on the \(x-\)axis left and \((0,0)\) and \((1,0)\) are both stable.
Also as we pointed out earlier, when \(T_{c}\geq 2^{\circ}\)C, \(x^{*}\) is very close to 1. This means the right strip in the phase portraits in Figure 6 can be very thin.
## 4 Simulation results
We use the original uncoupled forest dieback model (1) to compute the temperature of the forest for the first 10 years. Then we use a delay differential equation solver to find the evolution of the coupled social-climate model (8) over years 10 to 200. We repeat the simulation with different initial conditions \(x(0),v(0)\) and different \(T_{v}\) and \(T_{c}\) values.
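A possible realization of this procedure is sketched below with a fixed-step Euler scheme and an explicit history buffer for \(v(t-10)\); it reuses the hypothetical `coupled_rhs` helper from the earlier sketch, freezes \(x\) at its initial value during the first 10 uncoupled years, and clips the state to \([0,1]\) for numerical safety. The step size and these implementation choices are assumptions, not details taken from the paper, which relies on a delay differential equation solver.

```python
import numpy as np

def simulate(v0, x0, Tv, Tc, years=200, dt=0.01):
    """Fixed-step Euler integration of (8); a stand-in for the DDE solver used in the paper."""
    n_steps = int(years / dt)
    delay_steps = int(10 / dt)            # v(t - 10) looked up in the stored trajectory
    v = np.empty(n_steps + 1)
    x = np.empty(n_steps + 1)
    v[0], x[0] = v0, x0
    for k in range(n_steps):
        t = k * dt
        if t < 10.0:
            # years 0-10: uncoupled forest dieback model (1), with x frozen at its initial value
            g = 2.0 * (1.0 - 0.01 * (Tv - 23.0 - 5.0 * v[k]) ** 2)
            v[k + 1] = v[k] + dt * (g * v[k] * (1.0 - v[k]) - 0.2 * v[k])
            x[k + 1] = x[k]
        else:
            dv, dx = coupled_rhs(v[k], x[k], v[k - delay_steps], Tv, Tc)
            v[k + 1] = np.clip(v[k] + dt * dv, 0.0, 1.0)
            x[k + 1] = np.clip(x[k] + dt * dx, 0.0, 1.0)
    return v, x
```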
We already discussed the stability of the qualitative phase portraits of our system and the stable fixed points were \((0,0),(1,0),(0,v^{*}(0,T_{v}))\) and \((1,v^{*}(1,T_{v}))\). This tells us the proportion of mitigators should converge to either 1 or 0, which is what we see in our simulation results. In the simulation results, the vegetation coverage either stabilizes at a positive value or converges to zero, with some rare cases where it's oscillating over a long time. We should point out that some oscillations die out in 50 years or so and converge, so it's possible the oscillations that persist by year 200 could
die out if we computed for a longer time period as well. We only computed
the model up to year 200 since it might not be realistic to compute it for longer considering the ambient temperature probably would have changed.
We call the cases where both \(x\) and \(v\) converge to 0 by year 200 "Barren" and the cases where \(x\) converges to 1 and \(v\) converges to 0 "Brown". "Green" represents cases where \(x\) converges to 1 and \(v\) converges to some positive value typically around 0.7 or 0.8. We will call the long term oscillations "Swing". Our simulation results are shown in Figure 7.
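A heuristic way to assign these labels to a simulated trajectory is sketched below, using the hypothetical `simulate` helper from the previous sketch; the tail window and tolerance thresholds are arbitrary illustrative choices, not the classification rule actually used for Figure 7.

```python
import numpy as np

def classify(v, x, dt=0.01, tail_years=20, tol=0.05):
    """Heuristic labelling of a simulated trajectory (thresholds are illustrative choices)."""
    tail = int(tail_years / dt)
    v_tail, x_tail = v[-tail:], x[-tail:]
    if v_tail.max() - v_tail.min() > tol or x_tail.max() - x_tail.min() > tol:
        return "Swing"                    # still oscillating by year 200
    v_end, x_end = v_tail.mean(), x_tail.mean()
    if v_end < tol and x_end < tol:
        return "Barren"                   # forest dies, nobody mitigates
    if v_end < tol and x_end > 1 - tol:
        return "Brown"                    # forest dies despite full mitigation
    if x_end > 1 - tol:
        return "Green"                    # forest survives with full mitigation
    return "Other"                        # e.g. no mitigators but low positive coverage

v, x = simulate(v0=0.9, x0=0.1, Tv=28.0, Tc=1.0)
print(classify(v, x))
```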
Here the dark blue or purple region, the "Barren", means the forest dies out and nobody is a mitigator. The blue region, the "Brown", represents situations where the forest dies even though everyone is a mitigator. The green regions, the "Green", means the forest survives and everyone is a mitigator. The yellow region, the "Swing", represents cases where the vegetation and mitigation oscillate and don't converge by year 200.
In general, we can view \(T_{c}\) as the sensitivity level of the population towards warming. As we can see, typically, there is more "Green" for low values of \(T_{c}\), which means people will adopt mitigative actions as soon as they feel a small amount of warming. As \(T_{c}\) increases, people become less sensitive towards warming and don't adopt mitigation until the warming is rather severe. This might be too late as we can see a lot more "Barren" at high values of \(T_{c}\).
The ambient temperature \(T_{v}\) plays a role as well. When \(T_{v}\) is either too high or too low, we also see more "Barren" and "Brown". This agrees with the general consensus that there is a critical warming beyond which we cannot go back. And of course, if the ambient temperature is too low, the forest cannot survive either. It is worth noting that between around 20\({}^{\circ}\)C and 30\({}^{\circ}\)C, we see a lighter violet region, which represents cases with no mitigators where the vegetation coverage converges to a low positive value, around 20% for example. This happens when the environment is favorable enough that the forest persists even though there are no mitigators.
The initial conditions also change the final result. For example, when we start from a high forest coverage of 90% and low mitigation at 10%, we have a good amount of "Green" and small amounts of "Barren". If the mitigation we start with is 90%, the "Barren" region above 30\({}^{\circ}\)C actually becomes bigger, this could be because we don't have a lot of room for improvement in behavior since the mitigation we started with is already high. And also if we start with low forest coverage at 10%, we will have huge "Barren" regions.
We can also see that oscillations typically happen when we cross over from one region to the other. Our model is nonlinear and has a time delay, so it can give us very rich dynamics. Some oscillations die away within the first 50 years while some persist up to year 200. Some oscillations are small in amplitude and look like small persistent noise while others vary from 0 to 1, the full range of possible values. We show two specific plots of the evolution of vegetation and mitigation over 200 years for the initial condition \(x(0)=0.9,v(0)=0.1\) in Figure 8. As one can see, the mitigation varies from 0 to 1, the vegetation varies between 0.4 and 0.8, and the period can vary greatly as well.
Figure 8: Long term oscillations
We show two examples of evolution over time for "Barren" in Figure 9 and two examples of "Brown" in Figure 10. Figure 11 shows two cases of "Green" evolution and Figure 12 shows the cases where the ambient temperature is optimal and the forest survives even with no mitigators in the population at all.
Figure 9: Two different instances of ”Barren” evolution
Figure 10: Two different instances of ”Brown” evolution
Figure 11: Two different instances of ”Green” evolution
Figure 12: Two different instances of optimal ambient temperature evolution
## 5 Discussion
Evolutionary game theory is used in this study to build a time-delay system where the temperature of the forest depends on human behaviors and human behaviors also depend on the current temperature. The replicator equation determines the percentage of mitigators in the population based on the warming over the past 10 years. This means that the social part of the model has a time-delay effect, and the growth rate of the forest depends on the percentage of mitigators in the population. By studying this coupled social-climate model under changing parameter values, ambient temperature, or background temperature of the environment, and sensitivity to warming, we can gain insights into how human behavior affects climate change and how we can mitigate it.
Simulations of the coupled system over 200 years show us the varying results of forest dies out and no one is a mitigator, forest dies out and everyone is a mitigator, or the forest survives and everyone is a mitigator. There are rare cases where no one is a mitigator and yet the forest survives, but with low coverage. There are also rare oscillations where the proportion of mitigators varies between 0 and 1. These scenarios tell us that human behavior plays a crucial role in the survival of forests and that having everyone as a mitigator can help prevent forest dieback.
Our model was relatively simple and did not consider the asymmetry of the population in aspects such as the contribution to global warming and resource inequalities [21; 22; 23]. We could modify both the climate part and the social part of our model to see how asymmetries in the population affect climate change. Our model is deterministic, but stochastic game descriptions of climate change exist as well [24].
In sum, our work presents a transparent approach to modeling and analyzing the intricate interplay within social-climate systems. At its core, the climate model, grounded in forest dieback principles, may be straightforward, but it effectively incorporates the foundational effects of human behavior on both the climate and forest coverage. The decision-making process related to mitigation is firmly grounded in temperature forecasting and the sensitivity to temperature change. Our model, while simple, exhibits rich dynamics. This resonates with Bob May's renowned work on logistic maps and chaos: "Simple mathematical models with very complicated dynamics" [19]. Our current model and its extensions will help offer insights into the factors driving behavioral shifts in the context of climate dynamics. |
2309.17106 | Population dynamics model for aging | The chronological age used in demography describes the linear evolution of
the life of a living being. The chronological age cannot give precise
information about the exact developmental stage or aging processes an organism
has reached. On the contrary, the biological age (or epigenetic age) represents
the true evolution of the tissues and organs of the living being. Biological
age is not always linear and sometimes proceeds by discontinuous jumps. These
jumps can be positive (we then speak of rejuvenation) or negative (in the event
of premature aging), and they can be dependent on endogenous events such as
pregnancy (negative jump) or stroke (positive jump) or exogenous ones such as
surgical treatment (negative jump) or infectious disease (positive jump). The
article proposes a mathematical model of the biological age by defining a valid
model for the two types of jumps (positive and negative). The existence and
uniqueness of the solution are solved, and its temporal dynamic is analyzed
using a moments equation. We also provide some individual-based stochastic
simulations. | Jacques Demongeot, Pierre Magal | 2023-09-29T10:06:45Z | http://arxiv.org/abs/2309.17106v1 | # Population dynamics model for aging
###### Abstract
The chronological age used in demography describes the linear evolution of the life of a living being. The chronological age cannot give precise information about the exact developmental stage or aging processes an organism has reached. On the contrary, the biological age (or epigenetic age) represents the true evolution of the tissues and organs of the living being. Biological age is not always linear and sometimes proceeds by discontinuous jumps. These jumps can be positive (we then speak of rejuvenation) or negative (in the event of premature aging), and they can be dependent on endogenous events such as pregnancy (negative jump) or stroke (positive jump) or exogenous ones such as surgical treatment (negative jump) or infectious disease (positive jump). The article proposes a mathematical model of the biological age by defining a valid model for the two types of jumps (positive and negative). The existence and uniqueness of the solution are solved, and its temporal dynamic is analyzed using a moments equation. We also provide some individual-based stochastic simulations.
**Keywords:** Nonlocal transport equations; Equation of moments of distributions; Rejuvenation and Premature aging; Biological age.
## 1 Introduction
Over the past decade, scientists have studied several indicators of the health status of individuals. Chronological age, the time since birth, can be considered one of them. Several other indicators are required to improve the understanding of the health status of individuals. These indicators will combine CpG, DNA methylation (DNAm) and age (chronological age), health, and lifestyle outcomes. In the present article, we will call this indicator _biological age_, or _epigenetic age_[9][2].
Aging is generally considered irreversible due to the severe damage to the cell resulting in a gradual loss of functions and increasing fragility until final
death. Recent publications proved that this process can be reversible. For example, biological age increases due to stress or traumatism and decreases in the recuperation phase during post-partum or after stress-induced or surgery-induced cell aging [16] (for mice).
This possibility of forward and backward changes in biological age due to specific events during an individual's life (pregnancy, stress, surgery, traumatism, etc.) has been considered in various approaches. For example, in [4], the calculation of biological age is based on estimating the time left to live depending on the number of undifferentiated cells remaining in the stem cell reservoir of the organs providing the patient's vital functions.
The present article aims to present a model for a population of patients. Our model mainly consists of continuous growth of the biological age together with some models for jumps (forward and backward) in the biological age to derive a mean behavior at a population's level. The continuous growth of the biological age corresponds to the biological age when no accidents or jumps occur. Therefore, in this case, the biological age grows at the same speed as the chronological age. In our model, the difference between biological age and chronological age consists of modeling random jumps forward or backward when the patient becomes sick or recovers from a disease such as cancer. In our model, the disease's severity corresponds to the jumps' amplitude. Here it is assumed to increase linearly with biological age. In conclusion, we will propose a larger model class for the jumps. The general framework of models is the so-called Levy process [25][1]. In Huang et al.[8], a mathematical model based on a stochastic differential equation is proposed to model the dynamic of biological age at the level of a single patient.
The plan of the paper is the following. Section 2 is devoted to the biological background of aging. In section 3, we present an aging model with a rejuvenation mechanism only. In section 4, we present an aging model with a premature aging mechanism only. In section 5, we combine rejuvenation and premature aging mechanisms. Section 6 is devoted to numerical simulation, and section 7 is to the discussion.
## 2 Biological background
The health status of patients is driven by microscopic cellular events: early in life, the body is able to regenerate destroyed tissue. As chronological age increases, patients become less capable of repairing such tissue, leading to phenotypic medical symptoms. The gravity of these medical symptoms also becomes more severe as patients' chronological age increases. The medical symptoms result from reaching an accumulation threshold of cell deletions at the microscopic level. This justifies a model with jumps (aging and rejuvenating jumps) in biological age whose amplitude increases over the life course.
We can consider that, even if in fine a loss of function is due to the dysfunctioning (or even death) of organs or even to the death of the whole organism, it results from a continuous disappearance of differentiated cells not replaced by undifferentiated cells (which are finite in number at birth and whose proliferation is bounded by Hayflick's limit), the event signaling accelerated aging or
rejuvenation is a discrete phenomenon linked to a phenotypic symptom appearing at a precise moment corresponding to a biological age jump in the proposed mathematical models.
The determination of longevity is a problem that affects all species. The determining factors are multiple and are of two kinds, endogenous and exogenous. Among the endogenous determinants, we find, for example, i) the microbiome, whose importance has been demonstrated for species that have longevity comparable to the human species, such as crocodiles with a maximal life span of about 100 years [19], and ii) the genome, whose influence is decisive in familial diseases such as progeria, a rare genetic disorder [22]. Progeria accelerates aging, reduces the maximal human life span, and presents early symptoms like osteoporosis and hair loss. The exogenous factors are due to the environment and come from the food, which often contains pesticides and oxidizing components, stress (especially occupational stress and emotional stress), infectious diseases, chronic diseases (namely diabetes, neurodegenerative and cardiovascular diseases), cancer, accidental musculoskeletal trauma, surgical operations, etc. [16]. For example, an acute viral disease can cause a loss of \(3\times 10^{9}\) cells in 1 day, equivalent to 2 weeks of additional aging, because normal aging causes only a loss of \(2\times 10^{8}\) cells per day [15]. Another example is given by pre-dementia, in which the observed loss of cells can be caused by multiple etiologies, among which the loss of limbic and hypothalamic cell connections contributing to altering the sense of satiety (causing adipocyte depletion), alterations in cardiomyocyte insulin sensitivity, or age-related regulatory changes in the carbohydrate metabolism of liver cells [13]. The loss of cells due to apoptosis (not compensated by the mitoses remaining before Hayflick's limit [4][7]) in the different organs (brain, heart, liver, etc.) involved in these dysfunctions causes not merely the death of the affected organ, but the death of the whole organism, due to an involution (in cell size and number) of these organs incompatible with the survival of the whole organism.
Like the factors accelerating aging, the rejuvenation factors also have two sources: the endogenous one comes from cellular repair processes, in particular DNA repair mechanisms [12][21], which prevent abnormal apoptosis due to DNA damage (caused by radiation, abortive mitoses, etc.). The exogenous one is due to recovery processes during the healing time after a disease or an exhausting physiological event like pregnancy [16][18][11].
The chronological age is the commonly used age which is the time since birth. To describe a more realistic lifetime expectancy, we now focus on the biological age, which is much more difficult to define since it is multi-factorial.
According to Gerdhem et al. [6], biological age is a commonly used term without clear definitions but is routinely used to describe patients. Biological age may differ from chronological age and correlates to gait, muscle strength, and balance. The general appearance is considered when estimating the biological age.
In Demongeot [4], the biological age is defined as the real age of interface tissues (like intestinal endothelium, alveoli epithelium, and skin epithelium) and differs from the chronological age classically used in demography, but unable to give useful information about the exact stage in development or aging an organ or more globally an organism has reached. Biological age is essentially determined by the number of divisions remaining before Hayflick's limit (the maximal number of mitoses for a cell lineage inside a dedicated differentiated cellular population) of the tissues of a critical organ (critical in the sense that its loss causes the death of the whole organism). We refer to Shay and Wright [17] for more information. The critical organs have vital functions in interaction with the environment (protection, homeothermy, nutrition, and respiration) and present a rapid turnover (the total cell renewal time is in mice equal to 3 weeks for the skin, 1.5 days for the intestine, 4 months for the alveolate and 11 days for mitochondrial inner membrane) conditioning their biological age.
Another estimation of the biological age is the epigenetic age based on the expression level of certain genes like ELMSAN1 (also known as MIDEAS) and their transcription factors (e.g., mir4505 for MIDEAS) [2]. Figure 2 gives the statistical relationship between the epigenetic age and the chronological age showing that the probability of observing backward or forward jumps in epigenetic age and the intensity of these jumps increase linearly with chronological age. We refer to [10] and [20] for more results on this subject.
Figure 1: _Kaplan-Meier survival confidence (\(95\%\)) curves. The region in between the curve \(s_{1}(t)\) and the curve \(s_{2}(t)\) corresponds to \(95\%\) of individuals having their probability of survival to time \(t\) between \(s_{1}(t)\) and \(s_{2}(t)\) (\(95\%\) confidence limits)._
## 3 Aging model with rejuvenation mechanism only
To describe the rejuvenation of a population of individuals structured with respect to biological age \(b\in[0,+\infty)\), we consider \(b\mapsto u(t,b)\), the population density at time \(t\). That is,
\[\int_{b_{1}}^{b_{2}}u(t,b)\mathrm{d}b\]
is the number of individuals at time \(t\) with biological age between \(b_{1}\) and \(b_{2}\).
To model the rejuvenation, we will use the following system of partial differential equations
\[\left\{\begin{array}{rl}\partial_{t}u(t,b)+\partial_{b}u(t,b)&=-\tau_{+}u(t,b)+\tau_{+}\left(1+\delta_{+}\right)u\left(t,\left(1+\delta_{+}\right)b\right),\\ u(t,0)&=0,\\ u(0,b)&=u_{0}(b)\in L_{+}^{1}\left((0,\infty),\mathbb{R}\right),\end{array}\right. \tag{3.1}\]
where \(\tau_{+}>0\) is the rate of rejuvenation (i.e. \(\dfrac{1}{\tau_{+}}\) is for a single individual, the average time between two rejuvenations jumps), \(\delta_{+}\geq 0\) is the fraction of biological age \(b\) (after rejuvenation) that should be added to \(b\) to obtain \(\widehat{b}=\left(1+\delta_{+}\right)b\) the biological age before rejuvenation.
In this above model, the term \(\partial_{b}u(t,b)\) corresponds to a drift term with a constant velocity (which represents the classical chronological aging in the absence of perturbations jumps), the term \(\tau_{+}u(t,b)\) corresponds to the flow of individuals which rejuvenate (i.e. having a jump in biological age) at time \(t\). This flow is given by \(\tau_{+}\int_{0}^{\infty}u(t,b)\mathrm{d}b\). That is,
\[\int_{t_{1}}^{t_{2}}\tau_{+}\int_{0}^{\infty}u(t,b)\mathrm{d}b\mathrm{d}t\]
Figure 2: _The statistical relationship between chronological and epigenetic ages (top) with an indication of absolute error (bottom) in the prediction of epigenetic age by the chronological one. This figure is taken from [2]._
is the number of individuals which rejuvenate in between \(t_{1}\) and \(t_{2}\).
More precisely, when a rejuvenation occurs, an individual having biological age \(b\) after the rejuvenation jump had biological age
\[\widehat{b}=\left(1+\delta_{+}\right)b=b+\delta_{+}b\geq b,\]
before the rejuvenation jump.
In other words, a rejuvenating individual with biological age \(b\) after a rejuvenation jump was an older individual with biological age \(\widehat{b}=\left(1+\delta_{+}\right)b\) before the jump. That is also equivalent to saying that a rejuvenating individual starting from the biological age \(\widehat{b}\) ends up with biological age \(b=\dfrac{1}{1+\delta_{+}}\widehat{b}\) after rejuvenation. So this individual loses a fraction
\[1-\dfrac{1}{1+\delta_{+}}=\dfrac{\delta_{+}}{1+\delta_{+}}\]
of its biological age \(\widehat{b}\) before the rejuvenation jump.
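For instance, with the illustrative value \(\delta_{+}=0.1\) (our own choice, used only to fix ideas), an individual with biological age \(\widehat{b}=22\) before a rejuvenation jump ends up with biological age
\[b=\frac{\widehat{b}}{1+\delta_{+}}=\frac{22}{1.1}=20,\qquad\text{having lost the fraction }\frac{\delta_{+}}{1+\delta_{+}}=\frac{0.1}{1.1}\approx 9.1\%\text{ of }\widehat{b}.\]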
Setting
\[g_{+}=1+\delta_{+}, \tag{3.2}\]
the system (3.1) becomes
\[\left\{\begin{array}{rl}\partial_{t}u(t,b)+\partial_{b}u(t,b)&=-\tau_{+}u(t, b)+\tau_{+}g_{+}u\left(t,g_{+}b\right),\\ u(t,0)&=0,\\ u(0,b)&=u_{0}(b)\in L^{1}_{+}\left((0,\infty),\mathbb{R}\right).\end{array}\right. \tag{3.3}\]
**Integral formulation of the solutions:** By integrating the first equation of (3.3) along the characteristics (i.e. \(t-b\) constant) we obtain the following
\[u(t,b)=\left\{\begin{array}{rl}e^{-\tau_{+}t}u_{0}(b-t)+&v(t,b),&\text{if }t<b, \\ v(t,b),&\text{if }t>b,\end{array}\right. \tag{3.4}\]
where
\[v(t,b)=\left\{\begin{array}{rl}\int_{0}^{t}e^{-\tau_{+}(t-\sigma)}\tau_{+}g _{+}u\big{(}\sigma,g_{+}\left(b-t+\sigma\right)\big{)}\mathrm{d}\sigma,&\text{ if }t<b,\\ \int_{0}^{b}e^{-\tau_{+}(b-\sigma)}\tau_{+}g_{+}u\big{(}t-b+\sigma,g_{+}\sigma \big{)}\mathrm{d}\sigma,&\text{ if }t>b.\end{array}\right. \tag{3.5}\]
As a consequence of the above formulas (3.4) and (3.5), and by applying a fixed point argument in suitable subspaces of \(L^{1}\) with compact supports, we obtain the following lemma.
**Lemma 3.1**.: _If_
\[\mathrm{Support}\left(u_{0}\right)\subset[0,b^{\star}],\]
_then_
\[\mathrm{Support}\left(u(t,.)\right)\subset[0,b^{\star}+t],\forall t>0.\]
**Abstract Cauchy problem formulation:** We refer to [24, 23, 3, 5, 14] for more results on semigroup theory and its application to age-structured models. We consider
\[X=L^{1}\left(\left(0,\infty\right),\mathbb{R}\right),\]
endowed with its standard norm
\[\|\phi\|_{L^{1}}=\int_{0}^{\infty}|\phi\left(\sigma\right)|\mathrm{d}\sigma.\]
We consider \(A:D(A)\subset X\to X\) the linear operator defined by
\[A\phi=-\phi^{\prime}\]
with
\[D(A)=\left\{\phi\in W^{1,1}\left(\left(0,\infty\right),\mathbb{R}\right):\phi \left(0\right)=0\right\}.\]
We consider \(B:X\to X\) the bounded linear operator defined by
\[B_{g}\phi(x)=g\,\phi\left(g\,x\right),\text{ for }x\geq 0,\]
where \(g>0\). Then \(B_{g}\) is an isometric bounded linear operator. That is
\[\|B_{g}\phi\|_{L^{1}}=\|\phi\|_{L^{1}},\forall\phi\in L^{1}\left(\left(0, \infty\right),\mathbb{R}\right).\]
The problem (3.3) can be reformulated as an abstract Cauchy problem
\[\left\{\begin{array}{l}u^{\prime}(t)=Au(t)-\tau_{+}u(t)+\tau_{+}B_{g_{+}}u(t ),\text{ for }t\geq 0,\\ \text{with}\\ u(0)=u_{0}\in L^{1}_{+}\left(\left(0,\infty\right),\mathbb{R}\right).\end{array}\right. \tag{3.6}\]
**Lemma 3.2**.: _The linear operator \(A\) is the infinitesimal generator of \(\left\{T_{A}\left(t\right)\right\}_{t\geq 0}\), the strongly continuous semigroup of linear operators defined by_
\[T_{A}\left(t\right)\left(\phi\right)(a)=\left\{\begin{array}{ll}\phi\left(a- t\right),&\text{ if }a>t,\\ 0,&\text{ if }a<t.\end{array}\right. \tag{3.7}\]
**Definition 3.3**.: _We will say that \(u\in C\left(\left[0,\infty\right),L^{1}_{+}\left(\left(0,\infty\right), \mathbb{R}\right)\right)\) is a **mild solution** of (3.6) if_
\[\int_{0}^{t}u(\sigma)\mathrm{d}\sigma\in D(A),\forall t\geq 0,\]
_and_
\[u(t)=u_{0}+A\int_{0}^{t}u(\sigma)\mathrm{d}\sigma+\int_{0}^{t}-\tau_{+}u( \sigma)+\tau_{+}B_{g_{+}}u(\sigma)\mathrm{d}\sigma,\forall t\geq 0.\]
**Theorem 3.4**.: _For each \(u_{0}\in L^{1}_{+}\left(\left(0,\infty\right),\mathbb{R}\right),\) the Cauchy problem (3.6) admits a unique mild solution which is the unique continuous function \(u\in C\left(\left[0,\infty\right),L^{1}_{+}\left(\left(0,\infty\right), \mathbb{R}\right)\right)\) satisfying the fixed point problem_
\[u(t)=T_{A-\tau_{+}I}(t)u_{0}+\int_{0}^{t}T_{A-\tau_{+}I}(t-\sigma)\tau_{+}B_{g _{+}}u(\sigma)\mathrm{d}\sigma,\forall t\geq 0, \tag{3.8}\]
_where_
\[T_{A-\tau_{+}I}(t)=e^{-\tau_{+}t}T_{A}(t),\forall t\geq 0. \tag{3.9}\]
**Remark 3.5**.: _The variation of constant formula (3.8) is an abstract reformulation of the formulas (3.4) and (3.5)._
**Moments' formulation of the PDE:** Define
\[E_{k}(u(t))=\int_{0}^{\infty}\sigma^{k}u(t,\sigma)\mathrm{d}\sigma,\forall k\geq 0,\]
with
\[E_{0}(u(t))=\int_{0}^{\infty}u(t,\sigma)\mathrm{d}\sigma,\]
assuming that the integrals are well defined.
**Theorem 3.6**.: _The rejuvenation model (3.1) has a unique non-negative mild solution. We have_
\[\frac{d}{dt}E_{0}(u(t))=0, \tag{3.10}\]
_and the model preserves the total mass (number) of individuals. That is_
\[\int_{0}^{\infty}u(t,\sigma)\mathrm{d}\sigma=\int_{0}^{\infty}u_{0}(\sigma) \mathrm{d}\sigma,\forall t\geq 0, \tag{3.11}\]
_Moreover, the higher moment satisfies the following system of ordinary differential equations_
\[\frac{d}{dt}E_{k}(u(t))=k\,E_{k-1}(u(t))-\chi_{k}\,E_{k}(u(t)),\forall k\geq 1, \tag{3.12}\]
_where_
\[\chi_{k}=\tau_{+}\left(1-\frac{1}{(1+\delta_{+})^{k}}\right)>0,\forall k\geq 1,\]
_and_
\[\lim_{k\to+\infty}\chi_{k}=\tau_{+}>0.\]
_If we denote the equilibrium solution of (3.12) by_
\[\overline{E}_{k}=\left(\prod_{j=1}^{k}\frac{j}{\chi_{j}}\right)\int_{0}^{ \infty}u_{0}(\sigma)\mathrm{d}\sigma,\forall k\geq 1,\]
_then from (3.12) we have the following convergence result_
\[\lim_{t\to\infty}E_{k}(u(t))=\overline{E}_{k},\forall k\geq 1.\]
Proof.: \[\int_{0}^{\infty}\sigma^{k}\partial_{t}u(t,\sigma)\mathrm{d}\sigma +\int_{0}^{\infty}\sigma^{k}\partial_{b}u(t,\sigma)\mathrm{d}\sigma =-\tau_{+}\int_{0}^{\infty}\sigma^{k}u(t,\sigma)\mathrm{d}\sigma\] \[+\tau_{+}\int_{0}^{\infty}\sigma^{k}u\left(t,g_{+}\sigma\right)g _{+}\mathrm{d}\sigma,\]
by integrating by parts the second integral (using \(u(t,0)=0\)), and by making the change of variable \(s=g_{+}\sigma\) in the last integral, namely
\[\tau_{+}\int_{0}^{\infty}\sigma^{k}u\left(t,g_{+}\sigma\right)g_{+}\mathrm{d}\sigma=\frac{\tau_{+}}{g_{+}^{k}}\int_{0}^{\infty}s^{k}u(t,s)\mathrm{d}s=\frac{\tau_{+}}{(1+\delta_{+})^{k}}E_{k}(u(t)),\]
we obtain
\[\frac{d}{dt}E_{k}(u(t))=k\,E_{k-1}(u(t))-\tau_{+}\left(1-\frac{1}{(1+\delta_{+})^{k}}\right)E_{k}(u(t)).\]
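As an illustration of the convergence result above, for the first moment (\(k=1\)) we have \(\chi_{1}=\tau_{+}\delta_{+}/(1+\delta_{+})\), so that
\[\overline{E}_{1}=\frac{E_{0}(u_{0})}{\chi_{1}}=\frac{1+\delta_{+}}{\tau_{+}\,\delta_{+}}\,E_{0}(u_{0}).\]
In other words, the mean biological age of the rejuvenation-only model converges to \((1+\delta_{+})/(\tau_{+}\delta_{+})\); with the illustrative values \(\tau_{+}=1\) per year and \(\delta_{+}=0.1\) (our own choice), this limit equals \(11\) years.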
## 4 Aging model with premature aging mechanism only
To model the premature aging, we will use the following system of partial differential equations
\[\left\{\begin{array}{rl}\partial_{t}u(t,b)+\partial_{b}u(t,b)&=-\tau_{-}u(t,b)+ \tau_{-}\left(1-\delta_{-}\right)u\left(t,(1-\delta_{-})\,b\right),\\ u(t,0)&=0,\\ u(0,b)&=u_{0}(b)\in L^{1}_{+}\left((0,\infty),\mathbb{R}\right),\end{array}\right. \tag{4.1}\]
where \(\tau_{-}>0\) is the rate of premature aging (i.e. \(\dfrac{1}{\tau_{-}}\) is, for a single individual, the average time between two premature aging jumps), and \(\delta_{-}\in(0,1)\) is the fraction of the biological age \(b\) (after premature aging) that should be subtracted from \(b\) to obtain \(\widehat{b}=\left(1-\delta_{-}\right)b\), the biological age before premature aging.
In this above model, the flow of individuals with premature aging (i.e. having a jump in biological age) at time \(t\) is given by \(\tau_{-}\int_{0}^{\infty}u(t,b)\mathrm{d}b\). That is,
\[\int_{t_{1}}^{t_{2}}\tau_{-}\int_{0}^{\infty}u(t,b)\mathrm{d}b\mathrm{d}t\]
is the number of individuals undergoing premature aging between \(t_{1}\) and \(t_{2}\).
More precisely, when a premature aging event occurs, an individual having biological age \(b\) after the premature aging jump had biological age
\[\widehat{b}=\left(1-\delta_{-}\right)b=b-\delta_{-}b<b,\]
before the premature aging jump.
In other words, a prematurely aging individual with biological age \(b\) after a premature aging jump was a younger individual with biological age \(\widehat{b}=\left(1-\delta_{-}\right)b\) before the jump. That is also equivalent to saying that a prematurely aging individual starting from the biological age \(\widehat{b}\) ends up with biological age \(b=\dfrac{1}{1-\delta_{-}}\widehat{b}\) after premature aging. So this individual gains a fraction
\[\dfrac{1}{1-\delta_{-}}-1=\dfrac{\delta_{-}}{1-\delta_{-}}\]
of its biological age \(\widehat{b}\) before the premature aging jump.
Setting
\[g_{-}=1-\delta_{-}, \tag{4.2}\]
the system (4.1) becomes
\[\left\{\begin{array}{rl}\partial_{t}u(t,b)+\partial_{b}u(t,b)&=-\tau_{-}u(t, b)+\tau_{-}g_{-}u\left(t,g_{-}b\right),\\ u(t,0)&=0,\\ u(0,b)&=u_{0}(b)\in L^{1}_{+}\left((0,\infty),\mathbb{R}\right).\end{array}\right. \tag{4.3}\]
## 5 Aging model
The full model with both rejuvenation and premature aging reads as follows
\[\left\{\begin{array}{lll}\partial_{t}u(t,b)+\partial_{b}u(t,b)&=&-\left(\tau_{- }+\tau_{+}\right)u(t,b)\\ &&+\tau_{-}\,g_{-}\,u\left(t,g_{-}\,b\right)\\ &&+\tau_{+}\,g_{+}\,u\left(t,g_{+}\,b\right),\\ u(t,0)&=0,\\ u(0,b)&=u_{0}(b)\in L^{1}_{+}\left((0,\infty),\mathbb{R}\right).\end{array}\right. \tag{5.1}\]
and we make the following assumption on the parameters of the system.
**Assumption 5.1**.: _We assume that_
\[\tau_{+}>0,\,\tau_{-}>0,\text{ and }g_{+}>1>g_{-}>0.\]
**Volterra integral formulation:** By integrating the first equation of (5.1) along the characteristics (i.e. \(t-b\) constant) we obtain the following
\[u(t,b)=\left\{\begin{array}{lll}e^{-(\tau_{-}+\tau_{+})t}u_{0}(b-t)+&v_{+}(t,b)+v_{-}(t,b),&\text{if }t<b,\\ &&v_{+}(t,b)+v_{-}(t,b),&\text{if }t>b,\end{array}\right.\]
where
\[v_{+}(t,b)=\left\{\begin{array}{lll}\int_{0}^{t}e^{-\left(\tau_{-}+\tau_{+} \right)(t-\sigma)}\tau_{+}g_{+}u\big{(}\sigma,g_{+}\left(b-t+\sigma\right) \big{)}\mathrm{d}\sigma,&\text{if }t<b,\\ \int_{0}^{b}e^{-\left(\tau_{-}+\tau_{+}\right)(b-\sigma)}\tau_{+}g_{+}u\big{(} t-b+\sigma,g_{+}\sigma\big{)}\mathrm{d}\sigma,&\text{if }t>b,\end{array}\right.\]
and
\[v_{-}(t,b)=\left\{\begin{array}{lll}\int_{0}^{t}e^{-\left(\tau_{-}+\tau_{+} \right)(t-\sigma)}\tau_{-}g_{-}u\big{(}\sigma,g_{-}\left(b-t+\sigma\right) \big{)}\mathrm{d}\sigma,&\text{if }t<b,\\ \int_{0}^{b}e^{-\left(\tau_{-}+\tau_{+}\right)(b-\sigma)}\tau_{-}g_{-}u\big{(} t-b+\sigma,g_{-}\sigma\big{)}\mathrm{d}\sigma,&\text{if }t>b.\end{array}\right.\]
**Abstract Cauchy problem reformulation:** The problem (5.1) can be reformulated as an abstract Cauchy problem
\[\left\{\begin{array}{lll}u^{\prime}(t)=Au(t)-\left(\tau_{-}+\tau_{+}\right) u(t)+\left(\tau_{-}B_{g_{-}}+\tau_{+}B_{g_{+}}\right)u(t),\text{ for }t\geq 0,\\ \text{with}\\ u(0)=u_{0}\in L^{1}_{+}\left(\left(0,\infty\right),\mathbb{R}\right).\end{array}\right. \tag{5.2}\]
**Definition 5.2**.: _We will say that \(u\in C\left(\left[0,\infty\right),L^{1}_{+}\left(\left(0,\infty\right), \mathbb{R}\right)\right)\) is a **mild solution** of (5.2) if_
\[\int_{0}^{t}u(\sigma)\mathrm{d}\sigma\in D(A),\forall t\geq 0,\]
_and for each \(t\geq 0,\)_
\[u(t)=u_{0}+A\int_{0}^{t}u(\sigma)\mathrm{d}\sigma+\int_{0}^{t}-\left(\tau_{-} +\tau_{+}\right)u(\sigma)+\text{ }\left(\tau_{-}B_{g_{-}}+\tau_{+}B_{g_{+}}\right)u(\sigma) \mathrm{d}\sigma.\]
**Theorem 5.3**.: _Let Assumption 5.1 be satisfied. For each \(u_{0}\in L^{1}_{+}\left(\left(0,\infty\right),\mathbb{R}\right),\) the Cauchy problem (5.2) admits a unique mild solution \(u\in C\left(\left[0,\infty\right),L^{1}_{+}\left(\left(0,\infty\right),\mathbb{ R}\right)\right)\) which is the unique continuous function satisfying the fixed point problem for each \(t\geq 0\),_
\[u(t)= T_{A-\left(\tau_{-}+\tau_{+}\right)I}(t)u_{0}\] \[+\int_{0}^{t}T_{A-\left(\tau_{-}+\tau_{+}\right)I}(t-\sigma) \left(\tau_{-}B_{g_{-}}+\tau_{+}B_{g_{+}}\right)u(\sigma)\mathrm{d}\sigma,\]
_where_
\[T_{A-\left(\tau_{-}+\tau_{+}\right)I}(t)=e^{-\left(\tau_{-}+\tau_{+}\right)t} T_{A}(t),\forall t\geq 0.\]
**Moments formulation:** We obtain the following result using arguments similar to those in the proof of Theorem 3.6.
**Theorem 5.4**.: _Let Assumption 5.1 be satisfied. The rejuvenation and premature aging model (5.1) has a unique non-negative mild solution. We have_
\[\frac{d}{dt}E_{0}(u(t))=0,\]
_and the model preserves the total mass (number) of individuals. That is_
\[\int_{0}^{\infty}u(t,\sigma)\mathrm{d}\sigma=\int_{0}^{\infty}u_{0}(\sigma) \mathrm{d}\sigma,\forall t\geq 0. \tag{5.3}\]
_Moreover, the higher moment satisfies the following system of ordinary differential equations_
\[\frac{d}{dt}E_{k}(u(t))=k\,E_{k-1}(u(t))-\chi_{k}\,E_{k}(u(t)),\forall k\geq 1, \tag{5.4}\]
_where_
\[\chi_{k}=\tau_{+}\left(1-\frac{1}{\left(1+\delta_{+}\right)^{k}}\right)+\tau_ {-}\left(1-\frac{1}{\left(1-\delta_{-}\right)^{k}}\right),\forall k\geq 1, \tag{5.5}\]
_and_
\[\lim_{k\rightarrow+\infty}\chi_{k}=-\infty. \tag{5.6}\]
**Remark 5.5**.: _For \(k=1\), and \(\tau_{+}=\tau_{-}=\tau/2\), and \(\delta_{+}=\delta_{-}=\delta\in(0,1)\), then by using the formula (5.5) we obtain_
\[\chi_{1} =\tau_{+}\left(1-\frac{1}{\left(1+\delta_{+}\right)}\right)+\tau_ {-}\left(1-\frac{1}{\left(1-\delta_{-}\right)}\right)\] \[=\tau/2\left[\frac{\delta}{1+\delta}-\frac{\delta}{1-\delta}\right]\] \[=\tau/2\left[\frac{\delta(1-\delta)-\delta(1+\delta)}{1-\delta^ {2}}\right]\]
_and_
\[\chi_{1}=-\tau\frac{\delta^{2}}{1-\delta^{2}}.\]
_Next, by using formula (5.4) with \(k=1\), we obtain_
\[\frac{d}{dt}E_{1}(u(t))=E_{0}(u_{0})+\tau\frac{\delta^{2}}{1-\delta^{2}}\,E_{1}(u(t)). \tag{5.7}\]
_The mean value of the distribution is_
\[M(u(t))=\frac{E_{1}(u(t))}{E_{0}(u(t))}\]
_and since \(t\mapsto E_{0}(u(t))\) is a constant function, we obtain_
\[M(u(t))^{\prime}=1+\tau\frac{\delta^{2}}{1-\delta^{2}}\,M(u(t)).\]
_We conclude that,_
\[\lim_{t\to\infty}M(u(t))=+\infty,\]
_and the smaller \(\delta\) is, the closer the growth of the mean value \(M(u(t))\) is to linear growth in the time \(t\)._
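The last assertion of Remark 5.5 can be made explicit (a direct computation with the quantities defined above): writing \(c=\tau\,\delta^{2}/(1-\delta^{2})>0\), the linear equation for \(M(u(t))\) solves to
\[M(u(t))=\Big{(}M(u(0))+\frac{1}{c}\Big{)}e^{c\,t}-\frac{1}{c},\]
so the mean biological age grows exponentially at the (small) rate \(c\), and as \(\delta\to 0\) we have \(c\to 0\) and \(M(u(t))\to M(u(0))+t\), i.e. linear growth with the chronological time.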
**Lemma 5.6**.: _Let Assumption 5.1 be satisfied. Define_
\[x_{\max}=\frac{\ln\left(\tau_{+}\ln g_{+}\right)-\ln\left(-\tau_{-}\ln g_{-}\right)}{\ln g_{+}-\ln g_{-}}.\]
_Consider the function \(\chi:[0,+\infty)\to\mathbb{R}\) defined by_
\[\chi(x)=\tau_{+}\left(1-g_{+}^{-x}\right)+\tau_{-}\left(1-g_{-}^{-x}\right).\]
_Then we have the following alternative_
* _If_ \(\tau_{+}\ln g_{+}+\tau_{-}\ln g_{-}>0\)_, then the map_ \(\chi\) _first increases from_ \(\chi(0)=0\) _to_ \(\chi(x_{\max})>0\) _on_ \([0,x_{\max}]\)_, and then decreases from_ \(\chi(x_{\max})>0\) _to_ \(-\infty\) _on_ \([x_{\max},\infty)\)_._
* _If_ \(\tau_{+}\ln g_{+}+\tau_{-}\ln g_{-}\leq 0\)_, then the map_ \(\chi\) _decreases from_ \(0\) _to_ \(-\infty\) _on_ \([0,\infty)\)_._
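The alternative of Lemma 5.6 can also be checked numerically. The short Python sketch below is only an illustration; the parameter values for \(\tau_{\pm}\) and \(g_{\pm}\) are arbitrary choices satisfying Assumption 5.1, not values taken from the data of this paper.

```python
import numpy as np

# Illustrative parameter values (our own choice, satisfying Assumption 5.1)
tau_p, tau_m = 0.5, 0.5      # tau_+ and tau_-
g_p, g_m = 1.1, 0.95         # g_+ > 1 > g_- > 0

def chi(x):
    """chi(x) = tau_+ (1 - g_+^{-x}) + tau_- (1 - g_-^{-x})."""
    return tau_p * (1.0 - g_p ** (-x)) + tau_m * (1.0 - g_m ** (-x))

# First case of the alternative: tau_+ ln g_+ + tau_- ln g_- > 0
if tau_p * np.log(g_p) + tau_m * np.log(g_m) > 0:
    x_max = (np.log(tau_p * np.log(g_p)) - np.log(-tau_m * np.log(g_m))) \
            / (np.log(g_p) - np.log(g_m))
    print(f"x_max = {x_max:.2f}, chi(x_max) = {chi(x_max):.4f}")  # maximum of chi

for x in [0.0, 1.0, 5.0, 10.0, 20.0, 50.0]:
    print(f"chi({x:g}) = {chi(x):+.4f}")   # increases up to x_max, then decreases to -infinity
```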
**Proposition 5.7**.: _Assume that there exists \(k_{0}\in\mathbb{N}\), such that_
\[\chi_{k}>0,\forall k=1,\ldots,k_{0},\]
_and_
\[\chi_{k}<0,\forall k>k_{0}.\]
_If we denote the equilibrium solution of (5.4) by_
\[\overline{E}_{k}=\left(\prod_{j=1}^{k}\frac{j}{\chi_{j}}\right)\int_{0}^{ \infty}u_{0}(\sigma)\mathrm{d}\sigma,\forall k=1,\ldots,k_{0}, \tag{5.8}\]
_then from (5.4) we have the following convergence result_
\[\lim_{t\to+\infty}E_{k}(u(t))=\overline{E}_{k},\forall k=1,\ldots,k_{0}, \tag{5.9}\]
_and_
\[\lim_{t\to+\infty}E_{k}(u(t))=+\infty,\forall k>k_{0}. \tag{5.10}\]
**Equilibrium solution:** An equilibrium solution satisfies some delay equation with both advance and retarded delay. That is,
\[\left\{\begin{array}{ll}\overline{u}^{\prime}(b)&=\,-\left(\tau_{-}+\tau_{+} \right)\overline{u}(b)\\ &\,+\tau_{-}\,g_{-}\,\overline{u}\left(g_{-}\,b\right)\\ &\,+\tau_{+}\,g_{+}\,\overline{u}\left(g_{+}\,b\right),\\ \overline{u}(0)&=0,\end{array}\right. \tag{5.11}\]
and the difficulty of solving such an equation comes from the following
\[g_{+}\,b>b>g_{-}\,b,\,\forall b>0.\]
It follows that even the existence of an equilibrium solution is not a classical problem to investigate.
**Non-existence result for exponentially decreasing equilibrium solutions of (5.1):** Assume that \(b\mapsto\overline{u}(b)\) is a non-negative continuously differentiable map satisfying system (5.11).
**Assumption 5.8**.: _Assume that the map \(b\mapsto\overline{u}(b)\) is non-negative, not identically zero, continuously differentiable, and satisfies the system (5.11). Assume in addition that \(\overline{u}(b)\) is exponentially decreasing. That is, there exist two constants \(M>0\) and \(\gamma>0\) such that_
\[\overline{u}(b)\leq Me^{-\gamma b},\forall b\geq 0.\]
By using the first equation of (5.11), we deduce that
\[|\overline{u}^{\prime}(b)|\leq\widetilde{M}e^{-\gamma\,\min(1,g_{+},g_{-})\,b },\forall b\geq 0,\]
for some suitable \(\widetilde{M}>0\).
So, under Assumption 5.8, all the moments of \(\overline{u}(b)\) and \(\overline{u}^{\prime}(b)\) are well defined. Moreover, by using the first equation of (5.11), we obtain for each \(k\geq 1\)
\[\int_{0}^{\infty}\sigma^{k}\overline{u}^{\prime}(\sigma)\mathrm{d}\sigma=-\left(\tau_{-}+\tau_{+}\right)\int_{0}^{\infty}\sigma^{k}\overline{u}(\sigma)\mathrm{d}\sigma+\tau_{+}\int_{0}^{\infty}\sigma^{k}\overline{u}\left(g_{+}\sigma\right)g_{+}\mathrm{d}\sigma+\tau_{-}\int_{0}^{\infty}\sigma^{k}\overline{u}\left(g_{-}\sigma\right)g_{-}\mathrm{d}\sigma,\]
and since \(\overline{u}(0)=0\), we obtain by integrating by parts
\[-k\int_{0}^{\infty}\sigma^{k-1}\overline{u}(\sigma)\mathrm{d}\sigma=-\left(\tau_{-}+\tau_{+}\right)\int_{0}^{\infty}\sigma^{k}\overline{u}(\sigma)\mathrm{d}\sigma+\tau_{+}\int_{0}^{\infty}\sigma^{k}\overline{u}\left(g_{+}\sigma\right)g_{+}\mathrm{d}\sigma+\tau_{-}\int_{0}^{\infty}\sigma^{k}\overline{u}\left(g_{-}\sigma\right)g_{-}\mathrm{d}\sigma,\]
and we obtain
\[kE_{k-1}(\overline{u})=\chi_{k}\,E_{k}(\overline{u}),\forall k\geq 1, \tag{5.12}\]
where \(\chi_{k}\) is defined by (5.5).
Under Assumption 5.8, we must have
\[E_{k-1}(\overline{u})>0,\forall k\geq 0,\]
and by using (5.6), we deduce that (5.12) cannot be satisfied for all \(k\geq 1\) large enough (since \(\chi_{k}<0\) for \(k\) large enough while both moments remain positive). Therefore we obtain the following proposition.
**Proposition 5.9**.: _The system (5.11) has no exponentially decreasing solution, that is, no solution of (5.11) satisfying Assumption 5.8._
## 6 Numerical simulations
### Simulation of PDE (5.1)
By setting
\[\tau_{+}=\tau\,p,\text{ and }\tau_{-}=\tau\,(1-p),\]
where \(\tau\) is the rate at which an individual either rejuvenates or prematurely ages, \(p\in[0,1]\) is the probability of rejuvenation, and \((1-p)\in[0,1]\) is the probability of premature aging. The parameter \(p\) can be interpreted as the probability of being cured in the event of illness or injury. Conversely, the parameter \(1-p\) is the probability of getting injured or sick.
By using these new parameters, we obtain a probabilistic interpretation of the model (5.1), and the model (5.1) becomes
\[\left\{\begin{array}{rl}\partial_{t}u(t,b)+\partial_{b}u(t,b)&=-\tau\,u(t,b) \\ &\quad+\tau\,(1-p)\,g_{-}\,u\,(t,g_{-}\,b)\\ &\quad+\tau\,p\,g_{+}\,u\,(t,g_{+}\,b)\\ u(t,0)&=0,\\ u(0,b)&=u_{0}(b)\in L^{1}_{+}\left((0,\infty),\mathbb{R}\right),\end{array}\right. \tag{6.1}\]
and
\[g_{+}>1>g_{-}>0.\]
In order to run a simulation of model (6.1), we use stochastic simulations. We consider a population composed of a finite number \(N=100\,000\) of individuals. We start the simulation at time \(t=0\) with all individuals having the same biological age \(b=20\) years. The time to the next event (rejuvenation or premature aging) follows an exponential law with mean \(1/\tau\). The principle of the simulation is as follows: when an event occurs, we choose one individual at random and draw a random value uniformly between \(0\) and \(1\). If this value is less than \(p\), rejuvenation occurs; otherwise premature aging occurs. At each time step, the biological age of each individual increases by one time step.
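The following Python sketch is a minimal, event-driven variant of this simulation procedure; the parameter values follow the text, while the vectorized implementation details (function names, random seed, horizon) are our own choices and are only meant as an illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters used for Figure 3: p = 0.5, 1/tau = 1 year, g_+ = 1.1, g_- = 0.9
N, b0, T = 100_000, 20.0, 30.0   # population size, initial biological age, horizon (years)
tau, p = 1.0, 0.5                # event rate and probability of rejuvenation
g_plus, g_minus = 1.1, 0.9

def simulate(T):
    """Simulate model (6.1) individual by individual; return the biological ages at time T."""
    b = np.full(N, b0)               # biological ages
    t = np.zeros(N)                  # current chronological time of each individual
    active = np.ones(N, dtype=bool)  # individuals whose trajectory has not yet reached T
    while active.any():
        idx = np.flatnonzero(active)
        wait = rng.exponential(1.0 / tau, size=idx.size)  # time to the next jump
        t_next = t[idx] + wait
        done = t_next > T
        # no further jump before the horizon: simply age until T
        b[idx[done]] += T - t[idx[done]]
        active[idx[done]] = False
        # otherwise: age until the jump, then rejuvenate (b / g_+) or prematurely age (b / g_-)
        jump = idx[~done]
        b[jump] += wait[~done]
        rejuvenate = rng.random(jump.size) < p
        b[jump] = np.where(rejuvenate, b[jump] / g_plus, b[jump] / g_minus)
        t[jump] = t_next[~done]
    return b

ages = simulate(T)
print(f"mean biological age after {T:g} years: {ages.mean():.2f}")
```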
In Figure 3, we present some simulations of the model (6.1) with \(p=0.5\), \(1/\tau=1\) year, \(g_{+}=1+\delta_{+}=1.1\) and \(g_{-}=1-\delta_{-}=0.9\). In Figure 3, we can observe that, starting from a single cohort of individuals with biological age \(20\) at time \(t=0\), the mean value of the distribution follows the chronological age (thanks to the fact that \(p=1/2\) and \(\delta_{+}=\delta_{-}=0.1\); see Remark 5.5), but the density of the population spreads out more and more with time. One also needs to interpret the biological age by saying that the larger the biological age is, the more likely people are to die. Therefore, the more the population is dispersed around the mean value, the more people (with larger biological ages) are likely to die earlier. People with a large biological age can be understood as people suffering from a lack of treatment for their illnesses or injuries.
In Figures 4 and 5, we investigate the influence of the parameter \(p\): \(p=0.25\) in Figure 4 and \(p=0.75\) in Figure 5. In Figures 4 and 5, we can see that, due to the asymmetric value of \(p\), the mean value no longer follows the chronological age. We can observe that the parameter \(p\) plays a very important role in the aging process.
Figure 3: _In this figure, we plot some stochastic numerical simulations of the model (6.1) whenever \(p=0.5\), \(1/\tau=1\) years, \(g_{+}=1.1\) and \(g_{-}=0.9\). We start the simulations with a cohort of \(100\,000\) individuals all with biological age \(20\) years old. The figures (a) (b) (c) (d) are respectively the distribution after \(1\) year, \(10\) years, \(20\) years and \(30\) years._
### Simulation of the moments equation (5.4)
We choose
\[\tau_{+}=\tau_{-}=0.1,\quad\delta_{+}=0.1,\quad\delta_{-}=0.01,\]
and we obtain
\[g_{+}=1+\delta_{+}=1.1,\text{ and }g_{-}=1-\delta_{-}=0.99.\]

Figure 4: _In this figure, we plot some stochastic numerical simulations of the model (6.1) whenever \(p=0.25\); the rest is the same as in Figure 3._

Figure 5: _In this figure, we plot some stochastic numerical simulations of the model (6.1) whenever \(p=0.75\); the rest is the same as in Figure 3._
We use the initial distribution with compact support
\[u_{0}(b)=\frac{2}{5}\max\left(0,b\left(1-b/10\right)\right).\]
In Figure 6, the initial values \(k\mapsto E_{k}(u_{0})\) increase with \(k\) (see (b)). Moreover, the components \(k\mapsto E_{k}(u(t))\) keep the same order for all \(t>0\) (see (d)). Moreover, by using (c) and Proposition 5.7, we deduce that there exists \(k_{0}\in[60,80]\) such that if \(k\) is below \(k_{0}\) the components converge to \(\overline{E}_{k}\), and if \(k\) is above \(k_{0}\) the components go to \(+\infty\).
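For completeness, the following Python sketch integrates a truncated version of the moment system; the truncation level \(K\), the time horizon and the solver are our own implementation choices, and the sketch is only meant to illustrate the computation behind a figure such as Figure 6 (d).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Parameters of Section 6.2
tau_p = tau_m = 0.1
delta_p, delta_m = 0.1, 0.01
K = 20                              # truncation level of the moment hierarchy (our choice)

def chi(k):
    # chi_k as in equation (5.5)
    return tau_p * (1 - (1 + delta_p) ** (-k)) + tau_m * (1 - (1 - delta_m) ** (-k))

# Moments of u_0(b) = (2/5) max(0, b (1 - b/10)), supported on [0, 10]
E0 = np.array([0.4 * 10 ** (k + 2) * (1 / (k + 2) - 1 / (k + 3)) for k in range(K + 1)])

def rhs(t, E):
    dE = np.zeros_like(E)
    dE[0] = 0.0                                  # mass conservation
    for k in range(1, K + 1):
        dE[k] = k * E[k - 1] - chi(k) * E[k]     # moment system (5.4)
    return dE

sol = solve_ivp(rhs, (0.0, 2000.0), E0, rtol=1e-8)

# Equilibrium moments of Proposition 5.7 (valid here since chi_k > 0 for all k <= K)
Ebar = [E0[0]]
for k in range(1, K + 1):
    Ebar.append(Ebar[-1] * k / chi(k))

for k in (1, 5, 10, 20):
    print(f"k = {k:2d}: E_k(2000) = {sol.y[k, -1]:.3e}   Ebar_k = {Ebar[k]:.3e}")
```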
## 7 Discussion
This article proposes a new class of models describing the aging process. The model is based on the notion of biological age, which is a quantity reflecting the aging due to cells failing to repair DNA damage, to illness, to injuries or, more generally speaking, to the body's overall aging. The key features of the model are the following:
1. a drift term with a constant velocity describing the aging process at the cellular level;
2. a rejuvenation operator describing the repair, recovery and healing processes during life;
3. a premature aging operator describing the impact of injuries and illnesses.

Figure 6: _We plot (a) \(b\mapsto u_{0}(b)\); (b) \(k\mapsto E_{k}(u_{0})\) for \(k=0,\dots,100\) (with y axis log scale); (c) \(k\mapsto\chi_{k}\) for \(k=0,\dots,100\); and (d) \(t\mapsto E_{k}(u(t))\) for \(k=0,\dots,100\) (with y axis log scale)._
In this work, we consider the simplest version of the model. The model can be extended in several directions. To conclude the paper, we propose some possible extensions.
### Aging model with birth and death processes
The full model with both rejuvenation and premature aging with birth and death processes is the following
\[\left\{\begin{array}{rl}\partial_{t}u(t,b)+\partial_{b}u(t,b)&=-\mu(b)\,u(t, b)+\beta(t)\,\Gamma(b),\\ &-(\tau_{-}+\tau_{+})u(t,b)\\ &+\tau_{-}\,g_{-}\,u\left(t,g_{-}\,b\right)\\ &+\tau_{+}\,g_{+}\,u\left(t,g_{+}\,b\right)\\ u(t,0)&=0,\\ u(0,b)&=u_{0}(b)\in L^{1}_{+}\left((0,\infty),\mathbb{R}\right).\end{array}\right. \tag{7.1}\]
In the above model, the function \(\mu(b)\) is the mortality rate for individuals with biological age \(b\). That is
\[\exp\left(-\int_{b_{1}}^{b_{2}}\mu\left(b\right)\mathrm{d}b\right)\]
is the probability for an individual to survive from the biological \(b_{1}\) to the biological age \(b_{2}\).
The term \(\beta(t)\) is the flow of new born at time \(t\), that is
\[\int_{t_{1}}^{t_{2}}\beta\left(\sigma\right)\mathrm{d}\sigma\]
is the number of newborn between \(t_{1}\) and \(t_{2}\).
The map \(b\to\Gamma(b)\) is the density of probability to have a biological age \(b\) at birth. That is
\[\int_{b_{1}}^{b_{2}}\Gamma(b)db\]
is the probability to obtain a newborn with biological age \(b\) between \(b_{1}\) and \(b_{2}\). Moreover,
\[\int_{0}^{\infty}\Gamma(b)db=1.\]
**Assumption 7.1**.: _We assume that \(b\mapsto\mu(b)\) and \(t\mapsto\beta(t)\) are constant functions. We also assume that_
\[\mu>0\text{ and }\beta>0.\]
_We assume that the biological age of newborns follows a gamma probability density. That is,_
\[\Gamma(b)=\frac{\delta^{\alpha}\,b^{\alpha-1}\exp\left(-\delta\,b\right)}{(\alpha -1)!}\text{ for }b>0,\]
_where \(\alpha>1\), \(\delta>0\), and the gamma function \((\alpha-1)!\) corresponds here to a constant of normalization, and is defined by_
\[(\alpha-1)!=\int_{0}^{\infty}b^{\alpha-1}\exp\left(-b\right)\mathrm{d}b.\]
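As a quick check of the normalization (a direct computation, using the change of variable \(s=\delta\,b\)):
\[\int_{0}^{\infty}\Gamma(b)\mathrm{d}b=\frac{1}{(\alpha-1)!}\int_{0}^{\infty}s^{\alpha-1}\exp\left(-s\right)\mathrm{d}s=1.\]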
In Appendix A, we discuss an aging model with birth and death processes.
### Model with generalized jumps functions
The full model with both rejuvenation and premature aging reads as follows
\[\left\{\begin{array}{rcl}\partial_{t}u(t,b)+\partial_{b}u(t,b)&=&-(\tau_{-}+ \tau_{+})u(t,b)\\ &&+\tau_{-}\,f^{\prime}_{-}(b)\,u\,(t,f_{-}(b))\\ &&+\tau_{+}\,f^{\prime}_{+}(b)\,u\,(t,f_{+}(b))\\ u(t,0)&=&0,\\ u(0,b)&=&u_{0}(b)\in L^{1}_{+}\left((0,\infty),\mathbb{R}\right).\end{array}\right. \tag{7.2}\]
We can extend the above model by setting
\[f_{-}(b)=\left(1-\delta_{-}(b)\right)b\leq b\leq\left(1+\delta_{+}(b)\right)b =f_{+}(b).\]
To assure the total mass preservation of the model we assume that
\[f_{-}(0)=f_{+}(0)=0\]
and to preserve the positivity of the solutions we assume that
\[f^{\prime}_{-}(b)\geq 0\text{ and }f^{\prime}_{+}(b)\geq 0,\forall b\geq 0.\]
For example, we could use
\[f_{+}(b)=(1+\delta_{+}b^{m})\ b,\text{ with }m\geq 0,\]
where \(\delta_{+}(b)=\delta_{+}b^{m}\) could be any positive polynomial in \(b\), that would model the average amplitude of rejuvenation jumps.
The premature aging jumps \(f_{-}(b)\) must remain below \(b\) (the biological age after the jump) therefore, we could use
\[f_{-}(b)=\left(1-\frac{\delta_{-}}{1+\chi\,b^{m}}\right)\,b,\text{ with }m\geq 0\text{ and }0\leq\chi\leq\delta_{-}<1.\]
### Model with both chronological age and biological age
It is unrealistic to become younger than 20 years, because puberty and post-pubertal growth are irreversible events, and it is also unrealistic to rejuvenate after 80 years, when the stock of undifferentiated stem cells is exhausted. Likewise, it is unreasonable to exceed the age of more than 120 years, which is the physiological limit of the human lifespan. Hence the rates of rejuvenation and premature aging should not be constant as functions of the chronological age. Therefore, it would be important for this problem to consider both the biological and chronological ages.
We could also combine the biological and the chronological age. Consider \(a\) the chronological age (i.e. the time since birth); then, combining both the chronological and the biological age, we obtain
\[\left\{\begin{array}{rl}\partial_{t}u+\partial_{a}u+\partial_{b}u&=-\mu(a,b)u(t,a,b)\\ &-\tau_{+}u(t,a,b)+\tau_{+}\left(1+\delta_{+}\right)u\left(t,a,\left(1+\delta_{+}\right)b\right)\\ &-\tau_{-}u(t,a,b)+\tau_{-}\left(1-\delta_{-}\right)u\left(t,a,\left(1-\delta_{-}\right)b\right)\\ u(t,0,b)&=\beta(t)\Gamma(b),\text{ for }b\geq 0,\\ u(t,a,0)&=0,\text{ for }a\geq 0,\end{array}\right. \tag{7.3}\]
with an initial distribution
\[u(0,a,b)=u_{0}(a,b)\in L^{1}_{+}\left((0,\infty)^{2},\mathbb{R}\right).\]
The function \(u(t,a,b)\) is the density of population at time \(t\) with respect to the chronological age \(a\) and the biological age \(b\). This means that if \(0\leq a_{1}\leq a_{2}\) and \(0\leq b_{1}\leq b_{2}\) then
\[\int_{a_{1}}^{a_{2}}\int_{b_{1}}^{b_{2}}u(t,a,b)\,da\,db\]
is the number of individuals with chronological age \(a\) in between \(a_{1}\) and \(a_{2}\) and the biological \(b\) in between \(b_{1}\) and \(b_{2}\).
This problem can be reformulated as follows
\[\left\{\begin{array}{rl}\partial_{t}u+\partial_{a}u&=Au(t,a,b)-\mu(a,b)u(t,a,b)\\ &-\tau_{+}u(t,a,b)+\tau_{+}\left(1+\delta_{+}\right)u\left(t,a,\left(1+\delta_{+}\right)b\right)\\ &-\tau_{-}u(t,a,b)+\tau_{-}\left(1-\delta_{-}\right)u\left(t,a,\left(1-\delta_{-}\right)b\right)\\ u(t,0,b)&=\beta(t)\Gamma(b),\text{ for }b\geq 0,\end{array}\right. \tag{7.4}\]
We refer to Magal and Ruan [14] for more results about age-structured models combined with an extra structuring variable.
**Appendix**
## Appendix A Aging model with birth and death processes
**Abstract Cauchy problem reformulation:** The problem (7.1) can be reformulated as an abstract Cauchy problem
\[\left\{\begin{array}{l}u^{\prime}(t)=Au(t)+Bu(t)-\mu u(t)+\beta\,\Gamma(.), \text{ for }t\geq 0,\\ \text{with}\\ u(0)=u_{0}\in L^{1}_{+}\left(\left(0,\infty\right),\mathbb{R}\right).\end{array}\right.\] (A.1)
where
\[Bu=\tau_{-}B_{g_{-}}u+\tau_{+}B_{g_{+}}u-\left(\tau_{-}+\tau_{+}\right)u.\]
**Theorem A.1**.: _Let Assumption 5.1 and Assumption 7.1 be satisfied. Then the mild solution of (A.1) is given by_
\[u(t)=T_{A+B-\mu I}(t)u_{0}+\int_{0}^{t}T_{A+B-\mu I}(t-\sigma)\beta\,\Gamma(.) \mathrm{d}\sigma,\forall t\geq 0,\]
_where_
\[T_{A+B-\mu I}(t)=e^{-\mu t}T_{A+B}(t),\forall t\geq 0.\]
_Moreover_
\[\lim_{t\to\infty}u(t)=\overline{u}\geq 0,\]
_where_
\[\overline{u}=\int_{0}^{\infty}T_{A+B-\mu I}(\sigma)\beta\,\Gamma(.)\mathrm{d}\sigma.\] (A.2)
**Moments formulation:** We obtain the following result using arguments similar to those of Theorem 3.6.
By using a change of variable, we obtain
\[E_{k}(\Gamma)=\int_{0}^{\infty}\frac{\delta^{\alpha}b^{\alpha+k-1}\exp\left(- \delta b\right)}{(\alpha-1)!}\mathrm{d}b=\delta^{-k}\frac{(\alpha+k-1)!}{( \alpha-1)!}.\]
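As a quick consistency check, for \(k=1\) this formula gives
\[E_{1}(\Gamma)=\delta^{-1}\,\frac{\alpha!}{(\alpha-1)!}=\frac{\alpha}{\delta},\]
which is the usual mean of a gamma distribution with shape parameter \(\alpha\) and rate \(\delta\), i.e. the mean biological age at birth under Assumption 7.1.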
**Theorem A.2**.: _Let Assumption 5.1 and Assumption 7.1 be satisfied. The rejuvenation and premature aging model with birth and death processes (7.1) has a unique non-negative mild solution. We have_
\[\frac{d}{dt}E_{0}(u(t))=-\mu\,E_{0}(u(t))+\beta,\]
_and the total mass (number) of individuals converges to \(\beta/\mu\). That is_
\[\lim_{t\to+\infty}E_{0}(u(t))=\frac{\beta}{\mu}.\] (A.3)
_Moreover, the higher moment satisfies the following system of ordinary differential equations for each \(k\geq 1\),_
\[\frac{d}{dt}E_{k}(u(t))=k\,E_{k-1}(u(t))-(\mu+\chi_{k})\,\,E_{k}(u(t))+\beta\, \delta^{-k}\frac{(\alpha+k-1)!}{(\alpha-1)!}.\] (A.4)
**Proposition A.3**.: _Let \(k_{1}\) be an integer such that_
\[\chi_{k}<-\mu,\forall k>k_{1}.\]
_Then_
\[\lim_{t\to+\infty}E_{k}(u(t))=+\infty,\forall k>k_{1},\] (A.5)
_whenever \(E_{k}(u_{0})<+\infty\)._
**Equilibrium solution:** An equilibrium solution satisfies some delay equation with both advance and retarded delay. That is,
\[\left\{\begin{array}{rl}\overline{u}^{\prime}(b)&=\beta\,\Gamma(b)-(\mu+\tau_{ -}+\tau_{+})\,\overline{u}(b)\\ &\quad+\tau_{-}\,g_{-}\,\overline{u}\,(g_{-}\,b)\\ &\quad+\tau_{+}\,g_{+}\,\overline{u}\,(g_{+}\,b)\,,\\ \overline{u}(0)&=0,\end{array}\right.\] (A.6)
and the difficulty of solving such an equation comes from the following
\[g_{+}\,b>b>g_{-}\,b,\,\forall b>0.\]
It follows that, even the existence of equilibrium solution is not a classical problem to investigate.
**Non-existence result for exponentially decreasing equilibrium solutions of (7.1):** Assume that \(b\mapsto\overline{u}(b)\) is a non-negative continuously differentiable map satisfying system (A.6).
**Assumption A.4**.: _Assume that the map \(b\mapsto\overline{u}(b)\) is non-negative, not identically zero, continuously differentiable, and satisfies the system (A.6). Assume in addition that \(\overline{u}(b)\) is exponentially decreasing. That is, there exist two constants \(M>0\) and \(\gamma>0\) such that_
\[\overline{u}(b)\leq Me^{-\gamma b},\forall b\geq 0.\] (A.7)
By using the first equation of (A.6), we deduce that
\[|\overline{u}^{\prime}(b)|\leq\widetilde{M}e^{-\gamma\,\min(1,g_{+},g_{-},\delta/2)\,b},\forall b\geq 0,\]
for some suitable \(\widetilde{M}>0\).
So, under Assumption A.4, all the moments of \(\overline{u}(b)\) and \(\overline{u}^{\prime}(b)\) are well defined. Moreover, by using the first equation of (A.6), we obtain for each \(k\geq 1\)
\[\begin{array}{rl}\int_{0}^{\infty}\sigma^{k}\overline{u}^{\prime}(\sigma)\mathrm{d}\sigma&=\beta\int_{0}^{\infty}\sigma^{k}\Gamma(\sigma)\mathrm{d}\sigma-(\mu+\tau_{-}+\tau_{+})\int_{0}^{\infty}\sigma^{k}\overline{u}(\sigma)\mathrm{d}\sigma\\ &\quad+\tau_{+}\int_{0}^{\infty}\sigma^{k}\overline{u}\left(g_{+}\sigma\right)g_{+}\mathrm{d}\sigma\\ &\quad+\tau_{-}\int_{0}^{\infty}\sigma^{k}\overline{u}\left(g_{-}\sigma\right)g_{-}\mathrm{d}\sigma,\end{array}\]
and since \(\overline{u}(0)=0\), we obtain by integrating by parts
\[\begin{array}{rl}-k\int_{0}^{\infty}\sigma^{k-1}\overline{u}(\sigma)\mathrm{d}\sigma&=\beta\,\delta^{-k}\frac{(\alpha+k-1)!}{(\alpha-1)!}-(\mu+\tau_{-}+\tau_{+})\int_{0}^{\infty}\sigma^{k}\overline{u}(\sigma)\mathrm{d}\sigma\\ &\quad+\tau_{+}\int_{0}^{\infty}\sigma^{k}\overline{u}\left(g_{+}\sigma\right)g_{+}\mathrm{d}\sigma\\ &\quad+\tau_{-}\int_{0}^{\infty}\sigma^{k}\overline{u}\left(g_{-}\sigma\right)g_{-}\mathrm{d}\sigma,\end{array}\]
and we obtain
\[0=\beta\,\delta^{-k}\frac{(\alpha+k-1)!}{(\alpha-1)!}+kE_{k-1}(\overline{u})-(\chi_{k}+\mu)\,E_{k}(\overline{u}),\forall k\geq 1,\] (A.8)
where \(\chi_{k}\) is defined by (5.5).
Under Assumption A.4, we must have
\[E_{k-1}(\overline{u})>0,\text{ and }E_{k}(\overline{u})>0,\forall k\geq 0,\]
and by using (5.6), we deduce that (A.8) cannot be satisfied for all \(k\geq 1\) large enough, because \(-\left(\chi_{k}+\mu\right)>0\) for all \(k\geq 1\) large enough.
Therefore we obtain the following proposition.
**Proposition A.5**.: _The equilibrium \(\overline{u}\) defined by (A.2) is not an exponentially decreasing function. That is, for each \(M>0\) and \(\gamma>0\), the function \(\overline{u}\) does not satisfy (A.7)._
The above proposition is surprising because it shows that it is sufficient to perturb an age-structured model with a non-local term \(-\tau_{-}u(t,b)+\tau_{-}\left(1-\delta_{-}\right)u\left(t,\left(1-\delta_{-}\right)b\right)\) to obtain an equilibrium solution which is not exponentially bounded.
|
2302.14769 | Membership Inference Attack for Beluga Whales Discrimination | To efficiently monitor the growth and evolution of a particular wildlife
population, one of the main fundamental challenges to address in animal ecology
is the re-identification of individuals that have been previously encountered
but also the discrimination between known and unknown individuals (the
so-called "open-set problem"), which is the first step to realize before
re-identification. In particular, in this work, we are interested in the
discrimination within digital photos of beluga whales, which are known to be
among the most challenging marine species to discriminate due to their lack of
distinctive features. To tackle this problem, we propose a novel approach based
on the use of Membership Inference Attacks (MIAs), which are normally used to
assess the privacy risks associated with releasing a particular machine
learning model. More precisely, we demonstrate that the problem of
discriminating between known and unknown individuals can be solved efficiently
using state-of-the-art approaches for MIAs. Extensive experiments on three
benchmark datasets related to whales, two different neural network
architectures, and three MIA clearly demonstrate the performance of the
approach. In addition, we have also designed a novel MIA strategy that we
coined as ensemble MIA, which combines the outputs of different MIAs to
increase the attack accuracy while diminishing the false positive rate.
Overall, one of our main objectives is also to show that the research on
privacy attacks can also be leveraged "for good" by helping to address
practical challenges encountered in animal ecology. | Voncarlos Marcelo Araújo, Sébastien Gambs, Clément Chion, Robert Michaud, Léo Schneider, Hadrien Lautraite | 2023-02-28T17:10:32Z | http://arxiv.org/abs/2302.14769v1 | # Membership Inference Attack for Beluga Whales Discrimination
###### Abstract
To efficiently monitor the growth and evolution of a particular wildlife population, one of the main fundamental challenges to address in animal ecology is the re-identification of individuals that have been previously encountered but also the discrimination between known and unknown individuals (the so-called "open-set problem"), which is the first step to realize before re-identification. In particular, in this work, we are interested in the discrimination within digital photos of beluga whales, which are known to be among the most challenging marine species to discriminate due to their lack of distinctive features. To tackle this problem, we propose a novel approach based on the use of Membership Inference Attacks (MIAs), which are normally used to assess the privacy risks associated with releasing a particular machine learning model. More precisely, we demonstrate that the problem of discriminating between known and unknown individuals can be solved efficiently using state-of-the-art approaches for MIAs. Extensive experiments on three benchmark datasets related to whales, two different neural network architectures, and three MIA clearly demonstrate the performance of the approach. In addition, we have also designed a novel MIA strategy that we coined as ensemble MIA, which combines the outputs of different MIAs to increase the attack accuracy while diminishing the false positive rate. Overall, one of our main objectives is also to show that the research on privacy attacks can also be leveraged "for good" by helping to address practical challenges encountered in animal ecology.
## 1 Introduction
In animal ecology, the ability to re-identify (re-ID) an individual animal across multiple encounters allows for addressing a broad range of questions such as ecosystem function, community, and population dynamics as well as behavioral ecology [2, 27]. In many cases, especially for aquatic species such as marine mammals, re-ID requires extensive training and practical experience for a human to acquire sufficient expertise to be able to accurately recognize a particular individual. To partially circumvent this issue, biologists usually rely on approaches such as tagging and photo-identification (photo-ID) [59, 27]. While accurate, the tagging approach is intrusive to animals and is often expensive and laborious. In contrast, the photo-ID approach uses visual identification from camera images (_e.g._, hand-held camera, camera trap, or drones), which is non-invasive for animals and has a lower cost. Nonetheless, there are some practical and methodological challenges associated with its use. First, even among experienced researchers, there is a non-negligible chance of human error and bias when reviewing photos [17]. Second, it is also time-consuming and expensive in terms of human involvement to manually filter through thousands of images.
To overcome these limitations, one possible strategy is to rely on computer vision techniques to standardize and automatize the animal re-ID process [54]. To realize this, for decades, "feature engineering", which can be defined as the process of selecting or transforming raw data into informative features, has been the most commonly used technique. Basically, it means that most of the algorithms for animal re-ID are designed and implemented to focus exclusively on predetermined traits, such as patterns of spots or stripes, to discriminate among individuals. However, feature engineering requires programming experience and sufficient familiarity with the species considered to identify relevant features. In addition, this approach lacks generality, as once a feature detection algorithm has been designed for one species, it is unlikely to be useful for others [21].
More recently, the last decade has witnessed the emergence of deep learning systems that make use of large data volumes to automatically learn discriminative features [34]. In particular, Convolutional Neural Networks (CNNs) have achieved state-of-the-art results in a variety of use cases based on the assumption of a closed world (_i.e._, a fixed number of classes/identities). However, CNNs are known to lack robustness when deployed in real-world classification/recognition applications, in which incomplete knowledge of the world during training results in unknown classes being submitted to the model during testing. This corresponds, for instance, to the situation in which, when used in the wild, the model will have to recognize individuals that it has not seen during training.
In marine ecology, one of the main challenges related to animal re-ID, such as the re-ID of wild whales, is the encounter of large populations in which new individuals frequently appear due to birth or migration, therefore creating an "open-set" setting [52] wherein the identity model must deal with "classes" (_i.e._, individuals) unseen during training. Thus, a desirable feature for an animal re-ID approach is the ability to identify not only animals that belong to the catalog but also to recognize new individuals (_i.e._, previously unknown animals). To address this issue, we investigate the use of Membership Inference Attacks (MIAs), a form of privacy leakage in which the objective of the adversary is to decide whether a given data sample was in a machine learning model's training dataset [55, 62, 50, 39, 32, 12]. Knowing that a specific data sample was used to train a particular model may lead to potential privacy breaches if, for instance, this membership reveals a sensitive characteristic (_e.g._, being part of the cohort of patients having a particular disease or being a member of a vulnerable group). The gist of our approach is to leverage a MIA to discriminate whether a given beluga whale was present or not in the training set. Then, this information can be used in the re-ID pipeline to decide whether to classify the individual as known or to add a new entry to the catalog for an unknown individual.
To summarize, in this paper our main contribution is the proposition of a novel approach for whale discrimination through images (photo-ID), which relies on the use of MIAs. In particular, one of our objectives is to show that by drawing on the significant body of work on MIAs, it is possible to efficiently address the "open-set" vs "closed-set" problem. To demonstrate this, extensive experiments have been conducted with three state-of-the-art MIAs that leverage different information produced by the model (_i.e._, prediction confidence, predicted and ground truth label, or both of them) as well as different attack strategies (neural network-based, metric-based and query-based attacks). More precisely, we have performed a comprehensive measurement of the success of MIAs in addressing the open-set problem over two model architectures (ResNet50 [19] and DenseNet-121 [23]) and three benchmark image datasets related to whale species (GREMM [37], Humpback [15, 9] and NOAA [40]), along with three state-of-the-art MIAs, namely Yeom _et al._[62], Salem _et al._[50] and LabelOnly [14], thus building a total of 36 attack scenarios. In addition, while previous works [55, 50, 14] assume that information leakage is more likely for overfitted machine learning models, we verify this assumption by evaluating both overfitted and non-overfitted models while monitoring the false positive rate, as recommended in [8, 48] for the reliability of the results. Finally, we introduce a novel attack design for whale discrimination, which we coined ensemble MIA, which combines the outputs of different MIAs to increase the attack accuracy while decreasing the false positive rate.
The outline of the paper is as follows. First, in Section 2, we review the relevant background on automated photo-identification systems as well as on membership inference attacks. Then, in Section 3, we describe the St. Lawrence beluga whale re-ID pipeline from side pictures, the training of the attack model as well as the different MIA strategies that we propose to implement the discrimination between known and unknown belugas. Afterwards, in Section 4, we present the experimental setting used to evaluate our approach, which includes the datasets, the experimental configuration as well as the target and attack models. Finally, in Section 5, we report on the performance of the approach under different scenarios and discuss how the attack can generalize to different settings as well as the factors influencing its success and its robustness, before concluding in Section 6.
## 2 Related Work
In this section, we first review the related work on re-identification and discrimination of marine mammals as well as the background on MIAs.
### Automated Photo Identification of Marine Mammals
The research on the individual identification of cetaceans using natural markings began in the early 1970s [61, 60], including the use of unique markings and coloration, or notches in the dorsal fin or fluke [58, 10]. For instance, Pollicelli, Coscarella and Delieux [44] have evaluated the opportunity to use image metadata (_i.e._, annotations describing the animal characteristics as well as the time and place at which the picture was taken) as an attribute for photo-ID to reduce the number of possible matches in the identification step. In this work, classical machine learning techniques, such as neural networks, Bayesian classifiers, decision trees and \(k\)-nearest neighbors, were applied to the metadata of 869 pictures of 223 common dolphin individuals taken over seven years. Overall, the decision tree classifier was able to correctly identify 90% of the individuals on the validation set based only on the metadata of their pictures. One clear limitation of this work is the reliance on metadata rather than on intrinsic visual characteristics of the animals. In addition, manual work is also required and the system has to be retrained to include new individuals.
In [47], a fully automated system called Smart Photo-ID of Risso's dolphin (SPIR) was developed to study the presence of Risso's dolphin in the Gulf of Taranto. This species is characterized by several distinctive scars present over the dorsal fin of the animal, a useful pattern for automated recognition. The dataset necessary for training this system was created with the involvement of the general public in research activities, side by side with experts. The first step of the system consists in preprocessing the input image to extract the dorsal fin segmentation, employing Otsu's thresholding technique [43] and morphological operators. After detection, feature extraction is performed using Speeded Up Robust Feature (SURF) [3] and Scale-Invariant Feature Transform (SIFT) [33], which are methods for extracting local characteristics in images. To predict the identity of an unknown dolphin, the input image is compared with all of the images available in the database. Then, the picture with the highest number of matching features with the query image is selected as the best-matching dolphin. The results obtained demonstrate that SIFT outperforms the SURF feature detector, showing better performance and achieving a 90% accuracy in the validation experiment. Unfortunately, the application of SPIR cannot be extended easily to other
species, especially if these are not characterized by scars over the dorsal fin.
Recently, Maglietta and collaborators [35] have proposed a novel methodology called NNPool, dedicated to the automatic discrimination of unknown vs. known Risso's dolphins. More precisely, NNPool consists of a pool of \(n\) CNNs, each one being trained to recognize a particular known individual versus the rest of the dolphins (_i.e._, a form of one-versus-all classification). The models were trained on Risso's dolphin data and photos acquired between 2013 and 2018 in the Northern Ionian Sea (Central-Eastern Mediterranean Sea). The results obtained have also been validated using another dataset composed of unknown images of Risso's dolphins from the Northern Ionian Sea and the Azores, acquired in 2019. More precisely, their experiments considered 28 individuals to validate the experimental results, using 300 images of Risso's dolphin fins: 40 images belonging to some of the 23 known dolphins and the remaining 260 belonging to unknown dolphins. A discrimination accuracy of 87% was measured on a validation dataset; the method can be used as a preprocessing step for SPIR [47] to detect an unknown dolphin before performing the photo-ID of known individuals. This work is the closest to our work, in the sense that it considers the discrimination task of distinguishing known vs unknown individuals, rather than only photo re-ID. Nonetheless, it is applied to a species that is much easier to discriminate because of its distinctive marks. In addition, the dataset used to conduct the experiments is not publicly available, which makes it impossible to compare our approach to theirs.
Deep learning approaches are relatively novel in the field of animal photo-ID [18, 38, 26, 41, 35]. For example, Bogucki and co-authors introduced a fully automated system based on three CNNs for photo-ID of North Atlantic right whales [6]. This system participated in a competition hosted on the Kaggle platform in 2015 on the automation of the right whale recognition process using a dataset of aerial photographs of animals [15]. The training dataset provided for the competition consisted of 4544 images, each containing a single right whale and labeled with the correct whale identity. Submissions were evaluated on a test set of 2493 images, used to determine the rankings of the competitors. The number of pictures per whale varied considerably in this dataset (_e.g._, six individuals had only one photograph whereas there were two whales with eighty-two images each). This is a challenging setting for classification, whose performance depends on the number of images available for each individual. The proposed method uses a CNN that selects the region of interest and outputs a bounding box around the head of the whale, which is then used to crop the high-resolution image. The authors developed a network that automatically scales, rotates and crops the input image. This is achieved by training a CNN to locate two key points on the top of the whale's head from already labeled data. Data augmentation was applied, adding rotated and re-scaled versions of the images in the original dataset. Finally, another CNN was used to perform actual whale identification, obtaining an accuracy of individual right whale recognition of 87.44%. The authors explained that the wide variability in the number of images per individual whale impacted the performance of the last CNN devoted to individual recognition. More precisely, having more images per individual improves the recognition accuracy.
More recently, Bergler and co-authors [4] have developed a deep-learning-based framework for identifying killer whales. The approach, called FIN-PRINT, was trained and evaluated on a dataset collected over an 8-year period (2011-2018) in the coastal waters of western North America that consists of 367 individuals. First, object detection is performed to identify unique killer whale markings, which achieves a precision of 94.1% using the recent version of YOLO (YOLOv5) [25]. Second, all previously detected killer whale markings are extracted. The third step introduces a data enhancement mechanism by filtering between valid versus invalid (VVI) markings from previous processing levels, in which ResNet34 is used for binary classification of VVI identification images, achieving a precision of 97.5%. The fourth and final step involves multi-class identification, which assigns a test sample to one of the top-100 killer whales. FIN-PRINT achieves a top-1 accuracy of 92.5% and a top-3 accuracy of 97.2% for photo-identified killer whales. Note that the top-100 killer whales each have more than 325 images per individual while the remaining individuals have fewer images per class, which leads to an unbalanced dataset challenge.
In Cheeseman _et al._[9], the authors have developed a new CNN-based similarity algorithm for humpback whale individuals. The method relies on a Densely Connected Convolutional Network (DenseNet) to extract key-points of an image of the ventral surface of the fluke and then train the CNN model. The extracted features are then compared against those of the reference set of previously known humpback whales for similarity. The ArcFace algorithm [16] uses fluke shape, edge pattern and surface markings to locate images in a hyper-sphere space in which proximity becomes the similarity measure. For testing, they evaluated the complete dataset of 15494 humpback whale individuals, considering 33321 whale fluke images used in the Kaggle competition. The authors argue that CNN-based image recognition is much faster and more accurate than traditional manual matching, reducing the time for identifying a picture by over 98% and decreasing the error rate from approximately 6-9% to 1-3%.
To the best of our knowledge, there is no automatic photo-ID system for beluga whale re-identification available in the literature, due to the challenges specific to this species: belugas often lack unique or permanent pigmentation. In addition, they also do not have a dorsal fin, a feature common to other ice-inhabiting whales (_e.g._, humpback whales). Although photo-ID studies of beluga whales are being conducted in Cook Inlet, Alaska [36], the White Sea, Russia [13], and the St. Lawrence Estuary, Canada [37], a standardized and public database is not yet available for this task to be investigated by the computer vision scientific community.
### Membership Inference Attack
With respect to privacy, in addition to the sensitive inferences that can be drawn from the data itself, it is also important to understand how much the output of the learning algorithm itself (_e.g._, the model) leaks information about the input data it was trained on. For instance, privacy attacks (also called inference attacks) have been developed against machine learning models to reconstruct the training data from the model or to predict whether the profile of a particular individual known to the adversary was in the training dataset [55]. Generally, this membership inference is deemed problematic if revealing that a profile belongs to this database enables one to learn sensitive information
about this individual (_e.g._, the training set is composed of individuals suffering from a particular disease or from particularly vulnerable subgroups).
More precisely, in a MIA an adversary that knows the particular profile of an individual tries to infer whether this profile was in the training dataset used to learn a particular model [22]. Generally, the adversary models considered in the MIA literature assume either black-box or white-box access to the model being attacked. In a black-box setting, the term oracle is sometimes used to refer to the access of the adversary, since he can only submit requests to the model and observe the model outputs (_i.e._, he does not have access to the structure of the model). Such attacks need little information and as such are quite general and versatile, but at the same time usually offer lower performance than attacks conducted in the white-box setting. In contrast, a white-box adversary is assumed to have (partial or full) knowledge of the model, such as its architecture, its parameters as well as its weights. The attacks conducted in this setting usually achieve better performance since they can be adapted to specific models and also have access to more information at inference time.
Usually, the success of MIAs is impacted by model overfitting. Indeed, if the model attacked has overfitted the training data, it will behave quite differently when an example contained in the training set is submitted (_e.g._, the confidence of its prediction will be higher). This means that the success of MIAs can be decreased by employing mechanisms classically used in machine learning to reduce overfitting, as well as by using more training samples to avoid too precise memorization. In contrast, in our case, we will exploit overfitting in a positive way, as a means to increase the success of the MIAs and thus the discrimination of known vs unknown belugas.
Standard machine learning metrics such as precision, recall and F1-measure can be used to quantify the success of MIAs. However, they might be interpreted differently. For instance, in the attack context, one might want to have a high precision even if it means reducing recall (_e.g._, by tuning a confidence threshold). Indeed, performing a MIA on a few individuals with high confidence can be considered a more severe privacy breach than performing a MIA on a large number of individuals but with low confidence [8]. More precisely, the false positive rate (sometimes also called the false alarm rate) should be reduced as much as possible, as it is a key indicator of the attack performance.
## 3 Proposed Approach
In this section, we first describe the generic beluga whale re-id pipeline before detailing the training process of the attack model as well as the state-of-the-art MIAs that we have used to implement it. Finally, we describe our novel approach to performing MIA based on an ensemble strategy.
### Beluga Whale Identification Pipeline
The general pipeline for beluga whale identification is illustrated in Figure 1. It consists of two phases: (1) discrimination, which distinguishes between known and unknown whale individuals through a MIA, and (2) re-identification (re-ID).
**Discrimination.** The attack model trained to conduct the MIA is used to determine whether the target sample \(x\) of a beluga is part of the training set of the target model. We describe how to build the attack model in Section 3.2.
**Re-ID.** Once the attack model has determined whether the target sample \(x\) corresponds to a known (_i.e._, within the training set) or unknown beluga (_i.e._, out of the training set), known examples can be immediately classified through a standard classifier. Otherwise, for unknown belugas, the recognition has to be done manually using side-information. For instance, this side-information could be provided by experts, who confirm whether the individual is indeed new and otherwise decide to which class the unknown individual should be assigned.
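The two phases can be summarized by the following sketch; `attack_model`, `classifier` and `expert_review` are placeholder interfaces (not components of a released implementation) standing for the MIA-based discriminator, the closed-set re-ID classifier and the manual fallback, respectively.

```python
def identify(image, attack_model, classifier, expert_review):
    """Two-stage pipeline: (1) MIA-based discrimination, (2) re-identification."""
    if attack_model.is_member(image):      # sample predicted to belong to a known individual
        return classifier.predict(image)   # standard closed-set re-ID
    # unknown individual: fall back to manual recognition with side-information
    return expert_review(image)
```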
### Training of the Attack Model
We assume that the adversary has access to a local dataset, which we call the attack dataset \(D^{s}\). The attack dataset comes from the same distribution (_i.e._, the same population or the same individuals) as the one used to train the target model. To infer whether a sample \(x\) is in the training set of the target model, our core idea is to train an attack model \(M_{attack}\) that can detect whether a particular sample corresponds to the picture of a beluga that was or was not part of the training set. Figure 2 provides a high-level overview of the training process of the attack model, which we describe in detail hereafter.
**Training process.**\(D_{train}\) is the training dataset, which is used for training the target model with the learning algorithm \(A\). \(D^{s}\) is the attack dataset, disjoint from the training dataset \(D_{train}\), which contains data points coming from the same data distribution as the training members in \(D_{train}\). The adversary first trains the attack model using the attack training dataset \(D^{s}\) and the learning algorithm \(A\), in such a way that the attack model mimics the behavior of the target model. \(T\) is the attack test dataset, which is assumed to be disjoint from both \(D^{s}\) and \(D_{train}\), in the sense that it is composed of non-member individuals never seen in \(D^{s}\) or \(D_{train}\). When the training of the attack model is completed, the adversary queries the attack model with the attack training and test datasets to obtain the output prediction vectors for each data point. More formally, we denote a prediction vector as \(\hat{p}(y\,|\,x)\), in which "member" is labelled as 1 and "non-member" as 0. Then, each "member" data point and "non-member" data point is represented as follows:
\[P_{i}^{m}=\{\hat{p}(y\,|\,x),1\} \tag{1}\]
\[P_{i}^{n}=\{\hat{p}(y\,|\,x),0\} \tag{2}\]
More precisely, the prediction vector of each point \(i\) in the attack training dataset is labeled "member" \(P_{1}^{m},\dots,P_{k}^{m}\) and the prediction vector of each point \(i\) in the attack test dataset is labeled "non-member" \(P_{1}^{n},\dots,P_{k}^{n}\). Thus, the
Figure 1: Overview of the whale identification pipeline.
adversary can build \(k\) "member" data points and \(k\) "non-member" data points, which jointly form the training dataset of the attack model.
Finally, the problem of recognizing the complex relationship between members and non-members is converted into a binary classification problem. Once trained, the adversary can use the attack model \(M_{attack}\) to implement MIAs on arbitrary data points. The attack model takes the prediction vector \(\hat{p}(y\,|\,x)\) of the target model of a data point \(x\) as input and outputs whether this point is in \(D_{train}\) of the target model or not.
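A minimal sketch of this construction is shown below; `attack_model_backbone.predict_proba` is a hypothetical interface returning the prediction vector \(\hat{p}(y\,|\,x)\) of the model trained by the adversary on \(D^{s}\).

```python
import numpy as np

def build_attack_training_set(attack_model_backbone, member_images, nonmember_images):
    """Label prediction vectors as member (1) / non-member (0), as in Eqs. (1)-(2)."""
    p_member = np.stack([attack_model_backbone.predict_proba(x) for x in member_images])
    p_nonmember = np.stack([attack_model_backbone.predict_proba(x) for x in nonmember_images])

    X = np.concatenate([p_member, p_nonmember], axis=0)   # k member + k non-member points
    y = np.concatenate([np.ones(len(p_member)), np.zeros(len(p_nonmember))])
    return X, y   # training data for the binary membership classifier
```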
### Attack Model Design
To instantiate the MIA, we have applied three state-of-the-art MIAs from the literature that leverage different types of information output by the target model (namely prediction confidence, ground-truth label or both) and different attack strategies (_i.e._, neural network-based, metric-based and query-based) as shown in Table 1. These MIAs all consider an adversary with black-box access to the model and thus are quite generic. Note that while we did not consider MIAs with white-box access to the model [49], we leave their investigation as future work to further increase the attack success of beluga discrimination. In security, white-box access is usually considered less realistic than black-box access in many real-life situations (_e.g._, the use of machine learning as a service). However, this is not the case in our setting, as we fully control the implementation pipeline of the MIA.
**Yeom _et al._[62]** In contrast to neural network-based attacks like Salem _et al._[50] explained in the next paragraph, metric-based attacks leverage a certain metric and a predefined threshold over the metric (computed over the attack dataset by querying the attack model) to differentiate members and non-members. More precisely, Yeom _et al._[62] uses the prediction confidence of the correct class under the assumption that the confidence should be high for the member samples as the target model is optimized with this objective. The \(Metric_{conf}\) attack is defined as follows:
\[Metric_{conf}(\hat{p}(y\,|\,x),\,y)=\mathbb{1}\left(\mathcal{L}(\hat{p}(y\,|\,x);\,y)\leq\tau\right), \tag{3}\]
in which \(\mathcal{L}\) is the cross-entropy loss function and \(\tau\) is a preset threshold. An adversary infers an input record as a member if its prediction loss is smaller than the average loss of all training members; otherwise, it is inferred as a non-member. The intuition behind this attack is that the target model has been learnt on its training members by minimizing their prediction loss. Thus, the prediction loss of a training record should be smaller than the prediction loss of a test record. The threshold is an input hyperparameter to the attack and as such could be learned using an evaluation set, for instance. To identify the threshold yielding optimal accuracy, we use the evaluation set from the target set and treat one half as members, with the rest as non-members. We compute the AUC (Area Under the Curve) and precision/recall metrics, sweep over a range of values for the threshold \(\tau\) and measure the resulting attack's FPR/TPR and precision/recall trade-offs. We can then choose the best threshold \(\tau\) based on membership inference accuracy for this simulated setup.
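A compact sketch of this metric-based attack and its threshold calibration is given below (assuming the target model's prediction vectors and integer class labels are available as NumPy arrays; names are illustrative).

```python
import numpy as np

def cross_entropy_loss(pred_probs, true_labels, eps=1e-12):
    # per-sample cross-entropy loss of the target model's prediction vectors
    idx = np.arange(len(true_labels))
    return -np.log(pred_probs[idx, true_labels] + eps)

def metric_conf_attack(losses, tau):
    # infer "member" (1) when the prediction loss is below the threshold tau
    return (losses <= tau).astype(int)

def calibrate_threshold(sim_losses, sim_membership, candidate_taus):
    # sweep tau on a simulated half-member / half-non-member split of the evaluation set
    best_tau, best_acc = None, -1.0
    for tau in candidate_taus:
        acc = float(np.mean(metric_conf_attack(sim_losses, tau) == sim_membership))
        if acc > best_acc:
            best_tau, best_acc = tau, acc
    return best_tau, best_acc
```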
**Salem _et al._[50]** This attack takes the prediction vector confidences as the input to the attack model. The adversary derives the training dataset of the attack model by querying
\begin{table}
\begin{tabular}{|c|c|c|} \hline MIA & Features & Attack strategy \\ \hline Yeom _et al._[62] & C, L & Metric-based \\ \hline Salem _et al._[50] & C & Neural network-based \\ \hline Label-only [14] & L & Query-based \\ \hline \end{tabular}
\end{table}
Table 1: Summary of MIAs investigated. In the features column, C denotes the use of confidence while L corresponds to the use of label.
Figure 2: Attack model training procedure.
the attack model with the attack training dataset (labeled as members) and the attack testing dataset (labeled as non-members). With this labeled data, the adversary can learn the attack model, which is a multi-layer perceptron (MLP). A traditional 3-layer MLP with 64, 32 and 2 neurons per layer is used for neural network-based attacks. We use the same overfitting hyperparameter settings as described in Section 4.3. Once the attack model is learnt, the adversary can perform the attack over the target model to differentiate members and non-members with respect to \(D_{train}\).
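A minimal PyTorch sketch of such an attack model is given below; the layer sizes follow the 64/32/2 description above, while the training loop itself is only indicated in comments.

```python
import torch
import torch.nn as nn

class AttackMLP(nn.Module):
    """3-layer MLP attack model taking the target model's prediction vector as input."""
    def __init__(self, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_classes, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 2),                 # logits for member / non-member
        )

    def forward(self, prediction_vector):
        return self.net(prediction_vector)

# Sketch of the training step on (prediction vector, membership label) pairs:
# model = AttackMLP(num_classes=60)
# optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)
# loss = nn.CrossEntropyLoss()(model(X_batch), y_batch)
```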
**Label-only [14]** Rather than using confidence predictions, query-based attacks restrict the attack to using only the predicted labels from the target model. Label-only attacks determine membership status by sending multiple queries to the target model, which are concretely generated by adding adversarial perturbations to the input sample until the predicted label changes. The attack measures the magnitude of the perturbation and considers the data sample as a member if this magnitude is larger than a predefined threshold. More formally, given some estimate \(dist(x,y)\) of a point's \(\ell_{2}\)-distance to the model's decision boundary, the attack predicts \(x\) as a member if \(dist(x,y)\geq\tau\) for some threshold \(\tau\).
To estimate the distance, the attack starts from a random misclassified point. Then, a "walk" along the decision boundary is performed while minimizing the distance to \(x\) using HopSkipJump [11], which closely approximates stronger white-box attacks. HopSkipJump is initialized with a sample blended with uniform noise that is misclassified, and over iterations this sample is moved along the decision boundary to get closer to the attacked image. For a given target model, the attack assumes that the robustness to adversarial perturbations is higher for a member sample than for a non-member, as the former was involved in the training of the model.
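The decision rule can be sketched as follows; `boundary_distance` is a hypothetical helper that estimates the \(\ell_{2}\) perturbation magnitude needed to flip the target model's predicted label (e.g., via a HopSkipJump-style boundary walk), and is not part of any specific library API.

```python
import numpy as np

def label_only_attack(samples, boundary_distance, tau):
    """Predict "member" (1) for samples that are far from the decision boundary."""
    distances = np.array([boundary_distance(x) for x in samples])
    # training points are assumed to be more robust to adversarial perturbations
    return (distances >= tau).astype(int)
```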
### Ensemble Membership Inference Attack
In this section, we propose a novel way to perform a MIA, illustrated in Figure 3, which we coin an ensemble membership inference attack. In the ensemble MIA, instead of a single one, \(n\) attack models are built using different subsets of the data. More precisely, the attack model \(M_{attack}\) is not trained directly using the whole dataset; rather, this dataset is split into disjoint subsets to create several attack models (\(M_{attack_{1}}\) to \(M_{attack_{n}}\)). For instance, when the dataset contains 60 individuals, it is split into 6 subsets of 10 individuals to train \(M_{attack_{1}}\) to \(M_{attack_{6}}\).
At discrimination time, the MIA ensemble generates \(n\) predicted outputs for each new sample \(x\), which are combined using a combination rule \(E\). The combination rule generates the final output "member" or "non-member". In this paper, we have used the simple combination rule \(E\) whereby an input \(x\) is labelled as "member" if at least one \(M_{attack}\) prediction output assigned it as a "member"; otherwise it is considered to be a "non-member". Our rationale behind the design of the ensemble MIA is that training an attack model with fewer individuals makes the classifier more powerful at discriminating between similar classes. Indeed, smaller subsets decrease the complexity of the discrimination, as they usually give a higher prediction score for individuals seen during training than models built on larger datasets. Moreover, our experiments confirmed that the attack performance may vary across different individuals due to the different overfitting levels for each set of classes (see Table 6 in Section 5.4).
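The following sketch illustrates the subset construction and the combination rule \(E\); `is_member` is a placeholder interface for a trained sub-attack-model.

```python
def split_identities(identities, n_subsets):
    # disjoint identity subsets, e.g. 60 GREMM individuals -> 6 subsets of 10
    size = len(identities) // n_subsets
    return [identities[i * size:(i + 1) * size] for i in range(n_subsets)]

def ensemble_is_member(sample, attack_models):
    # combination rule E: "member" as soon as any sub-attack-model predicts membership
    return any(m.is_member(sample) for m in attack_models)
```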
## 4 Experimental Setting
In this section, we present the experimental setting used to validate our approach of using MIA for beluga whale discrimination. More precisely, we first describe the datasets used in the experiments (Section 4.1), followed by the experimental configuration (Section 4.2) and finally by the target and attack models' architectures and training settings (Section 4.3).
### Datasets
The experiments were conducted on three distinct datasets in terms of visual characteristics: GREMM, Humpback and NOAA.
* The GREMM dataset is made of photos from handheld cameras taken during photo-identification surveys conducted from June to October between 1989 and 2007, as part of an ongoing long-term study of the social organization of the St. Lawrence Estuary beluga population in Quebec (Canada). This dataset is composed of 983 beluga individuals and thousands of side-view beluga images. However, the number of pictures per individual varies significantly, with many belugas having only a small number of pictures. Thus, we selected a part of this dataset that contains 3402 images distributed across 180 individuals. In addition, as a pre-processing step, we use the method previously proposed in [1] to detect and crop the images.
* The Humpback dataset is extracted from the Whale and Dolphin Identification competition dataset [15], which originally contains images of over 15000 unique individual marine mammals from 30 different species collected from 28 different research organizations. We selected only the Humpback species to evaluate our approach because they are known to be among the easiest species to recognize due to their very distinctive patterns on the flukes. For example, the first-ranked solution [45] in the Kaggle competition achieves 0.973 on the private leaderboard of a competition that only identifies humpback whales [56]. The Humpback dataset contains 270 individuals with a total of 4814 images.
* Finally, the last dataset has been collected by the US federal agency National Oceanic and Atmospheric Administration (NOAA), which monitors five different populations of belugas across Alaskan waters, with a
Figure 3: Ensemble MIA.
focus on the Cook Inlet belugas. More precisely, the NOAA dataset is composed of 380 individuals with 5158 images in total, corresponding to top views of beluga whales. Note that the top-view pictures that compose the NOAA dataset are considered more informative than the pictures of beluga flanks taken from the side of the animals that compose the GREMM dataset.
To summarize, Table 2 describes the numbers of individuals and sample pictures in each of these three datasets while their visual characteristics are presented in Figure 4.
### Experimental Configuration
Each dataset used to train MIAs is composed of individual whales, each of them having many associated pictures. To assess the MIA for beluga discrimination, we have sampled three disjoint sub-datasets with an equal or approximately equal number of identities. However, the number of images per individual (_i.e._, an individual beluga is an identity) varies greatly from one beluga to another. Therefore, the sampling cannot guarantee that each class has an equal number of data points in each sub-dataset unless the number of data points in each sub-dataset is augmented. The augmentation process included operations such as random horizontal flips and brightness variations. More precisely, augmented images are obtained by rotating the original image by 90, 180, 270 and 330 degrees clockwise and applying a random brightness between 0.0 and 1.0 [5]. Based on this observation, we construct three disjoint augmented subsets to which we randomly assign the identities of belugas: the target set, the attack set and the evaluation set (see Table 3). After augmenting those individuals that contain fewer samples, this leads to 75 images per ID. Thus, each of these subsets contains approximately \(\frac{1}{3}\) of the identities (\(ID^{1}\), \(ID^{2}\) and \(ID^{3}\)). More precisely, each subset is composed of 60, 90 and 127 individuals, with 1500, 2250 and 3175 images per subset, for the GREMM, Humpback and NOAA datasets respectively. In total, the evaluation set (\(ID^{1}\) and \(ID^{3}\)) is composed of 3000, 4500 and 6350 samples for the GREMM, Humpback and NOAA datasets respectively.
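The augmentation step can be sketched as below with PIL; only the clockwise rotation angles and the stated brightness range come from the description above, while the interpretation of the brightness factor and the use of `expand=True` are assumptions.

```python
import random
from PIL import Image, ImageEnhance, ImageOps

def augment(image_path, angles=(90, 180, 270, 330)):
    """Generate augmented copies of one picture (illustrative sketch)."""
    img = Image.open(image_path)
    augmented = [ImageOps.mirror(img)]                     # horizontal flip variant
    for angle in angles:
        rotated = img.rotate(-angle, expand=True)          # PIL rotates counter-clockwise
        factor = random.uniform(0.0, 1.0)                  # "random brightness between 0.0 and 1.0"
        augmented.append(ImageEnhance.Brightness(rotated).enhance(factor))
    return augmented
```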
As seen in Table 3, the target set is split into a training, validation and test set, each composed of approximately one third of the pictures for each ID. The target training set is then used to train the target model, while the validation and target test sets are respectively used to tune the hyper-parameters and assess the accuracy of this model. Second, the attack set contains the examples of non-members that are required to train the attack model. In addition, pictures of the target validation set are used as representatives of members. Finally, the evaluation set contains non-members whose identities are different from the ones used to build the attack model. The objective here is to assess the generalization power of the attack. Indeed, as the identities of the belugas in this set are different from the ones in the attack set, we avoid the situation in which the attack model overfits the attack set with respect to non-members. Here, the target test set is used as the source of member examples for evaluating the success of the attack model. We balance each subset with the same number of individuals and samples to ensure that we can use the target validation set as members in our attack set and the target test set as members in the evaluation set.
Figure 5 provides an example of the experimental configuration for the GREMM dataset, which contains 180 individuals. Here, \(ID^{1}\), \(ID^{2}\) and \(ID^{3}\) represent subsets of pictures of whale individuals whose identities are totally different from one another. For instance, \(ID^{1}\) contains individuals that belong to the same IDs as in the target set, but for which different samples (_i.e._, different pictures) are used to compose the training, validation and test sets of the target model. Thus, \(ID^{1}\) individuals are considered as members in the attack set (blue stroke rectangle) and evaluation set (red dotted rectangle). In contrast, \(ID^{2}\) and \(ID^{3}\) are individuals totally unknown to the target model (_i.e._, non-members). In a nutshell, the non-members of \(ID^{2}\) are used to train the attack model while the ones of \(ID^{3}\) are used to evaluate the attack performance of the attack model on IDs never seen before. The same scheme is applied for the Humpback and NOAA datasets, updating the numbers of individuals based on the information provided in Table 3.
### Target and Attack Models
We adopt two popular neural network architectures as our target model: ResNet50 [19] and DenseNet121 [23].
* _ResNet50_. The ResNet50 architecture contains 50 layers and uses a stack of three layers with 1\(\times\)1, 3\(\times\)3, and
\begin{table}
\begin{tabular}{|c|c|c|} \hline Dataset & Nb of individuals & Nb of samples \\ \hline GREMM & 180 & 3402 \\ \hline Humpback & 270 & 4814 \\ \hline NOAA & 380 & 5158 \\ \hline \end{tabular}
\end{table}
Table 2: Statistical information of datasets.
Figure 4: Visual characteristics of datasets. (A) GREMM, (B) Humpback and (C) NOAA.
1\(\times\)1 convolutions as the building residual block. The three-layer residual block is designed as a bottleneck to enhance computational efficiency, in which the 1\(\times\)1 layers are responsible for reducing and then boosting (restoring) the dimensions, leaving the 3\(\times\)3 layer as a bottleneck with small input and output dimensions [19]. Batch normalization (BN) [24] is applied after each convolution and before ReLU activation, and in addition the global average pooling (GAP) [30], is performed to form the final fully connected layer (\(fc\)) that contains the number of individuals of the respective dataset. After training, \(fc\) outputs floating-point values, which corresponds to the predicted result.
* _DenseNet121_. The DenseNet architecture is designed around a simple connectivity pattern of dense blocks and transition layers. A dense block is a module containing many layers connected densely with feature maps of the same size. In a dense block, each layer obtains additional inputs from all preceding layers, and it passes on its own feature maps to all the subsequent layers. The transition layer links two neighboring dense blocks and it reduces the size of the feature map through pooling. Compared with ResNet that connects layers through element-level addition, layers in DenseNet are connected by concatenating them at the channel level. Similar to ResNet, DenseNet uses a composite of three consecutive operations for each convolution: \(BN+ReLU+convolution\).
These target models were trained in two different settings.
* _No-overfitting_. In this setting, the optimization algorithm of the CNNs is Stochastic Gradient Descent (SGD), with a learning rate of 0.0001 and a weight decay of 0.5. The batch size is set to 32, the number of training epochs to 200, and batch normalization and dropout (0.5) are used to reduce the overfitting level.
* _Overfitting_. We use the same hyperparameter settings as in the no-overfitting case, but we remove the use of batch normalization, weight decay and dropout to ensure that the model overfits (a minimal configuration sketch is given after this list).
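A minimal configuration sketch for the two settings is shown below, assuming a torchvision ResNet50 backbone; removing batch normalization from the backbone itself (as done in the overfitting setting) is omitted here for brevity.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_target_model(num_ids, overfit=False):
    """Target model and optimizer in the overfitting / no-overfitting settings (sketch)."""
    backbone = models.resnet50(weights=None)
    in_features = backbone.fc.in_features
    if overfit:
        # overfitting: no weight decay, no dropout (batch-norm removal not shown)
        backbone.fc = nn.Linear(in_features, num_ids)
        weight_decay = 0.0
    else:
        # no-overfitting: dropout 0.5 before the classifier and weight decay 0.5
        backbone.fc = nn.Sequential(nn.Dropout(0.5), nn.Linear(in_features, num_ids))
        weight_decay = 0.5
    optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-4, weight_decay=weight_decay)
    return backbone, optimizer

# Training uses SGD with batch size 32 for 200 epochs, as described above.
```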
For neural network-based (_i.e._, Salem _et al._) and metric-based (_i.e._, Yeom _et al._) MIAs, the attack model uses the same architecture and hyperparameter settings as the target model. For label-only attacks, we follow the implementation from the Adversarial Robustness Toolbox (ART) [42], an open-source project that provides Python tools for developers to assess the robustness of machine learning models against security threats.
Similarly to previous work in the literature [29, 50, 55, 14], we evaluate the attack performance using accuracy (_i.e._, attack success rate) as the main evaluation metric for both the original classification tasks and the MIAs. We also evaluate the False Positive Rate (FPR), being aware that attack accuracy alone is not a sufficient measure of the success of the attack in open-set problems [8]. As mentioned previously, the attack model is trained with the same architecture as the target model. However, in contrast to the standard setting of most of the MIAs in the literature [55, 50, 14], we assume that the non-members in the attack dataset come from a different distribution than the target dataset (_i.e._, individuals never seen before in the target dataset are part of the attack dataset, as seen in \(ID^{2}\)).
## 5 Results
In this section, we provide the results of our experiments and discuss the main findings that we can draw from them. More precisely, in Section 5.1 we compare the attack performance of the proposed MIA algorithms on the different whale datasets. Afterwards, in Section 5.2, we discuss how the choice of the attack dataset and the attack model's architecture affects the generalization power of the MIA. Then, in Section 5.3, we explore the influence of different factors, such as overfitting, on the attack's performance. Finally, in Section 5.4, we present the performance of our novel ensemble MIA for different real-world scenarios.
### Evaluation of MIAs
The attack performance of the MIAs was tested against different architectures for the target model, namely ResNet50 and DenseNet121. Figure 6 displays the performance of the different MIAs on the different datasets. Overall, it can be observed that ResNet50+LabelOnly performs the best, while ResNet50+Yeom and ResNet50+Salem have a lower performance. For example, on the NOAA dataset, the attack accuracy is 0.976 for ResNet50+LabelOnly against 0.913 for ResNet50+Yeom. This is expected, as ResNet50+Yeom relies on the prediction confidence and the label, which yields a relatively coarse criterion: many non-members are misclassified as members whenever the predicted label is correct. In contrast, the LabelOnly MIA provides a finer-grained metric, as it relies on the magnitude of the perturbation needed to change the predicted label, which helps to further distinguish between members and non-members. However, LabelOnly requires a larger query budget and computation cost than the other attacks, as it needs to query the target model multiple times and craft adversarial perturbations to change the predicted label.
Table 4 presents a comparison of the training and discrimination times for the proposed MIAs. More precisely, the training time indicates the time required to train the attack model, while the discrimination time refers to the average computational time for a
Figure 5: Dataset distribution. The main dataset is split in \(\frac{1}{3}\) of individuals (_e.g._, 60 individuals in each subset: target set, attack set and evaluation set for GREMM dataset). The arrows indicate that the same individuals are re-used from one dataset to another.
single data point. Nonetheless, metric-based MIAs can often achieve a performance that is not too far from the best attack. For instance, on the GREMM dataset, the attack performance of ResNet50+LabelOnly is 0.744 against 0.695 for ResNet50+Yeom. Therefore, if the adversary has limited computation resources, metric-based MIAs may be a more appropriate choice than LabelOnly MIA.
In addition, it seems that the attack performance can be improved by changing the model's architecture. For instance, on the Humpback dataset, the attack performance for DenseNet121+LabelOnly is 0.942, while an accuracy of 0.976 is achieved when the ResNet50 architecture is used. Thus, a more elaborate architecture is likely to boost the ability of the attack model to differentiate between members and non-members. This is in line with the findings of recent studies in the literature [55, 48, 57], which have shown that increasing the complexity of the attacked model is likely to increase the success of MIAs due to the increased capacity of the model to memorize the training set.
Finally, we can observe that the success of the attacks is significantly lower on the GREMM dataset. Our intuition is that the difficulty of discriminating beluga whales in GREMM comes from the lack of discriminative characteristics. In particular, beluga individuals in the GREMM dataset are very similar to each other, leading the attack model to misclassify non-members as members of the target model. In contrast, individuals from the Humpback and NOAA datasets usually have very distinctive features (_e.g._, detailed features present in fins, marks and shapes). For instance, as seen in Figure 4, in comparison with Humpback and NOAA, GREMM individuals have no marks present on the dorsal ridge or detailed tail, making beluga whales the hardest species to attack.
### Generalization of the Attack
Most previous works in the literature [29, 55] focused on the setting in which the adversary trains an attack model (of the same architecture as the target model) on an attack dataset that comes from the same distribution (_i.e._, members) as the target dataset. Generally, these works generate the target dataset and the attack dataset "from the same distribution" by splitting the original dataset into two parts. We depart from this assumption by creating an attack dataset composed half of members and half of non-members totally different from the target dataset (\(ID^{2}\) in Figure 5), in the sense that even for the members the pictures used are different from the ones used in the target dataset. More precisely, for the identity of a particular beluga contained in the target set, we have several pictures associated with it.
When we built the different datasets as shown in Figure 5, we made sure that the pictures used for training the target model are different from the ones used for building the attack model or from the evaluation dataset. In this situation, a successful attack means that the MIA is able to generalize to new pictures of members as well as to new non-members. In particular, we want to guarantee that the attack model has learned generic features rather than simply distinguishing, in an overfitted manner, the members and non-members of the attack set. In this situation, even when new individuals emerge over time, the attack model will be able to identify whether an individual is known by the target model or not.
In the following, we focus on the LabelOnly attack with the ResNet50 architecture, which has shown the best performance in the experiments conducted and can handle the case in which the target and attack datasets have different visual characteristics (_e.g._, beluga dorsal ridge in the GREMM dataset, humpback tail in Humpback and beluga top view in NOAA). First, we analyze the situation in which we relax the assumptions of a same-distribution attack dataset and a same-architecture attack model. To do so, we evaluate whether a MIA is still effective when the attack dataset is composed of non-members drawn from a different dataset.
**Attack performance with an attack dataset coming from a different distribution.** So far, previous works [50, 55] have only considered the "same distribution" setting, in which the attack dataset is based on images sampled from the same dataset. However, in reality, when constructing an attack dataset for wild individuals, the system is likely to encounter unknown individuals that emerge over time. Figure 7 shows the MIA performance when the attack dataset contains non-members coming from a different distribution than the target dataset. In this situation, we can observe that the attack performance remains almost the same. For
\begin{table}
\begin{tabular}{|c|c|c|} \hline Attack & Training Time & Discrimination Time \\ \hline Yeom _et al._[62] & 4.67 hr & 0.66 s \\ Salem _et al._[50] & 4.19 hr & 0.62 s \\ Label-only [14] & 6.41 hr & 1.57 s \\ \hline \end{tabular}
\end{table}
Table 4: Computational time for training an attack model and running a single test image (“discrimination”). The numbers were obtained on the biggest dataset (NOAA), which contains 6350 samples for the attack set and 6350 samples for the evaluation set (_i.e._, members and non-members).
Figure 6: Accuracy of the different MIAs for different datasets and target model architectures. As the evaluation set is balanced (_i.e._, composed of exactly half members and half non-members) a naïve attack model that produces a random prediction would have an accuracy of 0.5.
instance, when the target and attack datasets both originate from GREMM, the attack performance is 0.744, while the attack is still effective (0.719 and 0.721) when the attack dataset originates from Humpback and NOAA respectively. This observation indicates that we can relax the assumption of a same-distribution attack dataset. In practice, this can have a big impact in situations in which the target dataset is of limited size and we do not have the liberty to sacrifice some of its data to build the attack dataset.
The results obtained demonstrate that even if we add new individuals that have never been seen before by the attack model (_e.g._, from other dataset distributions), the attacks are still effective. For instance, all attacks reach over 0.922 accuracy when the target dataset is Humpback and the attack dataset is GREMM or NOAA, even in the cases in which the attack and target models have different architectures. To the best of our knowledge, we are the first to quantify the generalization power of MIAs with an attack dataset that is composed half of known members from the target set and half of totally unknown non-members (_i.e._, from a different distribution).
**Attack performance with a different model architecture.** Figure 8 shows that the attacks are still effective even when the target and attack models' architectures are different. For instance, on the Humpback dataset (Figure 8 b), the attack performance is 0.976 when ResNet50 is the architecture of both the target and attack models, and it decreases only to 0.962 when the attack model's architecture changes to DenseNet121. This observation hints that we can relax the assumption that the attack model should necessarily have the same architecture as the target model.
### MIAs Influence Factors
This section explores the factors that influence the success of MIAs in our setting. To do so, we study how factors such as the overfitting level and the cross-entropy distance between distributions correlate with the attack performance. During our evaluation, we focus on the ResNet50+Salem, ResNet50+Yeom and ResNet50+LabelOnly attacks, as the former performs the best using only confidence information while the latter two perform the best among the attacks that exploit label information.
**Difference between overfitting and no-overfitting.** The traditional way of training machine learning models normally aims at avoiding the overfitting phenomenon [51, 46]. Indeed, the main concern about overfitting is that it occurs when the model performs well on the training data but generalizes poorly on unseen samples (_i.e._, test set). In the privacy domain, overfitting has also been shown to make the model more vulnerable to privacy attacks as it results in the model memorizing more information about the training set [57, 55].
In the following, we investigated how overfitting affects the performance of MIAs and, more precisely, whether overfitted models make it easier to discriminate between known and unknown individuals. When training the target models, we considered two different settings: no-overfitting and overfitting, as described in Section 4.3. The results of the experiments are summarized in Figure 9.
As expected, models trained with overfitting display a higher vulnerability to MIAs. For instance, on GREMM, the best average attack accuracy for the original model (trained with overfitting) was 0.706, while it was only 0.539 with no overfitting, close to the 0.5 accuracy of a baseline random prediction. In terms of the utility of the target model with respect to MIAs, overfitting always improves the MIA's generalization in all cases (_i.e._, for different datasets and model architectures). Thus, as expected, overfitting is effective at increasing the leakage of information and can be leveraged to discriminate more efficiently between members and non-members.
**Impact of the overfitting level.** As seen previously, the attack performance varies on the dataset and model considered. Previous works [62, 20, 55] have also explored how
Figure 8: The performance of membership inference attack (LabelOnly) when the attack model has different architecture compared to the target model.
Figure 7: The performance of membership inference attack (LabelOnly) when the attack dataset comes from different distributions than the target dataset.
Figure 9: Success of MIAs in the overfitting vs no-overfitting settings. Note that we average the attack performance under different attacks for each dataset and show the standard deviations.
the level of overfitting impacts the success of privacy attacks. In a nutshell, the overfitting level of a given model can be defined by subtracting the testing accuracy from the training accuracy. We report the training/testing accuracy on the classification tasks for overfitted and non-overfitted models in Table 5.
Figure 10 shows the correlation of the overfitting level with the attack performance. In particular, the vulnerability to MIAs increases with the overfitting level. For example, in Figure 10(a), the overfitting level goes from 0 to 0.61 when the target model's training epochs range from 0 to 80, which results in the attack success rate of the ResNet50+Label-only attack varying from 0.55 to 0.69. This observation highlights the fact that the overfitting level contributes to the vulnerability of a model to MIAs. However, an unexpected outcome is that the attack performance still increases when the overfitting level stabilizes. As shown in Figure 10, when the overfitting level is around 0.6 (which corresponds to epochs ranging from 80 to 200), the attack performance still improves with the increase in the number of epochs. This shows that the overfitting level is not the only aspect related to MIA vulnerability. To address this issue, we additionally investigated the correlation between the distance, in terms of cross-entropy, between the distributions of members and non-members and the vulnerability of the model to MIAs.
**Kullback-Leibler divergence.** We use the Kullback-Leibler divergence (KL divergence) [28] to measure the distance between the distributions of members and non-members, after computing the cross-entropy of each sample. The KL divergence is a widely used metric to measure the distance between two probability distributions, as seen in Equation 4.
\[\mathcal{L}_{KL}(P,Q)=\sum_{x}P(x)\log\frac{P(x)}{Q(x)}, \tag{4}\]
in which \(P\) and \(Q\) are two probability distributions on events. The loss function includes both the prediction loss and the KL divergence loss. From this, we can compute cross-entropy distributions for members and non-members and normalize them into probability distributions [28]. Cross-entropy loss is one of the most common loss functions used for classification tasks, and it is defined as:
\[\mathcal{L}_{CE}(y,p)=-\sum_{i=1}^{k}y_{i}\log p_{i}, \tag{5}\]
in which \(p\) is a vector that represents the confidence predictions of the sample over the different pre-defined classes, with \(k\) being the total number of classes. \(y_{i}\) equals 1 only if the sample belongs to class \(i\) and 0 otherwise, while \(p_{i}\) is the \(i\)-th element of the confidence posteriors.
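The computation described around Equations (4) and (5) can be sketched as follows (histogram binning is an assumption used here to turn per-sample losses into normalized distributions).

```python
import numpy as np

def normalized_loss_distribution(losses, bins):
    # histogram of per-sample cross-entropy losses, normalized into a probability distribution
    hist, _ = np.histogram(losses, bins=bins)
    return hist / hist.sum()

def kl_divergence(p, q, eps=1e-12):
    # D_KL(P || Q) = sum_x P(x) log(P(x) / Q(x)), cf. Equation (4)
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

# bins = np.linspace(0.0, all_losses.max(), 50)
# p_members = normalized_loss_distribution(member_losses, bins)
# p_nonmembers = normalized_loss_distribution(nonmember_losses, bins)
# divergence = kl_divergence(p_members, p_nonmembers)
```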
More precisely, we computed the KL-divergence of the normalized cross-entropy distributions between members and non-members. Figure 10(a) shows the KL-divergence of the cross-entropy distributions and the overfitting level of the target model trained with different numbers of epochs, when the target model is ResNet50 trained on GREMM. We can see that the KL-divergence of the cross-entropy is highly correlated with the attack performance. For example, in Figure 10(a), the KL-divergence of the cross-entropy of the target model ranges from 0.0 to 0.40 when the epochs range from 0 to 120, with the attack success rate of ResNet50+LabelOnly varying from 0.55 to 0.72.
More interestingly, from Figure 10(a) and Figure 10(b), we can also see that there is a clear turning point after 120 epochs, at which both the KL-divergence and the attack performance become stable. These results demonstrate that, compared to the overfitting level, the KL-divergence of the members' and non-members' cross-entropy has a higher correlation with the attack performance. Note that for LabelOnly attacks, we do not have confidence predictions but only the labels predicted by the target model. Thus, we can view the predicted label as the ground truth to calculate the cross-entropy loss instead of the KL-divergence loss in the distillation process.
### MIA Robustness and Performance of Ensemble MIA
Discrimination based on MIA might be impractical when the FPR is too high, which leads to non-member samples often being erroneously predicted as members. We notice a low FPR for the Humpback and NOAA datasets, with on average 0.03% FPR. This means that most members and non-members are well discriminated for those datasets. In contrast, GREMM has an extremely high FPR, which can be decreased to 0.34% using the ResNet+LabelOnly attack under the influence of overfitting.
\begin{table}
\begin{tabular}{|l|c|c|c|c|} \hline & \multicolumn{2}{c|}{ResNet50} & \multicolumn{2}{c|}{DenseNet121} \\ \cline{2-5} & Overfitted & No & Overfitted & No \\ \hline Accuracy & 1.000 (0.282) & 0.746 (0.406) & 1.000 (0.222) & 0.746 (0.356) \\ \hline \end{tabular}
\end{table}
Table 5: Training (testing) accuracy of the target models on the classification task in the overfitted and non-overfitted settings.
Figure 10: The distance in terms of cross-entropy and attack performance against the target model ResNet50 on the GREMM dataset under different numbers of epochs for model training.
We investigated the FPR obtained using the best proposed attack in Figure 11 for the GREMM dataset. Some negative individuals are misclassified as positive due to the strong visual similarity between members and non-members in the GREMM dataset. Interestingly, even the attack on an extremely overfitted model such as ResNet50 (green line) still suffers from a high FPR. This type of error makes the predicted membership signal unreliable, especially since most samples are non-members in real-world applications. To reduce this, we have proposed the novel ensemble MIA approach (described in Section 3.4) to enhance the MIA performance while reducing the attack's FPR.
While the gain in average attack success rate is modest, the success rate at low false-positive rates can be very high. For instance, looking at Table 6, we notice a variation in attack accuracy across different subsets of individuals. This suggests that there is a subset of examples that are easier to distinguish than others, a phenomenon that has also been observed in the literature on MIAs [31]. In our experiments, ensembles were designed to exploit attack models trained on different sets of individuals. As seen in Figure 11, using an ensemble composed of 2 subsets, we decreased the FPR to 0.35 while increasing the attack accuracy to 0.781. We further investigated whether increasing the number of subsets lowers the FPR, which we indeed observed: with 15 subsets we obtained an FPR of 0.28. The best result is obtained when we create an attack model for each unique whale identity. The main insight behind the creation of attack models for unique individuals is that the members and non-members in the attack set are composed of two individuals, one known and the other unknown. The non-member is selected randomly from the individuals never seen before by the target model. In this way, we guarantee the maximum overfitting level for unique individuals and merge the outputs of \(M_{attack_{1}}\) to \(M_{attack_{n}}\). For instance, the ensemble of \(M_{attack_{1}}\) to \(M_{attack_{60}}\) with 60 outputs for the GREMM dataset achieved an FPR of 0.26 and an attack accuracy of 86%. In fact, a high overfitting level obtained with fewer individuals acts in synergy to increase the success rate of the attack while decreasing the FPR. In addition, the ensemble MIA combines the output of each attack model to better discriminate similar individuals.
Finally, we have also performed additional experiments to investigate how the overfitting level varies for different subsets of individuals, in order to observe whether the attack performance varies across subsets. For instance, in Table 6, we have split the GREMM dataset into two and six subsets respectively. For example, with two subsets composed of individuals whose identities range between 0-29 and 30-59, the overfitting level is respectively 0.619 and 0.624. This demonstrates that the membership leakage effect also varies among different individuals from the same dataset.
## 6 Conclusion
In this paper, we have performed MIAs against models trained on open-set datasets of whales, with the objective of discriminating between known and unknown individuals. More precisely, we have investigated three state-of-the-art MIAs using two popular model architectures as well as three whale benchmark datasets. Overall, the results obtained demonstrate that the combination ResNet50+LabelOnly performs the best and is able to discriminate members and non-members even when they have fine-grained visual similarity. We have shown that the assumption that the non-members should come from the same distribution can be relaxed. In particular, the non-members used to train the attack model can be taken from a different whale population without significantly impacting the success of the discrimination. Additionally, the results also highlight that the architecture of the attack model does not need to be similar to that of the target model. Finally, from the observation that the overfitting level in small subsets leads to a higher leak of information than in larger subsets, we have proposed a novel approach called ensemble MIA. Ensemble MIA leads to an enhancement of 12% in attack performance while decreasing the FPR by 13%.
As future work, we would like to explore the use of white-box MIAs to further improve the accuracy of the discrimination while reducing the FPR, in particular for the GREMM dataset. We will also investigate how MIA-based approaches for discrimination compare to deep metric learning ones [7, 53]. In addition, while our focus in this paper was on discriminating between member and non-member whales, we plan to integrate the proposed MIA into a full pipeline
\begin{table}
\begin{tabular}{|l|c|c|} \hline Class index & Overfitting Level & Subsets \\ \hline
0-29 & 0.619 & 2 \\
30-59 & 0.624 & \\ \hline
0-9 & 0.683 & 6 \\
10-19 & 0.604 & \\
20-29 & 0.738 & \\
30-39 & 0.752 & \\
40-49 & 0.792 & \\
50-59 & 0.724 & \\ \hline \end{tabular}
\end{table}
Table 6: The overfitting level in different attack subsets (LabelOnly) when the target model is ResNet50 trained on GREMM. The class index gives the range of individual identities used for each subset (_e.g._, 2 subsets containing 30 individuals each, or 6 subsets with 10 individuals each).
Figure 11: True positive versus false positive rates for different settings of MIAs.
for beluga whale re-id that we plan to open source. Finally, we hope that our work, in which we leverage privacy attacks to address practical challenges encountered in animal ecology, will foster further research at the intersection of these two domains.
|
2309.07301 | Limits to Fluctuation Dynamics | The fluctuation of an experimentally measured observable, along with its
mean, constitutes the fundamental ingredient of a non-equilibrium system
involving randomness. Despite previous efforts, a comprehensive framework for
characterizing the temporal dynamics of fluctuations of observables remains
elusive. In this manuscript, we develop a ubiquitous theory concerning rigorous
limits to the rate of fluctuation growth. We discover a simple principle that
the time derivative of the standard deviation of an observable is upper bound
by the standard deviation of an appropriate observable describing velocity.
This indicates a hitherto unknown tradeoff relation between the changes for the
mean and standard deviation, i.e., the sum of the squares for these quantities
cannot exceed certain cost determined by dynamical processes. The cost can be
kinetic energy for hydrodynamics, irreversible entropy production rate for
thermodynamic processes, energy fluctuations for unitary quantum dynamics, and
quantum Fisher information for dissipative quantum dynamics. Our results open
an avenue toward a quantitative theory of fluctuation dynamics in various
non-equilibrium systems, encompassing quantum many-body systems and nonlinear
population dynamics, as well as toward our understanding of how to control
them. | Ryusuke Hamazaki | 2023-09-13T20:49:48Z | http://arxiv.org/abs/2309.07301v1 | # Limits to Fluctuation Dynamics
###### Abstract
The fluctuation of an experimentally measured observable, along with its mean, constitutes the fundamental ingredient of a non-equilibrium system involving randomness. Despite previous efforts, a comprehensive framework for characterizing the temporal dynamics of fluctuations of observables remains elusive. In this manuscript, we develop a ubiquitous theory concerning rigorous limits to the rate of fluctuation growth. We discover a simple principle that the time derivative of the standard deviation of an observable is upper bound by the standard deviation of an appropriate observable describing velocity. This indicates a hitherto unknown tradeoff relation between the changes for the mean and standard deviation, i.e., the sum of the squares for these quantities cannot exceed certain cost determined by dynamical processes. The cost can be kinetic energy for hydrodynamics, irreversible entropy production rate for thermodynamic processes, energy fluctuations for unitary quantum dynamics, and quantum Fisher information for dissipative quantum dynamics. Our results open an avenue toward a quantitative theory of fluctuation dynamics in various non-equilibrium systems, encompassing quantum many-body systems and nonlinear population dynamics, as well as toward our understanding of how to control them.
_Introduction.-_ Many physical systems involve randomness caused by stochastic noises due to external environment or intrinsic quantum uncertainty. The fluctuation of experimentally measured observables, alongside their mean, provides the primary information for such systems. Previous efforts have demonstrated that fluctuations follow fundamental relations in non-equilibrium statistical mechanics, such as the fluctuation-dissipation theorem [1], the fluctuation theorem [2; 3], and the thermodynamic uncertainty relation [4; 5]. However, a comprehensive framework for understanding the temporal dynamics of fluctuations of observables remains an enigma. This problem is not only fundamentally crucial for advancing our knowledge of far-from-equilibrium statistical mechanics but also relevant for practical applications since it can predict the theoretical limits in dynamically controlling the fluctuations of the system.
Recently, rigorous bounds concerning the rate of change of the mean value of an observable have been studied intensively [6; 7; 8; 9; 10; 11; 12; 13], while the first work on such "speed limits" [14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27] dates back more than a half-century ago [14]. Those speed limits of observables can provide experimentally relevant and often tight constraints on non-equilibrium transitions characterized by the mean displacement. However, previous speed limits cannot be applied to fluctuations, which are not simply given as a mean of an observable (see Ref. [12] for an exception). Such fluctuation growth plays an equally, or sometimes even more, important role than the mean in describing out-of-equilibrium dynamics. For example, the symmetric spreading of particles should be characterized by its fluctuation dynamics instead of the mean displacement [28; 29]. As another example, fluctuation dynamics of macroscopic observables can relate to dynamics of entanglement [30] and coherence [31] in quantum systems. It is thus a pivotal task to elucidate the dynamical behavior of this quantity.
In this manuscript, we develop a ubiquitous theory for rigorous bounds on dynamics of fluctuations (Fig. 1(a)). We argue that, for any time-independent observable \(A\), its standard deviation has a speed always smaller than the standard deviation of the suitably chosen "velocity observable" \(\mathcal{V}_{A}\) of \(A\), i.e.,
\[\left|\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right|\leq\sigma_{\mathcal{V}_{ A}}. \tag{1}\]
Here, \(\sigma\) denotes the standard deviation, and the velocity observable is chosen such that \(\frac{\mathrm{d}\left\langle A\right\rangle}{\mathrm{d}t}=\langle\mathcal{V}_ {A}\rangle\) is satisfied, where \(\langle\cdots\rangle\) denotes the mean. This inequality also indicates a novel tradeoff relation between the speeds of the mean and the standard deviation. That is, two quantities cannot be simultaneously fast, which we concisely represent as (Fig. 1(b))
\[\left(\frac{\mathrm{d}\left\langle A\right\rangle}{\mathrm{d}t}\right)^{2}+ \left(\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right)^{2}\leq C_{A}, \tag{2}\]
where the cost \(C_{A}=\langle\mathcal{V}_{A}^{2}\rangle\) can be further bound by some physical quantities. Crucially, the velocity observable is not unique, and an inappropriate choice of \(\mathcal{V}_{A}\), say \(\mathcal{V}_{A}=\frac{\mathrm{d}\left\langle A\right\rangle}{\mathrm{d}t}1\), breaks Eqs. (1) and (2). In contrast, choosing suitable velocity observables leads to various physically relevant bounds for the costs, such as kinetic energy in classical and quantum hydrodynamics, entropy production rate for thermodynamic processes, and energy fluctuations for unitary quantum dynamics. Table 1 summarizes the ubiquity of our results.
_Limits from the local conservation law.-_ We can verify our main results for several distinct categories of dynamics. As a first category, let us consider continuous dynamics under local conservation law of some normalized distribution \(\rho(\mathbf{x},t)\left(\geq 0\right)\) with \(\int d\mathbf{x}\rho(\mathbf{x},t)=1\)
We assume that the continuity equation \(\partial_{t}\rho(\mathbf{x},t)=-\nabla\cdot(\rho(\mathbf{x},t)\mathbf{V}(\mathbf{x},t))\) holds by introducing the normalized current \(\mathbf{V}(\mathbf{x},t)\), and that \(\rho\mathbf{V}\) vanishes in the limit \(|\mathbf{x}|\to\infty\). It is then natural to define the mean and the standard deviation of a space-dependent observable \(A\) coupled to \(\rho\) as \(\langle A(t)\rangle=\int d\mathbf{x}A(\mathbf{x})\rho(\mathbf{x},t)\) and \(\sigma_{A}(t)=\sqrt{\int d\mathbf{x}(A(\mathbf{x})-\langle A(t)\rangle)^{2} \rho(\mathbf{x},t)}\), respectively. Here, we assume that \(A\) is time-independent to simplify the discussion; however, inequalities (1) and (2) still hold for time-dependent observables, as discussed at the end of the manuscript. When we define the fluctuation observable of \(X\) as \(\delta X=X-\langle X\rangle\), \(\sigma_{A}\) can be written as \(\sigma_{A}^{2}=\langle\delta A^{2}\rangle\).
Under this setup, we find the following equality (see Supplemental Material [32]):
\[\frac{\mathrm{d}\left\langle\delta A^{2}\right\rangle}{\mathrm{d}t}=2\left\langle \delta A\,\delta\mathcal{V}_{A}\right\rangle, \tag{3}\]
where the velocity observable is given by the change of the observable in the direction of \(\mathbf{V}\),
\[\mathcal{V}_{A}(\mathbf{x},t)=\nabla A(\mathbf{x},t)\cdot\mathbf{V}(\mathbf{ x},t). \tag{4}\]
Note that \(\frac{\mathrm{d}\langle A\rangle}{\mathrm{d}t}=\langle\mathcal{V}_{A}\rangle\) is indeed satisfied.
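For completeness, we sketch here (schematically; the full proof is given in the Supplemental Material [32]) how the equality (3) follows directly from the continuity equation. Using \(\langle\delta A\rangle=0\) and integrating by parts with the boundary term \(\rho\mathbf{V}\to 0\),

\[\frac{\mathrm{d}\left\langle\delta A^{2}\right\rangle}{\mathrm{d}t}=\int d\mathbf{x}\,\delta A^{2}\,\partial_{t}\rho=-\int d\mathbf{x}\,\delta A^{2}\,\nabla\cdot(\rho\mathbf{V})=\int d\mathbf{x}\,2\,\delta A\,(\nabla A\cdot\mathbf{V})\,\rho=2\left\langle\delta A\,\delta\mathcal{V}_{A}\right\rangle,\]

where the contribution from the time dependence of \(\langle A\rangle\) inside \(\delta A\) drops out because \(\langle\delta A\rangle=0\), and the last equality again uses \(\langle\delta A\rangle\langle\mathcal{V}_{A}\rangle=0\).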
Our main inequality in (1) is readily obtained from the equality (3) with the help of the Cauchy-Schwarz inequality. In this case, the tradeoff cost \(C_{A}=\langle(\nabla A\cdot\mathbf{V})^{2}\rangle\) in
Figure 1: Schematic illustrations of our limits to fluctuation dynamics. (a) The probability distribution at time \(t\), \(p(\mathbf{x},t)\), defines the probability \(P(A,t)\) for a random observable \(A\). After a short-time interval \(\Delta t\), the mean \(\langle A\rangle\) and the standard deviation \(\sigma_{A}\) change by \(\sim\frac{\mathrm{d}\langle A\rangle}{\mathrm{d}t}\Delta t\) and \(\sim\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\Delta t\), respectively. We define a suitable velocity observable \(\mathcal{V}_{A}\), which satisfies \(\frac{\mathrm{d}\langle A\rangle}{\mathrm{d}t}=\langle\mathcal{V}_{A}\rangle\). Then, the speed of the fluctuation of \(A\) is bound by the fluctuation of the velocity as \(\left|\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right|\leq\sigma_{\mathcal{V}_ {A}}\). (b) The bound leads to the tradeoff relation (2) between \(\frac{\mathrm{d}\langle A\rangle}{\mathrm{d}t}\) and \(\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\), i.e., they should be within the solid circle with magenta. We can tighten the bound for certain situations, where the allowed region becomes inside the dashed ellipse. As summarized in Table 1, the cost \(C_{A}\) is further bound by physical quantities, e.g., the kinetic energy.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Non-equilibrium system & Velocity observable & Physical quantity that bounds the cost \\ \hline Classical and quantum hydrodynamics\({}^{*}\) & \(\nabla A(\mathbf{x})\cdot\mathbf{u}(\mathbf{x})\) & Kinetic energy \(E_{\mathrm{kin}}\) \\ \hline Fokker-Planck dynamics\({}^{*}\) & \(\nabla A(\mathbf{x})\cdot\mathbf{v}(\mathbf{x})\) & Entropy production rate \(\dot{\Sigma}\) \\ \hline Macroscopic transport in discrete quantum systems\({}^{*}\) & \(\frac{1}{2}(\nabla A)_{ij}V_{ij}\) & Modified transition strength \(\mathcal{S}_{H}^{2}-E_{\mathrm{trans}}^{2}\) \\ \hline Evolutionary dynamics without mutation & \((\delta A)_{i}\) (\(\delta s)_{i}\) & Variance of the growth rate \(\sigma_{s}^{2}\) \\ \hline General (non)linear dynamics & \((\delta A)_{i}\) (\(\dot{p}_{i}/p_{i}\)) & Classical Fisher information \(\mathcal{F}_{C}\) \\ \hline Unitary quantum dynamics\({}^{*}\) & \(i[\hat{H},\hat{A}]/\hbar\) & Energy variance \(\Delta E^{2}\) \\ \hline Dissipative quantum dynamics & - & Quantum Fisher information \(\mathcal{F}_{Q}\) \\ \hline \end{tabular}
\end{table}
Table 1: Summary of the velocity observable and the physical quantity relevant for the bound on the cost \(C_{A}\) in each non-equilibrium system for a time-independent observable \(A\). Even for time-dependent observables, (1) and (2) can still hold when we add \(\dot{A}=\frac{\mathrm{d}A}{\mathrm{d}t}\) to the velocity observable in certain cases (indicated by asterisks).
Eq. (2) is further upper bound by
\[C_{A}\leq 2\|\nabla A\|_{\infty}^{2}\cdot\frac{1}{2}\int d\mathbf{x}\rho(\mathbf{ x},t)|\mathbf{V}(\mathbf{x},t)|^{2}, \tag{5}\]
where \(\|\nabla A\|_{\infty}=\max_{\mathbf{x}}|\nabla A(\mathbf{x})|\).
Let us provide some applications of our general argument above:
**Example 1: Classical and quantum hydrodynamics.** We first consider the classical hydrodynamics described by, e.g., the nonlinear Navier-Stokes equation. For the density \(\rho(\mathbf{x},t)\) of the fluid, we generally have a continuity equation \(\partial_{t}\rho=-\nabla\cdot(\rho\mathbf{u})\), where \(\mathbf{u}\) is the current of the fluid. Similarly, we can consider quantum hydrodynamics [33] that is described by the nonlinear Schrodinger equation. While the quantum fluid is described by the complex amplitude \(\Psi(\mathbf{x},t)\), its density \(\rho(\mathbf{x},t)=|\Psi(\mathbf{x},t)|^{2}\) obeys the continuity equation \(\partial_{t}\rho=-\nabla\cdot(\rho\mathbf{u})\).
Then, in both cases, our results Eqs. (1)-(3) hold for any function given by \(A(\mathbf{x})\) with \(\mathcal{V}_{A}=\nabla A\cdot\mathbf{u}\). The cost is bound by the kinetic energy \(E_{\rm kin}=\frac{M}{2}\int d\mathbf{x}\rho|\mathbf{u}|^{2}\) as \(C_{A}\leq\frac{2\|\nabla A\|_{\infty}^{2}E_{\rm kin}}{M}\), where \(M\) is the total mass. Note that the limits are universally applied despite nonlinear terms, and the tradeoff cost is bound only by the kinetic energy of the fluid.
**Example 2: Irreversible thermodynamics.** Let us consider thermodynamic systems of Brownian particles described by the overdamped Fokker-Planck equation [2]. In this case, the probability density \(P(\mathbf{x},t)\) satisfies the continuity equation as \(\partial_{t}P=-\nabla\cdot(P\mathbf{v})=-\nabla\cdot(P\mu(\mathbf{F}-T\nabla\ln P))\), where \(\mathbf{v}\) is the local velocity for the probability current, \(\mu\) is the mobility, \(T\) is the temperature, and \(\mathbf{F}\) is the force. Then, our results Eqs. (1)-(3) hold for any function \(A(\mathbf{x})\) with \(\mathcal{V}_{A}=\nabla A\cdot\mathbf{v}\). The cost is bound by the total entropy production rate [2], \(\dot{\Sigma}=\frac{1}{\mu T}\int d\mathbf{x}P|\mathbf{v}|^{2}\), as \(C_{A}\leq\mu T\|\nabla A\|_{\infty}^{2}\dot{\Sigma}\). Thus, to change the mean and the fluctuation quickly, the irreversibility of the system should be significant; note that the previous classical speed limits only considered the speed of the mean [6; 7; 20].
While the above discussions are for standard deviations, we further discover that inequalities similar to that in (1) are obtained for even higher-order absolute central moments, \(\mu_{A}^{(n)}:=\left\langle|A-\langle A\rangle\,|^{n}\right\rangle\). As shown in Supplemental Material [32], we prove
\[\left|\frac{{\rm d}(\mu_{A}^{(n)})^{\frac{1}{n}}}{{\rm d}t}\right|\leq(\mu_{ \mathcal{V}_{A}}^{(n)})^{\frac{1}{n}}, \tag{6}\]
which reduces to (1) for \(n=2\). This inequality shows that the rate of change of the higher-order fluctuations of an observable is smaller than the higher-order fluctuations of the velocity observable defined in Eq. (4).
We note that the bound on the cost (5) is related to the Wasserstein geometry in the optimal transport theory [34]. Indeed, by minimizing the right-hand side of (5) in terms of \(\mathbf{V}\) appearing in the continuity equation, we find [35; 36] that \(C_{A}\) is further upper bound by a factor involving the order-2 Wasserstein distance. For certain (not general) \(A(\mathbf{x})\), our bound in (2) can be inferred from the known inequality of the Wasserstein distance, which can also provide the situations where the equality condition for (2) is satisfied [32]. However, our general results in (1), (2), and (6) are tighter and not directly understood from the Wasserstein geometry.
_Extension to discrete systems.-_ While we have considered continuous systems, a similar structure appears for discrete systems. To see this, let us consider a graph \(G\), which consists of vertices and edges, and can model general discrete systems [12]. Here, vertices are taken as the basis \(i\) of the system, and an edge connects vertices \(i\) and \(j\) if and only if transitions between them can occur under the dynamics. We define the normalized distribution \(p_{i}\) (\(\sum_{i}p_{i}=1\)) and an observable \(A=\{A_{i}\}\) defined on the vertices. We assume that the discrete continuity equation \(\dot{p}_{i}=-\sum_{j(\sim i)}J_{ji}\) holds, where \(J_{ji}\,(=-J_{ij})\) is a current from \(i\) to \(j\) and \(i\sim j\) means that \(i\) and \(j\) are connected by an edge.
Even in this case, we show that Eqs. (1)-(3) still hold, where each of the quantities is properly generalized to the discrete ones. For this purpose, we introduce a new probability measure \(Q_{ij}\), with which we can discuss the average of the velocity observable defined on the edges. While the details are given in Supplemental Material [32], we here present one nontrivial application that follows from our theory.
**Example 3: Macroscopic transport in quantum many-body systems.** As a prime application, let us consider macroscopic transport in quantum many-body systems. We consider a many-body basis set \(\{|i\rangle\}\) as a set of vertices for \(G\) and focus on an observable \(\hat{A}=\sum_{i}A_{i}\,|i\rangle\,\langle i|\), which is diagonalized in this basis. For example, we can take \(\hat{A}\) as the averaged position of \(M\) particles on a one-dimensional system, \(\hat{A}=\frac{1}{M}\sum_{l}l\hat{n}_{l}\), where \(\hat{n}_{l}\) is the number operator for site \(l\); then the changes of mean and fluctuation of \(\hat{A}\) correspond to how macroscopic transport of particles takes place. Now, the expectation value is given by \(\langle\hat{A}\rangle=\sum_{i}A_{i}p_{i}\), where \(p_{i}=\langle i|\hat{\rho}|i\rangle\) is the probability distribution. Here, \(p_{i}\) satisfies the continuity equation \(\dot{p}_{i}=-\sum_{j(\sim i)}J_{ij}^{q}\), where \(J_{ij}^{q}=-i(\langle i|\hat{H}|j\rangle\,\langle j|\hat{\rho}|i\rangle-{\rm h.c.})/\hbar\) and \(i\sim j\) if and only if \(\langle i|\hat{H}|j\rangle\neq 0\). In this case, we find inequality (2) with the bound on the cost \(C_{A}\leq\|\nabla A\|_{\infty}^{2}(\mathcal{S}_{H}^{2}-E_{\rm trans}^{2})/\hbar^{2}\), where \(\|\nabla A\|_{\infty}=\max_{i\sim j}|A_{i}-A_{j}|\) is the magnitude of the discrete gradient and \(\mathcal{S}_{H}=\max_{j}\sum_{i(\sim j)}|\,\langle i|\hat{H}|j\rangle\,|\) is the strength of the transition, which is easily known from the Hamiltonian [32]. The transition part of the energy, \(E_{\rm trans}=\langle\hat{H}\rangle-\sum_{i}p_{i}\,\langle i|\hat{H}|i\rangle\), is a globally defined macroscopic observable, which is relevant for experiments. Our new inequality is tighter than those found in Refs. [12; 23].
_Limits from information theory.-_ Yet another category to which our fluctuation theory in Eqs. (1) and (2) applies follows from an information-theoretical insight. Although the theory is applicable to both continuous and discrete systems, we here present the case of discrete systems. For an arbitrary time-independent observable \(A=\{A_{i}\}\), we have Eqs. (1) and (2) with the average \(\langle X\rangle=\sum_{i}p_{i}X_{i}\) and the velocity observable [32]
\[(\mathcal{V}_{A})_{i}=\frac{(\delta A)_{i}\cdot\dot{p}_{i}}{p_{i}}. \tag{7}\]
It can easily be confirmed that \(\frac{\mathrm{d}\left\langle A\right\rangle}{\mathrm{d}t}=\langle\mathcal{V}_{A}\rangle\). Note that this velocity observable is given as a change of an observable coupled to the surprisal rate \(\frac{\mathrm{d}}{\mathrm{d}t}\ln p_{i}\) [9]. Furthermore, in this case, we have stronger bounds than Eqs. (1) and (2), i.e., \(\left|\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right|\leq\frac{\sigma_{\mathcal{V}_{A}}}{2}\) and \(\left(\frac{\mathrm{d}\left\langle A\right\rangle}{\mathrm{d}t}\right)^{2}+4\left(\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right)^{2}\leq C_{A}\), since an equality similar to Eq. (3) holds with a different coefficient, \(\frac{\mathrm{d}\left\langle\delta A^{2}\right\rangle}{\mathrm{d}t}=\langle\delta A\;\delta\mathcal{V}_{A}\rangle\). Note that the equality conditions for these inequalities are satisfied for any two-level system.
Now, using the Holder inequality, the tradeoff cost is upper bound for this information-theoretical relation as
\[C_{A}\leq\|\delta A\|_{\infty}^{2}\mathcal{F}_{C}, \tag{8}\]
where \(\|\delta A\|_{\infty}=\max_{i}|\delta A_{i}|\) and \(\mathcal{F}_{C}=\sum_{i}\frac{\dot{p}_{i}^{2}}{p_{i}}\) is the classical Fisher information. This means that, as in the conventional speed limit for the mean only [8; 9], the sum of the squared speeds of the mean and the standard deviation obeys the time-information uncertainty relation, i.e., the times for the change of those fundamental quantities and the information content cannot be simultaneously small.
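As a purely illustrative numerical check (not part of the manuscript; the rate matrix, observable, and all names below are arbitrary choices of ours), one can evolve a small classical master equation and verify the strengthened fluctuation bound, the tradeoff relation, and the Fisher-information bound (8) along the trajectory:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5

# Random master equation dp/dt = R p: non-negative off-diagonal rates, columns summing to zero.
R = rng.uniform(0.1, 1.0, (n, n))
np.fill_diagonal(R, 0.0)
R -= np.diag(R.sum(axis=0))

A = rng.uniform(-1.0, 1.0, n)        # arbitrary time-independent observable A_i
p = rng.dirichlet(np.ones(n))        # initial distribution p_i(0)

dt = 1e-3
for _ in range(5000):
    pdot = R @ p
    dA = A - p @ A                                   # fluctuation observable (delta A)_i
    V = dA * pdot / p                                # velocity observable, Eq. (7)
    sigma_A = np.sqrt(p @ dA**2)
    dsigma_dt = (p @ (dA * V)) / (2.0 * sigma_A)     # from d<dA^2>/dt = <dA dV_A>
    sigma_V = np.sqrt(p @ V**2 - (p @ V) ** 2)
    cost = p @ V**2                                  # C_A = <V_A^2>
    fisher = np.sum(pdot**2 / p)                     # classical Fisher information F_C

    assert abs(dsigma_dt) <= 0.5 * sigma_V + 1e-12               # |d sigma_A/dt| <= sigma_V/2
    assert (p @ V) ** 2 + 4.0 * dsigma_dt**2 <= cost + 1e-12     # strengthened tradeoff
    assert cost <= np.max(np.abs(dA)) ** 2 * fisher + 1e-12      # bound (8)

    p = p + dt * pdot                                # explicit Euler step

print("all fluctuation bounds hold along the trajectory")
```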
**Example 4: Nonlinear population dynamics.** Let us consider nonlinear population dynamics, where the number \(N_{i}\) of some types (such as species in an ecological system) labeled by \(i=1,\cdots,f\) obeys some dynamical equation, \(\dot{N}_{i}=F_{i}(N_{1},\cdots,N_{f},t)\). Note that \(F\) is nonlinear in general. While the total number of the types \(N_{\mathrm{tot}}=\sum_{i=1}^{f}N_{i}\) changes in time, the proportion of each type \(i\), i.e., \(p_{i}=N_{i}/N_{\mathrm{tot}}\), satisfies \(\sum_{i=1}^{f}p_{i}=1\). We can then apply our general discussion above. Note that the condition for the velocity observable, \(\frac{\mathrm{d}\left\langle A\right\rangle}{\mathrm{d}t}=\langle\mathcal{V}_{A}\rangle=\langle\delta A\frac{\mathrm{d}\ln p}{\mathrm{d}t}\rangle\), is known as the continuous-time Price equation (for time-independent \(A\)) [37], from which the speed limit of the mean is obtained [38; 39; 40]. However, we stress that our fluctuation relations \(\frac{\mathrm{d}\left\langle\delta A^{2}\right\rangle}{\mathrm{d}t}=\langle\delta A\;\delta\mathcal{V}_{A}\rangle\) and \(\left|\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right|\leq\frac{\sigma_{\mathcal{V}_{A}}}{2}\) are fundamentally distinct from the Price equation.
We especially take \(F_{i}=s_{i}N_{i}\), which describes the evolutionary dynamics without mutation, where \(s_{i}\) is the growth rate for type \(i\). The dynamical equation for \(p_{i}\) then reads \(\dot{p}_{i}=\delta s_{i}\,p_{i}\), where \(\delta s_{i}=s_{i}-\langle s\rangle\). In this case, we can obtain \((\mathcal{V}_{A})_{i}=\delta A_{i}\delta s_{i}\) and \(\frac{\mathrm{d}\left\langle\delta A^{2}\right\rangle}{\mathrm{d}t}=\langle\delta A^{2}\delta s\rangle\), which leads to \(\left(\frac{\mathrm{d}\left\langle A\right\rangle}{\mathrm{d}t}\right)^{2}+4\left(\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right)^{2}\leq C_{A}=\langle\delta A^{2}\delta s^{2}\rangle\leq\|\delta A\|_{\infty}^{2}\sigma_{s}^{2}\). Thus, the tradeoff cost is bound by the variance of the growth rate, \(\sigma_{s}^{2}\).
As the simplest case, let us consider \(A=s\), for which we have \(\frac{\mathrm{d}\left\langle s\right\rangle}{\mathrm{d}t}=\langle\mathcal{V}_{s}\rangle=\sigma_{s}^{2}\), which is known as Fisher's fundamental theorem of natural selection [41]. In this case, we furthermore find a nontrivial higher-order equality \(\frac{\mathrm{d}\left\langle\delta s^{2}\right\rangle}{\mathrm{d}t}=\langle\delta s^{3}\rangle\), meaning that the change of the variance of the growth rate is related to the skewness.
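A similar purely illustrative sketch (our own toy check; the growth rates, observable, and step size are arbitrary) integrates the mutation-free replicator dynamics and verifies the tradeoff relation, its bound by \(\sigma_{s}^{2}\), and Fisher's theorem along the way:

```python
import numpy as np

rng = np.random.default_rng(1)
f = 6
s = rng.uniform(-0.5, 0.5, f)        # fixed growth rates s_i
A = rng.uniform(-1.0, 1.0, f)        # arbitrary observable A_i
p = rng.dirichlet(np.ones(f))        # initial proportions p_i = N_i / N_tot

dt = 1e-4
for _ in range(10000):
    ds = s - p @ s                                      # (delta s)_i
    dA = A - p @ A                                      # (delta A)_i
    pdot = ds * p                                       # replicator equation without mutation
    sigma_A = np.sqrt(p @ dA**2)
    dsigma_dt = (p @ (dA**2 * ds)) / (2.0 * sigma_A)    # from d<dA^2>/dt = <dA^2 ds>
    cost = p @ (dA**2 * ds**2)                          # C_A = <dA^2 ds^2>

    # tradeoff relation and its bound by the growth-rate variance sigma_s^2
    assert (p @ (dA * ds)) ** 2 + 4.0 * dsigma_dt**2 <= cost + 1e-12
    assert cost <= np.max(np.abs(dA)) ** 2 * (p @ ds**2) + 1e-12

    p_next = p + dt * pdot                              # explicit Euler step
    # Fisher's theorem d<s>/dt = sigma_s^2, checked against the explicit step
    assert abs((p_next @ s - p @ s) / dt - p @ ds**2) < 1e-8
    p = p_next

print("replicator fluctuation relations verified")
```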
_Limits in unitary quantum dynamics.-_ Let us now turn to unitary quantum dynamics, described by the von Neumann equation for the density matrix, \(\frac{\mathrm{d}\hat{\rho}}{\mathrm{d}t}=-\frac{i}{\hbar}[\hat{H},\hat{\rho}]\). Even in this case, we have the quantized version of Eqs. (1)-(3). That is, they hold by replacing the mean with \(\langle\hat{X}\rangle=\mathrm{Tr}[\hat{\rho}\hat{X}]\) and the correlation in Eq. (3) with the symmetrized correlation function \(\langle\delta\hat{A},\delta\hat{\mathcal{V}}_{A}\rangle=\left\langle\frac{\delta\hat{A}\,\delta\hat{\mathcal{V}}_{A}+\delta\hat{\mathcal{V}}_{A}\,\delta\hat{A}}{2}\right\rangle\) (\(\delta\hat{X}=\hat{X}-\langle\hat{X}\rangle\)). Here, the velocity observable is naturally given by the Heisenberg-picture rate of change of the observable:
\[\hat{\mathcal{V}}_{A}=\frac{i[\hat{H},\hat{A}]}{\hbar}. \tag{9}\]
The application of (2) means that the tradeoff between the speeds of the mean and the standard deviation is given by
\[\left(\frac{\mathrm{d}\left\langle\hat{A}\right\rangle}{\mathrm{d}t}\right)^ {2}+\left(\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right)^{2}\leq C_{A}=-\frac{ \langle[\hat{H},\hat{A}]^{2}\rangle}{\hbar^{2}}. \tag{10}\]
We note that another tradeoff between the speeds of the mean and the standard deviation holds, i.e.,
\[\left(\frac{\mathrm{d}\left\langle\hat{A}\right\rangle}{\mathrm{d}t}\right)^{2}+4\left(\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right)^{2}\leq\frac{4\left\langle\delta\hat{H}\,\delta\hat{A}^{2}\,\delta\hat{H}\right\rangle}{\hbar^{2}}\leq\frac{4\|\delta\hat{A}\|_{\infty}^{2}\Delta E^{2}}{\hbar^{2}}. \tag{11}\]
This means that the energy fluctuation can be the upper bound on the change of the standard deviations, similar to the spirit of conventional speed limits in isolated quantum systems. We stress that our result addresses such an energy-fluctuation-based bound on the fluctuation of observables for the first time.
**Example 5: Single-spin dynamics.** We consider \(\hat{H}=g\hat{s}^{x}\) and \(\hat{A}=\hat{s}^{z}\), where \(\hat{s}^{x,y,z}\) are the spin-\(1/2\) spin operators. Let us take an initial state given by \(\left|\psi_{0}\right\rangle=\cos(\theta/2)\left|\uparrow\right\rangle+i\sin(\theta/2)\left|\downarrow\right\rangle\) with arbitrary \(\theta\in\mathbb{R}\), where \(\left|\uparrow\right\rangle\) (\(\left|\downarrow\right\rangle\)) is the eigenstate of \(\hat{s}^{z}\) with eigenvalue \(+1\left(-1\right)\). We then find that \(\left(\frac{\mathrm{d}\langle\hat{A}\rangle}{\mathrm{d}t}\right)^{2}+\left(\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right)^{2}=-\left\langle[\hat{H},\hat{A}]^{2}\right\rangle/\hbar^{2}\) holds for arbitrary time, which means that inequalities (1) and (2) become equalities.
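The following minimal sketch (ours, for illustration; it assumes the normalization \(\hat{s}=\hat{\sigma}/2\) and \(\hbar=1\), a choice to which the equality is insensitive, with arbitrary values of \(g\) and \(\theta\)) checks the single-spin equality numerically:

```python
import numpy as np

g, theta = 0.9, 0.4
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
H, A = g * sx, sz
V = 1j * (H @ A - A @ H)                         # velocity observable i[H, A] (hbar = 1)

psi0 = np.array([np.cos(theta / 2), 1j * np.sin(theta / 2)])

def propagate(t):
    """Exact propagator exp(-i H t) applied to the initial state."""
    w = g * t / 2.0
    U = np.cos(w) * np.eye(2) - 1j * np.sin(w) * (2.0 * sx)
    return U @ psi0

def expval(op, psi):
    return float(np.real(np.conj(psi) @ op @ psi))

def std(op, psi):
    return np.sqrt(expval(op @ op, psi) - expval(op, psi) ** 2)

eps = 1e-6
for t in np.linspace(0.0, 2.0, 40):
    dmean = (expval(A, propagate(t + eps)) - expval(A, propagate(t - eps))) / (2 * eps)
    dsig = (std(A, propagate(t + eps)) - std(A, propagate(t - eps))) / (2 * eps)
    rhs = expval(V @ V, propagate(t))            # equals -<[H, A]^2> for hbar = 1
    assert abs(dmean**2 + dsig**2 - rhs) < 1e-6  # inequalities (1) and (2) are saturated

print("single-spin equality confirmed numerically")
```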
**Example 6: Quantum many-body systems.** Our results are useful even for quantum many-body systems. For example, let us first consider a spin-\(S\) system whose Hamiltonian reads \(\hat{H}=\sum_{ij}\left[J_{ij}(\hat{S}_{i}^{x}\hat{S}_{j}^{x}+\hat{S}_{i}^{y}\hat{S}_{j}^{y})+\Delta_{ij}\hat{S}_{i}^{z}\hat{S}_{j}^{z}\right]+\sum_{i}\left(h_{i}\hat{S}_{i}^{z}+g\hat{S}_{i}^{x}\right)\), where \(J_{ij},\Delta_{ij},h_{i}\) and \(g\) are arbitrary (\(\hat{S}_{i}^{x,y,z}\) is the spin-\(S\) spin operator at site \(i\)). Then, if we take \(\hat{A}=\hat{M}_{z}=\sum_{i}\hat{S}_{i}^{z}\), we have \(\left|\frac{\mathrm{d}\sigma_{M_{z}}}{\mathrm{d}t}\right|\leq|g|\sigma_{M_{y}}\), where \(\hat{M}_{y}=\sum_{i}\hat{S}_{i}^{y}\). Therefore, the fluctuation dynamics of the magnetization in the \(z\)-direction is simply bound by the fluctuation of that in the \(y\)-direction (or vice versa). We numerically demonstrate our limits to fluctuation dynamics for quantum many-body systems in Supplementary Material [32].
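Along the same lines, a small exact-diagonalization sketch (ours; a three-site spin-1/2 chain with nearest-neighbor couplings and arbitrary parameter values, chosen only for illustration) verifies \(|\mathrm{d}\sigma_{M_{z}}/\mathrm{d}t|\leq|g|\sigma_{M_{y}}\) for a random initial state:

```python
import numpy as np
from functools import reduce

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    return reduce(np.kron, [op if j == i else np.eye(2) for j in range(n)])

sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]])
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)

n, J, Delta, h, g = 3, 1.0, 0.6, 0.3, 0.8
H = sum(J * (site_op(sx, i, n) @ site_op(sx, i + 1, n) + site_op(sy, i, n) @ site_op(sy, i + 1, n))
        + Delta * site_op(sz, i, n) @ site_op(sz, i + 1, n) for i in range(n - 1))
H = H + sum(h * site_op(sz, i, n) + g * site_op(sx, i, n) for i in range(n))

Mz = sum(site_op(sz, i, n) for i in range(n))
My = sum(site_op(sy, i, n) for i in range(n))

rng = np.random.default_rng(2)
psi0 = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi0 /= np.linalg.norm(psi0)                       # arbitrary normalized initial state

evals, evecs = np.linalg.eigh(H)
coeffs = evecs.conj().T @ psi0
def state(t):                                      # exact time evolution via eigendecomposition
    return evecs @ (np.exp(-1j * evals * t) * coeffs)

def mean(op, v): return float(np.real(v.conj() @ op @ v))
def std(op, v): return np.sqrt(max(mean(op @ op, v) - mean(op, v) ** 2, 0.0))

eps = 1e-6
for t in np.linspace(0.0, 3.0, 30):
    dsig = (std(Mz, state(t + eps)) - std(Mz, state(t - eps))) / (2 * eps)
    assert abs(dsig) <= abs(g) * std(My, state(t)) + 1e-3

print("many-body magnetization-fluctuation bound verified")
```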
We also note that, while the bounds in (11) typically diverge for many-body systems, we can find alternative convergent bounds for a locally interacting Hamiltonian and an observable acting on a local subsystem X. In this case, \(\hat{H}\) (\(\Delta E\)) in the bounds in (11) can be replaced with \(\hat{H}_{\mathrm{X}}\) (\(\Delta E_{\mathrm{X}}\)), which are composed of local interaction terms that nontrivially act on X [32].
**Example 7: Semi-classical limit.** Importantly, our bound by the velocity observable remains useful in the semi-classical limit, in stark contrast with the Mandelstam-Tamm bound [14], which becomes meaningless for \(\hbar\to 0\). For simplicity, let us focus on a system parametrized by the canonical variables \(\hat{q}_{k},\hat{p}_{k}\) (\(1\leq k\leq f\)). We then find \(\left|\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right|\leq\sigma_{\{A,H\}_{\mathrm{Po}}}\) and \(\left(\frac{\mathrm{d}\langle A\rangle}{\mathrm{d}t}\right)^{2}+\left(\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right)^{2}\leq\langle\{A,H\}_{\mathrm{Po}}^{2}\rangle\), by taking the semi-classical limit \(\hbar\to 0\) in Eq. (9). Here, the average is taken with respect to the Wigner phase-space distribution, \(A=A(\vec{q},\vec{p})\) and \(H=H(\vec{q},\vec{p})\) are the semi-classical limits of the observable and the Hamiltonian, respectively, and \(\{X,Y\}_{\mathrm{Po}}=\sum_{k=1}^{f}\partial_{q_{k}}X\partial_{p_{k}}Y-\partial_{q_{k}}Y\partial_{p_{k}}X\) is the Poisson bracket. In Supplemental Material [32], we show another derivation of these inequalities from the classical setting, as well as higher-order generalizations as in (6) and simple examples.
_Limits in arbitrary quantum dynamics._ Finally, let us discuss arbitrary quantum dynamics, which may include dissipation and thus non-unitarity, via quantum information theory. Because of the non-commutative nature of quantum theory, we have not obtained a suitable velocity observable that satisfies Eq. (1) so far, unlike the classical case in Eq. (7). Nonetheless, we still find an information-theoretical bound for the cost of the tradeoff between the changes of the mean and the standard deviation (see Supplementary Material [32]):
\[\left(\frac{\mathrm{d}\left\langle\hat{A}\right\rangle}{\mathrm{d}t}\right)^{ 2}+4\left(\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\right)^{2}\leq\langle\hat{ L}\delta\hat{A}^{2}\hat{L}\rangle\leq\|\delta\hat{A}\|_{\infty}^{2}\mathcal{F}_{Q}, \tag{12}\]
where \(\hat{L}\) is the symmetric-logarithmic derivative satisfying \(\frac{\mathrm{d}\hat{\rho}}{\mathrm{d}t}=\frac{1}{2}(\hat{\rho}\hat{L}+\hat{L}\hat{\rho})\), and \(\mathcal{F}_{Q}=\langle\hat{L}^{2}\rangle\) is the quantum Fisher information [42]. We recover inequality (8) in the classical limit and (11) for the unitary dynamics. We note that inequality (12) is fundamentally distinct from the (generalized) quantum Cramer-Rao inequality \(\left|\frac{\mathrm{d}\langle\hat{A}\rangle}{\mathrm{d}t}\right|\leq\sigma_{A}\sqrt{\mathcal{F}_{Q}}\) since it involves the speed of the fluctuation \(\frac{\mathrm{d}\sigma_{A}}{\mathrm{d}t}\).
_Prospects._ To conclude, we discover a new ubiquitous law in far-from-equilibrium dynamics, stating that the speed of the fluctuation dynamics is always smaller than the fluctuation of the velocity observable. This can also be rewritten as the tradeoff relation that the sum of the squared speeds of the mean and the standard deviation cannot exceed some cost, which is further bound by physical quantities. While we have discussed limits on the instantaneous time derivative, we find the corresponding bounds for a finite-time interval \(t\in[t_{\mathrm{ini}},t_{\mathrm{fin}}]\). Indeed, integration of inequalities (1) and (2) respectively leads to \(\left|\sigma_{A}(t_{\mathrm{fin}})-\sigma_{A}(t_{\mathrm{ini}})\right|\leq(t_{\mathrm{fin}}-t_{\mathrm{ini}})\overline{\sigma_{\mathcal{V}_{A}}}\) and \(\left|\left\langle A(t_{\mathrm{fin}})\right\rangle-\left\langle A(t_{\mathrm{ini}})\right\rangle\right|^{2}+\left|\sigma_{A}(t_{\mathrm{fin}})-\sigma_{A}(t_{\mathrm{ini}})\right|^{2}\leq(t_{\mathrm{fin}}-t_{\mathrm{ini}})^{2}\,\overline{C_{A}}\), where \(\overline{\cdots}\) denotes the average over \(t\in[t_{\mathrm{ini}},t_{\mathrm{fin}}]\). From these inequalities, we find that the fluctuation \(\sigma_{A}(t_{\mathrm{fin}})\) (without time derivative) can also be upper bound by the physical costs (e.g., the Fisher information), while the conventional speed limit based on the Cramer-Rao bound only leads to a lower bound on the fluctuation.
Our results can be extended in several notable ways. First, although we have discussed time-independent observables, inequalities (1) and (2) hold even for time-dependent observables in some instances that are indicated by the asterisks in Table 1. In this case, we obtain the velocity observable by adding \(\dot{A}\) (\(=\frac{\mathrm{d}A}{\mathrm{d}t}\)) to the one in the case of time-independent observables. Such time-dependent observables include, e.g., the Hamiltonian \(H(t)\) for time-dependent processes. Second, instead of the actual time \(t\), we can parameterize the state by multiple parameters \(\vec{\lambda}=(\lambda_{1},\cdots,\lambda_{Z})\). We can show limits for the change of the fluctuation of an observable when these parameters are varied; in this case, the bounds are generalized to matrix inequalities due to the existence of multiple parameters [32]. As the Cramer-Rao inequalities for multiple parameters turn out to be useful in the field of (quantum) metrology, our inequalities may also be applied to such metrological purposes, as well as to non-equilibrium statistical mechanics.
The newly discovered fluctuation relations can ubiquitously apply to classical and quantum systems, deterministic and stochastic systems, and even nonlinear and many-body systems. It is a crucial future task to harness these relations to control complicated systems at the level of fluctuations.
We thank Kyosuke Adachi and Takashi Mori for fruitful comments. The numerical calculations in Supplementary Notes were carried out with the help of QUSPIN [43; 44]. |
2301.13556 | Purposeful and Operation-based Cognitive System for AGI | This paper proposes a new cognitive model, acting as the main component of an
AGI agent. The model is introduced in its mature state, and as an extension of
previous models, DENN, and especially AKREM, by including operational models
(frames/classes) and will. In addition, it is mainly based on the duality
principle in every known intelligent aspect, such as exhibiting both top-down
and bottom-up model learning, generalization verse specialization, and more.
Furthermore, a holistic approach is advocated for AGI designing and cognition
under constraints or efficiency is proposed, in the form of reusability and
simplicity. Finally, reaching this mature state is described via a cognitive
evolution from infancy to adulthood, utilizing a consolidation principle. The
final product of this cognitive model is a dynamic operational memory of models
and instances. | Shimon Komarovsky | 2023-01-31T11:11:38Z | http://arxiv.org/abs/2301.13556v1 | # Purposeful and Operation-based Cognitive System for AGI
###### Abstract
This paper proposes a new cognitive model, acting as the main component of an AGI agent. The model is introduced in its mature state, and as an extension of previous models, DENN, and especially AKREM, by including operational models (frames/classes) and will. In addition, it is mainly based on the duality principle in every known intelligent aspect, such as exhibiting both top-down and bottom-up model learning, generalization versus specialization, and more. Furthermore, a holistic approach is advocated for AGI designing and cognition under constraints or efficiency is proposed, in the form of reusability and simplicity. Finally, reaching this mature state is described via a cognitive evolution from infancy to adulthood, utilizing a consolidation principle. The final product of this cognitive model is a dynamic operational memory of models and instances.
## 1 Introduction
Our consistent goal is to construct a basic realistic model for _AGI_ (Artificial General Intelligence). It is a gradual process with many versions along the way. Hence, this paper presents _MOM_ (Model Of Models), the next version of _AKREM_ (Associative Knowledge Representation) [15].
_AKREM_ is a mature-state knowledge representation model, based mainly on the assumption that communication is about encoding the sender's will into a sequence of words (a message), and then decoding it by the recipient. The model proposes a representation of any message, in a hierarchical form based on grouping, by generating some essence in a given level, from details in the lower level. The lowest level details are founded upon some DNN (Deep Neural Network), generating the basic concepts and actions (from which the details are made of) from unstructured input. Finally, while the will concept exists in _AKREM_, it will be expanded upon in this paper.
Following this, new additions to the presented model, including new associations, operationability, modeling, consolidation, and reusability, are introduced. First, while _AKREM_ assumes that the learned _elements_ are either objects or
static actions (verbs), new associations are introduced: objects' attributes and relations. Next, these connections are all static representations of knowledge, i.e., the hierarchies cannot be changed. Therefore, operationability introduces a new type of association to objects: actions that act upon them, thus producing new knowledge _elements_. This makes the connections in _AKREM_'s hierarchies dynamic, which allows the freedom to update and create new hierarchies. Next, modeling introduces some basic cognitive operations, e.g. abstraction and grouping. Both gather many details into fewer. Grouping specifically is about connecting _elements_ via some common property. It could be, for example, a chronology in a plot or other common properties/actions grouped into classes. Finally, consolidation is a process in time that collapses a huge number of possibilities into a small set of patterns of any kind.
All the operations above are considered to be bidirectional: everything lies in some range between extremes, i.e. everything has its inverse (dualism). In grouping, it is from the whole to its parts and vice versa, and in abstraction, it is from instances to classes and vice versa. Consolidation is an operation in time, creating models and memory, while its inverse is forgetting, which is also an operation in time. Will also lies in the dichotomy of determinism and randomness.
The product of these cognitive operations is a dynamic memory of models, formed as a semantic network of _elements_. It encourages a holistic approach to AGI design: one simple system for multiple functions, such as short-term and long-term memories, problem-solving, communication, learning, and any cognitive function. Moreover, if in the early epoch of AI symbolic reasoning was dominant, and nowadays connectionism dominates, then we come to a new era, where we should combine and include many conflicting perspectives, in cooperation and competition. Our holistic perspective embraces this duality and other dualities as well.
Lastly, operational modeling presents _AGI_ agent's intelligence in its mature state, which is the state of how its knowledge should be represented. However, to accomplish this state, a cognitive evolution over time is required, which utilizes the consolidation principle.
## 2 Will
In this section, the importance of will as an essential element in human intelligence is elaborated upon, starting from the previously presented model, _AKREM_.
Will in _AKREM_ is represented in the levels of any specific hierarchy, starting from the most detailed aspects of will at the lowest level and finishing at the most abstract will, or its essence, at the top. The top level represents some kind of experience uniqueness, to differentiate it from other memories that use the same low-level structures.
This hierarchical will is especially demonstrated in a constrained environment, such as our reality, for topics like problem-solving and communication. In problem-solving, the main will produces sub-wills in lower levels, until it reaches the final solution at the bottom level (Fig 1(a)). The final result is a plan or a sequence of actions. See more in Appendix A.2. Similarly, in communication, the sender encodes/converts his will into a sequence of actions (in language form), while the recipient on the other side decodes the intention/will from this sequence (Fig 1(b)). In both cases, evaluation is necessary, hence this top-down process is cyclic and non-linear.
Figure 1: Two cases of will in a constraint environment
All the above are specific cases of will, but there is also a more general will. As recipients of reality, our main cognitive will is to find the most appropriate/simple model to fit all the pieces/details in the right place or to make the most sense of them1. This is similar to decoding a will from a message/mystery/riddle, and it can be rephrased as a general problem-solving task to comprehend reality. First, it is done internally, by reorganizing our models (mostly during a sleep phase), and later it is done externally, in any kind of problem-solving, or in understanding a message/story/riddle/situation/phenomena. The best model will allow us to move from place to place in it easily, perform new actions, and produce conclusions/solutions easily. This results in an understanding, or the ability to control any aspect of the complex model. So in a sense, we have two wills governing our cognition: controlling and, subsequently, making sense. Obviously, these wills reinforce each other.
Footnote 1: Making sense is also important for explainability, which storytelling modeling in particular can provide. Hence, explainability should be generative and flexible, in its most general conception.
Additionally, there are different categories of will, such as chronology, causation, and purposefulness. In stories, they are very intertwined/mixed. It is because purposefulness is a higher manifestation of will (usually applied in humans), while causation is a lower one, usually applied to animals/objects (e.g. "A causes B"), and chronology is simply the way will is implemented: in a delay. You first want, and then you try to accomplish it. Or in the case of causation, there is a law, as a fixed kind of will (e.g. gravity), and then it is realized. See more in Fig 2.
In conclusion, purpose is everywhere, in all of our daily interactions. We try to figure out animals' intentions or people's hidden will, to make sense, or in our case to make our model complete, i.e. enable prediction. Since a specific human has his own will, he tries to figure out other people's will in order to enforce his own will over theirs. Sometimes it is via competition, and sometimes through cooperation.
However, how is will actually included in _MOM_? It is discussed in 5 and Appendix A.4.
Figure 2: Types of will
## 3 New Associations
Inspired by semantic nets, some general connections are proposed, such as inheritance (is-a relation), instance property (is-instance-of relation), part-whole property (part-of relation), an attribute of an object (has relation), assigning a value to an attribute (value relation), synonyms, antonyms/opposites, and more, see [20]. All these connections are static and can represent stories as separate items, as in _AKREM_. But to connect these details as a sequence of applied actions, operationability is introduced.
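As a concrete but purely illustrative sketch of such typed associations (the class, method, and relation names below are our own assumptions, not an implementation from this paper), they can be stored as labeled edges of a tiny semantic network:

```python
from collections import defaultdict

class SemanticNet:
    """A minimal semantic network: nodes with typed, directed associations."""

    def __init__(self):
        self.edges = defaultdict(list)          # node -> list of (relation, target)

    def relate(self, source, relation, target):
        self.edges[source].append((relation, target))

    def neighbors(self, source, relation=None):
        return [t for r, t in self.edges[source] if relation is None or r == relation]

net = SemanticNet()
net.relate("dog", "is-a", "animal")             # inheritance
net.relate("rex", "is-instance-of", "dog")      # instance property
net.relate("tail", "part-of", "dog")            # part-whole property
net.relate("dog", "has", "fur-color")           # attribute of an object
net.relate("fur-color", "value", "brown")       # assigning a value to an attribute
net.relate("big", "antonym", "small")           # opposites

print(net.neighbors("dog"))                     # all static associations of "dog"
print(net.neighbors("rex", "is-instance-of"))   # -> ['dog']
```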
## 4 Operationability
It is hypothesized that thinking is operational. Meaning it is a process, generating new facts along the way, via some set of actions. Hence, the first principle added to _AKREM_ is operationability, which turns it from static to dynamic knowledge representation. That is, unlike static connections as in _AKREM_ and knowledge/scene-graphs (representing facts or a scene shot) - action connections in _MOM_ can also be productive (produce new _elements_). It adds degrees of freedom to the current cognitive model, to move in new directions along the hierarchy, i.e. to create new hierarchies on the fly or update old ones, via admissible actions.
Subsequently, a minimal set of primitive operations is proposed, to function as basic operations that can be the building blocks for more complex and composite operations/actions: logical relations (AND, OR, NOT, all, each, (in)equalities, exists, count), flow operations like loops (while, for) and if-else conditionals, mathematical operations (+,-,*,/,min,max,norm,log), and other relations. This set of tools can replace DNN units and the DNN's fixed structure in a program-search process. It can be implemented, for example, via a Reservoir network, a random mix of basic rule components/blocks, yielding an algorithm best describing an operation.
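A minimal sketch of what such a primitive-operation pool and a naive, reservoir-style program search over random compositions could look like (all names and the toy target below are illustrative assumptions, not the paper's implementation):

```python
import random
import operator

# A small pool of primitive operations (a fragment of the set proposed above).
PRIMITIVES = {"add": operator.add, "sub": operator.sub, "mul": operator.mul,
              "max": max, "min": min}
VARIABLES = ["a", "b"]

def random_expr(depth):
    """Randomly compose primitives and input variables into an expression tree."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(VARIABLES)
    name = random.choice(list(PRIMITIVES))
    return (name, random_expr(depth - 1), random_expr(depth - 1))

def evaluate(expr, env):
    if isinstance(expr, str):
        return env[expr]
    name, left, right = expr
    return PRIMITIVES[name](evaluate(left, env), evaluate(right, env))

def search(examples, trials=100000, depth=3):
    """Naive program search: sample random compositions until one fits all examples."""
    for _ in range(trials):
        expr = random_expr(depth)
        if all(evaluate(expr, {"a": a, "b": b}) == out for a, b, out in examples):
            return expr
    return None                                 # may happen if sampling is unlucky

# Toy target behind the examples: (a + b) * a.
examples = [(1, 2, 3), (2, 3, 10), (3, 1, 12)]
print(search(examples))                         # typically e.g. ('mul', ('add', 'a', 'b'), 'a')
```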
At the same time, prior knowledge needs to be inserted into all _elements_, by including: number of visits, uncertainty, rate of update, and a measure of consolidation. The measure of consolidation is to prioritize different options associated with some _element_, to separate the relevant from the irrelevant, e.g. admissible actions, which can be used inversely in creativity mode - by picking the less expected directions to follow.
Moreover, an action's admissibility is needed for two reasons. Firstly, due to the elimination of entry conditions necessary for an action to be performed, e.g. on which types (integer, string, etc.). And secondly, due to the ability to use Higher-Order Logic, as in \(\lambda\)-calculus, which removes any restrictions on an object's slot or an action's argument. Hence, relevancy is needed to constrain the action's admissible space.
## 5 Modeling
If operationability is considered as the addition of freedom to move in a 2D knowledge representation, then modeling is the addition of a new dimension, i.e. converting it to 3D. It is about extending the "is-a" operation into a programming abstraction, as in _OOP_ (Object-Oriented Programming), or in abstract mathematics, such as algebra or category theory2. Meaning, while semantic networks usually represent this operation in a 2D graph, here the instances are totally separated from classes, and from classes of classes, and so on. This results in multiple levels of abstraction, while for simplicity two types of levels can be distinguished in the final LTM (Long-Term Memory), see Fig. 3. In summary, at first all the different associations, including operationability, describe objects, and then abstraction extends objects as instances into classes, which represent models. Consequently, unlike _AKREM_, these full models are important for natural communication, e.g. for context-based conversations, where undelivered missing information (common sense) needs to be filled in.
Footnote 2: Group properties include identity object to use compositionality of an operation and its inverse operation. This implies duality.
This extension has several implications. First, in perception from senses: from the basic recognition of instances in _AKREM_, to a multi-level recognition of instances and classes. Next, every _element_ is learned and can be abstracted (into a class), i.e. objects, actions, relations, and attributes. For example, an action with attributes, a relation with attributes such as strength, numeric attributes/values as classes (integer, real), group types (sets, lists, arrays) with their group operations (slicing, union, sorting), and more. Next, this extension enables answering the question of how will is implemented in _MOM_, see 2. Unlike _AKREM_'s hierarchy by will, for which it is not clear how it can be implemented, here the hierarchy could be generated by abstraction, while the will/intention serves as an additional and independent variable in the models that construct the hierarchy. Additionally, feeling measures could be included as influencing will. Moreover, since there are several levels of will, there are correspondingly a main variable and secondary variables representing these wills, perhaps with different significance intensities, depending on the abstraction level.
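To make the OOP analogy concrete, here is a deliberately small illustrative sketch (our own assumption of how such classes might look, not the paper's implementation) of instances, classes, and a will/intention variable whose significance grows with the abstraction level:

```python
class Element:
    """Any learned element: attributes, admissible actions, and a will/intention slot."""
    will_weight = 0.9              # will is most significant at the most abstract level

    def __init__(self, will=None, **attributes):
        self.will = will
        self.attributes = attributes

class Polygon(Element):
    will_weight = 0.5              # less significant one level down

    def perimeter(self):           # an admissible action shared by all polygons
        return sum(self.attributes["edge_lengths"])

class Triangle(Polygon):           # "is-a" link realized as inheritance
    n_edges = 3
    will_weight = 0.1              # least significant near the instance level

class Square(Polygon):
    n_edges = 4
    will_weight = 0.1

# Instances (the lowest level), each carrying its own local will annotation.
t = Triangle(will="enclose an area with few edges", edge_lengths=[3, 4, 5])
s = Square(will="tile the plane", edge_lengths=[2, 2, 2, 2])

for shape in (t, s):
    print(type(shape).__name__, shape.n_edges, shape.perimeter(), shape.will_weight)
```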
Finally, the novelty here is that, unlike _DL_, which performs program search in an uninterpretable way, additional inductive bias is introduced: separating models and restricting program search to relevant actions, in consistency with other models and actions. This makes _MOM_ both usable and interpretable.
### Learning the modeling
Here, a model-learning mechanism is proposed, where two contrary but complementary learning approaches in AI are combined (Day, 2022): empirical, i.e. from examples (induction), and expertise (rule-based). This is a learnable symbolic manipulation, which can also be referred to as a hybrid or Neuro-Symbolic approach, see (Marcus, 2003). The empirical approach is bottom-up (from examples to rules), e.g. via
observation or passive interaction. Rule-based is straight from the top, via rules in abstract language, e.g. via conversation or observation. It can then descend to examples of these rules (deduction).
These approaches might contain models of concepts that do not belong to both, but only to one of them. For example, concepts that are hard to define, like love, God, and beauty, and tacit/unconscious knowledge like walking and breathing - all can be modeled simply by examples. Similarly, sub-symbolic features, like audio and visual inputs, do not have logic/linguistic/symbolic meaning, hence should be modeled by examples, as is done in _DL_ nowadays. Hence, these non-symbolic concepts can be learned via usual non-interpretable _DL_. On the other hand, abstract concepts, like those in math and the sciences, which appear less in physical reality, can be learned solely in the top levels of LTM.
Moreover, in this hybrid approach, _DL_ (Deep Learning) is used twice. On the one hand, _DL_ is extended from its overly constrained program search to be much more flexible, if more operations are added as building blocks, see 4. Hence, symbolism is learned and adaptive just as in _DL_, differently from expert/rule-based AI. On the other hand, different input sensors are fused to represent specific symbols/concepts, i.e., the uninterpreted features in _DL_ become symbolic tokens (Fig. 3).
The reason the hybrid approach is preferred over _DL_ alone is that _DL_ usually does not implement compositionality, modularity and abstraction (Dickson, 2022). One of the effects of this shortcoming is the shortcut effect, where _DL_ categorizes something for the wrong reasons. This can be solved in a system that learns a model with alignment and consistency checks against other models learned so far, which _MOM_ supplies (continual learning).
Figure 3: Proposed cognitive model basic diagram
Additionally, the topmost level is actually temporary and used for creativity and problem-solving. In this level, temporary new abstractions are created by stripping off attributes/actions/relations, thus connecting distant or different abstractions to perform analogy or transfer learning between different domains. For example, the abstraction is via the number of edges in the polygon classes (Fig. 3).
Furthermore, learning can be divided into online and offline modes. In waking periods, i.e. when sensors are active, the learning is online and minimal, since most resources are dedicated to fast response (e.g. fast optimization into local optima). However, during sleeping periods, sensors are inactive, and previous memories can be used for improving models to make overall sense, e.g. a larger time scale is used to generate causal relations in models. In such a case, it is a slow process, implemented e.g. as a Neural Architecture Search or a Genetic Algorithm, to get out of local minima and search for a global one.
Finally, another issue precedes learning: how to obtain separate models at all. One way is like the _DENN_ (Dynamic and evolving NN) idea [Komarovsky, 2022a], i.e. always learning the "model of everything" while refining it more and more with every new experience, such that new sub-models are produced. The idea here is like Jeff's hierarchy [Hawkins and Blakeslee, 2007], where the top model is always reached at perception from senses, and it decides which lower model will handle the situation. For example, in recognizing a specific type of problem, it chooses the most appropriate model for solving this problem. Or, when encountering new knowledge, it selects the most appropriate model to handle its assimilation into the current memory of models.
Another way to facilitate modeling evolution is via consolidation.
## 6 Consolidation
So far, the cognitive model has been presented in its mature state. Now, the discussion is about how to reach it. This is a process in time, which is mainly based on consolidation.
Consolidation is about transforming from chaos to some stable order of patterns, or from a continuous realm to a discrete one, as in quantum mechanics. An infinite amount of detail is hard to handle (i.e. to understand and then to control), therefore consolidation into fewer patterns is required. Consolidation also allows for fuzzy logic and categories [Wyler, 1995].
Consolidation can be expressed in many forms, such as:
* in the conversion of sub-symbolic to symbolic, for any type of _element_
* in cognitive evolution: from flexible (at infancy) to less flexible (at adulthood)
* in modeling, at program search, from huge hypothesis space for possible programs to a small set of hypotheses (as in _DL_). It is both in the micro (within models) and in the macro (between models)
* in testing multiple versions of an unknown model, and finally converging into less/one version(s) that are/is consistent with evidence
* and in grouping/abstraction, where some separate elements become connected
Note that causality is a special case of modeling, a spatio-temporal one, where re-occurrence is consolidated. More generally, re-occurrence helps in learning both static objects and dynamic basic/composite events (equivalent to scenarios/scripts in _OOP_).
Additionally, _MOM_ enables multiple parallel versions of the same thing, since any specific topic or subject can have multiple theories/models, sometimes in conflict. Hence, as in the quantum superposition realm, multiple-version combinations could be tried out, and consolidation can help in collapsing them into fewer versions. Those versions should make the most sense, i.e. be consistent on different occasions or support the majority of evidence. Thus, it may just be that in infancy, a highly uncertain period, many versions are created, and with time only the most consistent ones survive (consolidate).
Lastly, two operations help in producing consolidation. On the one hand, to deal with a stochastic environment and ambiguous signals, **repetition** provides memory prioritized by relevancy. Repetition is never exactly over the same thing, but rather over many different examples of a thing. Repetition is also needed in guided tutoring of an AGI agent. Conversely, **sparsification** is about reducing irrelevant signals.
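One illustrative way to realize repetition and sparsification together (an assumption of ours, not the paper's algorithm) is to let every association keep a consolidation counter that is strengthened by repetition, decays over time, and is pruned when it stays below a threshold:

```python
from collections import defaultdict

class ConsolidatingMemory:
    """Associations strengthen with repetition and are pruned when they stay irrelevant."""

    def __init__(self, decay=0.95, prune_below=0.2):
        self.strength = defaultdict(float)      # (source, relation, target) -> consolidation measure
        self.decay = decay
        self.prune_below = prune_below

    def observe(self, source, relation, target):
        self.strength[(source, relation, target)] += 1.0   # repetition consolidates

    def sleep_phase(self):
        """Offline step: decay everything (forgetting) and sparsify weak associations."""
        for key in list(self.strength):
            self.strength[key] *= self.decay
            if self.strength[key] < self.prune_below:
                del self.strength[key]

memory = ConsolidatingMemory()
for _ in range(20):
    memory.observe("dog", "has", "tail")        # repeated association, survives consolidation
memory.observe("dog", "has", "wings")           # noise, seen only once

for _ in range(40):
    memory.sleep_phase()
print(dict(memory.strength))                    # only the repeated association remains
```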
### Reusability
An additional form of consolidation is reusability, since the more learning progresses, the fewer new models are proposed in favor of using existing ones. Hence, reusability is expressed via exploration (mostly at early stages) versus exploitation (in stable or mature stages), as in Reinforcement Learning. In the beginning, many possible codes are generated for models, but as time goes by the process becomes less exploratory and more exploitative, i.e. there is more emphasis on retrieving known codes, while testing fewer new codes in parallel. In addition, reusability aligns perfectly with abstraction/grouping, in a constrained environment with limited resources. They are both needed to hold control of as much as possible with minimum effort, i.e. without generating many models of each thing.
In practice, reusability is about using fewer of the initially available tools as the learning evolves. Meaning, while regular _DL_ tools (if, sum, activation function) or the primitive tools of 4 can be used for program search of basic action methods or relation methods, the new methods apply reusability. In such methods, fewer primitive tools are used while the current methods are used more, thus encouraging more connectivity in the network.
Moreover, Functional Programming can be applied to assist reusability. On the one hand, the general/outer structure is _OOP_, i.e. _elements_ are grouped in an _OOP_ fashion. On the other hand, methods are kept in a pure operational
immutable form (Chen, 2019; Van Roy et al., 2009). Meaning, having small and simple methods, which maximally reuse other functions, and without inner variables, due to the objects-memory-only assumption; that is, methods that are composed of other methods as much as possible. This is compositionality/grouping applied to actions. See Fig. 4.
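The following toy sketch (illustrative only; the function names are our own) shows the intended flavor: small pure functions without inner state, with composite methods built by reusing existing methods rather than new primitives:

```python
from functools import reduce

def compose(*fs):
    """Right-to-left composition of pure functions: compose(f, g)(x) == f(g(x))."""
    return reduce(lambda f, g: lambda x: f(g(x)), fs)

# Small, simple, immutable building blocks.
def normalize(values):
    total = sum(values)
    return [v / total for v in values]

def square_all(values):
    return [v * v for v in values]

def top_two(values):
    return sorted(values, reverse=True)[:2]

# Composite methods reuse existing ones instead of re-implementing the primitives.
emphasis = compose(normalize, square_all)        # square, then renormalize
salient = compose(top_two, emphasis)             # the two most salient normalized weights

print(emphasis([1.0, 2.0, 3.0]))
print(salient([1.0, 2.0, 3.0]))
```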
## 7 Cognitive model comparison
Here all models developed so far are compared, in Fig. 5.
In summary, _DENN_ stores each new combination of events, while _AKREM_ dynamically stores in episodic memory any newly encountered combination of basic events, which are themselves stored separately in two types of memory. _MOM_ unites those separate memories into one dynamic operational memory, consisting of concepts, actions, relations, and any instance of those.
Figure 4: Actions, objects, compositionality, and reusability
Figure 5: Comparison table of models discussed so far
## 8 Conclusion
The following is the summary of the paper including some key takeaways.
First, a new cognitive model is introduced to join the existing cognitive architectures, in the form of dynamic operational memory of models and instances. In it, a holistic approach is embraced, assuming that intelligence should be highly versatile and diverse, instead of "picking one side".
Next, our model includes will as an essential part of the modeling process. Thus, in its absence, most of the learned models turn out to be very partial. In the _OOP_ formulation, the will is an additional variable, and it is most significant at the top level while least significant at the lowest one.
Next, operationability turns static knowledge representation into a dynamic one, thus enabling cognitive processes. The actions are learned via regular program search, with either _DL_ or other tools, in a self-supervised manner.
Next, one way to ensure that the continual learning is consistent is by implementing local learning, i.e., concentrating on updating only one or a few models, while confirming compatibility with other models. A model can be learned either from examples or directly by using existing operations (logically). Another way to ensure that learning is consistent is by a slow process of consolidation. This is ensured by maintaining a high level of flexibility over a long period, while pursuing more and more consistency within and between models.
Next, reusability is utilized to enhance connectivity between models, instead of learning them as separate entities.
Finally, the cognitive model is designed via reverse engineering. Meaning, starting from our highly aware and mature cognitive state of mind, and then tracking back in time to study its evolution.
## 9 Future Work
The main problem is how to implement a cognitive system that produces the appropriate models, i.e. how grouping/clustering occurs to generate the right models, and also how these models produce new ones by the correct compositions.
In addition, there is the issue of how the sub-symbolic becomes symbolic. Perhaps the models produce objects and actions directly from sub-symbolic data. Furthermore, continuing this line of thought, models may be removed altogether, which converts this problem into a purely DL-based approach.
Additionally, relevant to the last issue is model evolution. Is it refinement of some main model into sub-models, which controls how models of knowledge are used, or are all the models separate? And if it is by refinement, is it one large DL-based model with all the rest being knowledge models, or, continuing this line of thought, do we again end up with one huge purely DL-based model, containing implicitly all the different models, their actions and attributes? But then how are _elements_ and abstraction implemented in such a model? For more about this suggestion, see Appendix A.3.1.
Finally, though hierarchy by abstraction can be implemented, how can it be implemented by will, i.e. how do we decide when and what to group in such a hierarchy?
These are all open questions to deal with.
P.S.: Neuro-Symbolic AI for me is merely taking inspiration from class-based structure, to act as the final stage of learning, while DL is the main tool to reach it. So it is all about the flexibility of DNNs, only that we use consolidation to finally reach symbols. Another implication of this is memory: first, since it is vague and mostly reconstructed, it is not recorded accurately; and second, it is probably used in sleeping periods similarly, in a vague form.
## Appendix A Appendix
### Examples of MOM in action
Fig. 6 contains examples of cognition processes. As seen, actions, denoted as arrows, can perform changes in several objects to their different attributes. High admissibility is expressed via a salient color. Fig. 6(b) is a _MOM_ representation of an evolving state, representing a story. A state consists of several objects (joining/leaving as the story evolves), including their attributes and actions that change the state. The story: _"David entered his room. He searched for something on the floor. Then he searched in the basket. Then he searched under his bed, and was thrilled to find the ball there. In the meanwhile, his mother entered home. She put her keys on the desk. Then she removed her shoes and put her sunglasses on the desk. Then she searched for David, found him, and they sat to eat lunch together"_. The same story is represented via _AKREM_, without the evolution of details, see the video link in [Komarovsky, 2022b]. Note that, if repeated often, the sequence of David or anybody searching in some place can be grouped/abstracted as an event class, also referred to as a _Trans-Frame_ [Minsky, 1988].
Next, Fig. 7 contains examples of model learning, based on Fig. 3. The assumption here is that, similarly to a DNN matching a pattern and then executing some task, here it is performed explicitly, via an if-condition as pattern matching, followed by execution. Each sub-figure contains the type of interaction (passive or active) and the different modalities involved: A=audio, V=vision, P=physical act.
As seen, operation in _MOM_ involves time, i.e. it can be either immediate or include past/future of any scale, which enables causality modeling. Also, naming, or the inclusion of language to describe objects, is not necessary in model learning. The learning still occurs even before a concept's name is introduced, or if it is forgotten for some reason.
Finally, the conditions in Fig. 7 could be alternatively grouped/abstracted as event classes instead of being learned as an action. Meaning, the pattern matching can be replaced by event class to be recognized, and the possible reaction to this pattern can be formed as an admissible action in such a class.
For example, "OR" can assign the same action to different objects, and "AND" assigns an action to a group of objects.
Figure 6: Examples of cognition processes
Figure 7: Examples of model learning
### Problem-solving and Designing Appendix
#### a.2.1 Problem-solving
Problem-solving is a broad topic, which is about handling any given situation, and not only solving puzzles/mysteries/science. In any such situation, we can either recognize a previous similar pattern (System 1), and apply automatic reaction, i.e. immediate resolution, or if it is not the case, try to generate a new solution (System 2). In this section, the latter option is discussed.
In this context, it is represented within some state space, where a problem is situated at some point or region in the space. Additionally, a problem expresses the current (problematic) situation, involving a general will to get out of this situation. Hence, a will is not yet formulated at this stage (Fig. 8(a)). Next, it is about deciding upon some goal states to be reached, to gain a desired resolution. When goal states are defined, the will becomes purposeful. Purpose is a more definite will, because it gives some "direction", either a vague direction or a strong one, to specific goal states (Fig. 8(b)). Since will derives action(s), it is represented similarly to an action in the state space - as a vector, transforming one situation into another. Meaning, will defines the direction the agent wants to move in, before it has found the admissible/legitimate/allowable way to realize it in the given environment. Finally, the agent starts to plan how to solve the given problem under the given constraints, i.e. where one cannot fulfill one's will directly, but must instead look for some legitimate way to accomplish it in the given circumstances.
After the will is refined and turned into a purpose, the search for a solution is initiated, as depicted in Fig 1(a). This is the next phase of problem-solving: realization. The figure demonstrates a coarse-to-fine hierarchy, where the will, along with its refinement, is placed at the top level. This level is vague, since nothing is perceived clearly about the ground level. However, descending the levels reveals more and more details and brings the will closer to realization. It is similar to the process of zooming in on a geographical map. Higher
Figure 8: Phases of will refinement in problem-solving
levels propose general models as potential stations in a possible trajectory from a problem state to a goal state. Then, upon descending, finer models are proposed, consistent with the upper levels, to move from a problem to a goal state. The search at any level can be performed by any heuristic/learned model, such as back/forward chaining, Depth-First Search/Breadth-First Search techniques, or any combination of those. Note the mismatch between our model level (proposed solution) and data level (detailed solution) in Fig 1(a). It is due to our inclination towards abstracting, i.e. memorizing the essences rather than the details, which is essential for efficient learning, as also described in 6, where it is better to learn several patterns than to lose yourself in a non-pattern realm, where all we see are details.
In summary, this approach is non-local, i.e. similar to Means-Ends Analysis, it is looking simultaneously at the whole region, only within different resolutions. It is also cyclic and non-linear, both in the will-refining stage and in the realization stage. At will refining, it is since sometimes the goal states cannot be reached, so other states are needed to be generated, sometimes as a compromise. At the realization stage, it is since descending in levels might result in conflicts or failures, due to misalignment between the lowest models of reality and the actual reality. Hence, returning to higher levels for trying different solutions is needed.
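To make the realization phase more concrete, the following is a minimal Python sketch of a coarse-to-fine search with backtracking, in the spirit of the hierarchy described above. The `propose` and `feasible` callables, the `State` type, and the level structure are illustrative assumptions introduced only for this sketch, not components defined by the model itself.

```python
from typing import Callable, List, Optional

State = tuple  # a point in a discretized state space (illustrative)

def coarse_to_fine_plan(
    problem: State,
    goal: State,
    propose: Callable[[int, State, State], List[List[State]]],
    feasible: Callable[[State, State], bool],
    max_level: int,
    level: int = 0,
) -> Optional[List[State]]:
    """Search for a trajectory from `problem` to `goal`, level by level.

    At each level, `propose` suggests candidate sequences of intermediate
    stations (coarser levels give fewer, more abstract stations).  A candidate
    is kept only if every consecutive pair can be connected at the next, finer
    level; otherwise the search backtracks and tries another candidate, which
    is the cyclic, non-linear behaviour described above.
    """
    if level == max_level:
        # ground level: the step must be directly executable in reality
        return [problem, goal] if feasible(problem, goal) else None

    for stations in propose(level, problem, goal):
        waypoints = [problem] + list(stations) + [goal]
        path: List[State] = [problem]
        consistent = True
        for a, b in zip(waypoints[:-1], waypoints[1:]):
            sub = coarse_to_fine_plan(a, b, propose, feasible, max_level, level + 1)
            if sub is None:            # conflict at a finer level -> backtrack
                consistent = False
                break
            path.extend(sub[1:])
        if consistent:
            return path
    return None                        # no consistent refinement at this level
```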
#### a.2.2 Designing
While in problem-solving the will is generated from a problem, i.e., it grows from an initial state, in designing it is the opposite: it grows from the final state(s), searching for the best state from which to start the full solution. It is like creating a story backward: starting from the end, to reach some beginning (Fig 9).
In designing there is a goal and a will to go there, but no specification of the problem or the initial states. It is therefore an iterative process: it starts by searching for a problem from which to reach the goal, and then continues with a specific will connecting the problem with the goal, resulting in a problem to solve.
#### a.2.3 Summary of Problem-Solving and Designing
One can spot a duality here as well. On the one hand, problem-solving takes an analytical perspective, looking to resolve or finish a problem, usually through a systemic view: breaking it into parts and then looking for an appropriate solution that serves as a better state than the one we started with (the problem state). On the other hand, designing is based on a holistic perspective: instead of finding a fast/analytical resolution to a problem ("to make everyone happy and go on with our lives"), it is about empathy/consideration, i.e., looking for the roots of the problem rather than just shutting it down quickly. It takes the opposite approach: instead of reducing the problem, it tries to track its sources and thus address the causes that generated this problem state/situation. By doing so, it searches for a better problem to solve, one that solves multiple other problems. Either way, will is either about getting somewhere (a goal) or getting out of something (a problem).
Figure 9: Designing approach
### Implementation
#### a.3.1 Combining Will and Modeling
The final question is how everything discussed above, i.e., in A.2 and in 2 (the will-related topics), is combined with the operational model. There are a few options, as discussed in 9, but we emphasize one of them here. One option is to have one main supervising model that, depending on the category of the situation, assigns the proper model to handle it, e.g., a model for problem-solving, one for learning, and one for a story message (where it connects separate events sequentially). Conversation, for example, is about taking turns, waiting until I/the other side have finished, recognizing our models, etc. Perceiving fictional information is treated differently than factual information, and so on. In conclusion, this option arises because problem-solving and the like are very complex models, which is why we suggest separating them from the knowledge models. But it extends further - perhaps there is also a separation of model representation. Some models may be representable as operational classes, while others may not. The latter may be neither interpretable nor explainable by the agent, since they lie in the background of thinking itself; thus they are "hidden" or implicit. See Fig. 10.
Figure 10: Separating cognitive and knowledge models
#### a.3.2 Multi-scale Consolidation in Model Learning
The next question is how models can be learned separately. One solution is to assume some initial network of unlearned models; see Fig. 11(b). But before that, we assume consolidation at multiple scales, as if there were consolidation also in scaling (a discrete number of scales). This can be encountered in many phenomena in nature, e.g., in the universe (consolidation into stars/solar systems and galaxies), in fractals (such as snowflakes), and in other recursive structures. See, for example, the nested structure in Fig. 11(a) and, in the transition from Fig. 11(b) to Fig. 11(c), consolidation at multiple levels, both in micro (within models) and in macro (between models). We see that modeling, or the reorganization of inner elements, occurs at many levels of models, i.e., from the basic models up to the most complex ones.
Figure 11: Nested DNNs for model learning
### Attention
After assuming a set of unlearned models, Fig. 11(b), we can assume that will, acting like a flashlight or a beacon, produces consistent attention (over time) to learn/attend to each model (or several of them) separately.
This explains why an infant is usually very focused on its toys (e.g., a ball), and why tracking them is essential for this process. This effect persists into adulthood, also in the process of using the cognitive model (i.e., after the learning stage): it is the need to be attentive to only a limited set of models (\(7\pm 2\) items in working memory). See examples in Fig. 12.
The next question is how this will is applied in story telling/hearing and in problem-solving. For example, the 3-layered problem-solving slide in our presentation actually shows these beams searching for solutions.
This can be seen in Fig. 13.
Figure 12: Local model learning via attention
Figure 13: Learning reframed as problem-solving task
We can say that learning is a making-sense type of problem-solving. So again, there is will coming from the top, like a projector, focusing on one or a few models while tracking them in the real world. More accurately, the top element should be changed: not a purpose to reach, but the point-wise will at the start of a problem.
Moreover, we could say that the hierarchy is abstraction, as claimed earlier, and that will is actually only at the top but "shines" directly on the models. Then we could say that during waking hours an infant gathers instances of its current models, e.g., a ball, and during sleep it uses these instances to train its models, for the purpose of making sense. The waking hours do not do this; they can only perform cognitive operations, i.e., actions in these models. So the infant first tries to figure out different models, and then tries to model them in time as well, thus eventually becoming able to track them, which is a validation of the correctness of its "ball" model, for example. The final test of a model is prediction; hence temporal modeling is what enables prediction, or more specifically forecasting (prediction in time).
Note that attention to a few models also implies that, just like humans, an AGI agent need not understand and model everything, but only what it is focused on or interested in. There is also the idea of bidirectional attention, i.e., bottom-up (external) versus top-down (its own will), which describes the competition between having a (strong) will and being highly influenced by the outside. In the AGI's case, it should be navigated mostly by external guidance, if will is not engineered into it.
In addition, attention can have different "focal lengths", as in theories of vision: a small pinhole perception at lower levels and a wider one at higher levels, i.e., the ability to sometimes see small details and sometimes see the big picture. In model attention it is the same: we can have low-level, more detailed attention on smaller models, up to high-level attention for more general or composite models. In comparison to classical object detection in computer vision, high-level concepts use only the higher-level features for the classification task, but more generally there is no reason not to be attentive to low-level features whenever needed.
Finally, attention in our perspective is very similar to attention in DL, only without regulation, that is, without considering the ideal state of consolidating into symbolic reasoning over models and operations and, most importantly, without allowing for dynamic abstraction. However, DL attention is similar in that it, too, allows for multiple implicit functions in a given learning NN, since it reacts differently depending on the input. In other words, the DNN can be regarded as a group of undeclared models/functions, generated by attention units, thus implicitly implementing compositionality and reusability. |
2309.03167 | Split-Boost Neural Networks | The calibration and training of a neural network is a complex and
time-consuming procedure that requires significant computational resources to
achieve satisfactory results. Key obstacles are a large number of
hyperparameters to select and the onset of overfitting in the face of a small
amount of data. In this framework, we propose an innovative training strategy
for feed-forward architectures - called split-boost - that improves performance
and automatically includes a regularizing behaviour without modeling it
explicitly. Such a novel approach ultimately allows us to avoid explicitly
modeling the regularization term, decreasing the total number of
hyperparameters and speeding up the tuning phase. The proposed strategy is
tested on a real-world (anonymized) dataset within a benchmark medical
insurance design problem. | Raffaele Giuseppe Cestari, Gabriele Maroni, Loris Cannelli, Dario Piga, Simone Formentin | 2023-09-06T17:08:57Z | http://arxiv.org/abs/2309.03167v1 | # Split-Boost Neural Networks
###### Abstract
The calibration and training of a neural network is a complex and time-consuming procedure that requires significant computational resources to achieve satisfactory results. Key obstacles are a large number of hyperparameters to select and the onset of overfitting in the face of a small amount of data. In this framework, we propose an innovative training strategy for feed-forward architectures - called _split-boost_ - that improves performance and automatically includes a regularizing behaviour without modeling it explicitly. Such a novel approach ultimately allows us to avoid explicitly modeling the regularization term, decreasing the total number of hyperparameters and speeding up the tuning phase. The proposed strategy is tested on a real-world (anonymized) dataset within a benchmark medical insurance design problem.
Artificial intelligence, deep learning, machine learning, neural networks, hyperparameter tuning, regularization
first are explicitly obtained. However, our work differs from traditional ELMs, where the parameters of the first layer are fixed a priori (and not updated) after initial randomization; see Huang et al. (2021). Additionally, in ELMs backpropagation is removed, whereas we still perform backpropagation to train the first layer's parameters. Likewise, a one-shot optimization (ridge regression) is performed, with the substantial difference lying in how the data are divided, so as to parallelize the computation on 2 different batches and retrieve the parameters of the second layer explicitly. This strategy is expected to reduce the occurrence of overfitting without resorting to a regularization term.
The paper is organized as follows. In Section 2 a review of the state of the art, the main challenges and limitations, and the benchmark network structure are presented. In Section 3 the mathematical formulation of the training strategy of the _split-boost_ neural network is presented: in the first part of the section the mathematical notation is introduced; then in Section 3.1 the _split-boost_ optimization problem for optimal weight computation is defined; finally, in Section 3.2 the details of the training procedure, the policy for best-epoch retrieval, and the learning-rate switching strategy are described. In Section 4 a numerical comparison between the traditional feed-forward neural network training and the split-boost training is presented. The paper ends with some concluding remarks.
## 2 Problem Statement
The goal of this work is to propose an alternative training strategy for a classic feed-forward neural network. Without modifying the network structure, we want to show that using the same amount of data in a different way allows us to improve training performance and achieve implicit regularization, namely to obtain a regularization effect without modeling it explicitly. This allows us to simplify the tuning procedure of the network, reducing the number of hyperparameters to be calibrated.
In this section, the basic structure of the architecture of a feed-forward neural network is introduced without going into details, as this morphology is widely studied in the literature; see Rosenblatt (1958), Bishop (1995), Nair and Hinton (2010), Goodfellow et al. (2016). A feed-forward neural network is a deep learning model which, based on the interaction of several processing units, called neurons, that introduce non-linearities on the inputs, can perform various tasks ranging from classification and regression to the prediction of time series. The parameters that build this model are represented by the weight matrices that correspond to the different layers of neurons building the architecture. These weights are updated through epochs (i.e., several passes through the dataset) via gradient descent (or one of its variants) and the help of the well-known backpropagation algorithm Rumelhart et al. (1986), for the calculation of the gradient of the errors with respect to the weights of the network.
Neural networks are models with a high descriptive capacity but are characterized by the phenomenon of overfitting, which can have negative repercussions on their generalization capacity. Overfitting is one of the curses of general statistical learning. It often discourages users of artificial intelligence, as the lack of a sufficient amount of data makes the architectures prone to its onset. In the literature, there are several works describing the problem (see Tieleman and Hinton (2009), Wan et al. (2013), Keskar et al. (2016), Zhang et al. (2021)). In the case of feed-forward neural networks, there are several strategies that can counter overfitting, such as dropout, early stopping, regularization, data augmentation, and batch normalization (see, e.g., Srivastava et al. (2014), Ioffe and Szegedy (2015), Hinton et al. (2015), Loshchilov and Hutter (2017)). The goal of these methodologies is to prevent the neural network from relying too heavily on (e.g., fitting the noise of) the data used in training. To do this, the idea is to reduce the number of network parameters (or limit their norm) or to artificially increase the available data. In this study, we assume _regularization_ as the benchmark anti-overfitting methodology. This technique consists of adding to the cost function to be minimized (or, equivalently, the reward to be maximized) a term that depends on the norm of the weights of the different layers of the neural network. During the training step, the neural network is forced to keep the norm of these weights limited to prevent an excessive increase of this cost term.
In Figure 1, a sketch of the neural network architecture used in this work is shown in matrix notation. The considered structure consists of 2 layers (hidden and output) and is designed to solve a regression problem. Each input is processed by each layer (and by each neuron per layer) through the non-linear activation function \(f_{1}\) (rectified linear unit, RELU). \(X\in\mathbb{R}^{N\times D}\) is the input data matrix, where \(N\) is the number of samples and \(D\) is the number of features. \(W_{1}\in\mathbb{R}^{D\times H}\) is the weight matrix of the first (hidden) layer, where \(H\) is the number of neurons. \(W_{1}^{b}\in\mathbb{R}^{H\times 1}\) is the bias vector of the first (hidden) layer. \(Z_{1}\in\mathbb{R}^{N\times H}\) are the pre-activations of the first layer. \(X_{1}\in\mathbb{R}^{N\times H}\) are the activations of the first layer. \(W_{2}\in\mathbb{R}^{H\times 1}\) is the weight matrix of the second (output) layer. \(W_{2}^{b}\in\mathbb{R}\) is the bias of the output layer. \(Y,\hat{Y}\in\mathbb{R}^{N\times 1}\) are, respectively, the matrix of the targets and the matrix of the predictions, and finally \(J\) is the loss function. The layer biases \(W_{1}^{b}\) and \(W_{2}^{b}\) follow the same training procedure as the corresponding layer weights; for the sake of compactness of notation, they are omitted.
The optimization problem that must be solved to find the optimal weights of a feed-forward neural network with 2 fully connected layers, following the traditional training procedure known in the literature, has the structure of the following unconstrained minimization problem:
\[\min_{W_{1},W_{2}}\frac{1}{2N_{t}}||Y_{t}-f_{1}(X_{t}\cdot W_{1})\cdot W_{2} ||_{2}^{2}+\frac{\lambda}{2}\sum_{i=1}^{2}||W_{i}||_{2}^{2} \tag{1}\]
where the subscript \(t\) refers to the training set and \(\lambda\) is a hyperparameter that controls the intensity of the regularization. In the next section, we present an alternative, novel way to write down the same optimization problem for training a 2-layer fully connected neural network, in a way that allows us to best exploit the training information, introducing an implicit regularizing effect and
Figure 1: Neural network architecture
simultaneously reducing the number of parameters to be tuned. Indeed, within this framework, the regularization factor is not needed anymore.
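For reference, the benchmark training of Equation (1) can be written in a few lines of PyTorch. The sketch below is illustrative only: the dimensions and the random stand-in data `X_t`, `Y_t` are assumptions, not the case-study values, and biases are omitted as in the notation above.

```python
import torch

D, H, lam, lr = 6, 10, 0.01, 0.1                      # illustrative sizes/hyperparameters
X_t, Y_t = torch.randn(100, D), torch.randn(100, 1)   # stand-in training data
N_t = X_t.shape[0]

W1 = torch.randn(D, H, requires_grad=True)
W2 = torch.randn(H, 1, requires_grad=True)

for epoch in range(200):
    Y_hat = torch.relu(X_t @ W1) @ W2
    # Eq. (1): mean squared reconstruction error plus explicit L2 regularization
    loss = ((Y_t - Y_hat) ** 2).sum() / (2 * N_t) \
           + (lam / 2) * (W1.pow(2).sum() + W2.pow(2).sum())
    loss.backward()
    with torch.no_grad():
        W1 -= lr * W1.grad
        W2 -= lr * W2.grad
        W1.grad.zero_()
        W2.grad.zero_()
```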
## 3 Mathematical Formulation of Split-Boost Networks
This section illustrates the alternative training strategy of the proposed Split-Boost Neural Network. The update of the weight parameters takes place separately for the 2 layers (hidden and output) that characterize the network architecture. The idea is to formulate the training procedure as a bilevel optimization problem, whose outer optimization variables are the weights of the hidden layer \(W_{1}\), and whose inner optimization variables are the weights of the output layer \(W_{2}\). For a regression problem, the optimal values of the parameters of the output layer \(W_{2}\) can be obtained in closed form by solving two least squares problems. These least squares solutions represent the constraints of the outer optimization problem.
The algorithm first involves a _splitting step_, in which the training set is divided into two sub-sets (a reasonable choice is to divide it equally; we do not claim that this choice is the only possible one, but it guarantees that both optimization problems see the same amount of data, avoiding unbalancing towards one of the two partitions). Both subsets are then used to solve, separately, two least squares problems. Once the least squares problems are solved (i.e., the optimal values for \(W_{2}\) with respect to the two sub-sets are found), the whole training set is used to update the values of \(W_{1}\).
From here comes _the boosting idea_: the optimal values obtained with the first sub-set are used to generate the prediction for the data belonging to the second sub-set, and vice versa. Our goal is to show that this methodology can effectively replace the regularization term of a traditional feed-forward neural network while outperforming it. This step represents one of the main differences compared to traditional network training: dividing the training set into 2 batches, used to calibrate the parameters of the second layer independently, allows us to improve the extraction of the information content. In Table 1, the symbols and their descriptions are summarized.
### Optimization problem, forward and backward propagations.
The heart of the training algorithm is represented by the optimization problem shown in Equation (2). Since this problem cannot be solved in closed form, it is solved iteratively through gradient descent. The value of the \(W_{1}\) parameters is therefore updated as the epochs pass.
\[\min_{W_{1}} J =\frac{1}{2N_{b}}\|Y_{b}-f_{1}(X_{b}\cdot W_{1})\cdot W_{2a}^{*}(W_ {1})\|_{2}^{2}\] \[+\frac{1}{2N_{a}}\|Y_{a}-f_{1}(X_{a}\cdot W_{1})\cdot W_{2b}^{*} (W_{1})\|_{2}^{2} \tag{2a}\] \[W_{2a}^{*}(W_{1}) =\operatorname*{argmin}_{W_{2}}\frac{1}{2N_{a}}\|Y_{a}-f_{1}(X_{a }\cdot W_{1})\cdot W_{2}\|_{2}^{2}\] (2b) \[W_{2b}^{*}(W_{1}) =\operatorname*{argmin}_{W_{2}}\frac{1}{2N_{b}}\|Y_{b}-f_{1}(X_{b }\cdot W_{1})\cdot W_{2}\|_{2}^{2} \tag{2c}\]
In what follows, the subscripts \(k,j\) are used to refer to both training subsets A and B. Since the equations referring to the two training sets mirror each other, the notation follows as \(\forall k,j\in[a,b],k\neq j\). The whole procedure is repeated for both admissible values of \(k\) and \(j\): the training is _split_. The subscript notation is summarized in Table 2.
In the following equations, the forward propagation is described. We do not go into details, as this step is well known in the literature (see Rumelhart et al. (1986), Werbos (1988), Fahlman and Lebiere (1989), Bartlett et al. (2006), LeCun et al. (1998)).
\[Z_{1k} =X_{k}\cdot W_{1} \tag{3a}\] \[X_{1k} =f_{1}(Z_{1k})\] (3b) \[\hat{Y}_{k} =X_{1k}\cdot W_{2j}\] (3c) \[J_{W_{2k}} =\frac{1}{2N_{k}}\|Y_{k}-\hat{Y}_{k}\|_{2}^{2} \tag{3d}\]
We can explicitly compute the optimal values of the output layer parameters \(W_{2}\) by solving a least squares problem:
\[W_{2k}^{*}=(X_{1k}^{T}\cdot X_{1k})^{-1}\cdot X_{1k}^{T}Y_{k}. \tag{4}\]
Solving Equation (4) explicitly, we derive the optimal values of the output layer parameters computed with the forward pass on the two separate training sub-sets. Notice that \(W_{2k}^{*}\) is a function of \(W_{1}\) through the dependency on \(X_{1k}\). For the sake of brevity, in the following derivation we write \(W_{2k}^{*}\) in place of \(W_{2k}^{*}(W_{1})\). For the sake of compactness, the Jacobian expression is derived in scalar form; nonetheless, it must be interpreted as the matrix of first-order partial derivatives of a vector-valued function with respect to its input variables. Since \(W_{2k}^{*}\) is an optimum computed explicitly, it holds that:
\[\frac{\partial J_{k}(W_{1},W_{2k})}{\partial W_{2k}}\bigg{|}_{W_{2k}=W_{2k}^{ *}}=\frac{\partial J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}}=0 \tag{5}\]
Differentiating both sides with respect to \(W_{1}\) and using the chain rule according to the influence diagram in Figure 2:
\begin{table}
\begin{tabular}{|l|l|} \hline
**Symbol** & **Description** \\ \hline \(N\) & Number of samples \\ \(t,v,ts\) & Training, validation and test sets \\ \(a\) & Training set partition “A” \\ \(b\) & Training set partition “B” \\ \(D\) & Number of input features \\ \(X\in\mathbb{R}^{N\times D}\) & Input data \\ \(Y,\hat{Y}\in\mathbb{R}^{N\times 1}\) & Output data, prediction \\ \(Z\in\mathbb{R}^{N\times H}\) & Pre-activations \\ \(X_{1}\in\mathbb{R}^{N\times H}\) & Activations \\ \(H\) & Number of neurons per layer \\ \(W_{1}\in\mathbb{R}^{D\times H}\) & Weights of hidden layer \\ \(W_{2}\in\mathbb{R}^{H\times 1}\) & Weights of output layer \\ \(f_{1}:\mathbb{R}^{N\times H}\rightarrow\mathbb{R}^{N\times H}\) & Activation function \\ \(J\) & Cost function \\ \(\lambda\) & Regularization parameter \\ \(\gamma\) & Learning rate \\ \hline \end{tabular}
\end{table}
Table 1: Notation.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Subscript** & **Set A** & **Set B** \\ \hline \(k\) & \(a\) & \(b\) \\ \(j\) & \(b\) & \(a\) \\ \hline \end{tabular}
\end{table}
Table 2: Training sets notation.
\[\begin{split}&\frac{d}{dW_{1}}\left(\frac{\partial J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}}\right)=0\\ &\frac{\partial}{\partial W_{1}}\left(\frac{\partial J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}}\right)+\frac{\partial}{\partial W_{2k}^{*}}\left(\frac{\partial J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}}\right)\cdot\frac{\partial W_{2k}^{*}}{\partial W_{1}}=0\\ &\frac{\partial^{2}J_{k}(W_{1},W_{2k}^{*})}{\partial W_{1}\partial W_{2k}^{*}}+\frac{\partial^{2}J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}\partial W_{2k}^{*T}}\cdot\frac{\partial W_{2k}^{*}}{\partial W_{1}}=0\end{split}\]
Solving for the Jacobian \(\frac{\partial W_{2k}^{*}}{\partial W_{1}}\):
\[\frac{\partial W_{2k}^{*}}{\partial W_{1}}=-\left(\frac{\partial^{2}J_{k}(W_{1},W_{2k}^{*})}{\partial W_{2k}^{*}\partial W_{2k}^{*T}}\right)^{-1}\cdot\frac{\partial^{2}J_{k}(W_{1},W_{2k}^{*})}{\partial W_{1}\partial W_{2k}^{*}} \tag{7}\]
Then, with a slight abuse of notation (we use the approximation symbol to treat the Jacobian as in the scalar case even though, as commented earlier, it must be interpreted as the matrix of first-order partial derivatives):
\[\mathbf{J}_{W_{1}}W_{2k}^{*}\approx\frac{\partial W_{2k}^{*}}{\partial W_{1}} \tag{8}\]
The Jacobian \(\mathbf{J}_{W_{1}}W_{2k}^{*}\) described in Equations (7) and (8) is a fundamental ingredient in the evaluation of the gradient of the cost function \(J\) used to update the \(W_{1}\) values. To complete the training procedure, the backward propagation step must be performed. Similarly to forward propagation, backward propagation also takes place by working simultaneously on both sub-sets of the training set. This allows us to derive the expressions of the gradients necessary for the final calculation of the gradient of \(J\) with respect to \(W_{1}\):
\[\nabla_{\hat{Y}_{k}}J_{k} =\frac{1}{N_{k}}(\hat{Y}_{k}-Y_{k}) \tag{9a}\] \[\nabla_{X_{1k}}J_{k} =\nabla_{\hat{Y}_{k}}J_{k}\cdot W_{2j}^{*T}\] (9b) \[\nabla_{W_{2j}^{*}}J_{k} =X_{1k}^{T}\cdot\nabla_{\hat{Y}_{k}}J_{k}\] (9c) \[\nabla_{Z_{1k}}J_{k} =\nabla_{X_{1k}}J_{k}\circ f_{1}^{\prime}(Z_{1k}) \tag{9d}\]
For a deeper understanding of how backpropagation works, refer to Hecht-Nielsen (1987), Bottou (1991), Glorot et al. (2011).
Merging together the results obtained by solving Equations (3), (4) and (9) we derive the expression of the gradient of the cost function (2a) with respect to the weights of the first layer, \(W_{1}\), described in the following equation:
\[\nabla_{W_{1}}J= X_{b}^{T}\nabla_{Z_{1b}}J_{b}+\mathbf{J}_{W_{1}}W_{2a}^{*T}\,\nabla_{W_{2a}^{*}}J_{b}+\] \[+X_{a}^{T}\nabla_{Z_{1a}}J_{a}+\mathbf{J}_{W_{1}}W_{2b}^{*T}\,\nabla_{W_{2b}^{*}}J_{a} \tag{10a}\] \[W_{1} =W_{1}-\gamma\nabla_{W_{1}}J\] (10b) \[W_{2} =\frac{W_{2a}^{*}+W_{2b}^{*}}{2} \tag{10c}\]
Thanks to Equations (10) and (4) we can update the network parameters between two consecutive epochs. Notice the difference: \(W_{1}\) is updated through gradient descent in Equation (10b), while \(W_{2}\) is obtained as the average of the optimal \(W_{2}^{*}\) computed explicitly by looking at the two training sub-sets separately in Equation (10c). Notice that, in general, this closed-form solution for \(W_{2}\) might not be practicable (e.g., in the case of classification problems, the concatenation of non-linearities through the different layers of the network would make it impossible to formulate a least squares problem that, as in this case, can be solved analytically and efficiently). In that case, an expression similar to (10a) must be derived and \(W_{2}\) updated with gradient descent. For the present setting, however, the closed form improves the computational time. Equations (10) represent the heart of the algorithm, defining the _boosting_ step. The mixing of the two sub-sets of the training set, embedded in the gradient expression and in the optimal values for \(W_{2}\), can improve training performance, achieving the same training cost in a lower number of epochs while avoiding overfitting without the aid of regularization terms. This is critical for reducing the number of hyperparameters. With the updated weights obtained in Equations (10), the prediction is computed as follows:
\[\hat{Y}=f_{1}(X\cdot W_{1})W_{2} \tag{11}\]
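The following is a minimal PyTorch sketch of one split-boost epoch, under two simplifying assumptions: the closed-form solution (4) is computed by solving slightly damped normal equations (the `eps` term is added only for numerical stability and is not part of the formulation above), and the dependence of \(W_{2k}^{*}\) on \(W_{1}\) in (10a) is handled by automatic differentiation rather than by assembling the Jacobian of (7)-(8) explicitly. Variable names mirror Tables 1-2; the stand-in data in the usage example are not the case-study data.

```python
import torch

def lstsq_w2(X1: torch.Tensor, Y: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Closed-form W2* of Eq. (4), via damped normal equations (differentiable w.r.t. W1)."""
    H = X1.shape[1]
    return torch.linalg.solve(X1.T @ X1 + eps * torch.eye(H), X1.T @ Y)

def split_boost_epoch(W1, Xa, Ya, Xb, Yb, gamma):
    """One epoch of the split-boost update, Eqs. (2), (4) and (10)."""
    X1a, X1b = torch.relu(Xa @ W1), torch.relu(Xb @ W1)
    W2a, W2b = lstsq_w2(X1a, Ya), lstsq_w2(X1b, Yb)        # inner problems (2b)-(2c)
    # outer cost (2a): each subset is predicted with the *other* subset's W2*
    J = ((Yb - X1b @ W2a) ** 2).sum() / (2 * Xb.shape[0]) \
        + ((Ya - X1a @ W2b) ** 2).sum() / (2 * Xa.shape[0])
    (grad_W1,) = torch.autograd.grad(J, W1)                # total gradient, cf. Eq. (10a)
    with torch.no_grad():
        W1_new = (W1 - gamma * grad_W1).requires_grad_(True)   # Eq. (10b)
        W2 = (W2a + W2b) / 2                                   # Eq. (10c)
    return W1_new, W2, J.item()

# usage sketch with stand-in data
D, H = 6, 10
Xa, Ya = torch.randn(50, D), torch.randn(50, 1)
Xb, Yb = torch.randn(50, D), torch.randn(50, 1)
W1 = torch.randn(D, H, requires_grad=True)
for epoch in range(50):
    W1, W2, J = split_boost_epoch(W1, Xa, Ya, Xb, Yb, gamma=0.1)
Y_hat = torch.relu(Xa @ W1) @ W2                           # prediction, Eq. (11)
```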
### Details on training procedure
An early stopping procedure is used. The stopping condition for training is obtained by monitoring the evolution of the validation cost, as follows:
\[|J_{v}(k)-J_{v}(k-1)|<\epsilon \tag{12}\]
where \(k\) is the epoch index and \(\epsilon\) is the stop threshold. If the variation of the validation cost between two consecutive epochs is less than \(\epsilon\), training stops and the optimal number of epochs is retrieved. After the computation of the optimal number of epochs, the network is re-trained using as the new training set the union of the original training set and the validation set. Performance is evaluated on the test set.
The split-boost strategy has a learning rate varying with the number of epochs. This choice avoids oscillations in training cost (an excessively large learning rate leads to fluctuations and consequent degradation of performance). In the example reported in the next section, the following switch condition is adopted:
\[\begin{cases}\gamma=\gamma^{*}&\text{if }J_{t}(k)-J_{t}(k-1)>0\\ \gamma=\frac{\gamma^{*}}{10}&\text{otherwise}\end{cases} \tag{13}\]
where \(\gamma^{*}\) is the best learning rate, chosen after a sensitivity analysis carried out on the validation cost.
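A minimal sketch of the two heuristics above, the stopping rule (12) and the learning-rate switch (13), might look as follows; `epsilon` and `gamma_star` are placeholders for the tuned values, not the values used in the experiments.

```python
def stop_training(J_val_history, epsilon=1e-4):
    """Eq. (12): stop when the validation cost changes less than epsilon between epochs."""
    if len(J_val_history) < 2:
        return False
    return abs(J_val_history[-1] - J_val_history[-2]) < epsilon

def current_learning_rate(J_train_history, gamma_star=0.1):
    """Eq. (13): keep the full rate while the training cost increases, else shrink it."""
    if len(J_train_history) >= 2 and J_train_history[-1] - J_train_history[-2] > 0:
        return gamma_star
    return gamma_star / 10
```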
## 4 An Experimental Case Study
In this section, we show the comparison between a traditional feed-forward neural network and the split-boost neural network applied to a real-world regression problem. The code is written in Python 3.7.13. The Python library used to develop the neural networks is PyTorch Paszke and et al. (2019). Simulations run on an Intel Core i7-8750H with 6 cores, at 2.20 GHz (maximum single core frequency: 4.10 GHz), with 16 GB RAM.
Figure 2: Influence diagram describing the interaction between network layer parameters.
### Dataset description
The case study is the forecast of medical insurance charges for patients living in the U.S., given a set of clinical features. Data are open-source and provided in Lantz (2013). In Table 3, data features and targets are summarized. The goal is to predict the medical insurance charge (\(\$\)) given a set of \(D=6\) features: age, sex, BMI, number of children, smoking condition, and region of residence.
The number of people in the study is \(N=1338\). The dataset is split into training, validation, and test sets according to the proportions shown in Figure 3: the test set is \(20\%\) of the total, the validation set is \(16\%\), and the training set is \(64\%\). In the case of the split-boost neural network, the training set is further divided into two halves, \(32\%\) each.
### Networks Hyperparameters
In this section, we show the hyperparameter tuning procedure. A sensitivity analysis of the learning rate \(\gamma\) and the regularization factor \(\lambda\) is performed. In the case of the split-boost neural network, there is the advantage of not having a regularization term, which is instead replaced by the boosting procedure. The sensitivity analysis with respect to \(\gamma\) is described in the upper plot of Figure 4. The validation cost \(J_{v}=\frac{1}{2N_{v}}(Y_{v}-\hat{Y}_{v})^{2}\) associated with different choices of \(\gamma\) is shown. The best choice for both networks is \(\gamma=0.1\). In the lower plot of Figure 4 the sensitivity analysis with respect to the regularization hyperparameter \(\lambda\) is shown (only for the feed-forward neural network), evaluating the validation cost. The goal is to compare the best possible regularized feed-forward neural network with the split-boost network. The best choice is \(\lambda=0.01\). In Table 4 the network hyperparameters are summarized. The network architecture is the same for both: the same number of layers (2), neurons per layer (\(H=10\)), and activation function (RELU). The split-boost strategy does not need any regularization parameter.
### Numerical Simulations
In this section, we show numerical insights about the training procedure of the split-boost network. Figure 5 shows the trend of the training cost along the training epochs for the two networks. The split-boost network is represented by the solid red line; the feed-forward is represented by the blue lines for varying values of \(\lambda\). The solid blue line corresponds to the best \(\lambda\) identified (see Table 4). As \(\lambda\) increases, the training cost of the feed-forward network increases; for smaller values of \(\lambda\), it decreases. The split-boost training cost converges to values close to those of the feed-forward regime in a substantially lower number of epochs, \(E_{SB}^{*}=50\) epochs against \(E_{FF}^{*}=200\) epochs. This allows us to conclude that the split-boost procedure, in this case, can converge to the maximum information content extractable from the training set in a smaller number of epochs, and with an implicit regularization effect.
Notice that the selection of the best number of epochs (implementing the early stopping strategy) is performed by looking at the validation cost, as discussed in Section 3.2, for both networks. After the best epoch is retrieved, both networks are re-trained considering as training set the union of the previous training and validation sets.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Hyperparameter** & **FF** & **Split-Boost** \\ \hline \(L\) (Layers) & 2 & 2 \\ \(H\) & 10 & 10 \\ \(f_{1}\) & RELU & RELU \\ \(\gamma\) & 0.1 & 0.1 \\ \(\lambda\) & 0.01 & \(-\) \\ \hline \end{tabular}
\end{table}
Table 4: Networks Hyperparameters.
Figure 4: Hyperparameter tuning: sensitivity analysis with respect to the learning rate \(\gamma\) (upper panel); sensitivity analysis with respect to the regularization parameter \(\lambda\) (lower plot).
Figure 5: Training cost: comparison between _split-boost_ and _feed-forward_ network for different values of the regularization hyper-parameter \(\lambda\). Solid line is the best configuration of \(\lambda\).
\begin{table}
\begin{tabular}{|l|l|} \hline
**Features** & **Description** \\ \hline Age & Age of primary beneficiary \\ Sex & Insurance contractor gender \\ BMI & Body mass index \\ Children & Number of children \\ Smoker & Smoker condition \\ Region & Residential area in the U.S. \\ \hline
**Target** & **Description** \\ \hline Charge & Medical insurance bill [\$] \\ \hline \end{tabular}
\end{table}
Table 3: Medical Insurance Dataset: features and target.
In Figure 6, the computational training time required by the 2 strategies is shown over 200 epochs. The split-boost strategy shows an average per-epoch training time of \(T_{SB}=1.679\,s\), while the feed-forward requires \(T_{FF}=1.179\,s\): on average, each epoch of the split-boost takes 42% more time than the feed-forward. However, considering the training convergence shown in Figure 5, which highlights that the split-boost training cost reaches its regime in \(E^{*}_{SB}=50\) epochs against the \(E^{*}_{FF}=200\) of the feed-forward, the average computational requirements become:
\[E^{*}_{SB}\cdot T_{SB}=83.95\,s\leq E^{*}_{FF}\cdot T_{FF}=235.8\,s. \tag{14}\]
In Figure 7, the re-training procedure of the split-boost neural network is shown. On the left, the training and test costs are shown. Compared with Figure 5, the number of epochs at which the training cost reaches its regime is higher. This is because the training set is larger (it also includes the previous validation set). In the middle, the plot of the regression prediction versus the target values for each person in the training set is shown. In the right plot, the regression prediction versus the target values for the test data is shown. The middle and right panels show that the neural network can successfully map the non-linear relationship between the features collected in Table 3 and the regression target. Some patients escape the mapping: despite having features similar to those of the remaining patients, a higher medical insurance cost is attributed to them by the experts.
To evaluate the performance of the split-boost strategy with respect to the best regularized feed-forward neural network with \(\lambda=0.01\), derived from the sensitivity analysis carried out on the regularizing term for the traditional FFNN in Fig. 4, 50 Monte Carlo randomizations of the dataset were performed, extracting 50 different combinations of training, validation, and test sets. The results obtained on the test set are collected in the boxplots in Figure 8. The test cost \(J_{ts}=\frac{1}{2N_{ts}}(Y_{ts}-\hat{Y}_{ts})^{2}\) obtained after the Monte Carlo randomizations is statistically lower in the case of the split-boost strategy: the split-boost test cost is lower in 72% of the cases. This shows, with statistical evidence, that the split-boost neural network also outperforms the feed-forward neural network in terms of prediction accuracy.
## 5 Conclusions
In this article, we have shown an alternative training approach for feed-forward neural networks. We have called this strategy "split-boost" to recall the idea that dividing (_split_) the dataset and combining the subsets _might_ lead to an improvement (_boost_) in performance.
In the considered real-world case study, the "split-boost" approach turns out to lead to higher predictive performance than traditional training and to be computationally advantageous, since in the training phase it converges within a smaller number of epochs, although the computational time per epoch is greater. The proposed strategy also implicitly counteracts overfitting.
Future activities will focus on an extensive validation of the proposed training strategy, as well as on its generalization and extension to multi-layer networks.
|
2309.05516 | Optimize Weight Rounding via Signed Gradient Descent for the
Quantization of LLMs | Large Language Models (LLMs) have demonstrated exceptional proficiency in
language-related tasks, but their deployment poses significant challenges due
to substantial memory and storage requirements. Weight-only quantization has
emerged as a promising solution, significantly reducing memory and storage
needs without sacrificing too much performance. In this study, we introduce
SignRound, a method that leverages signed gradient descent (SignSGD) to
optimize rounding values and weight clipping in just 200 steps. SignRound
integrates the advantages of Quantization-Aware Training (QAT) and
Post-Training Quantization (PTQ), delivering exceptional results across 2 to 4
bits while minimizing tuning costs and avoiding additional inference overhead.
For example, SignRound achieved absolute average accuracy improvements ranging
from 6.91% to 33.22% at 2bits, as measured by the average zero-shot accuracy
across 11 tasks. It also demonstrates strong generalization in recent models,
achieving near-lossless 4-bit quantization in most scenarios. The source code
is publicly available at https://github.com/intel/auto-round. | Wenhua Cheng, Weiwei Zhang, Haihao Shen, Yiyang Cai, Xin He, Kaokao Lv, Yi Liu | 2023-09-11T14:58:23Z | http://arxiv.org/abs/2309.05516v5 | # Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs
###### Abstract
Large Language Models (LLMs) have proven their exceptional capabilities in performing language-related tasks. However, their deployment poses significant challenges due to their considerable memory and storage requirements. In response to this issue, weight-only quantization, particularly 3 and 4-bit weight-only quantization, has emerged as one of the most viable solutions. As the number of bits decreases, the quantization grid broadens, thus emphasizing the importance of up and down rounding. While previous studies have demonstrated that fine-tuning up and down rounding with the addition of perturbations can enhance accuracy in some scenarios, our study is driven by the precise and limited boundary of these perturbations, where only the threshold for altering the rounding value is of significance. Consequently, we propose a concise and highly effective approach for optimizing the weight rounding task. Our method, named SignRound, involves lightweight block-wise tuning using signed gradient descent, enabling us to achieve outstanding results within 400 steps. SignRound competes impressively against recent methods without introducing additional inference overhead. The source code will be publicly available at [https://github.com/intel/neural-compressor](https://github.com/intel/neural-compressor) soon.
## 1 Introduction
Large language models (LLMs) have demonstrated exceptional proficiency on language-related tasks (OpenAI; Touvron et al., 2023). Nevertheless, the deployment of LLMs presents notable hurdles due to their extensive memory and storage needs. Moreover, the computational demands of these models lead to challenges for real-time applications. Consequently, it becomes imperative to explore techniques like quantization to facilitate the efficient deployment of LLMs.
Quantization techniques can be broadly classified into two categories: quantization-aware training (QAT) (Esser et al., 2019; Zhuang et al., 2021; Lee et al., 2021; Liu et al., 2023) and post-training quantization (PTQ) (Nagel et al., 2019; Xiao et al., 2022; Frantar et al., 2022; Nagel et al., 2020). QAT involves training the model with quantization in mind. During QAT, the model is trained using simulated lower-precision representations, allowing it to learn and adapt to the effects of quantization. This approach often yields better accuracy compared to PTQ. However, QAT comes with certain drawbacks, including increased training complexity, longer training times, and the need to tune hyperparameters. Applying QAT to LLMs can be particularly costly, despite recent efforts (Hu et al., 2021; Dettmers et al., 2023) to improve the efficiency of fine-tuning LLMs. In contrast, PTQ directly quantizes the model without any simulated training or fine-tuning. While PTQ is a concise approach, it is susceptible to significant accuracy drops. This highlights the need for further advancements in PTQ methods to enhance their accuracy preservation capabilities.
Two types of tensors can be quantized: activations and weights. Weight-only quantization has gained prominence in recent times as it offers a favorable tradeoff for LLMs. Quantizing activations for LLMs can be challenging (Wei et al., 2022; Xiao et al., 2023; Bondarenko et al., 2023), making weight-only quantization a more practical choice. Additionally, the primary bottleneck in generating new tokens for LLMs often lies in memory bandwidth (Kim et al., 2023), further emphasizing the significance of weight-only quantization. In this work, we focus only on weight-only quantization.
In order to quantize the weights, a rounding operation is necessary, with rounding-to-nearest (RTN) being the predominant method. RTN quantizes each element independently by simply rounding it to the nearest integer. However, RTN fails to consider the correlations among weights, as well as between weights and activations. The potential of an advanced rounding strategy to improve accuracy was initially demonstrated by Nagel et al. (Nagel et al., 2020). They addressed the rounding task by formulating it as a quadratic unconstrained binary optimization problem and approximated the task loss by employing a Taylor series expansion. However, relying exclusively on the second-order term may not produce accurate results. This is because rounding can introduce considerable weight modifications that may make other order terms significant and non-negligible.
We prefer the signed gradient descent method to effectively tackle the issue of sub-optimal rounding solutions. This approach is inspired by the well-defined boundaries of the solution space, which are confined to the range of [-0.5, 0.5], where only the threshold for altering the rounding value is of significance. Figure 1 provides an overview of our method, SignRound. It utilizes signed gradient descent to fine-tune the up and down rounding through block-wise output reconstruction, resulting in enhanced flexibility and faster convergence. Our contributions are primarily threefold:
* We introduce a succinct and potent method for optimizing the weight-rounding task. Our approach utilizes a minimal amount of unlabeled data and executes quantization in a block-wise fashion. Moreover, it is worth noting that our method does not introduce any additional overhead during inference, further enhancing its general practicality.
* Our findings demonstrate that a mere alteration of approximately 5% of the rounding values can significantly enhance the performance of some quantization models.
* Our empirical results exhibit substantial performance enhancements over the established baseline of RTN, and our method contends favorably against recent techniques.
## 2 Related Work
**Quantization Aware Training.** QAT methods have gained widespread popularity in model compression, as they enable the fine-tuning process, often leading to superior accuracy compared to the PTQ method. In their work, (Esser et al., 2019) proposed a novel approach that estimates and scales the task loss gradient at each weight and activation layer's quantizer step size, allowing for joint learning with other network parameters. (Zhuang et al., 2021) put forward a progressive quantization scheme that involves quantizing activations after weights. Additionally, CPQ (Lee et al., 2021) effectively identified the optimal quantization grids while naturally encouraging the underlying full-precision weights to gather around those quantization grids cohesively during training. While QAT methods are popular in relatively small-scale models, their application in LLMs is limited due to the high computational cost associated with training or fine-tuning.
**Post-training Quantization (PTQ).** PTQ methods simplify the quantization process without the need for additional training. (Nagel et al., 2019) focused on minimizing quantization error through weight equalization and bias correction techniques. (Liu et al., 2021) specifically addressed the quantization of vision transformers, introducing a ranking loss to preserve the relative order of self-attention results after quantization and exploring a mixed-precision quantization scheme. (Frantar and Alistarh, 2022) leveraged Optimal Brain Surgeon (Hassibi et al., 1993) to tune weights during model compression. Both Hawq (Yao et al., 2021) and HAQ (Wang et al., 2019) aimed to identify important layers and maintain higher precision for them. Given its low resource requirement, PTQ is particularly suitable for the quantization of Large Language Models (LLMs). We will next focus on the quantization methods designed for LLMs, most of which fall under the category of PTQ.
**Large Language Models Quantization.** Significant advancements have been made in addressing the pressing demand for quantizing large language models (LLMs). LLM.int8() (Dettmers et al., 2022) introduced a mixed precision approach to preserve essential channels in high precision. Zero-QuantV2 (Yao et al., 2023) employed low-rank matrices to enhance model quality recovery. RPTQ (Yuan et al., 2023) mitigated the impact of range differences between channels by rearranging the channels and quantizing them in clusters. Other methods, such as SPIQ (Yvinec et al., 2023), SmoothQuant (Xiao et al., 2022), and Outlier Suppression+ (Wei et al., 2023), utilized handcrafted equivalent transformations to mitigate quantization errors. While these approaches are effective,
their applicability is limited due to the performance overhead involved during inference, because on certain model architectures there is no way to fuse the transformation scale into the model itself. LLM-QAT (Liu et al., 2023) employs QAT to enhance the performance of W4A8. In the context of weight-only quantization, GPTQ (Frantar et al., 2022) optimized weights using the Optimal Brain Surgeon (Hassibi et al., 1993) technique, achieving low-bit quantization on LLMs with minimal computational overhead. AWQ (Lin et al., 2023) followed the equivalent transformation approach with additional tuning in a constrained space, and has similar limitations to SmoothQuant (Xiao et al., 2022). SqueezeLLM (Kim et al., 2023) employed sensitivity-based non-uniform quantization and dense-and-sparse decomposition to achieve lossless compression to ultra-low precision. While recent advancements in LLM quantization have made significant progress, there is still room for improvement in achieving minimal quantization loss without introducing inference overhead.
**Rounding Methods.** Adaptive Rounding (Nagel et al., 2020) has already showcased the potential of an advanced rounding strategy to enhance accuracy (Li et al., 2021; Wei et al., 2022). They formulated the rounding task as a quadratic unconstrained binary optimization problem by approximating the task loss through a Taylor series expansion. However, considering only the second-order term may not yield accurate results. This is because the rounding value gets multiplied by a scaling coefficient during de-quantization, potentially introducing significant weight changes that make other order terms non-negligible. FlexRound (Lee et al., 2023) introduces a more flexible approach to rounding by incorporating element-wise division. This allows for simultaneous learning of a shared quantization grid size and individual scales for each pre-trained weight. However, it is not easily scalable to LLMs due to the need for specialized hyperparameters for each specific model and task. AQuant (Li et al., 2022) introduced a dynamic approach where the border becomes a function dependent on the activation value to reduce the quantization error of the activation. We specifically concentrate on the up and down rounding task for weight quantization in this work.
**Signed Gradient Descent.** Signed gradient descent is not commonly utilized and is typically applied in specific scenarios, such as reducing communication costs. This is because the signed gradient carries significantly less information than the original gradient. Recent studies have shed light on the advantages of sign-based methods over gradient descent under certain conditions. Safaryan et al. (Safaryan & Richtarik, 2021) found that sign-based methods are preferable when the Hessian matrix is concentrated on its diagonal and the maximal eigenvalue is much larger than the average eigenvalue. Li et al. (Li et al., 2023) investigated a variant of sign-based gradient descent that exhibits faster convergence. Additionally, Safaryan et al. (Safaryan & Richtarik, 2021) proposed a stochastic sign descent with momentum, which converges under the standard bounded variance assumption with the optimal asymptotic rate. These findings contribute to a better understanding of the potential benefits and applications of signed gradient descent methods.
## 3 Methodology
We provide an overview of quantization before diving into the details of our approach. To quantize and de-quantize the weights, the operation shown in Eq. 1 is used (disregarding the zero point for simplicity).
\[\widetilde{W}=s*clip(\left\lfloor\frac{W}{s}\right\rceil,n,m),n,m\in\mathbb{N} \tag{1}\]
where \(s\) is the quantization scale, which is a positive scalar value. However, it is important to mention that our method can easily be extended to cases where \(s\) is a vector or tensor. The rounding operation \(\left\lfloor\cdot\right\rceil\) is typically performed using the RTN method. While RTN is a concise approach, it quantizes each element independently, thereby losing the ability to model the correlation among different weights or activations.
To introduce more flexibility into the rounding operation, a tensor \(V\) with the same shape as \(W\) is introduced. Each element of \(V\) falls within the range \([-B,B]\), in which \(B\) is set to 0.5 in all of our experiments to ensure that the changes made only impact the rounding value.
\[\widetilde{W}=s*clip(\left\lfloor\frac{W}{s}+V\right\rceil,n,m),n,m\in\mathbb{N} \tag{2}\]
This adjustment allows for a more adaptable and context-aware quantization process. If we try to reconstruct the output of layers, the loss could be formulated as
\[L=||WX-\widetilde{W}X||_{F}^{2} \tag{3}\]
where \(X\) is the input of the layer and \(||\cdot||_{F}\) denotes the Frobenius norm. Then the final optimization task is described as the following
\[\operatorname*{arg\,\mathit{min}}_{V}||WX-\widetilde{W}X||_{F}^{2} \tag{4}\]
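To fix ideas, the following is a minimal PyTorch sketch of the quantize-dequantize step of Eq. (2) and the layer-wise reconstruction loss of Eq. (3); setting V to zero recovers the RTN baseline of Eq. (1). The simple symmetric per-tensor scale and the tensor shapes are illustrative assumptions only, whereas the experiments use asymmetric per-row quantization.

```python
import torch

def qdq_weight(W: torch.Tensor, V: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    """Quantize-dequantize W with rounding offset V (cf. Eq. 2); zero point omitted."""
    n, m = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1   # integer grid [n, m]
    s = W.abs().max() / m                                    # illustrative per-tensor scale
    return s * torch.clamp(torch.round(W / s + V), n, m)

def reconstruction_loss(W: torch.Tensor, V: torch.Tensor, X: torch.Tensor) -> torch.Tensor:
    """Eq. (3): squared Frobenius norm of the output difference of one layer."""
    return ((W @ X - qdq_weight(W, V) @ X) ** 2).sum()
```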
### SignRound
Since \(V\) has a clear boundary, i.e. \([-0.5,0.5]\), and only the threshold for altering the rounding value is of significance, we prefer scaled signed gradient descent instead of normal gradient descent to optimize this task. Figure 1 shows an illustration of our method. More precisely, we follow the below optimization to approach the sub-optimal solution of Eq. 4.
\[\begin{split}& V_{t+1}=V_{t}-lr*sign(\frac{\partial L}{\partial V })\\ &\mathrm{s.t.}|\sum_{t}lr*sign(\frac{\partial L}{\partial V})| \leq B\end{split} \tag{5}\]
where \(t\) is the optimization step, \(lr\) is the learning rate, \(|\cdot|\) is the absolute value operation, and \(B\) is the boundary we use, which is set to 0.5 in all our experiments.
Further, by employing the straight-through estimator (STE) (Bengio et al., 2013), it can easily be demonstrated that \(sign(\frac{\partial L}{\partial V})=sign(\frac{\partial L}{\partial W})\) in Eq. 5, as follows, since the elements of \(s\) are all positive.
\[\frac{\partial L}{\partial W}=-2(WX-\widetilde{W}X)X^{T} \tag{6}\]
\[\frac{\partial L}{\partial V}=-2s(WX-\widetilde{W}X)X^{T} \tag{7}\]
So our optimization could be simplified as
\[\begin{split}& V_{t+1}=V_{t}-lr*sign(\frac{\partial L}{\partial W })\\ &\mathrm{s.t.}|\sum_{t}lr*sign(\frac{\partial L}{\partial W})| \leq B\end{split} \tag{8}\]
Figure 1: An illustration of SignRound. Unlike the direct rounding in RTN, SignRound performs signed gradient descent to fine-tune the up and down rounding through block-wise output reconstruction. After lightweight forward and backward steps, WINT4 has been well optimized towards the minimal loss, therefore ready for the final inference deployment. Note that Quant and Dequant are two standard operations for quantization and dequantization respectively.
Moreover, Eq. 3 averages the loss over all elements, which presumes that each one contributes equally to the network; this is generally not true. To alleviate this issue, we optimize the rounding task block-wise. To clarify, in our context we use the term 'layer' to refer to a linear/convolution layer, while 'block' denotes a transformer block that typically consists of several linear layers.
Algorithm 1 presents more details of SignRound; a simplified sketch of the tuning loop is given below.
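Below is a minimal, illustrative Python sketch of the block-wise tuning with scaled signed gradient descent on V (Eq. 8), using a straight-through estimator for the rounding. The helper names, the symmetric per-row scale, the way calibration inputs and reference outputs are obtained, and the assumption that the block returns a single tensor are all simplifications for illustration; they do not reproduce every detail of the actual implementation (which, e.g., uses asymmetric quantization with a zero point).

```python
import torch
from torch.func import functional_call

def ste_round(x: torch.Tensor) -> torch.Tensor:
    """Round with a straight-through gradient (identity in the backward pass)."""
    return (x.round() - x).detach() + x

def qdq(W: torch.Tensor, V: torch.Tensor, num_bits: int = 4) -> torch.Tensor:
    """Per-row quantize-dequantize of a weight matrix with rounding offset V (cf. Eq. 2)."""
    maxq = 2 ** (num_bits - 1) - 1
    s = W.abs().amax(dim=1, keepdim=True) / maxq
    return s * torch.clamp(ste_round(W / s + V), -maxq - 1, maxq)

def signround_block(block, calib_inputs, steps=400, lr=2.5e-3, bound=0.5):
    """Tune the rounding offsets V of the 2-D weights of one transformer block (Eq. 8)."""
    params = dict(block.named_parameters())
    names = [n for n, p in params.items() if p.dim() == 2]     # linear weights
    weights = {n: params[n].detach() for n in names}
    Vs = {n: torch.zeros_like(w, requires_grad=True) for n, w in weights.items()}

    with torch.no_grad():                                      # FP reference outputs
        targets = [block(x) for x in calib_inputs]

    for step in range(steps):
        cur_lr = lr * (1.0 - step / steps)                     # linear decay
        qweights = {n: qdq(weights[n], Vs[n]) for n in names}
        loss = sum(((functional_call(block, qweights, (x,)) - y) ** 2).sum()
                   for x, y in zip(calib_inputs, targets))
        grads = torch.autograd.grad(loss, list(Vs.values()))
        with torch.no_grad():
            for V, g in zip(Vs.values(), grads):
                V -= cur_lr * torch.sign(g)                    # signed gradient step
                V.clamp_(-bound, bound)                        # keep V inside [-B, B]
    return Vs
```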
## 4 Experiments
In this section, we conduct a comprehensive evaluation of SignRound from various perspectives. Firstly, we provide a brief overview of the LLM architectures and tasks that are included in our evaluation. Secondly, we present a detailed comparison between our method and some other existing approaches, highlighting the unique features and advantages of SignRound. Thirdly, we conduct additional experiments to further demonstrate the validity of our choices, assess the sensitivity of hyperparameters, and explore other relevant factors. Finally, the runtime is reported in Appendix D for reference.
### Experimental Settings
**Evaluation and Datasets.** We make assessments on several language tasks to satisfy the task-agnostic setting. Specifically, we report average accuracy results on four common sense reasoning tasks including HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2021), PIQA (Bisk et al., 2020) and LAMBADA (Paperno et al., 2016). Additionally, we benchmarked our models on MMLU (7), which encompasses 57 tasks spanning STEM, humanities, social science, and more. Evaluation for all these tasks was performed using the lm-eval-harness (Gao et al., 2021). Furthermore, we complement our evaluation with perplexity (ppl) analysis on Wikitext2 (Merity et al., 2016) and C4 (Raffel et al., 2020), by following the source code 1 of GPTQ.
Footnote 1: [https://github.com/IST-DASLab/gptq](https://github.com/IST-DASLab/gptq)
**Quantization Configurations.** In line with the approach taken in GPTQ (Frantar et al., 2022), we specifically concentrate on weight-only quantization, targeting the linear layers within transformer blocks. Other layers, such as the embedding layer and typically the last layer like lm-head, are excluded from the quantization process. We initially intended to utilize the pile (7) dataset for calibration, following AWQ (Lin et al., 2023) and SmoothQuant (Xiao et al., 2022). However, due to its large size, we have opted to use the readily available pile-10k dataset 2, which consists of the
first 10k samples from pile, for both GPTQ and our method. We employ standard uniform per-row asymmetric quantization on the min-max grid. Our evaluation primarily focuses on W4, W4G128, and W3G128, where W4 indicates quantizing weights with 4 bits and G represents finer-granularity grouping as described in (Park et al., 2022; Frantar et al., 2022).
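For clarity, a small sketch of what standard uniform per-row asymmetric quantization on the min-max grid (with an optional group size, as in W4G128/W3G128) might look like is given below; this is a generic illustration rather than the exact kernel used in the experiments.

```python
import torch

def quantize_per_row_asym(W: torch.Tensor, num_bits: int = 4, group_size: int = -1):
    """Uniform asymmetric min-max quantization, per row (or per group of columns)."""
    shape = W.shape
    if group_size > 0:                           # e.g. 128 for W4G128 / W3G128
        W = W.reshape(-1, group_size)
    maxq = 2 ** num_bits - 1
    wmin, wmax = W.amin(dim=1, keepdim=True), W.amax(dim=1, keepdim=True)
    scale = (wmax - wmin).clamp(min=1e-9) / maxq
    zero = torch.round(-wmin / scale)
    W_q = torch.clamp(torch.round(W / scale) + zero, 0, maxq)
    return (scale * (W_q - zero)).reshape(shape)  # de-quantized weights
```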
**Large Language Models.** Our experimental evaluation encompasses a range of widely adopted LLM architectures, such as LLaMAs (Touvron et al., 2023a), LLaMAs v2 (Touvron et al., 2023b), BLOOMs (Scao et al., 2022), and OPTs (Zhang et al., 2022). We cover a wide range of LLM parameters, ranging from millions to billions, to ensure comprehensive coverage and analysis.
SignRound Hyperparameters.We selected 512 samples randomly from pile-10k and truncated each sample to a sequence length of 512. The tuning process involves adjusting each block for 400 steps using a learning rate of 2.5e-3, a batch size of 8, and employing a linear learning rate decay. We set the value of B in Eq. 8 to 0.5. Besides, we adopted automatic mixed precision (AMP) to accelerate the tuning. It's worth noting that adjusting the sequence length to 2048 yielded improvements in numerous scenarios. However, we did not adopt this as the default setting due to the associated runtime overhead. For models \(\geq\) 30B, we made configuration adjustments to strike a balance between runtime and performance. Specifically, we reduced the sample count to 256, shortened the sequence length to 256, and disabled AMP.
### Comparing With Other Methods
We conducted a comprehensive benchmarking of our results against RTN and GPTQ (Frantar et al., 2022). However, it is important to highlight that act-order was not enabled in GPTQ due to the kernel overhead it introduces (Lin et al., 2023), although it has the potential to improve accuracy for certain models. When evaluating perplexity (ppl), we prioritize reporting the ppl on C4 dataset as our primary focus, taking into consideration the potential occurrence of NaN values when assessing perplexity for Wikitext2 and ptb datasets, both for SignRound and GPTQ. Furthermore, we conducted a limited and non-rigorous comparison between our approach and AWQ Lin et al. (2023) in Appendix A.1.
We begin by presenting the average accuracy results for the HellaSwag, WinoGrand, PIQA, and LAMBADA tasks across LLaMA, OPT, and BLOOM models with a size below 13B. These results
\begin{table}
\begin{tabular}{c|c|c c c c|c c c c c} \hline \multirow{2}{*}{nbits} & \multirow{2}{*}{methods} & \multicolumn{4}{c|}{LLaMA} & \multicolumn{4}{c}{OPT} \\ & & 7b & 13b & 7bv2 & 13bv2 & 125m & 1.3b & 2.7b & 6.7b & 13b \\ \hline
16 & FP16 & 68.80 & 71.14 & 69.02 & 71.20 & 45.09 & 57.66 & 61.04 & 64.92 & 65.49 \\ \hline \multirow{3}{*}{W4} & RTN & 67.38 & 68.82 & 66.98 & **70.17** & 39.41 & 47.22 & 58.61 & 62.99 & 64.08 \\ & GPTQ & 64.70 & 70.00 & 66.89 & 69.24 & 43.58 & 56.15 & 59.92 & 63.09 & 64.83 \\ & Ours & **68.05** & **70.58** & **67.74** & 70.03 & **44.13** & **56.17** & **60.58** & **64.34** & **65.05** \\ \hline \multirow{3}{*}{W4G128} & RTN & 67.85 & 70.84 & 68.32 & 70.72 & **45.27** & 56.47 & 60.70 & 64.03 & 64.84 \\ & GPTQ & 66.32 & 70.92 & **68.90** & 70.68 & 42.88 & 56.99 & **61.23** & 64.75 & 65.37 \\ & Ours & **68.09** & **71.43** & 68.65 & **70.81** & 44.23 & **57.30** & 60.86 & **64.76** & **65.67** \\ \hline \multirow{3}{*}{W3G128} & RTN & 64.94 & 67.70 & 65.92 & 68.70 & 39.11 & 42.61 & 36.99 & 56.09 & 49.56 \\ & GPTQ & 58.29 & 68.73 & 65.51 & 68.73 & 39.78 & 54.43 & 58.47 & **62.98** & **64.68** \\ \cline{1-1} & Ours & **66.62** & **69.59** & **66.88** & **69.70** & **43.31** & **55.46** & **59.12** & 53.42 & 63.61 \\ \hline \end{tabular}
\end{table}
Table 1: Average % accuracy(\(\uparrow\)) of HellaSwag, WinoGrand, PIQA and LAMBADA for LLaMA & OPT.
\begin{table}
\begin{tabular}{c|c c c c|c c c|c c c c} \hline & \multicolumn{4}{c|}{W4} & \multicolumn{4}{c|}{W4G128} & \multicolumn{4}{c}{W3G128} \\ \hline Size & 560m & 1b7 & 3b & 7b1 & 560m & 1b7 & 3b & 7b1 & 560m & 1b7 & 3b & 7b1 \\ \hline FP16 & 45.50 & 52.31 & 55.48 & 60.22 & 45.50 & 52.31 & 55.48 & 60.22 & 45.50 & 52.31 & 55.48 & 60.22 \\ \hline RTN & 43.10 & 49.97 & 53.16 & 57.73 & 44.28 & **52.08** & 54.86 & 59.31 & 40.83 & 47.98 & 52.51 & 57.59 \\ GPTQ & 43.95 & 50.91 & **54.65** & 58.27 & 44.79 & **52.08** & **55.68** & 59.59 & 42.74 & 48.81 & 53.41 & 58.12 \\ Ours & **45.00** & **51.47** & 54.63 & **59.52** & **45.40** & 51.85 & 55.40 & **59.83** & **44.08** & **50.52** & **53.64** & **58.69** \\ \hline \end{tabular}
\end{table}
Table 2: Average % accuracy(\(\uparrow\)) of HellaSwag, WinoGrand, PIQA and LAMBADA for BLOOM.
are shown in Table 1 and 2. In conclusion, our method outperforms RTN in 36 out of 39 scenarios, showcasing its effectiveness. Additionally, when comparing our approach to GPTQ, we surpass it in 32 out of 39 scenarios, further highlighting the strengths of our method. While our method showcases overall effectiveness, it is important to acknowledge the presence of outliers, such as OPT6.7B at W3G128. Although the root cause for this has not been identified yet, it could be mitigated by fine-tuning hyperparameters, as discussed in the following sections. For detailed results of LLaMA7B, LLaMA13B, LLAMA7B-V2, and LLAMA13B-V2, please refer to Appendix E. The results in Appendix E also highlight that changing the sequence length to 2048 could bring noticeable improvement in many scenarios.
We then present the perplexity (ppl) results for C4 in Table 3, along with the detailed results for Wikitext2 in Appendix A.2. In conclusion, we achieve better or comparable performance in 9 out of 12 models. In certain cases where the results may not be optimal, we can still fine-tune the hyperparameters to achieve better results, as demonstrated in the subsequent sections.
Next, we present a comprehensive breakdown of the accuracies achieved on MMLU for LLaMA-7B and LLaMA-7B-V2 in Table 4. By analyzing the average accuracies, we observe that SignRound outperforms RTN and GPTQ in 4 out of the 6 scenarios when the best model-wise setting is applied.
We also provide the results for models with a capacity of 30B or greater at W3G128 in Table 5 and W4 in Appendix A.3. Additionally, we discovered that recovering the sequence length to 512 of the calibration dataset yielded improvements in certain scenarios, and thus we include these results. In summary, our approach achieves comparable performance to GPTQ for the given accuracy task. However, we slightly lag behind GPTQ in terms of ppl tasks.
and disable AMP. Based on the below results, block-wise tuning outperformed layer-wise tuning in the majority of scenarios.
### The Analysis of Hyperparameter Sensitivity
We conducted a hyperparameter sensitivity analysis, the results of which are summarized in Table 7. In the "steps100" configuration, we used 100 steps and a learning rate of 1e-2. In the "lr4e-3" configuration, we set the learning rate to 4e-3. We also changed the sequence length of the calibration dataset from 512 to 2048, denoted by "seq2048". Please note that all other hyperparameters not mentioned in each configuration were kept the same as the default configurations, as detailed in Section 4.1. Overall, our method exhibits robustness to hyperparameters in common sense reasoning tasks, with the exception of the perplexity of LLaMA-7b-v2. However, we did discover that certain hyperparameters, such as the sequence length of the calibration dataset, can significantly impact performance in some scenarios, as demonstrated in Tables 4 and 5.
### The Analysis of Gradients and Their Effects on Rounding
In this analysis, we examine the distribution of the magnitude of \(V\) in Eq. 2 and its impact on rounding values across models of approximately 7 billion parameters at W4. The visual representations of these
\begin{table}
\begin{tabular}{c|c c c c|c c c c} \hline & \multicolumn{4}{c|}{W4} & \multicolumn{4}{c}{W3G128} \\ \hline Size & 7b & 7bv2 & 6.7b & 7b1 & 7b & 7bv2 & 6.7b & 7b1 \\ \hline layer-acc-seq256 & 67.50 & 67.78 & 63.46 & 58.72 & 65.96 & 66.09 & **61.60** & 58.24 \\ block-acc-seq256 & **67.64** & **67.96** & **64.55** & **59.08** & **66.31** & **66.63** & 57.76 & **58.34** \\ \hline layer-c4-ppl-seq256 & 8.02 & **7.92** & 13.44 & 15.73 & 8.81 & **8.69** & **16.83** & 16.15 \\ block-c4-ppl-seq256 & **7.81** & 8.19 & **13.10** & **15.71** & **8.34** & 10.84 & 25.44 & **16.05** \\ \hline \end{tabular}
\end{table}
Table 6: Comparing block-wise and layer-wise tuning for around 7B models; the models LLaMA7b, LLaMA7bv2, OPT6.7b, and BLOOM7b1 are denoted by 7b, 7bv2, 6.7b, and 7b1 respectively. The accuracy is the % average accuracy(\(\uparrow\)) of HellaSwag, WinoGrand, PIQA and LAMBADA. Perplexity (PPL) (\(\downarrow\)) is evaluated using the C4 dataset.
\begin{table}
\begin{tabular}{c|c c|c c|c c|c c} \hline & \multicolumn{4}{c|}{Accuracy} & \multicolumn{4}{c}{PPL on C4} \\ \hline Type & \multicolumn{2}{c|}{LLaMA} & \multicolumn{2}{c|}{OPT} & \multicolumn{2}{c|}{LLaMA} & \multicolumn{2}{c}{OPT} \\ \hline Size & 30b & 65b & 30b & 66b & 30b & 65b & 30b & 66b \\ \hline FP16 & 73.46 & 75.48 & 67.87 & 69.54 & 6.13 & 5.98 & 11.46 & 10.99 \\ \hline RTN & 72.17 & 73.69 & 62.83 & 38.00 & 6.85 & 6.52 & 30.81 & 285.41 \\ GPTQ & 72.09 & **73.97** & **66.76** & 67.87 & 6.80 & 6.52 & **11.74** & **11.87** \\ Ours-seq256 & **72.45** & 73.71 & 66.51 & **68.00** & 6.83 & **6.52** & 13.00 & 13.34 \\ Ours-seq512 & 71.95 & 73.78 & 66.70 & 67.26 & **6.79** & 6.53 & 12.50 & 13.97 \\ \hline \end{tabular}
\end{table}
Table 5: Average % accuracy(\(\uparrow\)) of HellaSwag, WinoGrand, PIQA and LAMBADA and C4 ppl(\(\downarrow\)) for LLaMA & OPT with \(\geq\) 30B at W3G128. ”Ours-seq512” indicates that we have modified the sequence length of the calibration dataset from 256 to 512.
distributions are provided in Appendix B. Our investigation reveals that the majority of V values are concentrated within the range of [-0.3, 0.3]. Additionally, we observe an interesting pattern in the distribution of V across different layers. The middle layers exhibit a more tightly clustered distribution compared to the other layers. This observation aligns with the common understanding that the head and tail layers tend to be more sensitive to compression, while the middle layers are relatively more robust.
Figure 2 illustrates the impact of the rounding value introduced by the \(V\) in Eq. 2 for models around 7B at W4. The red line represents "up rounding", indicating that while RTN rounds the value to the floor, SignRound changes it to the ceiling. Conversely, the green line represents "down rounding" indicating that while RTN rounds the value to the ceiling, SignRound changes it to the floor. It is worth noting that SignRound modifies only a small percentage of weight rounding values for each of the four models, namely 5.27%, 5.29%, 4.14%, and 4.10%.
We were also intrigued by the possible correlation between rounding and activation, as previous research has shown that keeping only 0.1%-1% of the channels corresponding to larger activations can significantly improve the quantized performance in AWQ (Lin et al., 2023). We show the results in Appendix C.
## 5 Conclusions and Limitations
In this paper, we present a highly effective and concise approach to optimize the weight rounding task. Our method, SignRound, leverages lightweight block-wise tuning using signed gradient descent, achieving remarkable results within a mere 400 steps. Extensive experiments demonstrate the superior performance of our approach. As part of our future work, we plan to apply our approach to more diverse LLM models (e.g., Code LLaMA (Roziere et al., 2023), LLaMA v2 Chat (Touvron et al., 2023b)), and contribute our recipes and implementations to the open source community. On the other hand, although our method is generally effective, there are a few outliers in certain scenarios, where we plan to mitigate the issue by fine-tuning the hyperparameters.
Figure 2: The impact of the rounding value introduced by the \(V\) in Eq. 2 |
2309.05270 | CONFLATOR: Incorporating Switching Point based Rotatory Positional
Encodings for Code-Mixed Language Modeling | The mixing of two or more languages is called Code-Mixing (CM). CM is a
social norm in multilingual societies. Neural Language Models (NLMs) like
transformers have been effective on many NLP tasks. However, NLM for CM is an
under-explored area. Though transformers are capable and powerful, they cannot
always encode positional information since they are non-recurrent. Therefore,
to enrich word information and incorporate positional information, positional
encoding is defined. We hypothesize that Switching Points (SPs), i.e.,
junctions in the text where the language switches (L1 -> L2 or L2 -> L1), pose
a challenge for CM Language Models (LMs), and hence give special emphasis to
SPs in the modeling process. We experiment with several positional encoding
mechanisms and show that rotatory positional encodings along with switching
point information yield the best results.
We introduce CONFLATOR: a neural language modeling approach for code-mixed
languages. CONFLATOR tries to learn to emphasize switching points using smarter
positional encoding, both at unigram and bigram levels. CONFLATOR outperforms
the state-of-the-art on two tasks based on code-mixed Hindi and English
(Hinglish): (i) sentiment analysis and (ii) machine translation. | Mohsin Ali, Kandukuri Sai Teja, Neeharika Gupta, Parth Patwa, Anubhab Chatterjee, Vinija Jain, Aman Chadha, Amitava Das | 2023-09-11T07:02:13Z | http://arxiv.org/abs/2309.05270v2 | CONFLATOR: Incorporating Switching Point based Rotatory Positional Encodings for Code-Mixed Language Modeling
###### Abstract
The mixing of two or more languages is called Code-Mixing (CM). CM is a social norm in multilingual societies. Neural Language Models (NLMs) like transformers have been effective on many NLP tasks. However, NLM for CM is an under-explored area. Though transformers are capable and powerful, they cannot always encode positional information since they are non-recurrent. Therefore, to enrich word information and incorporate positional information, positional encoding is defined. We hypothesize that Switching Points (SPs), i.e., junctions in the text where the language switches (L1 \(\rightarrow\) L2 or L2 \(\rightarrow\) L1), pose a challenge for CM Language Models (LMs), and hence give special emphasis to SPs in the modeling process. We experiment with several positional encoding mechanisms and show that rotatory positional encodings along with switching point information yield the best results.
We introduce CONFLATOR: a neural language modeling approach for code-mixed languages. CONFLATOR tries to learn to emphasize switching points using smarter positional encoding, both at unigram and bigram levels. CONFLATOR outperforms the state-of-the-art on two tasks based on code-mixed Hindi and English (Hinglish): (i) sentiment analysis and (ii) machine translation.
## 1 Code-Mixing: Juxtaposition of two Languages
Code-mixing is defined as the alternation of two or more languages during articulation. Recently, code-mixing has gained a lot of attention in the area of NLP due to the prevalence of language mixing in multilingual societies such as India, Europe, US, South Africa, Mexico, etc. In such societies, code-mixing is fairly commonplace, especially in informal conversations, where the native language is often romanized and code-mixed with an auxiliary language. This effect occasionally manifests in posts originating from the aforementioned sources on social media platforms such as Twitter, Facebook, etc. An example of Hindi and English code-mixing is shown in the following phrase where an English word, _dance_, is mixed with Hindi romanized words: _Gauge, aur, kare_.
\[\textit{Gauge}_{\texttt{HI}}\;\textit{aur}_{\texttt{HI}}\;\textit{dance}_{\texttt{EN}}\;\textit{kare}_{\texttt{HI}}\]
**English translation:** sing and dance
With the proliferation of code-mixing on the internet, it is important to study language processing and language modeling for code-mixed languages. While language modeling using neural networks has come a long way, replacing n-gram language models with distributed neural representations (Bengio et al., 2003) to recent large transformer-based pre-trained language models (LMs) such as GPT-x (Radford et al., 2019), BERT (Devlin et al., 2018) etc., code-mixed language modeling using state-of-the-art (SoTA) Transformer-based models is still under-explored.
The biggest hindrance in the adoption of SoTA Transformer-based LMs for code-mixing can be attributed to data scarcity. While Transformer-based (Vaswani et al., 2017) architectures such as BERT and GPT have set new benchmarks in the domain of language modeling, they are infamous for their low sample efficiency. In other words, the voracious data appetite of Transformers and the lack of substantial code-mixed datasets in the community is the primary reason for the technological hindrances in the area of code-mixed language modeling compared to vanilla language modeling.
To corroborate the aforementioned arguments, we experiment with Transformer-based models such as GPT-2 and BERT for code-mixing. We empirically observe that these models perform poorly on tasks involving code-mixed data. Our hypothesis is as follows: since information related to switching points is a major component in the context of code-mixed content, it should be incorporated in downstream processing. Switching points are a bottleneck for a model's processing of code-mixed data and the reason for poor performance of SoTA neural language models (Chatterjee et al., 2020). Switching points play a crucial role when dealing with CM data. In the next few sections, we discuss various positional encoding approaches, switching points, and our approaches for language modeling on code-mixed data. Our key contributions are:
* We propose _CONFLATOR_, an LM system that incorporates switching point related positional information.
* Our system improves the performance of existing models and achieves a new SoTA on two tasks.
* We investigate, experiment with, and introduce various switching point based positional encoding techniques.
* We introduce a novel Switching Point based Rotary matrix for Rotary Positional Encoding (RoPE).
* We curate a new dataset of code-mixed tweets.
## 2 Related Work
It is important to study code-mixing as it is a part of most multilingual societies and prevalent in social media. Processing code-mixed text is more complex than processing monolingual text for NLP tasks (Verma, 1976). A similar line of work was followed by Bokamba (1988) and Singh (1985) on the complexities of mixed languages in terms of syntax and grammar. The difficulty of processing code-mixed languages on social media is further exacerbated by unusual spellings, many unique ways of writing the same word, unnecessary capitalization, etc. (Das and Gamback, 2014; Laddha et al., 2020).
With the growing popularity of social media, various tasks like sentiment analysis Patwa et al. (2020); Chakravarthi et al. (2020), translation Dhar et al. (2018); Srivastava and Singh (2020), hate-speech detection Bohra et al. (2018); Banerjee et al. (2020), POS tagging Vyas et al. (2014), etc. have been performed on code-mixed data. Methods to handle code-mixing for text classification include the use of CNNs Aroyehu and Gelbukh (2018); Patwa et al. (2020), Transformer or BERT-like models Samghabadi et al. (2020); Tang et al. (2020), ensemble models Tula et al. (2021); Jhanwar and Das (2018), focal loss Tula et al. (2022); Ma et al. (2020), etc.
Vaswani et al. (2017) proposed transformers for neural language modeling; together with pretraining objectives such as masked language modeling (MLM) and next sentence prediction, transformer models achieved SoTA performance on many NLP tasks. Devlin et al. (2018) released mBERT, a model trained on a multilingual corpus that includes 104 languages. A cross-lingual language model, XLM, was proposed in Lample and Conneau (2019), which leveraged monolingual and cross-lingual corpora for pretraining. Nayak and Joshi (2022) present a BERT model pretrained on CM data. However, they do not make changes to their language model or technique to handle code-mixed data in particular. Sengupta et al. (2021) propose a hierarchical transformer based architecture that captures the semantic relationship among words and hierarchically learns the sentence level semantics of code-mixed data. Ali et al. (2022) were among the first to incorporate switching point information in positional encoding. They utilize dynamic positional encodings, whereas our method, CONFLATOR, infuses switching point information in rotatory positional encodings and also uses both unigram and bigram tokens to get the final embedding.
## 3 Data Extraction and Strategies
In this section, we discuss the details of code-mixed data extraction. Our primary aim is to extract naturally distributed code-mixed data.
### Qualitative and Quantitative Checkpoints for Hinglish Corpus
The performance of LMs is dependent on the training data size and quality, along with the vocabulary size. Code-mixed language modeling suffers from the following challenges: i) data scarcity, ii) words from 2 (or more) languages in the same sentence, iii) _Hindi_ is written using _English_ letters (i.e. transliteration), hence there is no standardization of spelling - which in effect proliferates word forms (Laddha et al., 2020, 2022), iv) code-mixing is usually found on social media and netizens often incorporate creativity in their mixing along with wordplay. We consider two fundamental questions to guide our data collection:
1. _The performance on any NLP task depends on the data complexity:_
**Empirical measurement:** Consider two 4-word tweets - i) \(T_{i}\) : \(w_{L1}w_{L1}w_{L2}w_{L2}\) and ii) \(T_{j}\) : \(w_{L1}w_{L2}w_{L1}w_{L2}\). Both the tweets have 2 words each from the languages \(L1\) and \(L2\). Thus the mixing ratio of both the tweets \(T_{i}\) and \(T_{j}\) is \((4-2)/4=0.50\). However, \(T_{i}\) only contains 1 code alternation point whereas \(T_{j}\) contains 3 switches. It is likely that \(T_{j}\) is harder to process. Hence, we need a metric for the level of mixing between the languages. We use _Code-Mixing-Index_Gamback and Das (2016) (CMI) to measure such complexity. Please refer to section 3.2 for more details on CMI.
2. _How much data is good enough?_ **Empirical measurement:** When two languages blend, it is quite natural that the number of unique word forms would be much higher in a Hinglish corpus in comparison to a monolingual English or Hindi corpus. Therefore, we ask an essential question at the very beginning, _how much data is good enough?_ We decide to keep collecting data until the Heaps' curve starts converging, so that we cover most of the unique words. Heaps' law (Gopalan and Hopkins, 2020) states that the number of unique words in a text of \(n\) words is approximated by \(V(n)=Kn^{\beta}\) where \(K\) is a positive constant and \(\beta\) lies between \(0\) and \(1\); \(K\) invariably lies between \(10\) and \(100\) and \(\beta\) between \(0.4\) and \(0.6\). Heaps' law is often considered to be a good estimator of the vocabulary size (a small illustrative sketch follows below). To compare, from Figure 1, it can be seen that for English Wiki, the flattening of the Heaps' law curve starts at **40K-50K**, whereas for monolingual Hindi, it converges at **80K-90K**, but for _Hinglish_ the same behavior starts around **800K** vocabulary and 50M words.
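As a rough illustration of the Heaps'-law estimate above, the following sketch evaluates \(V(n)=Kn^{\beta}\) and fits \(K\) and \(\beta\) from (token count, vocabulary size) measurements in log-log space; the default constants are placeholders taken from the figure discussion, not exact corpus statistics.

```python
import numpy as np

def heaps_vocab_estimate(n_tokens, K=50.0, beta=0.74):
    """Heaps' law V(n) = K * n^beta; beta ~ 0.74 is the value reported for the Hinglish corpus."""
    return K * n_tokens ** beta

def fit_heaps(token_counts, vocab_sizes):
    """Fit K and beta by linear regression on log V(n) = log K + beta * log n."""
    beta, log_k = np.polyfit(np.log(token_counts), np.log(vocab_sizes), 1)
    return np.exp(log_k), beta

# e.g. fit_heaps([1e4, 1e5, 1e6], [3e3, 1.6e4, 9e4])  ->  (K, beta)
```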
### Code-Mixing Index (CMI)
As mentioned previously, we expect the difficulty of language processing tasks to increase as the level of code-mixing increases. To measure the level of code-mixing in our corpus, we use Code-mixing Index Gamback and Das (2016) :
\[\begin{split} C_{u}(x)&=w_{m}f_{m}(x)+w_{p}f_{p}(x)\\ &=w_{m}\frac{N(x)-\max_{L_{i}\in L}(t_{L_{i}})(x)}{N(x)}\cdot 100+w_{p}\frac{P(x)}{N(x)}\cdot 100\\ &=100\cdot\frac{w_{m}\big(N(x)-\max_{L_{i}\in L}(t_{L_{i}})(x)\big)+w_{p}P(x)}{N(x)}\end{split} \tag{1}\]
where \(x\) denotes an utterance, \(N(x)\) is the number of tokens in \(x\) belonging to any language \(L_{i}\), \(t_{L_{i}}(x)\) is the number of tokens of language \(L_{i}\), \(P(x)\) is the number of code-switching points in \(x\), and \(w_{m}\) and \(w_{p}\) are weights. Please refer to Gamback and Das (2016) for a detailed explanation of CMI.
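The following is a small sketch of how CMI from Eq. 1 could be computed for a single word-level language-tagged utterance; the tag names and equal weights are illustrative choices.

```python
def cmi(lang_tags, w_m=0.5, w_p=0.5):
    """Code-Mixing Index of one utterance given word-level language tags.

    lang_tags: e.g. ["HI", "HI", "EN", "HI"]; tokens tagged with anything other than
    "HI"/"EN" (e.g. language-independent tokens) are excluded from the counts.
    """
    langs = [t for t in lang_tags if t in ("HI", "EN")]
    n = len(langs)
    if n == 0:
        return 0.0
    max_lang = max(langs.count("HI"), langs.count("EN"))
    switches = sum(1 for a, b in zip(langs, langs[1:]) if a != b)   # P(x): switching points
    return 100.0 * (w_m * (n - max_lang) + w_p * switches) / n

# cmi(["HI", "EN", "HI", "EN"]) > cmi(["HI", "HI", "EN", "EN"]) despite the same mixing ratio
```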
### Data Acquisition Pipeline
We follow a pipeline similar to Chatterjee et al. (2020). We collect CM data from Twitter via the Twitter API. We need to use relevant keywords (words unique to Hindi) in our search to get CM tweets. Words with lexical overlap between Hindi and English should not be used for searching; for example, the word _do_ is confusing because it means two in Hindi. We start with the _ICON 2017_ Hinglish sentiment analysis dataset (Patra et al., 2018), which is annotated with word-level language. From this data, we create two vocabularies \(V_{HI}\) and \(V_{EN}\), and generate a vocabulary of unique Hindi words \(V_{HI-UNIQ}=V_{HI}-I\), where \(I=V_{HI}\bigcap V_{EN}\). The \(V_{HI-UNIQ}\) set is then sorted in descending order, based on the word frequency, and is used as search words on the Twitter API. Once we get the tweets, we use a word-level language identifier (Barman et al., 2014) (having 90%+ accuracy) on the tweets and calculate the CMI of the tweet. Once we get the word-level language labels, we can also know where the switching points are. Tweets with CMI = 0 are discarded. Finally, we are left with 87k tweets. The CMI distribution of our data is given in Table 1. This dataset is used to pretrain our models.
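The keyword-selection step of this pipeline might look as follows, assuming word-level language-labelled tokens as input; the function and tag names are illustrative.

```python
from collections import Counter

def unique_hindi_keywords(labelled_tokens):
    """Build V_HI-UNIQ = V_HI - (V_HI ∩ V_EN) and sort it by frequency (descending).

    labelled_tokens: iterable of (word, lang) pairs, lang in {"HI", "EN"}.
    """
    hi, en = Counter(), Counter()
    for word, lang in labelled_tokens:
        (hi if lang == "HI" else en)[word.lower()] += 1
    overlap = set(hi) & set(en)              # words like "do" that exist in both vocabularies
    return [w for w, _ in hi.most_common() if w not in overlap]

# keywords = unique_hindi_keywords([("kidar", "HI"), ("do", "HI"), ("do", "EN"), ("bag", "EN")])
```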
**Training and Testing data:** We collect 87K sentences distributed over all C
Figure 1: Heaps’ plot on 50M word forms in English, Hindi and Hinglish corpora. The \(\beta\) values are \(0.58\), \(0.61\), \(0.74\) respectively.
collecting equal data across the CMI ranges, so that the resultant language models trained on this corpus would be able to handle real data. We maintain the same distribution over both our training and testing corpora (_4:1 ratio_) for our language models.
## 4 The Bottleneck of Code-mixed Language Modeling: Switching Points
Formally, Switching Points (SPs) are the tokens in text, where the language switches. For code-mixed languages, consisting of a pair of languages, there can be two types of switching points. Suppose the two languages as part of the code-mixed language are _L1_ and _L2_, a switching point occurs when the language in the text changes from L1 to L2 or L2 to L1. To explain it better, let us consider the following sample in _Hinglish_:
\[\textit{gaana}_{\text{HI}}\textit{enjoy}_{\text{EN}}\textit{ kare}_{\text{HI}}\]
**English Translation:** Enjoy the song.
In the above example, when the language switches from _Hindi_ to _English_ (_gaana\({}_{\text{HI}}\)enjoy\({}_{\text{EN}}\)_) a **HI-EN** (HIndi-ENglish) switching point occurs. Similarly, a **EN-HI**(ENglish-HIndi) switching point occurs at - _enjoy\({}_{\text{EN}}\)kare\({}_{\text{HI}}\)_.
In the context of modeling code-mixed languages, switching points can be considered as ordinary bigrams, that occur with other monolingual bigrams in a corpus. It is easy to infer that particular SP bigrams will be relatively rare in a given corpus. Hence, such sparse occurrences of switching point bigrams make it difficult for any Language Model to learn their probabilities and context. Since the language changes at the switching point, LMs are likely to find it difficult to process these tokens. In order to counter this challenge, we partition our code-mixed data into **(i)**_switching points_, and **(ii)**_non-switching points_. We then build LMs specifically for switching points and non-switching points, as discussed in the following sections.
**CONFLATOR Hypothesis:** The CONFLATOR is built on 2 hypotheses. i) Positional information is important for language models, especially when dealing with CM text. ii) Switching points are the bottleneck for code-mixed language models (CMLM). We incorporate positional information of switching points into our CMLM.
## 5 Positional Encoding Techniques
As discussed, SPs are a major bottleneck; hence, handling them separately is needed. Positional encodings are necessary for language models to learn dependencies between tokens. Positional embedding was first introduced by Vaswani et al. (2017). The proposed sinusoidal positional encoding is composed of sine and cosine values with the position index as input. The encoding techniques were further improved by Liu et al. (2020), where a dynamic function is introduced to learn positions with gradient flow, and by Shaw et al. (2018), who learned positional representations of relative positions using a learnable parameter. We discuss different positional encoding techniques in detail in the following subsections.
We experiment with several contemporary techniques and find that rotary positional encoding Su et al. (2021) performs the best.
### Sinusoidal Positional Encoding (SPE)
Vaswani et al. (2017) introduced a pre-defined sinusoidal vector \(p_{i}\in R^{d}\) which is assigned to each position \(i\). This \(p_{i}\) is added to the word embedding \(x_{i}\in R^{d}\) at position \(i\), and \(x_{i}+p_{i}\) is used as input to the model, such that the Transformer can differentiate words coming from different positions and each token receives position-dependent attention, as shown in equation 2.
\[e_{ij}^{abs}=\frac{1}{\sqrt{d}}\left(\left(x_{i}+p_{i}\right)W^{Q,l}\right)\left(\left(x_{j}+p_{j}\right)W^{K,l}\right)^{T} \tag{2}\]
where \(W\) denotes a weight matrix, \(Q\) the query, \(K\) the key, and \(l\) the layer index.
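A minimal sketch of the standard sinusoidal positional encoding added to the word embeddings is shown below; the dimensions are illustrative and an even model dimension is assumed.

```python
import numpy as np

def sinusoidal_pe(seq_len, d_model):
    """p_i[2k] = sin(i / 10000^(2k/d)),  p_i[2k+1] = cos(i / 10000^(2k/d))."""
    assert d_model % 2 == 0, "sketch assumes an even model dimension"
    pos = np.arange(seq_len)[:, None]
    k = np.arange(0, d_model, 2)[None, :]
    angles = pos / np.power(10000.0, k / d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# x_with_pos = word_embeddings + sinusoidal_pe(seq_len, d_model)
```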
### Dynamic Positional Encoding (DPE)
Instead of using predefined periodical functions like \(sin\), Liu et al. (2020), introduced a dynamic function \(\Theta(i)\) at every encoder layer. Improving upon sinusoidal PE, Dynamic PE learns \(\Theta(i)\) instead of a predefined \(p_{i}\) to bring dynamic behavior to the model. At each utterance, this learnable function \(\Theta(i)\) tries to learn the best possible representation for positional information with gradient flow.
\begin{table}
\begin{tabular}{c c c} \hline \hline
**CMI** & **\# Tweets** & **Percentage** \\ \hline
0-10 & 7,036 & 8.05\% \\
11-20 & 16,481 & 18.9\% \\
21-30 & 22,617 & 25.9\% \\
31-40 & 22,722 & 26.0\% \\
41-50 & 11,404 & 13.1\% \\
50+ & 7,036 & 8.05\% \\ \hline \hline Mean CMI: **28** & Total \# of tweets: **87,296** \\ \hline \hline \end{tabular}
\end{table}
Table 1: CMI distribution of the collected data. The total number of extracted tweets is 87K.
\(\Theta(i)\) is added to the word embedding \(w_{i}\) as given in equation 3.
\[e_{ij}=\frac{1}{\sqrt{d}}\left(\left(x_{i}+\Theta\left(i\right)\right)W^{Q,l}\right)\left(\left(x_{j}+\Theta\left(j\right)\right)W^{K,l}\right)^{T} \tag{3}\]
### Relative Positional Encoding (RPE)
In absolute PE, using different \(p_{i}\) for different positions \(i\) helps the transformer distinguish words at different positions. However, the absolute PE is not effective in capturing the relative word order. Shaw et al. (2018) introduced a learnable parameter \(a_{i-j}^{l}\) which learns the positional representation of the relative position _i-j_ at encoder layer \(l\). With the help of this, we can explicitly capture word orders in our model as follows:
\[e_{ij}^{rel}=\frac{1}{\sqrt{d}}\left(\left(x_{i}\right)^{l}W^{Q,l}\right)\left(\left(x_{j}\right)^{l}W^{K,l}+a_{i-j}^{l}\right)^{T} \tag{4}\]
### Switching Point-based Dynamic and Relative Positional Encoding (SPDRPE)
Ali et al. (2022) introduce a novel, switching point based PE. For illustration purposes, consider a code-mixed Hinglish text - \(ye_{HI}\)\(gaana_{HI}\)\(enjoy_{EN}\)\(kare_{HI}\). SP-based indices (**SPI**) set the index to 0 whenever an SP occurs. Indexing would normally be _Index_ = (0, 1, 2, 3), but due to switching point incorporation, this gets changed to _SPI_ = (0, 1, 0, 0). In addition to this, they use a learnable parameter \(a_{i-j}^{l}\), which encodes the relative position _i-j_ at the encoder layer \(l\). This encoding approach learns representations dynamically based on SPs along with the embedding \(a_{i-j}^{l}\) so that it can also capture relative word orders, as follows:
\[e_{ij}=\frac{1}{\sqrt{d}}\left(\left(x_{i}+\Theta\left(i\right)\right)W^{Q,l}\right)\left(\left(x_{j}+\Theta\left(j\right)\right)W^{K,l}+a_{i-j}^{l}\right)^{T} \tag{5}\]
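A small sketch of the switching-point-based indices (SPI) described above, which reset the position index to 0 at every switching point, is given below; the tag names are illustrative.

```python
def switching_point_indices(lang_tags):
    """Position indices that reset to 0 whenever the language switches (SPI)."""
    spi = []
    for i, tag in enumerate(lang_tags):
        if i == 0:
            spi.append(0)
        elif tag != lang_tags[i - 1]:
            spi.append(0)              # switching point: restart the index
        else:
            spi.append(spi[-1] + 1)
    return spi

# switching_point_indices(["HI", "HI", "EN", "HI"]) == [0, 1, 0, 0]   (ye gaana enjoy kare)
```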
### Rotary Positional Encoding (RoPE)
Analogous to the idea of electromagnetic waves going through a polarizer to preserve their relative amplitude, Su et al. (2021) came up with the idea of Rotary Positional Encoding (RoPE). The idea is to use rotation matrices on the embedding vectors to generate the positional values. The rotation negates any absolute positional information and only retains information about the relative angles between every pair of word embeddings in a sequence. We know that the dot product between two vectors is a function of the magnitude of individual vectors and the angle between them. Keeping this in mind, the intuition for RoPE is to represent the embeddings as complex numbers and the positions as pure rotations that we apply to them.
Mathematically, the formulations for a simple 2-dimensional case are defined as follows:
\[f_{Q}(x_{i},i)=(W_{Q}x_{i})e^{\sqrt{-1}i\theta} \tag{6}\] \[f_{K}(x_{j},j)=(W_{K}x_{j})e^{\sqrt{-1}j\theta}\] \[g(x_{i},x_{j},i-j)=Re[(W_{Q}x_{i}){(W_{K}x_{j})}^{*}e^{\sqrt{-1}(i-j)\theta}]\]
where \(Re[\cdot]\) is the real part of a complex number and \({(W_{K}x_{j})}^{*}\) represents the complex conjugate of \({(W_{K}x_{j})}\). \(\theta\in R\) is a preset non-zero constant. Formulating \(f_{Q,K}\) as a matrix multiplication, we get:
\[f_{Q,K}(x_{i},i)=\begin{pmatrix}\cos i\theta&-\sin i\theta\\ \sin i\theta&\cos i\theta\end{pmatrix}\begin{pmatrix}W_{Q,K}^{(11)}&W_{Q,K}^{(12)}\\ W_{Q,K}^{(21)}&W_{Q,K}^{(22)}\end{pmatrix}\begin{pmatrix}x_{i}^{(1)}\\ x_{i}^{(2)}\end{pmatrix} \tag{7}\]
where \((x_{i}^{(1)},x_{i}^{(2)})\) is \(x_{i}\) expressed in the form of 2D coordinates. In the same way, we can turn the function \(g\) into matrix form. By rotating the transformed embedding vector by an angle in multiples of its position index, we are able to incorporate relative position information. Due to this characteristic, it is termed Rotary Position Embedding.
In order to generalize the result in 2D to any \(x_{i}\) in \(R^{d}\) where \(d\) is even, they divide the \(d\)-dimensional space into \(\frac{d}{2}\) sub-spaces and combine them using the linearity of the inner product, turning the attention formulation into:
\[f_{Q,K}(x_{i},i)=R_{\Theta,i}^{d}\,W_{Q,K}\,x_{i},\qquad e_{ij}^{rope}=\frac{1}{\sqrt{d}}\left(R_{\Theta,i}^{d}W^{Q,l}x_{i}\right)\left(R_{\Theta,j}^{d}W^{K,l}x_{j}\right)^{T} \tag{8}\]

where \(R_{\Theta,i}^{d}\) is the block-diagonal rotary matrix built from the \(\frac{d}{2}\) two-dimensional rotation blocks with angles \(i\theta_{k}\), \(k=1,\ldots,\frac{d}{2}\).
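The rotary transform itself can be sketched compactly as below, assuming an even embedding dimension; this is the standard RoPE construction, not the authors' implementation.

```python
import numpy as np

def rope(x, pos, base=10000.0):
    """Rotate each 2-D sub-space of a query/key vector x by pos * theta_k."""
    d = x.shape[-1]
    assert d % 2 == 0, "sketch assumes an even dimension"
    theta = base ** (-np.arange(0, d, 2) / d)        # one angle per 2-D sub-space
    ang = pos * theta
    x1, x2 = x[0::2], x[1::2]
    out = np.empty_like(x, dtype=float)
    out[0::2] = x1 * np.cos(ang) - x2 * np.sin(ang)
    out[1::2] = x1 * np.sin(ang) + x2 * np.cos(ang)
    return out

# relative property: np.dot(rope(q, i), rope(k, j)) depends on the offset i - j (given q, k)
```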
### Switching Point-based Rotary Matrix
Switching points are a potential bottleneck for code-mixing language modeling and to address this problem, we incorporate switching point based rotary positional encoding in our architecture. The intuition behind RoPE is electromagnetic waves. The embeddings are represented as complex numbers and the positions are represented as pure rotations that are applied to them. Keeping this in mind, we address the problem of switching points (SP) with the help of angles that participate in RoPE. Whenever we encounter a switching point, we change the rotation, i.e., we change the direction of these angles. To implement the rotation change, we define a switching point matrix. The switching point matrix helps our model identify and learn the patterns of code mixing in the corpus. Our matrix is defined with 1s and -1s. When there is a language shift (L1 \(\rightarrow\) L2) or (L2 \(\rightarrow\) L1), i.e., when we encounter a switching point, we annotate the column value as -1 and for the successive words in L2, we annotate column values as 1 until another switching point occurs.
\[SPM\in R_{n\times n}^{d},\qquad SPM_{i}=\begin{cases}-1&\text{if }i\text{ is a switching point}\\ 1&\text{otherwise}\end{cases} \tag{11}\]
The visual intuition of our approach is shown in Figure 2. The switching point matrix (SPM) with 1s and -1s is defined in such a way that it transposes the rotary matrix, intuitively inverting the rotation at every switching point encounter. Therefore, the final matrix, i.e., switching point rotary matrix (SPRM) is a result of element-wise multiplication of the defined switching point matrix (SPM) with rotary matrix (RM):
\[\textit{SPRM}=\textit{SPM}\times\textit{RM} \tag{12}\]
\[e_{ij}^{sprope}=\frac{1}{\sqrt{d}}\left(\textit{SPRM}_{i}\,W^{Q,l}x_{i}\right)\left(\textit{SPRM}_{j}\,W^{K,l}x_{j}\right)^{T} \tag{13}\]
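One possible reading of Eqs. 11-12 is sketched below: the switching point matrix flips the sign of the rotation angle at switching points. The function and variable names are illustrative and this is not the authors' code.

```python
import numpy as np

def switching_point_matrix(lang_tags):
    """SPM entries per token: -1 at switching points, +1 otherwise (cf. Eq. 11)."""
    spm = np.ones(len(lang_tags))
    for i in range(1, len(lang_tags)):
        if lang_tags[i] != lang_tags[i - 1]:
            spm[i] = -1.0
    return spm

def sp_rotary_angles(positions, spm, d_model, base=10000.0):
    """Per-token rotation angles whose sign flips at switching points (SPRM = SPM x RM)."""
    theta = base ** (-np.arange(0, d_model, 2) / d_model)       # one angle per 2-D sub-space
    return spm[:, None] * positions[:, None] * theta[None, :]    # shape (seq_len, d_model/2)

# tags = ["HI", "HI", "EN", "HI"]
# angles = sp_rotary_angles(np.arange(4), switching_point_matrix(tags), 64)
```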
### CONFLATOR Architecture
The local dependencies for Unigram and Bigram (Word2Vec trained from scratch) along with unigram and bigram SPRM are fed to a 6-headed Multi-Head attention (MHA) in each encoder layer of the transformer separately, resulting in 2 attention matrices. We introduce 2 learnable parameters \(\alpha\) and \(\beta\) that are used as weight coefficients for the unigram and bigram matrices respectively. The final matrix is passed to the decoder layer. The embedding and architecture are depicted in Figs. 3 and 4.
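A toy sketch of the weighted combination of the unigram- and bigram-level outputs via the learnable coefficients \(\alpha\) and \(\beta\) might look as follows; shapes and initial values are illustrative.

```python
import torch
import torch.nn as nn

class UniBigramConflation(nn.Module):
    """Combine unigram- and bigram-level encoder outputs with learnable weights alpha and beta."""
    def __init__(self):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(0.5))
        self.beta = nn.Parameter(torch.tensor(0.5))

    def forward(self, unigram_out, bigram_out):
        return self.alpha * unigram_out + self.beta * bigram_out

# combined = UniBigramConflation()(uni, bi)   # both of shape (batch, seq, d_model)
```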
## 7 Experiments and Results
For our base models, each training step takes about 0.5 seconds. We train the base models for a total of 100,000 steps or 12 hours. For the big models like bigram and SPM-based models, the step time is 1.0 seconds. The big models were trained for 250,000 steps (2 days). We use ADAM optimizer with \(\beta_{1}\) = 0.9, \(\beta_{2}\) = 0.98 and \(\epsilon\) = 1e-9. We use the method of varying the learning rate over the course of training from Vaswani et al. (2017).
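The learning-rate schedule referenced here is the one from Vaswani et al. (2017); a one-line sketch is given below, where the model dimension and warmup length are placeholder values not stated in the text.

```python
def transformer_lr(step, d_model=512, warmup_steps=4000):
    """Vaswani et al. (2017) schedule: linear warmup followed by inverse-square-root decay."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup_steps ** -1.5)

# lr = transformer_lr(10_000)
```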
We use two types of regularization during our training process: We apply dropout to the output of each encoder and decoder layer followed by
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**CMI Range** & **Transformer** & **GPT-2** & **BERT** & **Conflator** \\ \hline
**0-10** & 1018.54 & 823.71 & 666.48 & 492.96 \\
**11-20** & 1210.11 & 967.01 & 782.19 & 501.44 \\
**21-30** & 1401.37 & 1334.72 & 1007.34 & 544.71 \\
**31-40** & 2688.00 & 2334.73 & 1007.34 & 800.62 \\
**41-50** & 4421.22 & 3905.87 & 4337.02 & 1095.12 \\ \hline
**Average** & 2147.85 & 1873.20 & 1701.49 & **578** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Perplexity comparison between different models based on ranges of CMI. Lower Perplexity is better.
Figure 4: CONFLATOR architecture within the encoder layer. It depicts how the unigrams and bigrams of the input statement are passed as inputs to our encoder decoder architecture. In this framework, we generate a rotary matrix and a switching point matrix. By performing element-wise multiplication of the aforementioned matrices, we get our proposed novel switching point based rotary matrix. We represent the embeddings as complex numbers and their positions as pure rotations that we apply to them with the help of our switching point based rotary matrix. Then, upon getting the output layers for the unigram and bigram statements separately, we introduce weighted coefficients \(a\) and \(b\) for the unigram outputs and bigram outputs, respectively. We get our final output layer by adding these weighted unigram and bigram outputs.
Figure 5: CONFLATOR is able to differentiate words coming from different positions and give high attention when a switching point occurs (at \(bag_{EN}\) and \(kidar_{HI}\)) while the other models cannot do so.
Normalization. In addition, we apply dropout and normalization to the sums of the word embeddings and the positional encodings in both the encoder and decoder layers. We use a rate of \(\text{P}_{\text{drop}}=0.2\).
**Intrinsic Evaluation:** The perplexity scores of baseline language models in comparison with CONFLATOR on the code-mixed language modeling task are shown in Table 2. We see that our model performs much better than the other models.
**Extrinsic Evaluation:** We evaluate our model on two downstream tasks: (i) sentiment analysis and (ii) machine translation. For sentiment analysis (Table 3), we use the data provided by Patwa et al. (2020). CONFLATOR achieves a 76.23% F1 score and outperforms the SOTA (Ali et al., 2022). The main reason for this is learning SPs by aggregation, with the help of rotary positional encoding, within a variable-length MHA framework. For machine translation (Table 4), we use the data provided by Dhar et al. (2018). We achieve a 29.1 BLEU score and outperform the SOTA (Dhar et al., 2018) using the Unigram SPRoPE model, which is able to learn the patterns of language mixing with the help of switching point based rotary positional encoding.
## 8 Conclusion & Takeaways
In this work, we report experiments on _Hinglish_ sentiment analysis and machine translation problems through the lens of language modeling. Our contributions can be summarized as follows:
(i) We introduce the idea of switching point based rotary positional encoding. Whenever a switching point is encountered, we incorporate rotation change to learn the patterns of language mixing.
(ii) We introduce CONFLATOR, a neural language modeling approach for code-mixed languages. CONFLATOR tries to learn better representations by means of switching point-based rotary positional encoding, initially at unigram level and then at bigram level.
(iii) We empirically show that CONFLATOR learns the patterns of code-mixing, which other models with different positional encodings fail to capture, as shown in Figure 5.
(iv) It is also noteworthy that CONFLATOR achieves results comparable to SOTA even without any heavy pre-trained language model.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{6}{c}{Positional representation} & Bigram & BLEU \\ \cline{2-7} & Sin/Cos & Index & Dynamic & SPI & Relative & RM & SPRM & \\ \hline
3HA + Sinusoidal PE & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & 17.2 \\
3HA + Dynamic PE & ✗ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & 17.9 \\
3HA + Relative PE & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ & ✗ & 18.4 \\
3HA + Rotary PE & ✓ & ✓ & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ & 24.9 \\ \hline SOTA (IITTH-minaldhar) & ✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & 28.4 \\ Unigram SP Relative (USPR) & ✗ & ✗ & ✓ & ✓ & ✗ & ✗ & ✗ & 9.8 \\ Bigram SP Relative (BSPR) & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✓ & 7.6 \\ Unigram SPRoPE & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ & ✗ & 29.1 \\ Conflator (BSPRoPE) & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & 25.16 \\ Conflator with StableLM & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & 29.06 \\ Conflator with Alpaca & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & 29.89 \\
**Conflator with ILaMA** & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & **30.15** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Results of position sensitive experiments for _Machine Translation_ on CM text. Higher BLEU is better.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multicolumn{6}{c}{Positional representation} & Bigram & F1 (\%) \\ \cline{2-9} & Sin/Cos & Index & Dynamic & SPI & Relative & RM & SPRM & \\ \hline Word2Vec + LSTM & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & 56 \\ BERT & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & ✗ & 60 \\ \hline
3HA + Sinusoidal PE & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & ✗ & 74.34 \\
3HA + Dynamic PE & ✗ & ✓ & ✓ & ✗ & ✗ & ✗ & ✗ & 75.02 \\
3HA + Relative PE & ✗ & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ & 75.32 \\
3HA + Rotary PE & ✓ & ✓ & ✗ & ✗ & ✗ & ✓ & ✗ & ✗ & 76.04 \\ \hline SOTA (PISTO) & ✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & 75.6 \\ Unigram SP Relative (USPR) & ✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✗ & ✗ & 75 \\ Bigram SP Relative (ISPRR) & ✗ & ✗ & ✓ & ✓ & ✓ & ✗ & ✓ & 75 \\ Unigram SPRoPE + Good Tuning & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✗ & 74.6 \\ Unigram SPRoPE & ✓ & ✓ & ✗ & ✓ & ✓ & ✓ & ✓ & ✗ & 75 \\ Conflator (BSPRoPE) & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & 76.23 \\ Conflator with StableLM & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & 76.11 \\ Conflator with Alpaca & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & 75.69 \\
**Conflator with ILaMA** & ✓ & ✓ & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ & **76.45** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Results of various position sensitive experiments for _Sentiment Analysis_ on CM text. _n_HA refers to n-headed attention.
## 9 Limitations
Although our bigram model achieves SOTA on sentiment analysis, it is slightly behind the unigram model when it comes to machine translation, where using bigrams at the decoder level resulted in poor performance. Despite conducting extensive experiments, we lack a detailed explanation of why the bigram-based approach fails for MT. Future experiments will focus on exploring and understanding the issue of bigrams for MT and coming up with a solution for the same.
|
2309.03601 | Probabilistic Pathways: New Frontiers in Quantum Ensemble Control | In this paper, we propose a novel probabilistic control framework for
efficiently controlling an ensemble of quantum systems that can also compensate
for the interaction of the systems with the external environment. The main
challenge in this problem is to simultaneously steer an ensemble of systems
with variation in their internal parameters from an initial state to a desired
final state. The minimisation of the discrepancy between the probabilistic
description of the dynamics of a quantum ensemble and a predefined desired
probabilistic description is the key step in the proposed framework. With this
objective, the derived solution will not only allow the transitioning of the
ensemble from one state to another, but will generally allow steering an
initial distribution of the ensemble to a final distribution. Numerical results
are presented, demonstrating the effectiveness of the proposed probabilistic
control framework. | Randa Herzallah, Abdessamad Belfakir | 2023-09-07T09:52:48Z | http://arxiv.org/abs/2309.03601v1 | # Probabilistic Pathways: New Frontiers in Quantum Ensemble Control
###### Abstract
In this paper, we propose a novel probabilistic control framework for efficiently controlling an ensemble of quantum systems that can also compensate for the interaction of the systems with the external environment. The main challenge in this problem is to simultaneously steer an ensemble of systems with variation in their internal parameters from an initial state to a desired final state. The minimisation of the discrepancy between the probabilistic description of the dynamics of a quantum ensemble and a predefined desired probabilistic description is the key step in the proposed framework. With this objective, the derived solution will not only allow the transitioning of the ensemble from one state to another, but will generally allow steering an initial distribution of the ensemble to a final distribution. Numerical results are presented, demonstrating the effectiveness of the proposed probabilistic control framework.
**Keywords:** Probabilistic control, Kullback-Leibler Divergence (KLD), quantum control.
## 1 Introduction
Designing and implementing control solutions to simultaneously steer a large quantum ensemble between states of interest is a critical aspect of many advanced quantum technologies. These technologies include laser cooling, nuclear magnetic resonance spectroscopy, magnetic resonance imaging, quantum computation [1], and long-distance quantum communication [2]. Quantum ensembles consist of a vast number of quantum systems (e.g., spin systems [3, 4, 5, 6]) with variations in their parameters. The primary challenge in controlling ensembles is to derive a common external control field capable of concurrently evolving each system in the ensemble from an initial state to a desired final state, by adjusting global parameters of the overall system, rather than controlling individual members.
Control strategies for quantum ensembles have been successfully applied to molecular ensembles [7] and have proven effective in quantum information processing [8]. Recent studies have focused on controlling inhomogeneous spin ensembles and stabilizing spin system ensembles using Lyapunov control methodology [9]. Ensemble controllability has been discussed in [10], emphasizing the importance of Lie brackets and noncommutativity in designing compensating controls. Further research has investigated the development of unitary control in homogeneous quantum ensembles for maximizing signal intensity in coherent spectroscopy [5] and employing the sampling-based learning control (SLC) approach [11, 12].
Environmental decoherence, a crucial aspect of quantum mechanics, arises from the unavoidable interaction between a system and its environment. While extensively studied in controlling and manipulating quantum systems [13, 14, 15, 16, 17, 18], the issue of environmental decoherence has not yet been adequately addressed within the context of quantum ensembles. Decoherence poses a significant challenge for quantum control development, as it demands precise control of system dynamics and results from the interaction between a quantum system and its noisy environment [19, 20, 21]. Protecting quantum system interferences forms the foundation of quantum information applications [22]. Consequently, the ability to control and manage environmentally induced decoherence would provide substantial benefits for controlling numerous physical and chemical phenomena. Prior work on quantum optimal control in the presence of decoherence has utilized adaptive feedback control methods [23] and control optimization algorithms to maximize population transfer fidelity or quantum gates. Genetic algorithms have been proposed to control and suppress decoherence effects [24], and experimental control of decoherence in molecular vibrational wave packets has been conducted in the laboratory [25].
Given the current state of the art in controlling quantum ensembles, there is an urgent need to develop more accurate control methods that enable the simultaneous steering of ensemble members to the same desired state in the presence of uncertainty or field noise introduced by the system's Hamiltonian or its interaction with the environment. One highly efficient control method for uncertain stochastic classical control systems is the fully probabilistic control approach [26, 27, 28, 29]. This approach characterizes the generative models of classical system dynamics using probabilistic descriptions. A control law is then derived to minimize the discrepancy between the probabilistic description of the joint distribution of the system dynamics and its controller and a predefined desired joint distribution [27, 28]. This method has been demonstrated on various classical systems and has proven effective in deriving accurate control laws under high levels of uncertainty and stochasticity [30, 31, 32]. Consequently, extending and adapting the fully probabilistic control method to control quantum systems is an appealing prospect.
In this paper, we focus on controlling ensembles of finite-dimensional quantum systems that interact with external environments using the fully probabilistic approach. Our goal is to develop a probabilistic control framework, building upon its classical counterpart, capable of accurately steering the members of a quantum ensemble to a desired state value while considering the interaction of the quantum ensemble with the external environment in the optimal control law design.
Extending the fully probabilistic control approach to the quantum mechanical regime presents numerous challenges. A significant challenge is the coupling of a quantum system with its external environment [33, 34, 35]. To address this issue, we employ the Lindblad master equation formalism. This formalism models the dynamics of open quantum systems, providing a framework that incorporates the effects of environmental decoherence and dissipation into the system's evolution [36, 33]. Another hurdle is the complex nature of quantum system state variables and operators. We tackle this by characterizing generative probabilistic models based on complex probability density functions, crucial for accurately capturing the unique features of quantum systems and ensuring the effectiveness of our probabilistic control framework. A key measure of success in our proposed control methodology is the ability to guide each member of a quantum ensemble to a predetermined target state. Achieving this necessitates the measurement of the final state of the quantum systems in the ensemble and its comparison with the desired state. However, measuring a quantum state is not as straightforward as in classical systems. Quantum systems, defined by their wave functions or density operators, are generally not directly observable. Instead, one must perform a series of measurements on many identically prepared quantum systems to estimate their state, a process known as quantum state tomography. This process aids in estimating the system's density matrix. Intricately tied to this measurement process is the phenomenon of measurement backaction, another significant challenge. This unique quantum trait, where the act of measurement can alter the state of the system, could influence the post-control state of the system, potentially affecting the control methodology's performance. While intriguing, we opt not to consider measurement backaction in this paper to maintain the simplicity of our presentation of the proposed fully probabilistic approach. However, it's worth noting that a similar approach to environmental interaction could be employed to account for measurement backaction [37, 38].
Consequently, our primary focus is on the mathematical formulation of fully probabilistic control within quantum settings, as well as the analysis of a class of bilinear quantum control systems. We aim to establish a rigorous probabilistic control framework for quantum ensembles that takes into account environmental decoherence, uncertainty, and the complex nature of quantum state variables. This framework will facilitate the development of more robust and accurate control methods for a variety of quantum technologies. By presenting our approach in a simplified manner and systematically addressing the key challenges, we provide a solid foundation for future work that could extend our proposed approach to include the effects of measurement backaction and other complex phenomena.
This paper is organised as follows: In Section (2) we briefly recall the evolution of open quantum systems and develop the corresponding state space model using the vectorization of their corresponding density operator. In Section (3), we introduce a general theory to fully control quantum systems affected by state dependent noisy environment using a probabilistic approach and demonstrate its general solution. Then, we apply the developed approach in Section (4) to systems described by Gaussian pdfs. In Section (5) we apply the method to control an ensemble of particular systems and show the effectiveness and the applicability of the method. Finally, we
conclude in Section (6).
## 2 Problem formulation and Quantum systems models
There are numerous dynamical models describing quantum systems interacting with external environments, depending on the type of system-environment coupling [33, 34, 35]. An alternative approach to describe the time evolution of the open quantum system is through master equations, the form of which depends on the character of the system-environment interaction. Under the Markovian assumption, an open quantum system can be described by the following master equation:
\[i\frac{d\rho(t)}{dt}=[H,\rho(t)]+\mathcal{L}(\rho(t)),\ \ \rho(0)=\rho_{0}, \tag{1}\]
where \(H\) is the Hamiltonian of the system and \(\mathcal{L}[\rho(t)]\) is the open system super-operator. In this paper, we set \(\hbar=1\) and consider Hamiltonians of the form:
\[H=H_{0}+u(t)H_{1}, \tag{2}\]
where \(H_{0}\) is the Hamiltonian of the isolated free system, and the term \(u(t)H_{1}\) is associated with the time-dependent control Hamiltonian defined through the interaction of the system with the external field \(u(t)\). In the Lindblad approach, the open system Liouvillian is:
\[\mathcal{L}(\rho(t))=i\sum_{s}\big{(}L_{s}\rho(t)L_{s}^{\dagger}-\frac{1}{2} \{L_{s}^{\dagger}L_{s},\rho(t)\}\big{)}, \tag{3}\]
where \(L_{s}\) are Lindblad operators and \(L_{s}^{\dagger}\) is the adjoint operator of \(L_{s}\). The time evolution of a physical property described by a Hermitian operator \(\hat{o}\) is given by:
\[o(t)=\langle\hat{o}\rangle=\text{Tr}(\rho(t)\hat{o}). \tag{4}\]
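As a purely numerical illustration of Eqs. (1), (3) and (4), the following sketch evaluates the right-hand side of the Lindblad master equation and an observable expectation; the operators, step size, and a crude Euler propagation are placeholders, not the control scheme developed later.

```python
import numpy as np

def lindblad_rhs(rho, H, lindblad_ops):
    """d(rho)/dt implied by Eqs. (1) and (3): -i[H, rho] + sum_s (L rho L^+ - 1/2 {L^+L, rho})."""
    drho = -1j * (H @ rho - rho @ H)
    for L in lindblad_ops:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return drho

def expectation(rho, observable):
    """o(t) = Tr(rho(t) o_hat) as in Eq. (4); real for Hermitian observables."""
    return np.trace(rho @ observable).real

# crude Euler propagation over one small step dt:
# rho = rho + dt * lindblad_rhs(rho, H0 + u * H1, [L1, L2])
```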
Given the definition of the master equation (1), we will now demonstrate that the open quantum system defined by this master equation can be equivalently described using a time-dependent equation that involves a state vector representing the vectorization of the density operator. For this purpose, the free Hamiltonian \(H_{0}\) can be expanded element-wise as follows:
\[H_{0}=\sum_{k=0}^{l-1}E_{k}\ket{k}\bra{k}, \tag{5}\]
where \(E_{k}\) is the \(k\)th eigenvalue of \(H_{0}\) associated with the eigenvector \(\ket{k}\), with \(k\in{0,1,\ldots,l-1}\). Similarly, the Lindblad operators appearing in (3) can be written element-wise as follows:
\[L_{s}=L_{j,k}=\sqrt{\Theta_{k\to j}}\ket{j}\bra{k}, \tag{6}\]
where \(\Theta_{k\to j}\) is the dissipative transition rate from the eigenstate \(\ket{k}\) to the eigenstate \(\ket{j}\). Following this element-wise presentation, the master equation (1) can be equivalently written as [34]:
\[\frac{d\rho_{n,q}(t)}{dt}=(-i(E_{n}-E_{q})-\gamma_{n,q})\rho_{n,q}(t)+\sum_{k=0} ^{l-1}\Theta_{k\to n}\rho_{k,k}(t)\delta_{n,q}+iu(t)\sum_{k=0}^{l-1}\bigg{(} \rho_{n,k}(t)\bra{k}H_{1}\ket{q}-\bra{n}H_{1}\ket{k}\rho_{k,q}(t)\bigg{)}, \tag{7}\]
where,
\[\gamma_{n,q}:=\frac{1}{2}\sum_{j=0}^{l-1}(\Theta_{n\to j}+\Theta_{q\to j}), \tag{8}\]
for \(n,q=0,1,\ldots,l-1\). Using (7), the vectorization \(\tilde{x}\) of the density operator \(\rho(t)\) provided in Appendix (A) is shown to satisfy the following differential equation:
\[\frac{d\tilde{x}(t)}{dt}=(\tilde{A}+iu(t)\tilde{N})\tilde{x}(t), \hskip 14.226378pt\tilde{x}(0)=\tilde{x}_{0}, \tag{9}\]
where the matrix elements of the operators \(\tilde{A}\in\mathbf{C}^{l^{2}\times l^{2}}\) and \(\tilde{N}\in\mathbf{C}^{l^{2}\times l^{2}}\) can be easily found from (7). The vectorization of the initial density operator \(\rho_{0}\) is given by \(\tilde{x}_{0}\)[39, 40]. Considering the following transformation:
\[x(t)\rightarrow\tilde{x}(t)-x_{e}, \tag{10}\]
the time evolution equation (9) can be rewritten as follows:
\[\frac{dx(t)}{dt}=\tilde{A}x(t)+\tilde{B}(x(t))u(t),\hskip 14.226378ptx(0)= \tilde{x}_{0}-x_{e}, \tag{11}\]
where \(x_{e}\) is an eigenvector of \(\tilde{A}\) defined by \(\tilde{A}x_{e}=0\) and \(\tilde{B}(x(t))=i\tilde{N}(x(t)+x_{e})\). This equation, known as the input equation, describes the dependence of the system's dynamics, represented by \(x(t)\), on the input electric field \(u(t)\)[40]. Thus, by solving the ordinary differential equations (11), we can accurately determine the evolution of the open quantum system. The solution to (11) can be expressed as:
\[x(t+1)=Ax(t)+B(x(t))u(t), \tag{12}\]
where the matrices \(A\) and \(B(x(t))\) are defined as:
\[A=e^{\tilde{A}\Delta t},\hskip 14.226378pt\text{and} \hskip 14.226378ptB(x(t))=\bigg{(}\int_{0}^{\Delta t}e^{\tilde{A}\lambda} \tilde{B}(x(\lambda))\mathrm{d}\lambda\bigg{)}, \tag{13}\]
with \(\lambda=\Delta t-t\) and \(\Delta t\) representing the sampling period. Equivalently, relation (12) can be rewritten as:
\[x(t)=Ax(t-1)+B(x(t-1))u(t-1). \tag{14}\]
In practice, control of quantum physical systems can be achieved through the use of multiple electric fields in the control Hamiltonian within the von Neumann equation [11]. By denoting \(x_{t}\equiv x(t)\) and \(u(t)\equiv u_{t}\), the relation (14) can be reformulated as:
\[x_{t}=Ax_{t-1}+B(x_{t-1})u_{t-1}. \tag{15}\]
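To make the discretization above concrete, the following sketch (an illustrative Python fragment, not the authors' implementation; the quadrature rule and variable names are assumptions) builds the discrete-time pair \(A\) and \(B(x)\) of equation (13) from the continuous-time generators of equation (9) and performs one step of equation (15).

```
# A minimal sketch, assuming generic generators A_tilde and N_tilde from equation (9),
# a shifting vector x_e, and a simple left-Riemann quadrature for the integral in (13).
import numpy as np
from scipy.linalg import expm

def discretize(A_tilde, N_tilde, x, x_e, dt, n_quad=50):
    """Return the discrete-time pair (A, B(x)) of equation (13)."""
    A = expm(A_tilde * dt)                       # A = exp(A_tilde * dt)
    B_tilde = 1j * (N_tilde @ (x + x_e))         # B_tilde(x) = i * N_tilde (x + x_e), eq. (11)
    ds = dt / n_quad
    B = sum(expm(A_tilde * (k * ds)) @ B_tilde for k in range(n_quad)) * ds
    return A, B

# One propagation step of equation (15), with a scalar control field u_prev:
# A, B = discretize(A_tilde, N_tilde, x_prev, x_e, dt)
# x_next = A @ x_prev + B * u_prev
```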
In this work, we aim to control an ensemble of inhomogeneous quantum systems, each characterized by a unique Hamiltonian due to the dispersion in the parameters that describe them. This inhomogeneity or dispersion results in each system responding slightly differently to the same control signal, a challenge that needs to be addressed in control problems. To account for this parameter dispersion and its impact on the state evolution of the systems in the ensemble, we introduce multiplicative noise into our model. Unlike additive noise, multiplicative noise varies with the state of the system, making it a suitable choice for modeling uncertainties or variations that scale with the system's state. Therefore, we add the multiplicative noise term, \(\eta(x_{t-1})\), to the discretized equation (15). The process or measurement noise levels in \(\eta(x_{t-1})\) depend on the system state vector, resulting in a state-dependent noise term. The final form of our model is given by:
\[x_{t}=Ax_{t-1}+B(x_{t-1})u_{t-1}+\eta(x_{t-1}). \tag{16}\]
where,
\[\eta(x_{t-1})=\zeta_{t}Ax_{t-1}, \tag{17}\]
and \(\zeta_{t}\) is a scalar noise. This modified equation (16) incorporates the effects of multiplicative noise and thus accommodates the parameter dispersion inherent in our ensemble of inhomogeneous quantum systems. By employing a vectorized description of the density matrix, we can now reformulate the time evolution of the observable \(\hat{o}\), given in (4), in terms of the vectorized density operator. These adaptations to the model, both in terms of multiplicative noise and the vectorized description of the density matrix, allow us to better capture the unique dynamics of each member of the quantum ensemble, enhancing the effectiveness of our control methodology. To further improve the precision of our model, we consider additional sources of noise and uncertainties that could affect the observed values \(o_{t}\). This consideration leads to the inclusion of the noise term \(\sigma_{t}\) in our output equation:
\[o_{t}=Dx_{t}+\sigma_{t}. \tag{18}\]
Here, \(D=(\text{vec}(\hat{o}^{T}))^{T}\) represents the matrix obtained by transposing the vectorized version of the observable operator \(\hat{o}\), with "vec" denoting the vectorization operator and \(\hat{o}^{T}\) being the transpose of \(\hat{o}\). The term \(\sigma_{t}\) is a multivariate Gaussian noise, introduced to model various sources of noise and uncertainties, such as measurement error, environmental disturbances, and process noise [38]. The integration of this term helps enhance the realism and robustness of our model, thereby improving its predictive accuracy and control performance.
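To illustrate how the stochastic model (16)-(18) can be simulated, the sketch below (the variable names, the real-part projection of the output, and the random-number handling are illustrative assumptions) propagates the state one step with the multiplicative noise of equation (17) and produces a noisy measurement.

```
# A minimal sketch of one step of equations (16)-(18); Sigma = E(zeta^2) is the variance
# of the scalar multiplicative noise and G is the measurement-noise covariance.
import numpy as np

rng = np.random.default_rng(0)

def propagate_and_measure(x_prev, u_prev, A, B_prev, D, Sigma, G):
    zeta = rng.normal(0.0, np.sqrt(Sigma))                   # scalar noise of eq. (17)
    x = A @ x_prev + B_prev * u_prev + zeta * (A @ x_prev)   # eq. (16)
    sigma = rng.multivariate_normal(np.zeros(G.shape[0]), G)
    o = np.real(D @ x) + sigma                               # eq. (18); real part keeps o real-valued
    return x, o
```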
By defining the quantum system's dynamics through both the input (16) and output (18) equations, we establish a bilinear state-space model that effectively characterizes the behavior of the quantum systems in our ensemble. This comprehensive model provides a strong foundation for the subsequent design of effective control strategies.
A key innovation introduced in this paper lies in the extension of the bilinear state-space model, originally formulated in previous works [38, 39, 40], to incorporate both multiplicative and additive Gaussian noise. By transforming the quantum open system's evolution, typically described by the von Neumann-Lindblad equation,
into this enhanced state-space model, we provide a novel representation that not only offers a more comprehensive picture of the quantum system's dynamics but also explicitly accounts for various sources of uncertainties such as manufacturing differences, different environmental conditions, the inherent uncertainty in quantum mechanics, sampling, parameter, and functional uncertainties. These are captured by the inclusion of noise terms in both the input and output equations. This innovative formulation serves as the foundation of the fully probabilistic control framework for quantum systems proposed in this paper, setting the stage for the introduction of advanced control strategies in the following sections.
In the sections to follow, we will demonstrate how utilizing this bilinear state-space model with noise terms enables the development of efficient control strategies within a fully probabilistic framework. This approach provides a robust characterization of the system's time evolution, specifically tailored for quantum systems, and allows for enhanced control and management of their dynamics. Given the precision and robustness of this methodology, it holds significant potential for a wide range of applications in quantum information processing, quantum communication, and quantum sensing. In these domains, precise control and estimation of quantum states are of paramount importance [41, 42].
## 3 Fully Probabilistic Control for Quantum Systems
Building upon the stochastic bilinear state-space model formulated in the previous section, we now present the control objective and introduce the general framework for the fully probabilistic control of ensembles of quantum systems. This innovative framework is designed to account for various sources of uncertainty and stochasticity affecting the quantum systems in a constructive manner, ultimately yielding more accurate control laws under these challenging conditions.
### 3.1 Objectives for the Fully Probabilistic Control of Quantum Systems
As previously discussed, this paper focuses on controlling ensembles of quantum systems, where individual elements are subject to dispersion in their parameters. These uncertainties are incorporated into the stochastic bilinear state-space model represented by equations (16) and (18). Consequently, the state of the system at each time step can be represented by the following conditional probability density function (pdf):
\[s(x_{t}\,|x_{t-1},u_{t-1}\,). \tag{19}\]
This representation provides a comprehensive description of the current state of the system \(x_{t}\), as influenced by the previous states of the system \(x_{t-1}\) and electric field \(u_{t-1}\). Similarly, due to the presence of the noise term \(\sigma_{t}\) in equation (18), the state of the measurement \(o_{t}\) can be characterized by the following pdf:
\[s(o_{t}\,|x_{t}\,). \tag{20}\]
Armed with these conditional pdfs, the objective of the fully probabilistic control can be stated as designing a controller's pdf, \(c(u_{t}|x_{t})\), that minimizes the Kullback-Leibler divergence (KLD) between the joint pdf of the closed-loop system dynamics, \(f(\mathcal{Z}(t,\mathcal{H}))\), and a pre-specified ideal joint pdf, \({}^{I}f(\mathcal{Z}(t,\mathcal{H}))\):
\[\mathcal{D}(f||^{I}f)=\int f(\mathcal{Z}(t,\mathcal{H}))\ln\big{(}\frac{f(\mathcal{Z}(t,\mathcal{H}))}{{}^{I}f(\mathcal{Z}(t,\mathcal{H}))}\big{)}d\mathcal{Z}(t,\mathcal{H}), \tag{21}\]
where
\[f(\mathcal{Z}(t,\mathcal{H}))=\prod_{t=1}^{\mathcal{H}}s(x_{t}|x_{t-1},u_{t-1} )s(o_{t}|x_{t})c(u_{t-1}|x_{t-1}), \tag{22}\]
and
\[{}^{I}f(\mathcal{Z}(t,\mathcal{H}))=\prod_{t=1}^{\mathcal{H}}{}^{I}s(x_{t}|x_{ t-1},u_{t-1})^{I}s(o_{t}|x_{t})^{I}c(u_{t-1}|x_{t-1}). \tag{23}\]
In these equations, \({}^{I}s(x_{t}|x_{t-1},u_{t-1})\) denotes the ideal pdf of the state vector \(x_{t}\), \({}^{I}s(o_{t}|x_{t})\) denotes the ideal pdf of the measurement \(o_{t}\), \(\mathcal{Z}(t,\mathcal{H})=\{x_{t},\ldots,x_{\mathcal{H}},o_{t},\ldots,o_{ \mathcal{H}},u_{t-1},\ldots,u_{\mathcal{H}}\}\) is the closed-loop observed data sequence and \(\mathcal{H}<\infty\) is a given control horizon, and \({}^{I}c(u_{t-1}|x_{t-1})\) represents the ideal pdf of the controller. Given that the state \(x_{t}\) is not directly observable and that the output pdf is conditioned on the state \(x_{t}\) of the quantum system, it is reasonable to allow the state of the quantum system to evolve naturally. Consequently, we assume that \({}^{I}s(x_{t}|x_{t-1},u_{t-1})=s(x_{t}|x_{t-1},u_{t-1})\). This assumption simplifies the problem and focuses the controller design on accounting for uncertainties in the measurements and control inputs, rather than attempting to manipulate the natural evolution of the quantum system. By doing so, the proposed fully probabilistic control framework remains compatible with the inherent characteristics of quantum systems while effectively addressing the challenges associated with uncertainties and parameter dispersion in ensembles of quantum systems.
Previous works, such as those cited in [26, 27, 28], have demonstrated that the minimization of the KLD equation (21) with respect to the control input is achieved by initially defining \(-\ln(\gamma(x_{t-1}))\) as the expected cost-to-go function. In the context of this work, we express \(-\ln(\gamma(x_{t-1}))\) as follows:
\[-\ln(\gamma(x_{t-1}))=\min_{c(u_{t-1}|x_{t-1})}\sum_{\tau=t}^{\mathcal{H}}\int f(\mathcal{Z}_{t},\ldots,\mathcal{Z}_{\mathcal{H}}|x_{t-1})\times\ln\bigg{(}\frac{s(o_{\tau}|x_{\tau})c(u_{\tau-1}|x_{\tau-1})}{{}^{I}s(o_{\tau}|x_{\tau})^{I}c(u_{\tau-1}|x_{\tau-1})}\bigg{)}\,d(\mathcal{Z}_{t},\ldots,\mathcal{Z}_{\mathcal{H}}), \tag{24}\]
for any arbitrary \(\tau\in 1,...,\mathcal{H}\). In this equation, \(\mathcal{Z}_{t}\) denotes the set \(x_{t},o_{t},u_{t-1}\). Importantly, observe that the argument of the \(\ln\) function does not include the conditional density of the quantum system state, \(s(x_{t}|x_{t-1},u_{t-1})\), nor its corresponding ideal distribution, \({}^{I}s(x_{t}|x_{t-1},u_{t-1})\). This omission is not an oversight, but instead reflects a fundamental quantum property: the state of the dynamic system, being unobservable, evolves naturally according to the laws of quantum evolution. This quantum characteristic is manifested by equating the actual distribution of the quantum system state to its corresponding ideal distribution, i.e., \({}^{I}s(x_{t}|x_{t-1},u_{t-1})=s(x_{t}|x_{t-1},u_{t-1})\). This equality underpins the unobserved natural evolution of the quantum state, a principle fundamental to quantum dynamics and a cornerstone of the proposed control methodology.
Minimization of the cost-to-go function (24) can then be performed recursively to give the following recurrence equation, analogous to the dynamic programming solution:
\[-\ln(\gamma(x_{t-1})) =\min_{c(u_{t-1}|x_{t-1})}\int\bigg{[}s(x_{t}|x_{t-1},u_{t-1})s(o_{t}|x_{t})c(u_{t-1}|x_{t-1})\bigg{(}\ln\bigg{(}\frac{s(o_{t}|x_{t})c(u_{t-1}|x_{t-1})}{{}^{I}s(o_{t}|x_{t})^{I}c(u_{t-1}|x_{t-1})}\bigg{)}\] \[-\ln(\gamma(x_{t}))\bigg{)}\bigg{]}d(x_{t},o_{t},u_{t-1}). \tag{25}\]
This innovative approach for devising a controller pdf provides an elegant solution for addressing the inherent uncertainties in quantum systems. By minimizing the KLD between the actual and ideal joint pdfs, our method facilitates more accurate control laws under various conditions of uncertainty, thereby enhancing the overall performance and reliability of quantum systems. The fully probabilistic control framework enables us to account for various sources of uncertainty and stochasticity affecting the quantum systems, and to construct efficient control strategies tailored specifically for quantum systems. Moreover, the recursive nature of the controller pdf design allows for efficient online implementation, making it adaptable to real-time applications in quantum information processing, quantum communication, and quantum sensing. The proposed approach addresses the challenges associated with controlling ensembles of quantum systems and offers a novel, comprehensive framework for the robust management of their dynamics.
### 3.2 General Solution to the Fully Probabilistic Quantum Control Problem
In this section, we derive the general solution of the optimal randomized controller that minimizes the KLD given in equation (21) for systems defined by arbitrary pdfs. This solution is not restricted to specific forms of the pdfs and can be applied to any quantum system described by arbitrary conditional pdfs.
**Theorem 1**: _The pdf of the optimal control law \(c(u_{t-1}|x_{t-1})\) that minimizes the cost-to-go function (25) between the joint pdf of the closed loop description of a quantum system (22) and a pre-specified ideal one (23) is given by:_
\[c(u_{t-1}|x_{t-1})=\frac{{}^{I}c(u_{t-1}|x_{t-1})\exp[-\beta(u_{t-1},x_{t-1})] }{\gamma(x_{t-1})}, \tag{26}\]
_where,_
\[\gamma(x_{t-1})=\int{}^{I}c(u_{t-1}|x_{t-1})\exp[-\beta(u_{t-1},x_{t-1})] \mathrm{d}u_{t-1}, \tag{27}\]
_and,_
\[\beta(u_{t-1},x_{t-1})=\int s(x_{t}|u_{t-1},x_{t-1})s(o_{t}|x_{t})\times\ln\bigg{(}\frac{s(o_{t}|x_{t})}{{}^{I}s(o_{t}|x_{t})}\frac{1}{\gamma(x_{t})}\bigg{)}\mathrm{d}x_{t}\mathrm{d}o_{t}. \tag{28}\]
**Proof 1**: _This theorem can be easily proven by adapting the proof of Proposition 2 in Ref [29]._
The above theorem presents a general solution for the fully probabilistic control problem of quantum systems, without being restricted to a specific form of the generative probabilistic model describing the system dynamics. This solution is applicable to any quantum system described by an arbitrary pdf.
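As a numerical illustration of Theorem 1, the toy sketch below (a scalar example in which every density and constant is invented for illustration, not taken from the paper) evaluates the controller of equation (26) at the final horizon step, where \(\gamma(x_{\mathcal{H}})=1\) and \(\beta\) in (28) reduces to the expected Kullback-Leibler divergence between \(s(o_{t}|x_{t})\) and its ideal counterpart.

```
# A toy numerical illustration of equations (26)-(28) for the last horizon step.
import numpy as np

rng = np.random.default_rng(1)

# Scalar toy model: x' ~ N(a x + b u, q), o ~ N(d x', g), ideal o ~ N(o_d, g_r),
# ideal controller u ~ N(u_r, omega).  All numbers below are made up.
a, b, q = 0.9, 0.5, 0.05
d, g, g_r, o_d = 1.0, 0.02, 0.02, 1.0
u_r, omega = 0.0, 4.0

def beta(u, x, n_mc=2000):
    """Monte-Carlo estimate of eq. (28) with gamma(x_t) = 1 at the last step."""
    x_next = rng.normal(a * x + b * u, np.sqrt(q), size=n_mc)
    # closed-form KL( N(d x', g) || N(o_d, g_r) ) for scalar Gaussians
    kl = 0.5 * (np.log(g_r / g) - 1.0 + g / g_r + (d * x_next - o_d) ** 2 / g_r)
    return kl.mean()

def controller_pdf(u_grid, x):
    """Unnormalised eq. (26); the normaliser is gamma(x) of eq. (27)."""
    ideal = np.exp(-0.5 * (u_grid - u_r) ** 2 / omega) / np.sqrt(2 * np.pi * omega)
    weights = np.array([np.exp(-beta(u, x)) for u in u_grid])
    unnorm = ideal * weights
    du = u_grid[1] - u_grid[0]
    return unnorm / (unnorm.sum() * du)

u_grid = np.linspace(-5.0, 5.0, 201)
pdf = controller_pdf(u_grid, x=0.0)
print("approximate controller mean:", (u_grid * pdf).sum() * (u_grid[1] - u_grid[0]))
```

The reweighting of the ideal controller by \(\exp[-\beta(u_{t-1},x_{t-1})]\) is what pulls the applied field toward inputs that steer the predicted measurement distribution to the ideal one.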
## 4 Solution to Quantum Control Problems with Gaussian pdfs
Building upon the general solution of the fully probabilistic control approach presented in Section (3.2), we now focus on quantum systems characterized by Gaussian probability density functions (pdfs). The linear Gaussian setting simplifies the problem and provides an opportunity to demonstrate the efficacy of the method.
### 4.1 System Description and Assumptions
This section considers quantum systems described by a bilinear state space model of the form given in (16) and (18) when the noises \(\eta_{t}\) in (17) and \(\sigma_{t}\) are generated using a Gaussian process. Under these conditions, the pdf of the quantum system state (19) can be described by linear Gaussian pdfs as follows:
\[s(x_{t}\left|x_{t-1},u_{t-1}\right.)\sim\mathcal{N}_{\mathcal{C}}(\mu_{t}, \Gamma_{t}), \tag{29}\]
where \(\mu_{t}\) and \(\Gamma_{t}\) are the mean and covariance matrices, respectively:
\[\mu_{t} =E(x_{t})=Ax_{t-1}+Bu_{t-1}, \tag{30}\] \[\Gamma_{t} =E((x_{t}-\mu_{t})(x_{t}-\mu_{t})^{\dagger}),\] \[=E(\zeta_{t}Ax_{t-1}\zeta_{t}x_{t-1}^{\dagger}A^{\dagger}),\] \[=Ax_{t-1}\Sigma x_{t-1}^{\dagger}A^{\dagger}, \tag{31}\]
where we used equations (16) and (17) in order to evaluate the covariance matrix \(\Gamma_{t}\). In this case, the matrices \(A\) and \(B\) represent the state and control matrices, respectively, appearing in (16). The functional \(E(.)\) denotes the expectation value, \(x_{t}^{\dagger}\) is the conjugate transpose of \(x_{t}\), \(\Sigma=E(\zeta_{t}^{2})\) and \(\mathcal{N}_{\mathcal{C}}\) denotes a complex Gaussian distribution. The forms of the complex Gaussian distribution and the normal Gaussian distribution are recalled in Appendix (B).
Furthermore, with the noise term \(\sigma_{t}\) in (18) generated by a Gaussian distribution, the pdf of the measurement \(o_{t}\) in (20) can be characterized by Gaussian pdf as follows:
\[s(o_{t}\left|x_{t}\right.)\sim\mathcal{N}(o_{d},G), \tag{32}\]
where \(o_{d}=Dx_{t}\) and \(G=E(\sigma_{t}\sigma_{t}^{T})=E((o_{t}-Dx_{t})(o_{t}-Dx_{t})^{T})\) are, respectively, the mean and covariance matrices of this pdf.
Under the aforementioned conditions, the quantum system state \(x_{t}\), measurement \(o_{t}\), and the electric field \(u_{t}\) at each time \(t\) can be fully described by the following joint pdf:
\[f(x_{t},o_{t},u_{t-1}|x_{t-1})=c(u_{t-1}|x_{t-1})\mathcal{N}(Dx_{t},G)\mathcal{N}_{\mathcal{C}}(\mu_{t},\Gamma_{t}). \tag{33}\]
Taking this into account, we postulate that the ideal joint pdf governing the entire system, including the system state, measurement, and controller, is given by:
\[{}^{I}f(x_{t},o_{t},u_{t-1}|x_{t-1})={}^{I}s(x_{t}|x_{t-1},u_{t-1}){}^{I}s(o_ {t}\left|x_{t}\right.){}^{I}c(u_{t-1}\left|x_{t-1}\right.), \tag{34}\]
where the ideal distributions are given by:
\[{}^{I}s(x_{t}|x_{t-1},u_{t-1}) =s(x_{t}|x_{t-1},u_{t-1})\sim\mathcal{N}_{\mathcal{C}}(\mu_{t}, \Gamma_{t}), \tag{35}\] \[{}^{I}s(o_{t}\left|x_{t}\right.) \sim\mathcal{N}(o_{d},G_{r}),\] (36) \[{}^{I}c(u_{t-1}\left|x_{t-1}\right.) \sim\mathcal{N}(u_{r},\Omega), \tag{37}\]
and where \(o_{d}\) and \(G_{r}\) are, respectively, the mean and covariance matrices of the ideal pdf of the measurement, and \(u_{r}\) and \(\Omega\) are, respectively, the mean and covariance matrices of the ideal pdf of the controller. Furthermore, as illustrated in Equation (35), the ideal distribution of the quantum state, \({}^{I}s(x_{t}|x_{t-1},u_{t-1})\), is set to be identical to the actual distribution, \(s(x_{t}|x_{t-1},u_{t-1})\), which is characterized by the master equation (1). This equivalence signifies the unobserved natural evolution of the quantum state, an intrinsic characteristic of quantum dynamics, which forms a pivotal component of the proposed control methodology.
### 4.2 Solution to the Linear Gaussian Quantum Control Problem
Given the defined joint pdf of the quantum system (33) and the ideal joint pdf (34), we can now apply the general solution of the fully probabilistic control approach presented in Section (3.2) to solve the quantum control problem for linear Gaussian systems. Specifically, by incorporating the components of the joint pdf of the closed-loop description of the quantum system (33) and the ideal joint pdf (34) into equations (26)-(28), we can derive analytic solutions for the optimal cost-to-go function and the randomized controller. We will soon present the form of the pdf for the optimal control, but first, let us outline the structure of the optimal cost-to-go function.
**Theorem 2**: _By substituting the ideal distribution of the measurements (36), the ideal distribution of the controller (37), the real distribution of the measurement (32), and the real distribution of system dynamics (29) into (27), the optimal cost to go function can be shown to be given by,_
\[-\ln\left(\gamma\left(x_{t-1}\right)\right)=0.5x_{t-1}^{\dagger}M_{t-1}x_{t-1} +0.5P_{t-1}x_{t-1}+0.5\omega_{t-1}, \tag{38}\]
_where,_
\[M_{t-1}=A^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})A-A^{\dagger}(D ^{\dagger}G_{r}^{-1}D+M_{t})^{\dagger}B\big{(}\Omega^{-1}+B^{\dagger}(D^{ \dagger}G_{r}^{-1}D+M_{t})B\big{)}^{-1}\] \[B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})A+A^{\dagger}(D^{ \dagger}G_{r}^{-1}D+M_{t})\Sigma A, \tag{39}\]
\[P_{t-1}=(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)A+2(\Omega^{-1}u_{r} -0.5B^{\dagger}(P_{t}^{\dagger}-2D^{\dagger}G_{r}^{-1}o_{d}))^{\dagger}\big{(} \Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})B\big{)}^{-1}\] \[B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})A, \tag{40}\]
_and,_
\[\omega_{t-1} =\omega_{t}+o_{d}^{\dagger}G_{r}^{-1}o_{d}+\ln\bigg{(}\frac{|G_{r}|}{ |G|}\bigg{)}-\mathrm{Tr}\left(G(G^{-1}-G_{r}^{-1})\right)+u_{r}^{\dagger}\Omega ^{-1}u_{r}\] \[-\big{(}\Omega^{-1}u_{r}-0.5B^{\dagger}(P_{t}^{\dagger}-2D^{ \dagger}G_{r}^{-1}o_{d})\big{)}^{\dagger}\big{(}\Omega^{-1}+B^{\dagger}(D^{ \dagger}G_{r}^{-1}D+M_{t})B\big{)}^{-1}\big{(}\Omega^{-1}u_{r}-0.5B^{\dagger}( P_{t}^{\dagger}-2D^{\dagger}G_{r}^{-1}o_{d})\big{)}\] \[-2\ln(|\Omega|^{-1/2}|\Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{- 1}D+M_{t})B|^{-1/2}), \tag{41}\]
_where \(|G|\) stands for the determinant of the matrix \(G\)._
**Proof 2**: _The proof of this theorem is given in Appendix (C)._
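For readers who wish to evaluate the recursion numerically, a sketch of one backward step of equation (39) is given below; the matrix shapes (state dimension \(n\), control dimension \(m\), measurement dimension \(p\)) and the function name are assumptions, and the recursions for \(P_{t-1}\) and \(\omega_{t-1}\) in (40)-(41) follow the same pattern.

```
# A minimal sketch of one backward step of equation (39); Sigma = E(zeta_t^2) is scalar,
# all other arguments are 2-D arrays (B of shape (n, m), D of shape (p, n)).
import numpy as np

def M_step(M_t, A, B, D, G_r, Omega, Sigma):
    Q = D.conj().T @ np.linalg.inv(G_r) @ D + M_t          # D^† G_r^{-1} D + M_t
    S = np.linalg.inv(Omega) + B.conj().T @ Q @ B          # Omega^{-1} + B^† Q B
    M_prev = (A.conj().T @ Q @ A
              - A.conj().T @ Q.conj().T @ B @ np.linalg.inv(S) @ B.conj().T @ Q @ A
              + Sigma * (A.conj().T @ Q @ A))
    return M_prev
```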
Having established the form of the optimal cost-to-go function (38), we can utilize it in equation (26) to determine the distribution of the optimal controller \(c(u_{t-1}|x_{t-1})\). This optimizes the pdf of the entire system (33), essentially bringing the pdf of the system as close as possible to the ideal one. The form of this optimal controller is presented in the subsequent theorem.
**Theorem 3**: _The controller's distribution, which minimizes the Kullback-Leibler Divergence (KLD) between the joint pdf of the closed-loop system dynamics described in (33) and the ideal joint pdf presented in (34), is given by:_
\[c(u_{t-1}\left|x_{t-1}\right.)\sim\mathcal{N}(v_{t-1},R_{t}), \tag{42}\]
_where,_
\[v_{t-1}=\bigg{(}\Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_ {t})B\bigg{)}^{-1}\bigg{(}\Omega^{-1}u_{r}-B^{\dagger}(D^{\dagger}G_{r}^{-1}D+ M_{t})Ax_{t-1}-0.5B^{\dagger}(P_{t}^{\dagger}-2D^{\dagger}G_{r}^{-1}o_{d}) \bigg{)}, \tag{43}\]
_and,_
\[R_{t}=\bigg{(}\Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_ {t})B\bigg{)}^{-1}. \tag{44}\]
**Proof 3**: _The proof is given in Appendix (D)._
The randomised optimal controller derived in Theorem (3) represents a novel approach to controlling an ensemble of quantum systems. By introducing a multiplicative noise, \(\eta(x_{t-1})\) and an additive noise, \(\sigma_{t}\), to the state vector \(x_{t}\) and the observations \(o_{t}\) respectively, this innovative linear form of the controller effectively addresses the challenges of controlling randomly selected members of quantum ensembles. This breakthrough in quantum control theory enhances the efficacy and adaptability of control strategies for quantum systems.
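A direct numerical evaluation of the controller of Theorem 3 could look as follows; the function name, the array-shape conventions (all vectors as 2-D column arrays, \(P_{t}\) as a 2-D row array), and the sampling line are illustrative assumptions rather than the authors' code.

```
# A minimal sketch of equations (43)-(44): mean v_{t-1} and covariance R_t of the
# optimal randomised Gaussian controller.
import numpy as np

def gaussian_controller(x_prev, A, B, D, M_t, P_t, G_r, Omega, u_r, o_d):
    Gr_inv = np.linalg.inv(G_r)
    Q = D.conj().T @ Gr_inv @ D + M_t                      # D^† G_r^{-1} D + M_t
    S = np.linalg.inv(Omega) + B.conj().T @ Q @ B          # Omega^{-1} + B^† Q B
    R_t = np.linalg.inv(S)                                 # eq. (44)
    rhs = (np.linalg.inv(Omega) @ u_r
           - B.conj().T @ Q @ A @ x_prev
           - 0.5 * B.conj().T @ (P_t.conj().T - 2.0 * D.conj().T @ Gr_inv @ o_d))
    v_prev = R_t @ rhs                                     # eq. (43)
    return v_prev, R_t

# Sampling the applied field, e.g. with rng = np.random.default_rng():
# u_prev = rng.multivariate_normal(np.real(v_prev).ravel(), np.real(R_t))
```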
### 4.3 Implementation of the Gaussian controller
The implementation of the Gaussian controller, as described in Theorem (3), can be broken down into a step-by-step process, which is summarized in the following algorithm. The implementation consists of two primary phases: Optimization and Testing.
During the Optimization phase, the objective is to characterize the probability density function (pdf) of the randomized controller by optimizing its parameters. The optimization process utilizes historical data, simulated data, or real-time data obtained online to achieve the desired performance of the controlled quantum system. By optimizing these parameters, the randomized controller is able to drive the joint pdf of the closed-loop description of the quantum system to a pre-specified ideal joint pdf.
The Testing phase involves using the Gaussian controller, constructed during the Optimization phase, to drive members of an ensemble to the same target state. The ensemble members are randomly generated, allowing for an assessment of the controller's performance under a range of initial conditions. This benchmarking process demonstrates the robustness and efficiency of the Gaussian controller when applied to real-world quantum systems.
```
1:Optimization:
2:Evaluate the operator \(D\) associated with the target operator \(\hat{o}\);
3:Compute the matrices \(\tilde{A}\) and \(\tilde{N}\) from equation (7);
4:Determine the predefined desired value \(o_{d}\);
5:Specify the initial state \(x_{0}\), the shifting state \(x_{e}\), and then calculate the initial value of the measurement state \(o_{0}\gets Dx_{0}\);
6:Provide the covariances \(\Sigma,G,G_{r}\), and \(\Omega\) of the multiplicative noise, \(\eta(x_{t-1})\), the measurement vector, \(o_{t}\), the ideal distribution of \(o_{t}\), and the controller, \(u_{t}\) respectively;
7:Initialize: \(t\gets 0\), \(M_{0}\leftarrow\) rand, \(P_{0}\leftarrow\) rand;
8:while\(t\neq\mathcal{H}\)do
9: Set: \(\zeta_{t}=\) sample_from_Gaussian(0, \(\Sigma\));
10: Evaluate \(A\) and \(B\) using equation (13);
11: Calculate the steady state solutions of \(M_{t}\) and \(P_{t}\) following the formulas provided in equations (39) and (40), respectively;
12: Use \(M_{t}\) and \(P_{t}\) to compute the optimal control input, \(v_{t-1}\) following equation (43) given in Theorem (3);
13: Set: \(u_{t-1}\gets v_{t-1}\);
14: Use the obtained control input from the previous step to evaluate \(x_{t}\) according to equation (16), \(x_{t}\gets Ax_{t-1}+B(x_{t-1})u_{t-1}+\zeta_{t}Ax_{t-1}\);
15: Set: \(\sigma_{t}=\) sample_from_Gaussian(0, \(G\));
16: Following equation (18), evaluate \(o_{t}\) to find the measurement state at time instant \(t\), \(o_{t}\gets Dx_{t}+\sigma_{t}\);
17: Update time: \(t\gets t+1\);
18:endwhile
19:Testing:
20:Set \(N\): the number of members of the ensemble;
21:Initialize: \(k\gets 1\);
22:while\(k\leq N\)do
23: Reset time: \(t\gets 0\);
24: while\(t\neq\mathcal{H}\)do
25: Set: \(\zeta_{t}^{k}=\) sample_from_Gaussian(0, \(\Sigma\));
26: Evaluate \(A^{k}\) and \(B^{k}\) using equation (13);
27: Using the control input obtained from the optimization phase, evaluate \(x_{t}^{k}\) according to equation (16), \(x_{t}^{k}\gets A^{k}x_{t-1}^{k}+B^{k}(x_{t-1}^{k})u_{t-1}+\zeta_{t}^{k}A^{k}x_{t-1}^{k}\);
28: Set: \(\sigma_{t}^{k}=\) sample_from_Gaussian(0, \(G\));
29: Following equation (18), evaluate \(o_{t}^{k}\) to find the measurement state at time instant \(t\), \(o_{t}^{k}\gets Dx_{t}^{k}+\sigma_{t}^{k}\);
30: Update time: \(t\gets t+1\);
31: endwhile
32: Increment ensemble member: \(k\gets k+1\);
33:endwhile
```
**Algorithm 1** Fully probabilistic control of quantum systems
Algorithm (1) will be applied in the next section to control an ensemble of randomly generated systems to the same desired state.
## 5 Results and discussions
### 5.1 Control of quantum ensemble
In this section, we demonstrate the effectiveness of our algorithm when applied to an ensemble of quantum systems. We consider a simplified Hamiltonian \(H\) as described in (2), given by:
\[H=H_{0}+u_{t}H_{1}, \tag{45}\]
where we assume that the system interacts with only one electric field, represented by \(u_{t}\). Our control objective is to transfer an ensemble of quantum systems, initially prepared in a state \(\rho_{i}\), to a predefined target state \(\rho_{d}\). This transfer is achieved through the interaction between the ensemble members and the optimal randomized electric field. The parameters of this electric field are optimized using Theorem (3) so that the joint distribution of the ensemble dynamics approaches a pre-specified ideal joint pdf. We will demonstrate that the optimized randomized controller is highly effective in controlling all members of the ensemble.
The control problem we aim to address is to maximize the fidelity between the actual state \(\rho_{t}\) and the target state \(\rho_{d}\) for all ensemble members. Here, \(\rho_{t}\) represents the density operator describing the systems at time \(t\). By applying our algorithm (1) to this ensemble of quantum systems, we showcase its efficiency and versatility in controlling complex quantum ensembles. This highlights the potential of our algorithm to tackle real-world quantum control challenges.
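Because the benchmark below is the fidelity with a pure target state, we also sketch how this quantity can be read off the vectorized state; the un-vectorization follows the ordering of Appendix A, the helper names are illustrative, and, for a pure target, the fidelity reduces to \(\langle\psi_{d}|\rho|\psi_{d}\rangle\).

```
# A minimal sketch: rebuild rho from the vectorised state (ordering of Appendix A)
# and compute the fidelity with a pure target state |psi_d>.
import numpy as np

def unvec(x, l):
    """Inverse of the vectorisation (A.2): diagonals first, then off-diagonal row/column pairs."""
    rho = np.zeros((l, l), dtype=complex)
    for k in range(l):
        rho[k, k] = x[k]
    idx = l
    for j in range(l - 1):
        for k in range(j + 1, l):      # rho_{j,j+1}, ..., rho_{j,l-1}
            rho[j, k] = x[idx]; idx += 1
        for k in range(j + 1, l):      # rho_{j+1,j}, ..., rho_{l-1,j}
            rho[k, j] = x[idx]; idx += 1
    return rho

def fidelity_pure_target(x, psi_d):
    rho = unvec(np.asarray(x), len(psi_d))
    return float(np.real(np.conj(psi_d) @ rho @ psi_d))
```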
In the following, we present the simulation results for the control of the quantum ensemble using our proposed Gaussian controller. The simulations demonstrate the robustness and effectiveness of the controller in driving the ensemble members towards the desired target state. Furthermore, we will discuss the implications of our results and the potential applications of our algorithm in quantum control and quantum information processing tasks.
#### 5.1.1 Spin-1/2 ensemble
Consider a spin-1/2 system interacting with an electric field \(u_{t}\)[11]. The Hamiltonian describing this interaction is given by:
\[H=H_{0}+u_{t}H_{1}=\frac{1}{2}\sigma_{3}+\frac{u_{t}}{2}(\sigma_{1}+\sigma_{2}), \tag{46}\]
where \(\sigma_{1}\), \(\sigma_{2}\), and \(\sigma_{3}\) are the Pauli operators given in the basis \(\left|1\right\rangle,\left|0\right\rangle\) by the Pauli matrices,
\[\sigma_{1}=\left(\begin{array}{cc}0&1\\ 1&0\end{array}\right),\quad\sigma_{2}=\left(\begin{array}{cc}0&-i\\ i&0\end{array}\right),\quad\sigma_{3}=\left(\begin{array}{cc}1&0\\ 0&-1\end{array}\right). \tag{47}\]
Furthermore, we consider that the system interacts with an environment, e.g., a surrounding vacuum state. The master equation describing this interaction is given by:
\[\frac{d\rho(t)}{dt}=-i[H,\rho(t)]+\Theta\bigg{(}\sigma_{-}\rho(t)\sigma_{+}-\frac{1}{2}\{\sigma_{+}\sigma_{-},\rho(t)\}\bigg{)},\ \ \rho(0)=\rho_{0}=\ket{0}\bra{0}, \tag{48}\]
where \(\sigma_{\pm}=\dfrac{\sigma_{1}\pm i\sigma_{2}}{2}\) and \(\Theta\) describes the coupling between the system and the environment.
In the optimization phase, the control objective is to transform the system state from its initial value \(x_{0}=(0,1,0,0)^{T}\) which corresponds to the first eigenstate of the free Hamiltonian, \(\ket{0}=(0,1)^{T}\), to the desired state \(x_{d}=(1,0,0,0)^{T}\) which corresponds to the second eigenstate of the free Hamiltonian \(\ket{1}=(1,0)^{T}\), using the proposed probabilistic quantum control approach. Consequently, the target operator is taken to be \(D=[1\ \ 0\ \ 0\ \ 0]\), which is equivalent to \(D=(\text{vec}(\Pi_{1}))^{T}\), where \(\Pi_{1}=\ket{1}\bra{1}\). The matrix elements of the \(\tilde{A}\) and \(\tilde{N}\) matrices are provided in Appendix (E). In the optimization phase of Algorithm (1), we consider \(x_{e}=(0,0,0,0)^{T}\), \(\Delta t=2.5\times 10^{-6}\), \(G_{r}=0.00001\), \(\Omega=10\), and \(\Theta=0.1\), and the system is evolved until a sufficient fidelity between the actual state and the target state is achieved.
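For concreteness, the sketch below collects the quantities of this example in code; the parameter values are those quoted in the text, the drift matrix is transcribed from Appendix E, and the control matrix \(\tilde{N}\) shown here is derived from the control part of equation (E.3) and should be treated as illustrative.

```
# A minimal sketch of the spin-1/2 setup of Section 5.1.1.  The vectorised state
# ordering follows Appendix E: (rho_00, rho_11, rho_01, rho_01*).
import numpy as np

Theta = 0.1          # system-environment coupling
dt    = 2.5e-6       # sampling period Delta t
G_r   = 1e-5         # covariance of the ideal measurement pdf
Omega = 10.0         # covariance of the ideal controller pdf
# The multiplicative- and measurement-noise covariances (Sigma, G) are not quoted in the text.

x0  = np.array([0, 1, 0, 0], dtype=complex)     # initial state |0><0|
x_e = np.zeros(4, dtype=complex)                # shifting state
D   = np.array([[1, 0, 0, 0]], dtype=complex)   # D = (vec(|1><1|))^T, picks the target population
o_d = 1.0                                       # desired measurement value

A_tilde = np.array([                            # drift matrix of equation (E.4)
    [-Theta, 0, 0, 0],
    [ Theta, 0, 0, 0],
    [0, 0, -1j - Theta / 2, 0],
    [0, 0, 0,  1j - Theta / 2]])

N_tilde = 0.5 * np.array([                      # control matrix, derived here from eq. (E.3)
    [0, 0,  1 + 1j, -(1 - 1j)],
    [0, 0, -(1 + 1j),  1 - 1j],
    [ 1 - 1j, -(1 - 1j), 0, 0],
    [-(1 + 1j),  1 + 1j, 0, 0]])
```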
Following the optimization phase, the obtained control signal is used to control one of the members of the spin-\(1/2\) system. Figure (1a) illustrates the obtained time evolution of the populations of the two states \(\ket{0}\bra{0}\) and \(\ket{1}\bra{1}\) of the selected member, and Figure (1b) displays the time evolution of the optimal control signal responsible for achieving the control objective. We can clearly see that the population of the state \(\ket{1}\bra{1}\) has reached its desired value, i.e., \(o_{d}=1\), and it is maintained at that value for extended periods. This indicates that the optimal electric field is efficient in preventing the environmentally induced relaxation of the system to the ground state \(\ket{0}\bra{0}\), signifying that the optimal control can counteract the natural effect of the environment, i.e., relaxation.
Furthermore, the optimized randomized controller obtained from the optimization phase is applied to a sample of ensemble members, each corresponding to different realizations of the state time evolution generated through the noise term \(\eta(x_{t-1})\). This experiment allows us to assess the performance of the controller on a diverse set of members, each representing a unique noise-driven evolution. Figure (2) displays the fidelities between the target state \(\ket{1}\bra{1}\) and the states of \(1000\) ensemble members interacting with the optimal control electric field. From this figure, we can clearly see that the fidelities for the state transition of all tested \(1000\) members lie in the interval of \([0.9932,1]\) with a mean value equal to \(0.9945\). This demonstrates that the algorithm exhibits exceptional performance in this specific example. It is worth mentioning that we have considered members which are interacting with their external environment. This implies that the optimal electric field is capable of driving the ensemble members to the desired state and preventing their relaxation, even in the presence of environmental effects. The results underscore the robustness and effectiveness of our algorithm in addressing real-world quantum control challenges that involve noise and environmental interactions.
#### 5.1.2 \(\Lambda\)-type atomic ensemble
We now investigate a \(\Lambda\)-type atomic system [11] prepared in an initial state and interacting with an electric field \(u_{t}\). The Hamiltonian describing this interaction is represented by matrices in the basis \(\ket{2},\ket{1},\ket{0}\) as follows:
\[H=H_{0}+u_{t}H_{1}=\left(\begin{array}{ccc}\dfrac{3}{2}&0&0\\ 0&1&0\\ 0&0&0\end{array}\right)+\left(\begin{array}{ccc}0&0&1\\ 0&0&1\\ 1&1&0\end{array}\right)u_{t}. \tag{49}\]
Figure 1: (1a): Time evolution of the populations of the \(\ket{0}\bra{0}\) and \(\ket{1}\bra{1}\) states. (1b): Time evolution of the control signal \(u_{t}\) responsible for achieving the control objective.
Figure 2: The testing performance of the optimal control for the spin-1/2 ensemble. The samples are randomly generated through the noise term \(\eta(x_{t-1})\).
Moreover, we assume that the system interacts with an environment, with the system-environment coupling being described by a single Lindblad operator, \(L_{02}=\sqrt{\Theta}\left|0\right\rangle\left\langle 2\right|\), where \(\Theta\equiv\Theta_{2\to 0}\) represents the dissipative transition rate from Hamiltonian eigenstate \(\left|2\right\rangle\) to the Hamiltonian ground state \(\left|0\right\rangle\).
The control objective aims to transfer the system from state \(\left|0\right\rangle\left\langle 0\right|\) to the desired state \(\rho_{d}=\left|\psi_{d}\right\rangle\left\langle\psi_{d}\right|\), with \(\left|\psi_{d}\right\rangle=\dfrac{1}{\sqrt{2}}(\left|1\right\rangle+\left|2\right\rangle)\). To achieve this, we must maximize the fidelity between the actual state and the target state \(\rho_{d}\). The target operator \(D\) in (18) now takes the form \(D=(\mathrm{vec}(\Pi_{d}))^{T}\), where \(\Pi_{d}=\left|\psi_{d}\right\rangle\left\langle\psi_{d}\right|\) represents the projector onto the target state \(\left|\psi_{d}\right\rangle\). By focusing on this control objective, we can ensure a high degree of precision in the transfer of the system between states, even when accounting for the environmental interactions.
Figure 4: The testing performance of the optimal control for the three-level system ensemble. The samples are randomly generated through the noise term \(\eta(x_{t-1})\).
Figure 3: (3a): Time evolution of the populations of the \(\left|0\right\rangle\left\langle 0\right|\), \(\left|1\right\rangle\left\langle 1\right|\) and \(\left|2\right\rangle\left\langle 2\right|\) states. (3b): Time evolution of the control signal \(u_{t}\) responsible for achieving the control objective.
This approach highlights the versatility of the probabilistic quantum control method in effectively managing different quantum systems and control objectives.
The elements of the \(\tilde{A}\) and \(\tilde{N}\) matrices are provided in Appendix (F). These matrices are then employed to compute the time-evolution of matrices \(A\) and \(B\) using (13) at each time step. With these matrices and setting \(x_{e}=(0,0,0,0,0,0,0,0,0)^{T}\), \(\Delta t=10^{-9}\), \(G_{r}=10^{-7}\), \(\Omega=10000\), and \(\Theta=0.9\), we evaluate the matrices \(M_{t}\) and \(P_{t}\) defined in equations (39) and (40), respectively, at each instant of time, as discussed in the optimization phase of Algorithm (1). The evolution of \(u_{t}\) is then obtained by substituting the calculated \(M_{t}\) and \(P_{t}\) matrices into (43). We repeat these steps until the fidelity between the actual state and the target state reaches its maximum value of unity, signifying that the system's state has reached the desired target state.
Following the optimization phase, the obtained optimal randomized controller is used to control one of the members of the \(\Lambda\)-type system. Figure (3a) illustrates the time evolution of the populations of states \(\ket{0}\bra{0}\), \(\ket{1}\bra{1}\), and \(\ket{2}\bra{2}\) of the selected ensemble member interacting with the calculated electric field. The time evolution of the obtained optimal control signal is depicted in Figure (3b). It is evident that the optimal control objective is achieved after only a few steps. As a result, the optimal control is capable of preserving quantum coherence, i.e., the quantum superposition, under the influence of decoherence.
To further demonstrate the capability of the designed randomised controller to effectively control the members of the \(\Lambda-\)type system, the optimal controller is applied to other randomly selected ensemble members generated through the noise term \(\eta(x_{t-1})\). These ensemble members represent different realizations of the state time evolution influenced by the noise term \(\eta(x_{t-1})\). Figure (4) displays the fidelities between the target state and the final states of \(1000\) ensemble members interacting with the optimal controller. We observe that the fidelities are almost equal to \(1\), implying that all members successfully reached the target state \(\ket{\psi_{d}}\). This result highlights the potential effectiveness of the proposed method in controlling \(\Lambda\)-type atomic systems under environmental effects.
## 6 Final comments
In this paper, we have presented a novel approach for deriving optimal randomized controllers for quantum ensembles. Our proposed method consists of two main phases: optimization and testing. The optimization phase involves constructing a control for a generalized system of the ensemble using a probabilistic approach, which is based on describing the system dynamics with probability density functions and minimizing the distance between the actual closed-loop joint pdf and the desired joint pdf. We first established a general control solution for quantum systems described by arbitrary pdfs. The solution is then demonstrated on a specific case involving quantum systems affected by Gaussian noise and described by Gaussian pdfs, which yielded an analytical form for the randomized controller.
In the testing phase, the controller obtained during the optimization phase is applied to a large number of randomly selected ensemble members. Our results demonstrated that, with the optimized controller, all
members of the quantum ensemble that were also affected by environmental noises and interactions were successfully steered to the same target state, illustrating the effectiveness of our proposed methodology. In practical applications, the verification of successful steering to the target state would involve performing quantum state tomography on the final state of the system, and calculating the fidelity between the actual state and the desired state. Moreover, it is important to note a crucial aspect of quantum mechanics: the act of measurement itself can influence the state of the quantum system, a phenomenon known as measurement backaction. While this effect was not explicitly accounted for in the current work, it is an important consideration as it can potentially impact the effectiveness of the control field. Future work could explore integrating the effect of measurement backaction into the control methodology to ensure robust control even in the presence of frequent measurements.
In practical settings, the approach entails deriving the control field using historical, simulated, or real-time data obtained from quantum systems with the introduced method, and subsequently applying the electric field to the quantum ensemble.
The main advantages of the proposed methodology include:
* It provides a general form for the time evolution of the controller for systems affected by arbitrary noise, making it suitable for a wide range of quantum systems.
* It is easy to implement numerically, as explained in Algorithm (1), and does not require significant computational resources.
* The method is highly adaptable and can be applied to various quantum systems and control objectives, making it appealing to physicists and experimentalists working with quantum systems.
Our approach allows for the efficient control and manipulation of quantum systems, even in the presence of environmental noises and interactions. This can be particularly useful for preserving quantum coherence and achieving specific control objectives in various experimental settings.
Possible extensions of this work include considering quantum systems with arbitrary forms of noise affecting their evolutions. This would further expand the applicability and versatility of our proposed method, enabling it to address a broader range of quantum systems and control challenges.
## Acknowledgements:
This work was supported by the EPSRC grant EP/V048074/1.
## Appendix A Vectorization of the density operator
In equation (1), we consider a density matrix \(\rho(t)\in\mathbf{C}^{l\times l}\) given by:
\[\rho(t)=(\rho(t))^{\dagger}=\left(\begin{array}{cccc}\rho_{0,0}(t)&\rho_{0,1}( t)&\ldots&\rho_{0,l-1}(t)\\ \rho_{1,0}(t)&\rho_{1,1}(t)&\ldots&\rho_{1,l-1}(t)\\ \vdots&\vdots&\ddots&\vdots\\ \rho_{l-1,0}(t)&\rho_{l-1,1}(t)&\ldots&\rho_{l-1,l-1}(t)\end{array}\right).\] (A.1)
The vectorization of \(\rho(t)\) used in (9) is:
\[\tilde{x}(t) =\mathrm{vec}(\rho(t))\] \[=\left[\rho_{0,0}(t)\quad\rho_{1,1}(t)\ldots\rho_{l-1,l-1}(t) \quad\rho_{0,1}(t)\ldots\rho_{0,l-1}(t)\quad\rho_{1,0}(t)\ldots\rho_{l-1,0}(t )\ldots\ldots\ldots\rho_{l-2,l-1}(t)\quad\rho_{l-1,l-2}(t)\right]^{T},\] (A.2)
where \(T\) stands for the transpose operation. Thus, for \(l=3\) the vectorization of the density matrix \(\rho(t)=\left(\begin{array}{cccc}\rho_{00}(t)&\rho_{01}(t)&\rho_{02}(t)\\ \rho_{10}(t)&\rho_{11}(t)&\rho_{12}(t)\\ \rho_{20}(t)&\rho_{21}(t)&\rho_{22}(t)\end{array}\right)\) is trivially given by:
\[\tilde{x}(t)=\tilde{x}_{t}=\left[\rho_{00}(t)\quad\rho_{11}(t)\quad\rho_{22}( t)\quad\rho_{01}(t)\quad\rho_{02}(t)\quad\rho_{10}(t)\quad\rho_{20}(t)\quad \rho_{12}(t)\quad\rho_{21}(t)\right]^{T}.\] (A.3)
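A small helper implementing this ordering could look as follows (an illustrative sketch, not part of the paper).

```
# A minimal sketch of the vectorisation of equation (A.2): diagonal elements first,
# followed by the off-diagonal row/column pairs.
import numpy as np

def vec_rho(rho):
    l = rho.shape[0]
    out = [rho[k, k] for k in range(l)]
    for j in range(l - 1):
        out += [rho[j, k] for k in range(j + 1, l)]   # rho_{j,j+1}, ..., rho_{j,l-1}
        out += [rho[k, j] for k in range(j + 1, l)]   # rho_{j+1,j}, ..., rho_{l-1,j}
    return np.array(out)
```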
## Appendix B Complex Gaussian and Gaussian distributions
For nonsingular covariance matrix \(\Gamma_{t}\), the complex Gaussian distribution of a complex random variable \(x_{t}\in\mathbf{C}^{n}\) is given by:
\[\mathcal{N}_{\mathcal{C}}(\mu_{t},\Gamma_{t})=\frac{1}{\pi^{n}|\Gamma_{t}|} \exp\bigg{[}-(x_{t}-\mu_{t})^{\dagger}\Gamma_{t}{}^{-1}(x_{t}-\mu_{t})\bigg{]},\] (B.1)
where \(\mu_{t}=E(x_{t})\), and \(\Gamma_{t}=\mathrm{E}((x_{t}-\mu_{t})(x_{t}-\mu_{t})^{\dagger})\), with \(\mathrm{E}(x_{t})\) being the expected value of \(x_{t}\), \(|\Gamma_{t}|\) is the determinant of \(\Gamma_{t}\), and \(\dagger\) stands for the complex conjugate transpose operation.
The Gaussian distribution for a real random variable \(o_{t}\in\mathbf{R}^{n}\) is given by:
\[\mathcal{N}(o_{m},G)=\frac{1}{\sqrt{(2\pi)^{n}|G|}}\exp\bigg{[}-0.5(o_{t}-o_{m} )^{T}G^{-1}(o_{t}-o_{m})\bigg{]}.\] (B.2)
where \(o_{m}=E(o_{t})\) and \(G=\mathrm{E}((o_{t}-o_{m})(o_{t}-o_{m})^{T})\) are respectively the mean and covariance matrices, and \(T\) stands for the transpose operation. Since \(o_{t}\) belongs to the set of real numbers \(\mathbf{R}^{n}\), the Gaussian distribution (B.2) can be equivalently restructured as:
\[\mathcal{N}(o_{m},G)=\frac{1}{\sqrt{(2\pi)^{n}|G|}}\exp\bigg{[}-0.5(o_{t}-o_{m })^{\dagger}G^{-1}(o_{t}-o_{m})\bigg{]}.\] (B.3)
This recasting of the real Gaussian distribution into the form of a complex Gaussian distribution provides a consistency in the derivation of the randomized controller. It not only simplifies the presentation but also aligns well with the inherent nature of quantum systems, where the state variable is complex. Thus, the adapted representation facilitates a more coherent theoretical framework for our probabilistic quantum control approach.
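For reference, a direct numerical evaluation of the density (B.1) can be sketched as follows (an illustrative helper; the function name is an assumption).

```
# A minimal sketch of the complex Gaussian density of equation (B.1).
import numpy as np

def complex_gaussian_pdf(x, mu, Gamma):
    n = len(mu)
    diff = x - mu
    quad = np.real(np.conj(diff) @ np.linalg.solve(Gamma, diff))   # (x-mu)^† Gamma^{-1} (x-mu)
    norm = (np.pi ** n) * np.real(np.linalg.det(Gamma))
    return np.exp(-quad) / norm
```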
## Appendix C Evaluation of the optimal cost-to-go, \(\gamma(x_{t-1})\)
In this appendix we show how to obtain the optimal cost-to-go function (38) stated in Theorem (2). For this purpose, we first evaluate the coefficient \(\beta(u_{t-1},x_{t-1})\) defined in equation (28), repeated here:
\[\beta(u_{t-1},x_{t-1})=\int s(x_{t}|u_{t-1},x_{t-1})s(o_{t}|x_{t})\times\ln\bigg{(}\frac{s(o_{t}|x_{t})}{{}^{I}s(o_{t}|x_{t})}\frac{1}{\gamma(x_{t})}\bigg{)}\mathrm{d}x_{t}\mathrm{d}o_{t}.\] (C.1)
Using equations (32), (36) and (38) we evaluate:
\[\ln\bigg{(}\frac{s(o_{t}|x_{t})}{{}^{I}s(o_{t}|x_{t})}\frac{1}{\gamma(x_{t})}\bigg{)}=-0.5(o_{t}-Dx_{t})^{\dagger}G^{-1}(o_{t}-Dx_{t})+0.5(o_{t}-o_{d})^{\dagger}G_{r}^{-1}(o_{t}-o_{d})+0.5x_{t}^{\dagger}M_{t}x_{t}+0.5P_{t}x_{t}+0.5\omega_{t}+0.5\ln\bigg{(}\frac{|G_{r}|}{|G|}\bigg{)},\] (C.2)
which implies that:
\[\ln\bigg{(}\frac{s(o_{t}|x_{t})}{{}^{I}s(o_{t}|x_{t})}\frac{1}{\gamma(x_{t})}\bigg{)} =-0.5o_{t}^{\dagger}G^{-1}o_{t}-0.5x_{t}^{\dagger}D^{\dagger}G^{-1}Dx_{t}+o_{t}^{\dagger}G^{-1}Dx_{t}+0.5o_{t}^{\dagger}G_{r}^{-1}o_{t}-o_{t}^{\dagger}G_{r}^{-1}o_{d}+0.5x_{t}^{\dagger}M_{t}x_{t}+0.5P_{t}x_{t}+0.5\omega_{t}\] \[+0.5o_{d}^{\dagger}G_{r}^{-1}o_{d}+0.5\ln\bigg{(}\frac{|G_{r}|}{|G|}\bigg{)}\] \[=-0.5o_{t}^{\dagger}(G^{-1}-G_{r}^{-1})o_{t}+o_{t}^{\dagger}(G^{-1}Dx_{t}-G_{r}^{-1}o_{d})-0.5x_{t}^{\dagger}D^{\dagger}G^{-1}Dx_{t}+0.5x_{t}^{\dagger}M_{t}x_{t}+0.5P_{t}x_{t}+0.5\omega_{t}\] \[+0.5o_{d}^{\dagger}G_{r}^{-1}o_{d}+0.5\ln\bigg{(}\frac{|G_{r}|}{|G|}\bigg{)}.\] (C.3)
By substituting (C.3) into (C.1), and integrating over \(o_{t}\) we get:
\[\beta(u_{t-1},x_{t-1})=\int s(x_{t}|u_{t-1},x_{t-1})s(o_{t}|x_{t })\bigg{(}-0.5o_{t}^{\dagger}(G^{-1}-G_{r}^{-1})o_{t}+o_{t}^{\dagger}(G^{-1}Dx _{t}-G_{r}^{-1}o_{d})-0.5x_{t}^{\dagger}D^{\dagger}G^{-1}Dx_{t}\] \[+0.5x_{t}^{\dagger}M_{t}x_{t}+0.5P_{t}x_{t}+0.5\omega_{t}+0.5o_{d }^{\dagger}G_{r}^{-1}o_{d}+0.5\ln\bigg{(}\frac{|G_{r}|}{|G|}\bigg{)}\bigg{)} \mathrm{d}x_{t}\mathrm{d}o_{t}\] \[=\int s\,(x_{t}|u_{t-1},x_{t-1})\,\bigg{(}0.5x_{t}^{\dagger}D^{ \dagger}G_{r}^{-1}Dx_{t}-x_{t}^{\dagger}D^{\dagger}G_{r}^{-1}o_{d}+0.5x_{t}^{ \dagger}M_{t}x_{t}+0.5P_{t}x_{t}+0.5\omega_{t}+0.5o_{d}^{\dagger}G_{r}^{-1}o_ {d}\] \[-0.5\,\mathrm{Tr}((G^{-1}-G_{r}^{-1})G)+0.5\ln\bigg{(}\frac{|G_{r} |}{|G|}\bigg{)}\bigg{)}dx_{t}\] \[=\int s\,(x_{t}|u_{t-1},x_{t-1})\,\bigg{(}0.5x_{t}^{\dagger}(D^{ \dagger}G_{r}^{-1}D+M_{t})x_{t}+0.5(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)x_{t}+0.5 \omega_{t}+0.5o_{d}^{\dagger}G_{r}^{-1}o_{d}+0.5\ln\bigg{(}\frac{|G_{r}|}{|G|} \bigg{)}\] \[-0.5\,\mathrm{Tr}((G^{-1}-G_{r}^{-1})G)\bigg{)}dx_{t}.\] (C.4)
Integrating over \(x_{t}\) yields the following form of \(\beta(u_{t-1},x_{t-1})\):
\[\beta(u_{t-1},x_{t-1})=0.5\mu_{t}^{\dagger}(D^{\dagger}G_{r}^{-1}D+ M_{t})\mu_{t}+0.5(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)\mu_{t}+0.5\omega_{t}+0.5o_{d}^ {\dagger}G_{r}^{-1}o_{d}+0.5\ln\left(\frac{|G_{r}|}{|G|}\right)\] \[-0.5\operatorname{Tr}((G^{-1}-G_{r}^{-1})G)+0.5\operatorname{Tr} ((D^{\dagger}G_{r}^{-1}D+M_{t})\Gamma_{t}).\] (C.5)
Using the definition of \(\Gamma_{t}\) given in equation (31), the last term in the above equation can be evaluated to give:
\[\operatorname{Tr}((D^{\dagger}G_{r}^{-1}D+M_{t})\Gamma_{t}) =\operatorname{Tr}((D^{\dagger}G_{r}^{-1}D+M_{t})Ax_{t-1}\Sigma x _{t-1}^{\dagger}A^{\dagger}),\] \[=<\text{cyclic permutation}>\] \[\operatorname{Tr}(A^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})Ax_ {t-1}\Sigma x_{t-1}^{\dagger}),\] \[=<\text{identity }\operatorname{Tr}(Qzz^{\dagger})=z^{\dagger}Qz>\] \[x_{t-1}^{\dagger}A^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})Ax_ {t-1}\Sigma.\] (C.6)
Remembering that \(\mu_{t}=Ax_{t-1}+Bu_{t-1}\), the relation (C.5) becomes:
\[\beta(u_{t-1},x_{t-1})=0.5(Ax_{t-1}+Bu_{t-1})^{\dagger}(D^{\dagger }G_{r}^{-1}D+M_{t})(Ax_{t-1}+Bu_{t-1})+0.5(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D) (Ax_{t-1}+Bu_{t-1})\] \[+0.5\omega_{t}+0.5o_{d}^{\dagger}G_{r}^{-1}o_{d}+0.5\ln\left( \frac{|G_{r}|}{|G|}\right)-0.5\operatorname{Tr}((G^{-1}-G_{r}^{-1})G)+0.5x_{t -1}^{\dagger}A^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})\Sigma Ax_{t-1}\] \[=0.5x_{t-1}^{\dagger}A^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})Ax_ {t-1}+0.5(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)Ax_{t-1}+0.5u_{t-1}^{\dagger}B^{ \dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})Bu_{t-1}\] \[+u_{t-1}^{\dagger}B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})Ax_{t -1}+0.5(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)Bu_{t-1}+0.5\omega_{t}+0.5o_{d}^{ \dagger}G_{r}^{-1}o_{d}+0.5\ln\left(\frac{|G_{r}|}{|G|}\right)\] \[-0.5\operatorname{Tr}((G^{-1}-G_{r}^{-1})G)+0.5x_{t-1}^{\dagger}A ^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})\Sigma Ax_{t-1}\] (C.7)
By substituting the form of \(\beta\) found in (C.7) and the ideal distribution of the controller provided in equation (37), into the definition of \(\gamma(x_{t-1})\) given in (27) we find that:
\[\gamma(x_{t-1})=\int{}^{I}c(u_{t-1}|x_{t-1})\exp[-\beta(u_{t-1},x_{ t-1})]\mathrm{d}u_{t-1},\] \[=(2\pi)^{-1/2}|\Omega|^{-1/2}\int\exp\bigg{[}-0.5(u_{t-1}-u_{r})^ {\dagger}\Omega^{-1}(u_{t-1}-u_{r})-0.5x_{t-1}^{\dagger}A^{\dagger}(D^{ \dagger}G_{r}^{-1}D+M_{t})Ax_{t-1}\] \[-0.5(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)Ax_{t-1}-0.5u_{t-1}^{ \dagger}B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})Bu_{t-1}-u_{t-1}^{\dagger}B^ {\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})Ax_{t-1}\] \[-0.5(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)Bu_{t-1}-0.5\omega_{t}-0. 5o_{d}^{\dagger}G_{r}^{-1}o_{d}-0.5\ln\bigg{(}\frac{|G_{r}|}{|G|}\bigg{)}+0.5 \operatorname{Tr}((G^{-1}-G_{r}^{-1})G)\] \[-0.5x_{t-1}^{\dagger}A^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t}) \Sigma Ax_{t-1}\bigg{]}\mathrm{d}u_{t-1}\] \[=(2\pi)^{-1/2}|\Omega|^{-1/2}\mathrm{exp}\bigg{(}-0.5x_{t-1}^{ \dagger}A^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})Ax_{t-1}-0.5(P_{t}-2o_{d}^{ \dagger}G_{r}^{-1}D)Ax_{t-1}-0.5\omega_{t}-0.5o_{d}^{\dagger}G_{r}^{-1}o_{d}\] \[-0.5\ln\bigg{(}\frac{|G_{r}|}{|G|}\bigg{)}+0.5\operatorname{Tr}(( G^{-1}-G_{r}^{-1})G)-0.5x_{t-1}^{\dagger}A^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t}) \Sigma Ax_{t-1}-0.5u_{r}^{\dagger}\Omega^{-1}u_{r}\bigg{)}\] \[\int\exp\bigg{(}-0.5u_{t-1}^{\dagger}(\Omega^{-1}+B^{\dagger}(D^{ \dagger}G_{r}^{-1}D+M_{t})B)u_{t-1}+u_{t-1}^{\dagger}\big{(}\Omega^{-1}u_{r}- B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})Ax_{t-1}\] \[-0.5B^{\dagger}(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)^{\dagger} \big{)}\bigg{)}\mathrm{d}u_{t-1}\] (C.8)
By completing the square with respect to \(u_{t-1}\) in the above equation, and using the solution of the general multiple integral given in Theorem (10.5.1) in Ref [43], it follows that:
\[\gamma(x_{t-1})=\exp\bigg{(}-0.5x_{t-1}^{\dagger}A^{\dagger}(D^{ \dagger}G_{r}^{-1}D+M_{t})Ax_{t-1}-0.5(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)Ax_{ t-1}-0.5\omega_{t}-0.5o_{d}^{\dagger}G_{r}^{-1}o_{d}\] \[-0.5\ln\bigg{(}\frac{|G_{r}|}{|G|}\bigg{)}+0.5\operatorname{Tr}( (G^{-1}-G_{r}^{-1}))-0.5x_{t-1}^{\dagger}A^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_ {t})\Sigma Ax_{t-1}-0.5u_{r}^{\dagger}\Omega^{-1}u_{r}\bigg{)}\] \[\times\exp\bigg{(}0.5\big{(}\Omega^{-1}u_{r}-B^{\dagger}(D^{ \dagger}G_{r}^{-1}D+M_{t})Ax_{t-1}-0.5B^{\dagger}(P_{t}-2o_{d}^{\dagger}G_{r}^ {-1}D)^{\dagger}\big{)}^{\dagger}\] \[\big{(}\Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})B \big{)}^{-1}\big{(}\Omega^{-1}u_{r}-B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t}) Ax_{t-1}-0.5B^{\dagger}(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)^{\dagger}\big{)}\] \[\bigg{)}\times|\Omega|^{-1/2}|\Omega^{-1}+B^{\dagger}(D^{\dagger }G_{r}^{-1}D+M_{t})B|^{-1/2}\] (C.9)
Finally the form provided in (38) in Theorem (2) is found:
\[-\ln\left(\gamma(x_{t-1})\right)\] \[=0.5x_{t-1}^{\dagger}\bigg{(}A^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t })A-A^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})^{\dagger}B\big{(}\Omega^{-1}+B^{ \dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})B\big{)}^{-1}\] \[B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})A+A^{\dagger}(D^{\dagger }G_{r}^{-1}D+M_{t})\Sigma A\bigg{)}x_{t-1}\] \[+0.5\bigg{(}(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)A+2(\Omega^{-1}u_{ r}-0.5B^{\dagger}(P_{t}^{\dagger}-2D^{\dagger}G_{r}^{-1}o_{d}))^{\dagger}\big{(} \Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})B\big{)}^{-1}\] \[B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})A\bigg{)}x_{t-1}\] \[+0.5\bigg{(}\omega_{t}+o_{d}^{\dagger}G_{r}^{-1}o_{d}+\ln\bigg{(} \frac{|G_{r}|}{|G|}\bigg{)}-\text{Tr}\left((G^{-1}-G_{r}^{-1})G\right)+u_{r}^ {\dagger}\Omega^{-1}u_{r}\] \[-\big{(}\Omega^{-1}u_{r}-0.5B^{\dagger}(P_{t}^{\dagger}-2D^{ \dagger}G_{r}^{-1}o_{d})\big{)}^{\dagger}\big{(}\Omega^{-1}+B^{\dagger}(D^{ \dagger}G_{r}^{-1}D+M_{t})B\big{)}^{-1}\big{(}\Omega^{-1}u_{r}-0.5B^{\dagger}( P_{t}^{\dagger}-2D^{\dagger}G_{r}^{-1}o_{d})\big{)}\] \[-2\ln(|\Omega|^{-1/2}|\Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{- 1}D+M_{t})B|^{-1/2})\bigg{)}.\] (C.10)
## Appendix D Calculation of the control distribution function
By substituting (37), (C.7) and (C.10) into (26), it follows that the optimal control distribution is given by:
\[c(u_{t-1}|x_{t-1})=(2\pi)^{-1/2}|\Omega^{-1}+B^{\dagger}(D^{ \dagger}G_{r}^{-1}D+M_{t})B|^{1/2}\exp\bigg{[}-0.5u_{t-1}^{\dagger}\bigg{(} \Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})B\bigg{)}u_{t-1}\] \[+u_{t-1}^{\dagger}\bigg{(}\Omega^{-1}u_{r}-B^{\dagger}(D^{\dagger }G_{r}^{-1}D+M_{t})Ax_{t-1}-0.5B^{\dagger}(P_{t}-2o_{d}^{\dagger}G_{r}^{-1}D)^{ \dagger}\bigg{)}-0.5\bigg{(}\Omega^{-1}u_{r}-B^{\dagger}(D^{\dagger}G_{r}^{-1} D+M_{t})Ax_{t-1}\] \[-0.5B^{\dagger}(P_{t}^{\dagger}-2D^{\dagger}G_{r}^{-1}o_{d}) \bigg{)}^{\dagger}\bigg{(}\Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t })B\bigg{)}^{-1}\bigg{(}\Omega^{-1}u_{r}-B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_ {t})Ax_{t-1}\] \[-0.5B^{\dagger}(P_{t}^{\dagger}-2D^{\dagger}G_{r}^{-1}o_{d}) \bigg{)}\bigg{]}.\] (D.1)
which can be written as:
\[c(u_{t-1}|x_{t-1}) =(2\pi)^{-1/2}|\Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{ t})B|^{1/2}\] (D.2) \[\times\exp\bigg{[}-0.5(u_{t-1}-v_{t-1})^{\dagger}\bigg{(}\Omega^ {-1}+B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})B\bigg{)}(u_{t-1}-v_{t-1})\bigg{]},\]
where,
\[v_{t-1}=\bigg{(}\Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})B\bigg{)}^ {-1}\bigg{(}\Omega^{-1}u_{r}-B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})Ax_{t-1} -0.5B^{\dagger}(P_{t}^{\dagger}-2D^{\dagger}G_{r}^{-1}o_{d})\bigg{)},\] (D.3)
This means that the pdf of the controller follows a normal distribution, as given by:
\[c(u_{t-1}|x_{t-1})\sim\mathcal{N}(v_{t-1},R_{t}),\] (D.4)
where
\[R_{t}=\bigg{(}\Omega^{-1}+B^{\dagger}(D^{\dagger}G_{r}^{-1}D+M_{t})B \bigg{)}^{-1}.\] (D.5)
## Appendix E State space model for spin \(\frac{1}{2}\)
The evolution of a spin-1/2 system in interaction with an external environment can be described by the following master equation:
\[\frac{d\rho(t)}{dt}=-i[H,\rho(t)]+\Theta\bigg{(}\sigma_{-}\rho(t) \sigma_{+}-\frac{1}{2}\{\sigma_{+}\sigma_{-},\rho(t)\}\bigg{)},\] (E.1)
where, as given in equation (46), \(H=\frac{1}{2}\sigma_{3}+\frac{1}{2}(\sigma_{1}+\sigma_{2})u_{t}\) such that \(u_{t}\) is an external electric field, and \(\sigma_{1},\sigma_{2},\sigma_{3}\) are the relevant Pauli matrices and \(\sigma_{\pm}=\frac{\sigma_{1}\pm i\sigma_{2}}{2}\). The Hamiltonian \(H\) can be written in matrix form as follows:
\[H=\frac{1}{2}\left(\begin{array}{cc}1&u_{t}(1-i)\\ u_{t}(1+i)&-1\end{array}\right).\] (E.2)
Writing the density operator in terms of its elements, the von Neumann equation (E.1) can be rewritten as:
\[\frac{d}{dt}\left(\begin{array}{cc}\rho_{00}(t)&\rho_{01}(t)\\ \rho_{01}^{*}(t)&\rho_{11}(t)\end{array}\right)=\frac{-i}{2}\left(\begin{array}{cc}u_{t}\big{(}(1-i)\rho_{01}^{*}(t)-(1+i)\rho_{01}(t)\big{)}&2\rho_{01}(t)+u_{t}(1-i)(\rho_{11}(t)-\rho_{00}(t))\\ -2\rho_{01}^{*}(t)+u_{t}(1+i)(\rho_{00}(t)-\rho_{11}(t))&u_{t}\big{(}(1+i)\rho_{01}(t)-(1-i)\rho_{01}^{*}(t)\big{)}\end{array}\right)-\left(\begin{array}{cc}\Theta\rho_{00}(t)&\frac{\Theta}{2}\rho_{01}(t)\\ \frac{\Theta}{2}\rho_{01}^{*}(t)&-\Theta\rho_{00}(t)\end{array}\right),\] (E.3)
where \(\rho_{00}(t),\rho_{01}(t),\rho_{01}^{*}(t)\) and \(\rho_{11}(t)\) are the elements of the density operator \(\rho(t)\). Vectorizing the above equation yields:
\[\frac{d}{dt}\left(\begin{array}{c}\rho_{00}(t)\\ \rho_{11}(t)\\ \rho_{01}(t)\\ \rho_{01}^{*}(t)\end{array}\right)=\underbrace{\left(\begin{array}{cccc}-\Theta&0&0&0\\ \Theta&0&0&0\\ 0&0&-i-\frac{\Theta}{2}&0\\ 0&0&0&i-\frac{\Theta}{2}\end{array}\right)}_{\tilde{A}}\underbrace{\left(\begin{array}{c}\rho_{00}(t)\\ \rho_{11}(t)\\ \rho_{01}(t)\\ \rho_{01}^{*}(t)\end{array}\right)}_{x(t)}\] (E.4)
from which we find the state equation for the spin-1/2 system to be given by:
\[\frac{dx(t)}{dt}=(\tilde{A}+i\tilde{N}u_{t})x(t).\] (E.6)
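As an independent cross-check (not part of the original derivation), the short NumPy sketch below numerically rebuilds the vectorized generator of Eqs. (E.4) and (E.6) directly from the master equation (E.1); the numerical values chosen for \(\Theta\) and \(u_{t}\) are arbitrary illustrative assumptions.

```
# Illustrative NumPy sketch (not from the paper): numerically rebuild the vectorized
# generator of the spin-1/2 master equation (E.1) and split it as A_tilde + i*u_t*N_tilde,
# cf. Eqs. (E.4) and (E.6). Theta and u_t below are arbitrary assumed values.
import numpy as np

Theta, u_t = 0.5, 0.3

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
sm, sp = (s1 - 1j * s2) / 2, (s1 + 1j * s2) / 2          # sigma_-, sigma_+

# Ordering of the state vector x = [rho_00, rho_11, rho_01, rho_01*]
basis = [(0, 0), (1, 1), (0, 1), (1, 0)]

def generator(u):
    """4x4 matrix G(u) such that d/dt x(t) = G(u) x(t) for the master equation (E.1)."""
    H = 0.5 * s3 + 0.5 * (s1 + s2) * u
    G = np.zeros((4, 4), dtype=complex)
    for col, (i, j) in enumerate(basis):
        E = np.zeros((2, 2), dtype=complex)
        E[i, j] = 1.0                                      # act on one basis element at a time
        drho = -1j * (H @ E - E @ H) \
               + Theta * (sm @ E @ sp - 0.5 * (sp @ sm @ E + E @ sp @ sm))
        G[:, col] = [drho[a, b] for (a, b) in basis]
    return G

A_tilde = generator(0.0)                                   # field-free part, cf. Eq. (E.4)
N_tilde = (generator(u_t) - A_tilde) / (1j * u_t)          # control part entering Eq. (E.6)
print(np.round(A_tilde, 3))
print(np.round(N_tilde, 3))
```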
## Appendix F State space model for a three level system
The Hamiltonian describing the interaction between a \(\Lambda\)-type atomic system, examined in Section (5.1.2), and an electric field \(u_{t}\) can be written in matrix form as follows:
\[H=H_{0}+u_{t}H_{1}=\left(\begin{array}{ccc}\frac{3}{2}&0&0\\ 0&1&0\\ 0&0&0\end{array}\right)+\left(\begin{array}{ccc}0&0&1\\ 0&0&1\\ 1&1&0\end{array}\right)u_{t}=\left(\begin{array}{ccc}\frac{3}{2}&0&u_{t}\\ 0&1&u_{t}\\ u_{t}&u_{t}&0\end{array}\right)\] (F.1)
By assuming that the system interacts with an external environment described by one Lindblad operator, \(L_{02}=\sqrt{\Theta}\left|0\right\rangle\left\langle 2\right|\), and expanding the density operator in terms of its matrix elements, the von-Neumann equation given in (1) can be written as:
\[i\frac{d}{dt}\left(\begin{array}{ccc}\rho_{00}(t)&\rho_{01}(t)& \rho_{02}(t)\\ \rho_{01}^{*}(t)&\rho_{11}(t)&\rho_{12}(t)\\ \rho_{02}^{*}(t)&\rho_{12}^{*}(t)&\rho_{22}(t)\end{array}\right)=\] (F.2) \[\left(\begin{array}{ccc}u_{t}\big{(}\rho_{02}^{*}(t)-\rho_{02}( t)\big{)}&\frac{1}{2}\rho_{01}(t)+u_{t}\big{(}\rho_{12}^{*}(t)-\rho_{02}(t) \big{)}&\frac{3}{2}\rho_{02}(t)+u_{t}\big{(}\rho_{22}(t)-\rho_{00}(t)-\rho_{01 }(t)\big{)}\\ -\frac{1}{2}\rho_{01}^{*}(t)+u_{t}(\rho_{02}^{*}(t)-\rho_{12}(t))&u_{t}\big{(} \rho_{12}^{*}(t)-\rho_{12}(t)\big{)}&\rho_{12}(t)+u_{t}\big{(}\rho_{22}(t)-\rho _{01}^{*}(t)-\rho_{11}(t)\big{)}\\ -\frac{3}{2}\rho_{02}^{*}(t)+u_{t}\big{(}\rho_{00}+\rho_{01}^{*}(t)-\rho_{22}( t)\big{)}&-\rho_{12}^{*}(t)+u_{t}(\rho_{01}(t)+\rho_{11}(t)-\rho_{22}(t))&u_{t} \big{(}\rho_{02}(t)+\rho_{12}(t)-\rho_{02}^{*}(t)-\rho_{12}^{*}(t)\big{)}\\ +\frac{i}{2}\Theta\left(\begin{array}{ccc}-2\rho_{00}(t)&-\rho_{01}(t)&-\rho _{02}(t)\\ -\rho_{01}^{*}(t)&0&0\\ -\rho_{02}^{*}(t)&0&2\rho_{00}(t)\end{array}\right).\]
which, when using the vectorisation defined in Eq. (A.2), gives
\[\frac{d}{dt}\underbrace{\left(\begin{array}{c}\rho_{00}(t)\\ \rho_{11}(t)\\ \rho_{22}(t)\\ \rho_{01}(t)\\ \rho_{02}(t)\\ \rho_{01}^{*}(t)\\ \rho_{02}^{*}(t)\\ \rho_{12}(t)\\ \rho_{12}^{*}(t)\end{array}\right)}_{x(t)}=\underbrace{\left(\begin{array}{ccccccccc}-\Theta&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ \Theta&0&0&0&0&0&0&0&0\\ 0&0&0&-\frac{i}{2}-\frac{\Theta}{2}&0&0&0&0&0\\ 0&0&0&0&-\frac{3i}{2}-\frac{\Theta}{2}&0&0&0&0\\ 0&0&0&0&0&\frac{i}{2}-\frac{\Theta}{2}&0&0&0\\ 0&0&0&0&0&0&\frac{3i}{2}-\frac{\Theta}{2}&0&0\\ 0&0&0&0&0&0&0&-i&0\\ 0&0&0&0&0&0&0&0&i\end{array}\right)}_{\tilde{A}}x(t)+iu_{t}\underbrace{\left(\begin{array}{ccccccccc}0&0&0&0&1&0&-1&0&0\\ 0&0&0&0&0&0&0&1&-1\\ 0&0&0&0&-1&0&1&-1&1\\ 0&0&0&0&1&0&0&0&-1\\ 1&0&-1&1&0&0&0&0&0\\ 0&0&0&0&0&0&-1&1&0\\ -1&0&1&0&0&-1&0&0&0\\ 0&1&-1&0&0&1&0&0&0\\ 0&-1&1&-1&0&0&0&0&0\end{array}\right)}_{\tilde{N}}x(t)\] (F.3)
Hence, we find the form of the state equation given in Eq.(9) as follows,
\[\frac{dx(t)}{dt}=(\tilde{A}+i\tilde{N}u(t))x(t).\] (F.4)
|
2309.08754 | Reproducible Domain-Specific Knowledge Graphs in the Life Sciences: a
Systematic Literature Review | Knowledge graphs (KGs) are widely used for representing and organizing
structured knowledge in diverse domains. However, the creation and upkeep of
KGs pose substantial challenges. Developing a KG demands extensive expertise in
data modeling, ontology design, and data curation. Furthermore, KGs are
dynamic, requiring continuous updates and quality control to ensure accuracy
and relevance. These intricacies contribute to the considerable effort required
for their development and maintenance. One critical dimension of KGs that
warrants attention is reproducibility. The ability to replicate and validate
KGs is fundamental for ensuring the trustworthiness and sustainability of the
knowledge they represent. Reproducible KGs not only support open science by
allowing others to build upon existing knowledge but also enhance transparency
and reliability in disseminating information. Despite the growing number of
domain-specific KGs, a comprehensive analysis concerning their reproducibility
has been lacking. This paper addresses this gap by offering a general overview
of domain-specific KGs and comparing them based on various reproducibility
criteria. Our study over 19 different domains shows only eight out of 250
domain-specific KGs (3.2%) provide publicly available source code. Among these,
only one system could successfully pass our reproducibility assessment (14.3%).
These findings highlight the challenges and gaps in achieving reproducibility
across domain-specific KGs. Our finding that only 0.4% of published
domain-specific KGs are reproducible shows a clear need for further research
and a shift in cultural practices. | Samira Babalou, Sheeba Samuel, Birgitta König-Ries | 2023-09-15T20:40:59Z | http://arxiv.org/abs/2309.08754v1 | # Reproducible Domain-Specific Knowledge Graphs in the Life Sciences: a Systematic Literature Review
###### Abstract
Knowledge graphs (KGs) are widely used for representing and organizing structured knowledge in diverse domains. However, the creation and upkeep of KGs pose substantial challenges. Developing a KG demands extensive expertise in data modeling, ontology design, and data curation. Furthermore, KGs are dynamic, requiring continuous updates and quality control to ensure accuracy and relevance. These intricacies contribute to the considerable effort required for their development and maintenance. One critical dimension of KGs that warrants attention is reproducibility. The ability to replicate and validate KGs is fundamental for ensuring the trustworthiness and sustainability of the knowledge they represent. Reproducible KGs not only support open science by allowing others to build upon existing knowledge but also enhance transparency and reliability in disseminating information. Despite the growing number of domain-specific KGs, a comprehensive analysis concerning their reproducibility has been lacking. This paper addresses this gap by offering a general overview of domain-specific KGs and comparing them based on various reproducibility criteria. Our study over 19 different domains shows only eight out of 250 domain-specific KGs (3.2%) provide publicly available source code. Among these, only one system could successfully pass our reproducibility assessment (14.3%). These findings highlight the challenges and gaps in achieving reproducibility across domain-specific KGs. Our finding that only 0.4% of published domain-specific KGs are reproducible shows a clear need for further research and a shift in cultural practices.
To the best of our knowledge, the only research on surveying domain-specific KGs was introduced by Abu-Salih in [29], which differs from our study as we specifically focus on the reproducibility aspects of KGs. In this paper, we take the first step towards analyzing the existing KGs with respect to their reproducibility. We first provide an overview of the existing domain-specific KGs and compare them based on general criteria, including the respective domain, resource type, and construction method. This comparative analysis gives readers more insights into the existing domain-specific KGs. We then investigate the extent to which the KGs are reproducible using a defined set of criteria that reflect the reproducibility aspect. In this paper, we attempt to reproduce knowledge graphs using the same data and methods provided by the original authors in an experimental setup closely resembling theirs.
Although the main focus of this study is the reproducibility of existing domain-specific KGs, it is worth noting that the aspects of findability, accessibility, and interoperability, as emphasized by the FAIR principles [30], constitute an interesting research direction. However, analyzing these aspects is beyond the scope of the current study and could be a potential avenue for future research.
The remainder of this paper is structured as follows: Section 2 shows the survey methodology. Section 3 presents the existing domain-specific Knowledge Graphs and the criteria for their reproducibility, followed by the discussion in Section 4. The conclusion and future works are presented in Section 5.
## 2 Survey Methodology
We first searched for the keyword "domain knowledge graph" in the Google Scholar search engine1. We limited our search to papers published until the end of 2021. At the time of querying (Jan 01, 2022), this search resulted in 713 papers. We looked at their domain names (e.g., biodiversity, geoscience, biomedical, etc.) and then extended our search for those specific domain names that appear on the first result, e.g., for "biodiversity knowledge graph", "biomedical knowledge graph", and so on. To ensure the exclusion of duplicate entries for the "domain knowledge graph" that may have appeared in multiple categories, we removed such duplicates. As a result, we identified a collection of 603 unique papers focused on the "domain knowledge graph." Overall, our research encompassed a total of 1759 papers across 19 distinct domains. Note that we excluded the paper by Kim [31] from our analysis as we were unable to access and ascertain whether it pertained to KG creation, despite attempts to contact the author.
We selected a subset of the papers listed in the search results by considering these criteria: (i) we chose articles written in English only, and (ii) we selected papers that focused on the creation or construction of knowledge graphs (KGs); papers that primarily addressed the usage or other aspects of KGs were excluded. Moreover, the Google Scholar results included papers in which the keywords appeared only in the title, introduction, or state-of-the-art sections; papers that merely mentioned the keywords in the state of the art, indicating that they did not primarily focus on generating KGs, were therefore disregarded. The selection process was carried out manually, thoroughly examining each paper to determine its relevance to KG construction. As a result, out of the initial 1759 papers listed in Google Scholar, we identified 250 papers that met our selection criteria.
From this subset, we further narrowed down our selection to papers that provided open-source code. We checked all 250 papers manually by examining the paper content for a link to a GitHub repository or any web page where their code is published. We also checked the data availability statement section of the papers, if available. Surprisingly, we found only eight papers out of 250 with open-source code.
We use a script to download the articles to ensure the reproducibility of our experimental results. The script, the original search results obtained from Google Scholar, and our analysis of the results (whether each paper is selected or not, and whether they are open-source or not) are published in our repository2.
Table 1 presents a summary of our keyword search results, indicating the number of published papers found on each respective topic as retrieved from Google Scholar. The third column shows the number of papers on Google Scholar for each keyword. The fourth column specifies the count of selected papers relevant to Knowledge Graph (KG) construction, while the final column denotes the number of papers accompanied by open-source code. The last row of this table shows the total number of papers for each category.
## 3 Reproducibility of domain-specific Knowledge Graphs
This paper centers its focus on the aspect of reproducibility. Consequently, as an initial step, we scrutinized all the selected papers to determine the availability of publicly accessible code for the Knowledge Graphs (KGs) they developed. It emerged that only eight papers out of the total 250 (3.2%) met this criterion. Note that AliCG (Alibaba Conceptual Graph) [32]3 and the KG proposed by Hoa et al., [33] (for surveying and remote-sensing applications)4, published only the raw data and not the code. So, these papers were not considered within the category of open-source code. Moreover, in the biomedical domain, we found two different publications [34, 35] related to CROssBAR-KG. We consider them as one unique KG for our further analysis.
In this section, we first summarize the domain-specific KGs that provide open-source code. We then provide a general overview of them in Subsection 3.1 and discuss their reproducibility aspect in Subsection 3.2.

**Existing KGs with open-source code:**
* **CKGG**[36] (Chinese Knowledge Graph for Geography) is a KG covering the core geographical
knowledge at the high-school level, containing 1.5 billion triples. The authors used a variety of NLP tools to integrate various kinds of geographical data in different formats from diverse sources (such as GeoNames[5], Wikipedia). They conducted a preliminary evaluation of CKGG and showed a prototype educational information system based on CKGG.
* **CROssBAR-KG**[34; 35] is a knowledge graph that presents biological terms as nodes and their known or predicted pairwise relationships as edges. These are directly obtained from an integrated large-scale database built upon a set of biomedical data resources. The data is enriched with a deep-learning-based prediction of relations between numerous biomedical entities. At first, the data is stored in a non-relational database. Then, biologically relevant small-scale knowledge graphs are constructed on the fly, triggered by user queries with a single term or multiple terms. The system is tested by a use-case study of the COVID-19 dataset.
* **ETKG** (Event-centric Tourism Knowledge Graph) [1] is a KG to model the temporal and spatial dynamics of tourist trips. The authors extracted information from over 18000 travel notes (structured and unstructured information) crawled from the Internet, and defined an ETKG schema to model tourism-related events and their key properties. The schema of ETKG is built upon the Simple Event Model [37] with augmented properties and classes. The authors constructed an ETKG of Hainan and realized an application of POI recommendation based on it.
* **FarsBase**[38] is a cross-domain knowledge graph in the Farsi language, consisting of more than 500K entities and 7 million relations. Its data is extracted from the Farsi edition of Wikipedia in addition to its structured data, such as infoboxes and tables. To build Farsi Knowledge Graph (FKG), the authors first developed an ontology retrieved from DBpedia ontology, based on resources from Farsi Wikipedia. Then, they mapped Wikipedia templates to their built ontology. They consider Wikipedia as input of the FKG system. To enhance the performance and flexibility of the knowledge base, they stored data in two-level architecture: a NoSQL database for storing data and metadata, and a triplestore for storing the final data. Most entities in the FKG have been linked to DBpedia[6] and Wikidata[7] resources by owl:sameAs property. A SPARQL endpoint provides access to the knowledge graph.
* **GAKG** (GeoScience Academic Knowledge Graph) [39] is a large-scale multimodal academic KG, consisting of more than 68 million triples based on 1.12 million papers published in various geoscience-related journals. The entities of GAKG have been extracted under a Human-In-the-Loop framework, using machine reading and information retrieval techniques with manual annotation of geoscientists in the loop. The schema of GAKG consists of 11 concepts connected by 19 relations. GAKG is updated regularly and can be queried at the SPARQL query Endpoint. It is evaluated using two benchmarks.
* **MDKG**[40] stands for Microbe-Disease Knowledge Graph and is built by integrating multi-source heterogeneous data from Wikipedia text and other related databases. Through a series of natural language processing, they split the text of Wikipedia pages into sentences. Then, using an existing tool, they perform named entity recognition and relationship extraction on the sentences and obtain the interaction triplets. Afterward, other databases are integrated into their KG. Moreover, they used the representation learning method for knowledge inference and link prediction.
* **Ozymandias**[41], a biodiversity knowledge graph, combines scholarly data about the Australian fauna from different sources, including the Atlas of Living Australia[8], the Biodiversity Heritage Library, ORCID[9], and links to external KGs like Wikidata and GBIF[10].
* **RTX-KG2**[42] is an open-source software system for building and hosting a web API for querying a biomedical knowledge graph. The data from 70
| no. | Keyword | Papers | Selected | Open-source code |
| --- | --- | --- | --- | --- |
| 1 | "Domain knowledge graph" | 602 | 88 | 2 |
| 2 | "Agriculture knowledge graph" | 16 | 5 | 0 |
| 3 | "Biodiversity knowledge graph" | 87 | 5 | 1 |
| 4 | "Biomedical knowledge graph" | 214 | 12 | 2 |
| 5 | "Cultural knowledge graph" | 17 | 6 | 0 |
| 6 | "E-commerce knowledge graph" | 16 | 7 | 0 |
| 7 | "Education knowledge graph" | 32 | 14 | 0 |
| 8 | "Financial knowledge graph" | 64 | 3 | 0 |
| 9 | "Geographic knowledge graph" | 117 | 20 | 1 |
| 10 | "Geoscience knowledge graph" | 9 | 4 | 1 |
| 11 | "Healthcare knowledge graph" | 45 | 5 | 0 |
| 12 | "Industrial knowledge graph" | 37 | 8 | 0 |
| 13 | "Medical knowledge graph" | 291 | 38 | 0 |
| 14 | "Military knowledge graph" | 26 | 8 | 0 |
| 15 | "Movie knowledge graph" | 48 | 6 | 0 |
| 16 | "Political knowledge graph" | 6 | 0 | 0 |
| 17 | "Robotic knowledge graph" | 4 | 1 | 0 |
| 18 | "Seativity knowledge graph" | 80 | 10 | 0 |
| 19 | "Tourism knowledge graph" | 42 | 9 | 1 |
| 20 | "Water knowledge graph" | 3 | 1 | 0 |
|  | Total | 1756 | 250 | 8 |

Table 1: Keyword search on Google Scholar. "Papers" denotes the total number of papers retrieved for a given keyword; "Selected" shows the number of selected papers related to building KGs; "Open-source code" shows the number of papers that provide open-source code.
core biomedical knowledge-bases are extracted via a set of Extract-Transform-Load (ETL) modules. Its schema is built based on an existing metamodel in the biological domain. RTX-KG version 2.7.3 contains 10.2 million nodes and 54.0 million edges.
### Comparison of KGs
In this section, we summarize the key features of each KG mentioned in Section 3. Table 2 shows the comparison of domain-specific KGs with respect to their domain, resource type, construction method, reasoning, cross-linking, evaluation, and year. The cross-linking aspect indicates whether the elements of the KG are connected to external resources or other KGs such as Wikidata or DBpedia. Note that if the KG is built based on some resources, i.e., the elements of KG are mapped to other data resources, we do not consider them as cross-linking.
### Criteria for reproducibility of KGs
Reproducibility is one of the important principles of scientific progress. It emphasizes that a result obtained by an experiment or observational study should be consistently obtained with a high degree of agreement when different researchers replicate the study with the same methodology. Indeed, reproducing an experiment is one important approach scientists use to gain confidence in their conclusions [43].
Over time, the scientific community has put forth various guidelines and recommendations for conducting reproducible research [44, 45, 46, 30, 10]. Based on the current literature, we develop a set of criteria that affect the reproducibility of Knowledge Graph construction. Here, we present them as our suggested guidelines for the reproducibility of Knowledge Graph construction, as follows:
* **Availability of code and data**: The availability of the code and data used for constructing the KG is one of the key requirements for conducting reproducible research [45, 46]. This rule applies to all computational research [14, 15, 16], so public access to scripts, runs, and results should be provided. The data used for generating the KG should be available or accessible for querying; to construct a knowledge graph, not only the code but also the data must be accessible. Therefore, published papers should deposit data in public repositories where available and link the data bi-directionally to the published paper. Data and code shared on personal websites are considered accessible as long as the websites are maintained [46].
* **Code License**: The code used for KG construction should be accompanied by an appropriate license for reuse or reproduction. Since we found no particular mention of licenses for datasets in most of the systems, we do not report about them in this paper.
* **DOI for code and data**: To ensure findability, the code and data should have persistent identifiers [10]. The materials used for KG construction should be findable and linked to the published research with a permanent Digital Object Identifier (DOI). Archiving data in online repositories is one way to ensure the findability of the code and data.
* **Availability of execution environment**: The execution environment should be available in any format such as configuration, setup, yaml, or requirement files. The format for the execution environment can vary based on the programming language used for the construction of KG. For example, for Python, the execution environment is generally addressed by defining dependencies in standard files like requirements.txt, setup.py, and pipfile [15, 16]. According to [47], the lack of versions of imported libraries may cause incompatibilities and prevent the usage in other systems. Hence, the libraries and their version used are important information for the reproducibility of KGs.
* **Run instruction**: Comprehensive instructions for running the code should be provided. In order to reproduce the results, it is important to document the process. For computational experiments, the process of generating the results is often provided through instructions in a format like README files in the code repositories.
* **Online demo**: It is desirable to have the KG itself available for use through an online demo.
| KG | Domain | Resource Type | Construction Method | Reasoning | Cross Linking | Evaluation | Year |
| --- | --- | --- | --- | --- | --- | --- | --- |
| CKGG | Geography | Data resources | Machine Learning | Not declared | Wikipedia | Yes | 2021 |
| CROssBAR-KG | Biomedical | Data resources | Machine Learning | Yes | Not declared | Yes | 2020 |
| ETKG | Tourism | Web pages | Machine Learning | Not declared | Not declared | Yes | 2020 |
| FarsBase | Cross-domain | Wikipedia | Heuristic | Not declared | DBpedia, Wikidata | Not provided | 2021 |
| GAKG | Geoscience | Publication | Machine Learning | Not declared | Wikidata | Yes | 2021 |
| MDKG | Biomedical | Wikipedia text | Machine Learning | Yes | Not declared | Not provided | 2020 |
| Ozymandias | Biodiversity | Publication | Heuristic | Not declared | Wikidata, GBIF | Not provided | 2019 |
| RTX-KG2 | Biomedical | Data resources | Heuristic | Yes | Not declared | Not provided | 2021 |

Table 2: General overview of domain-specific KGs.
However, this criterion does not directly impact the reproducibility of KG systems.
* **SPARQL endpoint**: Having a SPARQL Endpoint to access and query the data within the Knowledge Graph offers significant advantages.
* **Successful regeneration**: The code should be executable, allowing successful regeneration of the KG.
* **Provenance information**: Provenance plays a key role in the reproducibility of results. Provenance support can be used to maintain, analyze, and debug evolving knowledge graphs [48]. Both prospective and retrospective provenance offer insights into the steps required and the events that happened during the development of knowledge graphs. This information includes the addition, deletion, and update of RDF statements [49] in the construction of KGs. Additionally, it includes details regarding dataset versions, code, libraries, modules, SPARQL endpoints, etc.
Table 3 shows the comparison between KGs in terms of the mentioned reproducibility criteria. Our experiments yield the following findings:
* KGs such as MDKG and CKGG are not reproducible because, despite their code being publicly accessible, the necessary data for constructing these specific knowledge graphs remains inaccessible.
* FarsBase, MDKG, CROssBAR, and ETKG do not provide run instructions in their repositories. Although their code is publicly available, making these systems run requires extra expertise and familiarity with each system. Therefore, we cannot assert their reproducibility.
* Reproducing RTX-KG2 is challenging due to its high computational requirements. Currently, we lack resources with system specifications comparable to those of RTX-KG2. Therefore, we cannot draw any conclusions regarding its reproducibility at this time.
* Ozymandias was regenerated successfully.
In the RTX-KG2 repository, the authors provide links to all 70 original data sources used in its construction. However, we cannot conclude that the data of RTX-KG2 has a DOI, as some of those data sources do not have a DOI. Moreover, a read-only endpoint11 for RTX KG2 as a graph database was not available at the time of our access. Further demo pages were not found. Thus, we marked it with "-" in column 7 of Table 3.
FarsBase derives its source data from Wikipedia articles composed in the Farsi language. While the repository linked with it contains the code for acquiring the source data, the actual data is not included. Since downloading the source data may not yield results identical to the data utilized in generating FarsBase, we cannot conclude whether that data is available.
## 4 Discussion
From our comparison in terms of general criteria (Table 2), we can conclude that:
* Within our dataset, the fields of medicine, biomedicine, and healthcare, which are encompassed within the broader realm of medical science, stand out as the most prevalent domains for Knowledge Graphs (KGs). This prominence can likely be attributed to the substantial volume of available data within this domain and the numerous applications that make use of KGs. Out of the 250 selected papers focusing on KGs, 56 of them (comprising 39 from medical, 12 from biomedical, and 5 from healthcare domains) account for approximately 22% of the total (refer to Table 1). There is a growing trend in constructing KGs for geographic and education domains.
* Most existing KGs are built based on textual data (publications) and different data sources. Interestingly, none of the selected KGs target tabular data, although there is a trend towards building KGs based on tabular data: the Semantic Web Challenge on Tabular Data to Knowledge Graph Matching [50] is held annually to understand the semantic structure and meaning of tabular data.
* Although the heuristic approaches are used to build some KGs, the machine learning approaches are the most popular construction method.
* Although reasoning capabilities can help discover additional relationships, most KGs do not explicitly mention their use of reasoning.
* KGs are widely regarded as one of the most promising ways to link information in the age of Big Data. According to the linked open data (LOD) principles [51], each knowledge resource on the web receives a stable, unique and resolvable identifier. Because of the unique identifiers, KGs can be interlinked. However, most KGs did not provide cross-linkage. Three KGs out of eight provide the cross-link (see Table 2).
* The evaluation of KGs remains a challenge in this domain, as it requires the establishment of benchmarks, which is a laborious and time-consuming task. Although the criteria introduced in [52] can partially be applied in this context, KGs' evaluation seeks its own specific strategy.
* Constructing domain-specific Knowledge Graphs (KGs) using open-source code has gained popularity in recent years. As illustrated in Table 2, all the studied platforms were developed recently.
Following this general comparison of the studied KGs, this section provides a detailed discussion of the reproducibility test we have conducted. To carry out this test, we examined the repository of each studied KG (as listed in Table 2) and carefully followed the provided instructions, if available, to run the system. Note that more than one person tested each system to ensure the reliability of the results. Our findings are as follows:
* Only 3.2% (8 out of 250) of selected KGs have publicly available source code, indicating the need for greater encouragement towards open science and sharing data and code.
* Only one system out of seven open-source KGs (not considering RTX-KG2) could successfully run. This shows that only 0.4% of selected 250 KGs (14.3% of open-source KGs) are reproducible. This finding opens a new door for further research. It also indicates that the availability of open-source code alone does not guarantee the reproducibility of KGs. The availability of run instructions and the execution environment also have a significant impact on reproducibility.
* Tracking provenance of KG construction is rarely addressed in most papers, indicating a potential gap in this aspect.
* Publishing the code alone does not guarantee a system's reproducibility. It is essential to provide the code along with detailed run instructions and information about the required execution environment to facilitate reproducibility.
* Access to the data on which a KG is built presents another challenge for reproducibility. Domain-specific KGs are mostly built within a project or an organization, and their data is often not publicly available.
* It is worth mentioning that the usage of the code and data requires the corresponding licenses and consideration of their usage restrictions.
## 5 Conclusion & Future work
Domain-specific knowledge graphs (KGs) have gained popularity due to their usability in different applications. However, the process of KG development is often challenging and time-consuming. Thus, their reproducibility can facilitate the usage of KGs in various applications. In this paper, we have conducted an analysis of existing domain-specific KGs across 19 domains, focusing on their reproducibility aspects. Our study reveals that only 0.4% (1 out of 250) of the published domain-specific KGs is reproducible.
An important future direction involves assessing the extent to which KGs effectively record their provenance. The process of maintaining KGs in alignment with their data sources can be made effortless through the establishment of a comprehensive record of source code, input data, methods, and results. This not only allows other scientists to reproduce the results, but also enables the seamless re-execution of workflows with modified input data, ensuring that KGs remain synchronized with evolving data sources.
## CRediT authorship contribution statement
**Samira Babalou:** Conceptualization of this study, existing Knowledge Graphs analysis, Original draft preparation. **Sheeba Samuel:** Conceptualization of this study, existing Knowledge Graphs analysis, Original draft preparation. **Birgitta König-Ries:** Supervision, Validation, review & editing.
## Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
## Acknowledgements
SB's work has been funded by the iKNOW Flexpool project of iDiv, the German Centre for Integrative Biodiversity Research, funded by DFG (Project number 202548816). SS's work has been funded by the Carl Zeiss Foundation through the project "A Virtual Werkstatt for Digitization in the Sciences (K3)"
| Name | Code availability | Code license | Code DOI | Data availability | Data DOI | Online demo | SPARQL endpoint | Execution environment | Run instruction | Successful regeneration |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CKGG | Yes^12 | No | No | No | No | Yes^13 | No | No | Yes | No |
| CROssBAR-KG | Yes^14 | Yes | No | Yes | Yes | Yes^15 | Yes | No | No | No |
| ETKG | Yes^16 | No | No | Yes | No | No | No | No | No | No |
| FarsBase | Yes^17 | No | No | - | - | Yes^18 | Yes | Yes | No | No |
| GAKG | Yes^19 | Yes | No | No | No | Yes^20 | Yes^21 | No | Yes | No |
| MDKG | Yes^22 | No | No | No | No | No | No | No | No | No |
| Ozymandias | Yes^23 | Yes | No | Yes | Yes | Yes^24 | Yes | No | Yes | Yes |
| RTX-KG2 | Yes^25 | Yes | No | Yes | - | - | Yes^26 | Yes | Yes | - |

Table 3: Comparing KGs in terms of reproducibility criteria.
within the scope of the program line "Breakthroughs: Exploring Intelligent Systems for Digitization - explore the basics, use applications". We also thank Badr El Haouni, Erik Kleinsteuber, and Anirudh Kumbakunam Ashok for testing the systems.
## Notes
* [1][https://scholar.google.de/](https://scholar.google.de/) accessed on 17.01.2022
* [2][https://github.com/fusion-jena/iXOW/tree/main/Reproducibility-Survey](https://github.com/fusion-jena/iXOW/tree/main/Reproducibility-Survey)
* [3][https://github.com/alihab-research/ConceptGraph](https://github.com/alihab-research/ConceptGraph)
* [4][https://github.com/hosk061828457/Knowledge-graphs](https://github.com/hosk061828457/Knowledge-graphs)
* [5][https://www.geonames.org/](https://www.geonames.org/)
* [6][https://www.dbpedia.org](https://www.dbpedia.org)
* [7][https://www.wikidata.org](https://www.wikidata.org)
* [8][https://www.ala.org.au](https://www.ala.org.au)
* [9][https://orcid.org](https://orcid.org)
* [10][https://www.gbif.org/what-is-gbif](https://www.gbif.org/what-is-gbif)
* [11][https://gizdemoph.tr.txi.ai.744](https://gizdemoph.tr.txi.ai.744)
* [12][https://github.com/wjeweb/stor/XGG](https://github.com/wjeweb/stor/XGG)
* [13][https://ws.nju.edu.cn/C00G1.0/demo](https://ws.nju.edu.cn/C00G1.0/demo)
* [14][https://github.com/cangyl/C0RosSAR](https://github.com/cangyl/C0RosSAR)
* [15][https://crossbar.karniss.org](https://crossbar.karniss.org)
* [16][https://github.com/kupcai123/Mainan_XG](https://github.com/kupcai123/Mainan_XG)
* [17][https://github.com/lUST-DMLab/wiki-extractor](https://github.com/lUST-DMLab/wiki-extractor)
* [18][http://arsbase.net/sparal](http://arsbase.net/sparal)
* [19][https://github.com/download/gskg](https://github.com/download/gskg)
* [20][https://gagag.acemo.info/](https://gagag.acemo.info/)
* [21][https://www.acelsg.cn/sparal](https://www.acelsg.cn/sparal)
* [22][https://github.com/crosdb/MCGS](https://github.com/crosdb/MCGS)
* [23][https://github.com/rdrange/agymandias-demo](https://github.com/rdrange/agymandias-demo)
* [24][https://ozymandias-demo.herokuapp.com/](https://ozymandias-demo.herokuapp.com/)
* [25][https://github.com/RTKean/RTX-KZ2](https://github.com/RTKean/RTX-KZ2)
* [26][https://arx.mcats.io/api/rtk8g2/v1.2/openapi.json](https://arx.mcats.io/api/rtk8g2/v1.2/openapi.json)
|
2309.11205 | A Model-Based Machine Learning Approach for Assessing the Performance of
Blockchain Applications | The recent advancement of Blockchain technology consolidates its status as a
viable alternative for various domains. However, evaluating the performance of
blockchain applications can be challenging due to the underlying
infrastructure's complexity and distributed nature. Therefore, a reliable
modelling approach is needed to boost Blockchain-based applications'
development and evaluation. While simulation-based solutions have been
researched, machine learning (ML) model-based techniques are rarely discussed
in conjunction with evaluating blockchain application performance. Our novel
research makes use of two ML model-based methods. Firstly, we train a $k$
nearest neighbour ($k$NN) and support vector machine (SVM) to predict
blockchain performance using predetermined configuration parameters. Secondly,
we employ the salp swarm optimization (SO) ML model which enables the
investigation of optimal blockchain configurations for achieving the required
performance level. We use rough set theory to enhance SO, hereafter called ISO,
which we demonstrate to prove achieving an accurate recommendation of optimal
parameter configurations; despite uncertainty. Finally, statistical comparisons
indicate that our models have a competitive edge. The $k$NN model outperforms
SVM by 5\% and the ISO also demonstrates a reduction of 4\% inaccuracy
deviation compared to regular SO. | Adel Albshri, Ali Alzubaidi, Ellis Solaiman | 2023-09-20T10:39:21Z | http://arxiv.org/abs/2309.11205v1 | # A Model-Based Machine Learning Approach for Assessing the Performance of Blockchain Applications
###### Abstract
The recent advancement of Blockchain technology consolidates its status as a viable alternative for various domains. However, evaluating the performance of blockchain applications can be challenging due to the underlying infrastructure's complexity and distributed nature. Therefore, a reliable modelling approach is needed to boost Blockchain-based applications' development and evaluation. While simulation-based solutions have been researched, machine learning (ML) model-based techniques are rarely discussed in conjunction with evaluating blockchain application performance. Our novel research makes use of two ML model-based methods. Firstly, we train a \(k\) nearest neighbour (\(k\)NN) and support vector machine (SVM) to predict blockchain performance using predetermined configuration parameters. Secondly, we employ the salp swarm optimization (SO) ML model which enables the investigation of optimal blockchain configurations for achieving the required performance level. We use rough set theory to enhance SO, hereafter called ISO, which we demonstrate to prove achieving an accurate recommendation of optimal parameter configurations; despite uncertainty. Finally, statistical comparisons indicate that our models have a competitive edge. The \(k\)NN model outperforms SVM by 5% and the ISO also demonstrates a reduction of 4% inaccuracy deviation compared to regular SO.
Blockchain, Performance, Evaluation, Prediction, Optimization.
## I Introduction
Blockchain technology, a form of _Distributed Ledger Technology_ (DLT), has seen increasing adoption across various sectors, including healthcare, supply chain management, and the Internet of Things (IoT). This is due to its decentralized nature, resistance to tampering, and features such as consistency, anonymity, and traceability. These features make it an excellent choice for applications requiring high levels of security and accountability [1][2].
Nevertheless, configuring a blockchain optimally can pose challenges, as applications' requirements vary significantly. Factors influencing these requirements include the nature of stored data, the frequency and concurrency of transactions, the number of validating nodes, and infrastructure specifications such as CPU, memory, network bandwidth, and Input/Output speed. The constraints that these applications must account for can further complicate the development process, thus affecting the overall performance of the blockchain-based application in terms of throughput, latency, and the rate of successful/failed transactions [3].
This study draws inspiration from a hypothetical scenario where a healthcare organization is contemplating the integration of blockchain technology into its operations. To gauge the feasibility and potential success of this project, certain performance metrics, such as transaction volume, average transaction time, and transaction success rate, amongst others, can be utilized. These metrics can be leveraged for one of the specific purposes outlined subsequently:
1. Predicting how the blockchain-based application will perform under certain conditions and preset configuration parameters, given the limitation of the available resources.
2. Vice versa, given a target performance level, the task is to estimate the right configuration parameters. This is to answer questions like what configurations should be in place to enable an IoT-enabled hospital to achieve a blockchain throughput of at least 1000 transactions per second.
To approach the selection and implementation of blockchain-based applications systematically, it is necessary to conduct a comprehensive evaluation of the application's requirements. Numerous simulation frameworks for this purpose have been proposed [4]. However, due to blockchain systems' complexity, it is challenging to provide a comprehensive and accurate representation of a specific blockchain application's performance. The interdependency and wide range of parameters in a blockchain system make achieving an accurate performance evaluation a significant challenge.
On another front, machine learning (ML), a subfield of artificial intelligence, uses historical data to develop algorithms and statistical models that aim for optimal performance [5]. This paper mainly focuses on the supervised learning approach, specifically on classification and optimization algorithms.
In the realm of machine learning, classification is concerned with understanding and sorting data into predetermined groups or "sub-populations". Classification algorithms use labelled training data to determine whether an object belongs to
a predefined category. They identify recurring patterns and common features, thereby enabling "pattern recognition". The efficiency of these algorithms is evaluated based on their ability to classify objects correctly. In this study, we focus on two well-known classification algorithms: the \(k\) nearest neighbour (_k_NN) algorithm [6][7], and the support vector machine (SVM) algorithm [8].
Swarm optimization, a machine learning technique, is gaining attention due to its ability to efficiently find near-optimal solutions for complex problems, even with limited resources, such as processing power or time, and incomplete or imprecise knowledge about the problem [9][10]. To overcome the limitations of traditional optimization methods, several ML algorithms have been proposed, such as Harris Hawk Optimization (HHO) [11], Grey Wolf Optimization (GWO) [12], Artificial Bee Colony (ABC) [13], Ant Colony Optimization (ACO) [14], Particle Swarm Optimization (PSO) [15], and Salp Optimization (SO) [16]. These algorithms provide robust optimization capabilities.
Another interesting machine learning algorithm is the Rough Set Theory (RST) [17], which provides a formal approach to approximate conventional or crisp sets using lower and upper approximations. If the lower and upper approximations are identical, RST provides a crisp set. If the approximations are different, variations of RST may result in rough sets.
This study aims to use ML techniques to help mitigate some of these challenges by providing a more comprehensive and accurate evaluation of blockchain performance. The main contributions of this paper are:
1. An ML model that utilizes the _k_NN algorithm to predict a blockchain system's performance, considering various parameters such as the number of nodes, number of miners, and number of transactions.
2. An improved Salp Optimization (ISO) algorithm leverages the capability of a rough set in dealing with uncertainties to predict optimal parameter configurations for a given metric value.
The remainder of this paper is structured as follows. Section II provides a brief review of the related works in the literature. Section III proposes our two models for predicting the overall blockchain performance and estimating the optimal configuration parameters, respectively. Section IV conducts several experiments to validate the proposed models. Finally, Section V concludes the paper and discusses potential future work.
## II Related Work
In the quest to systematically and strategically evaluate blockchain performance, a variety of approaches have been proposed. Some of these methodologies centre around performance analysis via monitoring and observation of blockchain networks. For instance, a log-based blockchain monitoring framework was put forward by Zheng et al. [18]. Although this form of monitoring aids in identifying recurring patterns and observing performance, it lacks an intuitive and straightforward mechanism to propose the optimal configuration for peak performance.
To address this, several studies in the literature propose benchmarking blockchain networks by applying synthetic workloads in a controlled evaluation environment. Dinh et al. [19], for example, introduced a benchmarking framework, named BlockBench, for evaluating and analysing the performance of private blockchain platforms, such as Hyperledger Fabric and a private version of Ethereum. This was conducted with a focus on key aspects including latency, throughput, fault tolerance, and scalability [20]. Hyperledger Caliper [21] is another such tool, aimed at gauging the performance of various Hyperledger blockchains, namely Fabric, Sawtooth, Burrow, and Ethereum. This tool evaluates performance based on four critical metrics: throughput, latency, transaction success rate, and resource utilization. Despite the advantages of benchmarking in ensuring precise evaluation and measurement, the process remains primarily a trial-and-error endeavour that does not guarantee an automated discovery of optimal performance.
Several studies, in their pursuit of optimal performance, have focused on characterizing performance features of existing blockchain platforms under varying workloads and supported consensus algorithms. By doing so, they aim to reveal the maximum attainable performance in terms of throughput and latency characteristics. For instance, a comprehensive performance analysis of Ethereum was conducted by Rouhani and Deters [22], wherein they assessed the two most widely used Ethereum clients - the Proof-of-Work (PoW)-based Geth and the Proof-of-Authority (PoA)-based Parity. Similarly, an in-depth analysis of the Quorum blockchain's performance was performed by Baliga et al. [23]. These studies have provided insights into performance approximations under a given set of conditions.
To circumvent the challenges associated with real blockchain deployment, several blockchain simulation frameworks have been proposed. The aim here is to facilitate the investigation of various configuration parameters' impact on overall performance for different scenarios, requiring the least possible effort. For example, the frameworks proposed by Alharby and Moorsel [24] and Pandey et al. [25] enable the discrete-event dynamic simulation of blockchain platforms using various configurations to assess overall performance. Other blockchain simulation efforts are covered in [4].
Despite the facilitation offered by simulation, it shares a common problem with real blockchain benchmarking - the reliance on predetermined parameters for modelling blockchain system behaviour. This approach could fall short of achieving the best possible performance due to the difficulty in determining the optimal values for configuration parameters. Hence, this work investigates the use of machine learning techniques to estimate blockchain performances and suggest optimal configuration parameters, thereby aiming to achieve the best possible performance.
## III Proposed Models
### _Preliminaries_
Assume a number of configuration parameters (input) and performance metrics (output) of a blockchain-based solution as follows:
1. Configuration _Parameters_, _P_: the set of \(l\) parameters \(P=\left\{p_{1},p_{2},\ldots,p_{l}\right\}\) represents the _input configuration_ of the blockchain network such as the quantity of participating nodes, transactions frequency, payload size, selected consensus mechanism, and so forth.
2. Performance _Metrics_, _M_: the set of \(n\) metrics \(M=\left\{m_{1},m_{2},\ldots,m_{n}\right\}\) represent the _conditional outputs_ with respect to the given parameters \(P\) such as network throughput and latency.
Hypothetically, there is a strong correlation between configuration parameters and produced metrics. Therefore, we investigate the following:
* Employing _k_NN algorithm as a regression tool for predicting the overall performance in terms of each metric \(m\in M\) of a blockchain-based application based on a given set of configuration parameters \(P\).
* Employing the Salp Swarm Optimization (SO) algorithm to determine the optimal configuration parameters \(P\) based on a target level for each performance metric \(m\in M\)
### _kNN Regression Algorithms for Performance Prediction_
To identify commonalities, _k_NN compares a given set of parameters \(P_{0}\), whose performance metric values are unknown, to its \(k\) nearest neighbours. The commonalities are usually computed using a distance measure; the idea is that the set of parameters \(P_{0}\) will be closest to sets of parameters \(P_{i}\) with similar characteristics. _k_NN operates on training vectors with class labels in a multidimensional feature space, where each training data row has its parameter setup (conditional features) and decision values (the metrics). Only the training samples' feature vectors and class labels are stored. Averaging the metric values of the \(k\) nearby objects should yield the anticipated value. Given a dataset \(D\) with \(l\) features (configuration parameters) and \(m\) performance metrics, we refer to parameter value \(j\) of object \(i\) as \(v_{i,j}\). For example, \(v_{2,5}=6\) means that parameter number 5 of object number 2 has value 6. Moreover, the decision value \(m_{k}\) of object \(i\) is referred to as \(v_{i,d}\).
_k_NN depends heavily on a distance measure. Euclidean distance is a typical distance metric for continuous values (parameters). The Euclidean distance \(l\left(u_{0},u_{i}\right)\) between two different objects, \(u_{0}\) and \(u_{i}\), is given by
\[l\left(u_{0},u_{i}\right)=\sqrt{\left(\mathbf{V}_{u_{0}}^{\prime}-\mathbf{V}_{ u_{i}}^{\prime}\right)^{T}\left(\mathbf{V}_{u_{0}}^{\prime}-\mathbf{V}_{u_{i}}^{ \prime}\right)}, \tag{1}\]
where
\[\mathbf{V}_{u_{i}}^{\prime}=<v_{u_{i},1},v_{u_{i},2},\ldots,v_{u_{i},l}>\]
is the vector of the \(l\) parameter (conditional) values of object \(u_{i}\).
The proposed algorithm employing the previous steps is shown in Algorithm 1.
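To make the regression step concrete, the following is a minimal Python/NumPy sketch of the prediction described above and in Algorithm 1; it is an illustration rather than the authors' implementation, and the training configurations, metric values, and the choice \(k=3\) are made-up assumptions (in practice, parameters with different ranges would typically be normalized before computing distances).

```
# Minimal NumPy sketch of the kNN metric prediction (cf. Algorithm 1).
# The training rows (configurations and measured metrics) are synthetic assumptions.
import numpy as np

# Each training row: [nodes, miners, transactions per second, payload size (KB)]
P_train = np.array([
    [10,  4,  200, 0.5],
    [20,  8,  400, 0.5],
    [40, 16,  800, 1.0],
    [80, 32, 1600, 1.0],
    [60, 24, 1200, 2.0],
], dtype=float)

# Corresponding measured metrics: [throughput (tps), average latency (s)]
M_train = np.array([
    [180, 1.2],
    [350, 1.6],
    [640, 2.1],
    [900, 3.4],
    [780, 2.8],
], dtype=float)

def knn_predict(p_query, P, M, k=3):
    """Predict the metric vector of an unseen configuration p_query by averaging
    the metrics of its k nearest neighbours under the Euclidean distance, Eq. (1)."""
    dists = np.linalg.norm(P - p_query, axis=1)   # distance to every training object
    nearest = np.argsort(dists)[:k]               # indices of the k closest objects
    return M[nearest].mean(axis=0)                # average each metric over the neighbours

# Unseen configuration (assumed): 50 nodes, 20 miners, 1000 tx/s, 1 KB payload.
print(knn_predict(np.array([50, 20, 1000, 1.0]), P_train, M_train, k=3))
```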
### _Improved Salp optimization (ISO) algorithm_
Each salp has a number \(i\), where \(i=1,2,\ldots,\mathcal{P}\) and an identifier indicating whether it is a leader or not. The numerals are permanent, but the identifiers may vary between iterations. In the initial iteration, the \(P\) salps occupy arbitrary "positions" in the chain, i.e., they simply adhere to the chain. A "position" is a location vector that describes a set of parameter values. The algorithm finalizes the iteration by identifying the \(m\) salps with the greatest performance as leaders, moving them to the front of the chain, and allowing them to share their position data (location vectors) with the non-leaders. In other words, the algorithm accomplishes the parameter values identification assignment in two successive steps: the exploration step and the exploitation step, each of which is described in greater detail below.
**ISO exploration step**
Each salp \(i\) has in iteration \(k\geq 1\) a location vector \(\mathbf{S}_{i_{k}}=\left[s_{1},s_{2},\ldots,s_{l}\right]\); the values of the \(s_{j}\) have different ranges. Therefore, we feed the algorithm the separate range of each \(s_{j}\). In subsequent cycles, each \(s_{j}\) is constantly updated. The parameter vector defines the parameters' values: \(s_{j}\) specifically reflects the value of parameter \(j\). For example, \(\mathbf{S}_{3_{2}}=\left[4,2,1,0.064\right]\) means that salp 3 in iteration 2 represents a parameter configuration for four parameters with values 4, 2, 1, and 0.064, respectively.
At iteration 1, each salp is started with a _randomly_ generated parameter vector over the \(l\) original parameters. Remember that the random values are chosen with the bounds of each \(s_{j}\) in mind. This parameter vector is changed in each cycle. The dependency function evaluates the fitness of the parameter vectors and also acts as an ambiguity-relaxing tool.
Specifically, by the end of each iteration \(k\geq 1\), the dependency value \(\gamma\) is computed for the parameter vectors of all salps in the chain. The salps with the greatest \(\gamma\) values are then designated as leaders. The leaders are assumed to be closer to the ideal parameter setting than the others.
**ISO exploitation steps**
Let the set of \(p\) leaders in iteration \(k\geq 1\) be \(\mathbb{P}_{k}\). A non-leader salp \(i\) gets its updated parameter vector \(\mathbf{S}_{i_{k+1}}\) in two stages in the subsequent iteration \(k+1\) of the algorithm. Each leader salp modifies its parameter vector in the first phase in the manner described below.
\[\mathbf{S}_{i_{k}}=\mathbf{S}_{i_{k-1}}+r^{2}(ub-lb)+rlb \tag{2}\]
Similarly, each non-leader salp \(i\), \(i\notin\mathbb{P}_{k}\), will calculate mean difference of \(p\) vectors (one for each \(j\in\mathbb{P}_{k}\)) as follows.
\[\mathbf{D}_{i,j}=\frac{1}{m}\left[\sum r_{1}\mathbf{S}_{i_{k}}-\mathbf{S}_{j_{ k}},\quad j\in\mathbb{P}_{k}\right], \tag{3}\]
where \(r\) is given by
\[r=2e^{-\left(\frac{4m}{L}\right)}.\]
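A compact Python sketch of one round of these position updates is given below; because the excerpt leaves some notation open (for example, how a follower applies the mean difference \(\mathbf{D}_{i,j}\), and the random coefficients in Eq. (2)), the update rules are our own hedged reading of Eqs. (2)-(3), and the bounds lb/ub, leader set, and example values are assumptions, so this is an illustrative sketch rather than the authors' code.

```
# Hedged sketch of ISO-style position updates (our reading of Eqs. (2)-(3));
# bounds, leader set, and random coefficients are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def update_positions(S, leaders, lb, ub, k, K):
    """S: (n_salps, n_params) positions; leaders: indices of leader salps;
    lb, ub: per-parameter bounds; k, K: current and maximum iteration."""
    r = 2.0 * np.exp(-4.0 * k / K)                 # coefficient r
    S_new = S.copy()
    for i in range(S.shape[0]):
        if i in leaders:
            # Leader update, cf. Eq. (2): exploratory move scaled by r
            r2 = rng.random(S.shape[1])
            S_new[i] = S[i] + r2 * r * (ub - lb) + r * lb
        else:
            # Follower update, cf. Eq. (3): mean difference w.r.t. the leaders;
            # applying it as a corrective step is an assumption of this sketch.
            r1 = rng.random(S.shape[1])
            D = np.mean([r1 * S[i] - S[j] for j in leaders], axis=0)
            S_new[i] = S[i] - D
    return np.clip(S_new, lb, ub)                  # keep parameters inside their ranges

# Tiny illustrative call: 6 salps, 4 parameters, 2 leaders (all assumed values).
lb, ub = np.array([2, 1, 100, 0.1]), np.array([100, 50, 2000, 4.0])
S = rng.uniform(lb, ub, size=(6, 4))
S = update_positions(S, leaders={0, 1}, lb=lb, ub=ub, k=1, K=50)
print(np.round(S, 2))
```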
To this end, given a set of \(p\) salps \(\mathbf{S}_{i_{j}}\) at iteration \(j\), some of the salps parameter vectors may have ambiguous
values. The ambiguous values are those that do not lead to promising solutions. Therefore, it is a hard task to update such vectors, and this problem may worsen by getting trapped in local minima. Consequently, we introduce a goodness function \(\gamma\) based on a well-known mathematical theory, rough set theory (RST), to solve this issue. RST is known for its promising abilities in dealing with ambiguity through computing the approximation space, which is a pair of approximations referred to as the lower and upper approximations. The former represents the set of objects with no ambiguity, while the latter represents the set of possibly ambiguous objects. Assume we are optimizing the metric value \(m_{k}\); i.e., \(m_{k}\) is the input value. First, we compute the fitness value \(f_{\mathbf{S}_{i_{j}}}\) of each salp \(\mathbf{S}_{i_{j}}\) by computing the regression value using the \(k\)NN algorithm as per Section III-B. Second, let \(\tau\) be a user-defined value that serves as a threshold. The set of salps \(\mathbb{S}_{j}^{+}\) at iteration \(j\) having \(f_{\mathbf{S}_{i_{j}}}>\tau\) are considered good; otherwise, the set \(\mathbb{S}_{j}^{-}\) is considered ambiguous.
**Definition 1 (Goodness function, \(\gamma\)):** Given a finite set of \(n\) salps \(\mathbf{S}_{i_{j}}\), we compute the equivalence relation \(E\) of each salp as follows.
\[E_{\mathbf{S}_{i_{j}}}=\{\mathbf{S}_{k_{j}}\,|\,l(\mathbf{S}_{i_{j}},\mathbf{S}_{k_{j}})<\frac{1}{2}|\mathbf{D}_{i,j}-\mathbf{D}_{k,j}|\} \tag{4}\]
The lower, upper and boundary approximations are given as follows.
\[\underline{Apr}(\mathbb{S}_{j}^{+})=\{E_{\mathbf{S}_{i_{j}}}:E_{\mathbf{S}_{i_{j}}}\subseteq\mathbb{S}_{j}^{+}\} \tag{5}\]
\[\overline{Apr}(\mathbb{S}_{j}^{+})=\{E_{\mathbf{S}_{i_{j}}}:E_{\mathbf{S}_{i_{j}}}\cap\mathbb{S}_{j}^{+}\neq\emptyset\} \tag{6}\]
\[BND(\mathbb{S}_{j}^{+})=\overline{Apr}(\mathbb{S}_{j}^{+})-\underline{Apr}(\mathbb{S}_{j}^{+}) \tag{7}\]
Finally, the goodness of the upper approximation is given by
\[\gamma=\frac{|\underline{Apr}(\mathbb{S}_{j}^{+})|}{n}. \tag{8}\]
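A minimal sketch of the goodness computation in Definition 1 and Eqs. (5)-(8) is given below; the distance threshold used to form the equivalence classes and the variable names are assumptions for illustration, not part of the original implementation.

```python
import numpy as np

def goodness(salps, fitness, tau, threshold):
    """Rough-set goodness value (Eq. (8)): the fraction of equivalence
    classes that fall entirely inside the set of 'good' salps."""
    n = len(salps)
    good = {i for i in range(n) if fitness[i] > tau}        # S^+ as defined above

    # Equivalence class of salp i: all salps closer than `threshold` (cf. Eq. (4)).
    def eq_class(i):
        return {k for k in range(n)
                if np.linalg.norm(salps[i] - salps[k]) < threshold}

    classes = [eq_class(i) for i in range(n)]
    lower = [c for c in classes if c <= good]                # Eq. (5): fully inside S^+
    upper = [c for c in classes if c & good]                 # Eq. (6): intersects S^+
    boundary = [c for c in upper if c not in lower]          # Eq. (7): ambiguous classes
    return len(lower) / n, boundary                          # Eq. (8) and BND set
```

The salps whose equivalence classes end up in the boundary set are exactly those that the ISO algorithm later deletes and regenerates.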
```
Input : \(D\) // Training data; \(u_{0}\) // Unknown query object; \(k\) // Number of nearest neighbours
Output : \(M\) // Set of predicted metrics
1 for each object \(u_{i}\in D\) do
2   Compute the Euclidean distance between \(u_{0}\) and \(u_{i}\) as per Eq. (1) and add it to \(L[i]\)
3 end for
4 Sort \(L\) in ascending order
5 Select the first \(k\) objects in \(L\) with the least distance values
6 for each metric \(m_{i}\in M\) do
7   Compute the value of \(m_{i}\) for the unknown object \(u_{0}\) by averaging the corresponding metric values of the \(k\) neighbouring objects: \(m_{i}=\frac{1}{k}\sum_{j=1}^{k}v_{u_{j},m_{i}}\), where \(v_{u_{j},m_{i}}\) is the value of metric \(m_{i}\) for the \(j\)-th of the \(k\) nearest objects in \(L\)
8 end for
```
**Algorithm 1**\(k\)NN regression algorithm for blockchain metrics prediction
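For illustration, the prediction step of Algorithm 1 can be sketched in Python with scikit-learn; this is a minimal sketch, and the function names, the data layout (objects in rows, conditional features and metrics in columns), and the use of KNeighborsRegressor are assumptions rather than the released implementation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.preprocessing import MinMaxScaler

# X holds the conditional (parameter) features P_1..P_9 encoded numerically,
# Y holds the decision features (metrics) M_1..M_13, one column per metric.
def fit_knn_regressor(X_train, Y_train, k=5):
    scaler = MinMaxScaler()                      # min-max normalization, as in Section IV-B
    X_scaled = scaler.fit_transform(X_train)
    model = KNeighborsRegressor(n_neighbors=k, metric="euclidean")
    model.fit(X_scaled, Y_train)                 # multi-output regression: one value per metric
    return scaler, model

def predict_metrics(scaler, model, x_query):
    """Average the metric values of the k nearest training objects (Algorithm 1)."""
    return model.predict(scaler.transform(np.atleast_2d(x_query)))[0]
```

Because KNeighborsRegressor natively supports multi-output targets, all thirteen metrics can be predicted with a single fitted model.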
Having said this, to improve the ISO algorithm's convergence and to avoid getting trapped in local optima, the set of salps in the boundary region \(BND(\mathbb{S}_{j}^{+})\) is completely deleted and regenerated around the salps having high goodness values.
The processes mentioned above are used by the ISO pseudocode displayed in Algorithm 2. Only the first iteration, where \(k=1\), uses the algorithm's initialization process. Then it executes a loop where a different method is used for every iteration \(k>1\). The computational cost of ISO may be estimated by using Algorithm 1 and noting that \(P\) is the number of salps, \(N\) is the number of training objects used by the \(k\)NN fitness evaluation, and \(R\) is the number of iterations. The _exploration_ step involves ISO spanning \(P\) parameter vectors; computing the fitness function for each vector costs \(O(N)\), so the exploration phase costs \(O(PN)\). Second, during the _exploitation_ stage, the ISO method changes each parameter vector at most twice per iteration over \(R\) iterations. As a result, this step's cost is \(O(2RPN)=O(RPN)\). The entire computing cost of the ISO method is therefore \(O(RPN)\), since the exploitation step is the dominant one.
## IV Experimental work
The proposed models were implemented using Python and executed on a system equipped with CentOS 7, a 2.4 GHz Intel Core i7 processor, and 16 GB of RAM. The code is available on GitHub1. We conducted several experiments using the collected data, with two main objectives in mind. First, we aimed to test the \(k\)NN model's ability to predict blockchain performance accurately. Second, we aimed to test the ISO algorithm's ability to identify the best parameter configurations required to achieve a user-defined value for a specific metric, such as throughput.
Footnote 1: [https://github.com/AlbshriAdel/BlockchainPerformanceML](https://github.com/AlbshriAdel/BlockchainPerformanceML)
### _Data collection_
There is currently no readily accessible public dataset tailored to the tasks outlined in this work. Furthermore, considering our objective to validate the proposed concepts, we elected to utilize a dataset derived from a simulation environment. This approach allows us to control the parameters involved and generate a diverse array of performance data.
In the simulation scenario used for our study, we employed the Raft consensus algorithm. As per the operational constraints imposed by Raft, we were compelled to operate with a single miner node. This constraint is inherent to the design of the Raft consensus protocol and is not a limitation of our study per se.
It is important to note that the training of machine learning models necessitates a substantial volume of historical data.
Therefore, we generated the requisite data using a blockchain simulator. The specifics of the parameters (\(P_{i}\)) that we manipulated to alter the blockchain's characteristics are described in Table I. Our data generation approach provided us with the flexibility to adjust these parameters and collect a comprehensive dataset for our machine learning models.
The simulated blockchain model is executed several times using different configuration values for the parameters described in Table I. During these runs, we thoroughly examined the data. Having identified the set of conditional features, we now turn our attention to the decision features, which include the set of performance metrics \(M\). These features are computed based on conditional features and can be used to evaluate the performance of the blockchain. Table II provides details about the metrics we have used.
Note that computing the decision characteristics presented in Table II in simulation mode requires computing the prior features, as shown in Table I. To count these features, we need to have access to the details of each block, which can be a time-consuming process. Therefore, we can define the issue as follows: we will use the ML method (\(k\)NN) and conditional features to directly forecast metric choice feature values. In the following sections, we will train the ML model to predict decision feature values using conditional features.
It is crucial to examine the statistical properties of the collected data to ensure the reliability of the subsequent ML results. Upon reviewing Table I, we observe that there are five numerical features (\(P_{5}\), \(P_{6}\), ..., \(P_{9}\)) present in the dataset.
Prompted by this observation, we sought to gain insights into the dispersion and distribution of these numerical features. We specifically calculated the mean and standard deviation for these features to evaluate the skewness, or asymmetry, of the distribution in the dataset. Additionally, we conducted an examination for any missing values that might affect the analysis.
The results of this comprehensive statistical analysis are detailed in Table III. These preliminary findings will aid us in understanding the inherent characteristics of our dataset,
\begin{table}
\begin{tabular}{l l l l l l} \hline Parameter & Abb. & Desc. & Format & \(L\) & \(U\) \\ \hline Number of nodes & \(P_{1}\) & The number of nodes participating in the blockchain network & Integer & 3 & 15 \\ \hline Number of miners & \(P_{2}\) & The number of miners participating in the blockchain network & Integer & 1 & 1 \\ \hline Consensus algorithm & \(P_{3}\) & Consensus Algorithm "Raft" & String & - & - \\ \hline Transactions/second & \(P_{4}\) & The total number of transactions generated & Integer & 9 & 1650 \\ \hline Max block size & \(P_{5}\) & The maximum block size & Decimal & 1 & 1 \\ \hline Max transaction size & \(P_{6}\) & The maximum transaction data size & Decimal & 0.064 & 0.064 \\ \hline Min transaction size & \(P_{7}\) & The minimum transaction data size & Decimal & 0.001 & 0.001 \\ \hline Block interval & \(P_{8}\) & Block processing time & Decimal & 0.05 & 0.1 \\ \hline Simulation time & \(P_{9}\) & The time taken for executing the simulation & Decimal & 1 & 1 \\ \hline \end{tabular}
\end{table} TABLE I: The description of the nine used parameters with their abbreviation, lower \(L\) and upper \(U\) bound of each.
thereby assisting in the formulation of more accurate machine learning models and predictions.
In the context of a substantial dataset, it proves beneficial to ascertain its central tendency, often represented by a single value such as the mean, median, or mode. This central tendency provides an approximate average value, facilitating an understanding of the dataset's general characteristics. Referring to Table III, it is evident that all numerical features exhibit a notably small standard deviation. This indicates that the data points for each feature are closely distributed around the mean, a sign of well-organized and reliable data. Additionally, to complement the numerical evaluation, we conducted a visual examination of the dataset. For instance, we inspected the distribution of one of the numerical features, namely the block interval feature (\(P_{8}\)). This analysis revealed a normal, or Gaussian, distribution, further validating the quality of the dataset. Furthermore, a meticulous inspection of the collected data did not identify any missing values. This absence of missing data implies that our dataset is complete and further contributes to the robustness of our subsequent ML analysis.
The correlation matrix between parameter-conditional features is helpful for understanding the data and examining feature relationships. This information can be used to verify projected performance. Table IV presents the results of this analysis. We have found that the total number of blocks without transactions (\(M_{5}\)) is unrelated to the other features and can therefore be overlooked. However, the average block size (\(M_{6}\)) and the average number of transactions per block (\(M_{7}\)) have a strong positive association, demonstrating the power of the decision features.
### _kNN Prediction results_
To prevent the issue of feature dominance, all numerical features are normalized. A normalized feature value \(\widehat{v}_{u_{i},a_{j}}\) is obtained from its raw value \(v_{u_{i},a_{j}}\) by
\[\widehat{v}_{u_{i},a_{j}}=\frac{v_{u_{i},a_{j}}-\min_{k}\big{(}v_{u_{k},a_{j}} \big{)}}{\max_{k}\big{(}v_{u_{k},a_{j}}\big{)}-\min_{k}\big{(}v_{u_{k},a_{j}} \big{)}},\]
where \(\min_{k}\big{(}v_{u_{k},a_{j}}\big{)}\) and \(\max_{k}\big{(}v_{u_{k},a_{j}}\big{)}\) are the minimum and maximum values of feature \(a_{j}\), considering all objects, respectively. This formula guarantees that \(\widehat{v}_{u_{i},a_{j}}\in[0,1]\) for all \(i\) and all \(j\).
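As a concrete illustration, the normalization above corresponds to the following column-wise operation; this is a minimal NumPy sketch, and the array layout (objects in rows, features in columns) is an assumption.

```python
import numpy as np

def min_max_normalize(V):
    """Column-wise min-max scaling of a feature matrix V, mapping every
    feature value into [0, 1] as in the formula above. Constant features
    (max == min), such as P_6 and P_7 here, must be handled separately to
    avoid division by zero."""
    v_min = V.min(axis=0)
    v_max = V.max(axis=0)
    return (V - v_min) / (v_max - v_min)
```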
Our first test involves finding the best \(k\) value for the \(k\)NN algorithm. To do so, we ran the model multiple times while changing the \(k\) value and computing the root mean square error (RMSE). We then selected the \(k\) value with the best RMSE. The RMSE is calculated as the standard deviation of the residuals, which represent the prediction errors. Residuals indicate how far data points are from the regression line, while RMSE indicates how spread out these residuals are. In other words, it shows how closely the data is clustered around
\begin{table}
\begin{tabular}{l c c c} \hline \hline Metric & Abb. & Desc. & Format \\ \hline Total number of blocks & \(M_{1}\) & The number of blocks generated & Integer \\ \hline Total number of blocks including transactions & \(M_{2}\) & The number of blocks that contains transactions & Integer \\ \hline Total number of transactions & \(M_{3}\) & The number of transactions generated & Integer \\ \hline Total number of transactions & \(M_{4}\) & The number of transactions not processed & Integer \\ \hline Total number of blocks without transactions & \(M_{5}\) & The number of empty blocks & Integer \\ \hline Average block size & \(M_{6}\) & The average blocks size & Decimal \\ \hline Average number of transactions per block & \(M_{7}\) & Average transactions per block & Decimal \\ \hline Average transaction inclusion time & \(M_{8}\) & Average transaction time & Decimal \\ \hline Average transaction size & \(M_{9}\) & The average size of the transactions & Decimal \\ \hline Average block propagation & \(M_{10}\) & Average block time & Decimal \\ \hline Average transaction latency & \(M_{11}\) & The average time between transaction submission and confirmation & Decimal \\ \hline Transactions execution & \(M_{12}\) & Average number of transactions per block & Decimal \\ \hline Transaction Throughput & \(M_{13}\) & The rate of throughput & Decimal \\ \hline \hline \end{tabular}
\end{table} TABLE II: The description of the thirteen used metrics with their abbreviation.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline Feature & Mean & Std & Min & Max \\ \hline \(P_{5}\) & \(1\) & \(0\) & \(1\) & \(1\) \\ \hline \(P_{6}\) & 6.40E-02 & 1.40E-17 & 6.40E-02 & 6.40E-02 \\ \hline \(P_{7}\) & 1.00E-03 & 2.20E-19 & 1.00E-03 & 1.00E-03 \\ \hline \(P_{8}\) & 0.075 & 0.014 & 0.05 & 0.1 \\ \hline \(P_{9}\) & 1 & 0.0145 & 0.05 & 0.09 \\ \hline \(M_{6}\) & 0.585 & 0.287 & 0.0302 & 0.971 \\ \hline \(M_{7}\) & 18.044 & 8.887 & 1 & 30.846 \\ \hline \(M_{8}\) & 0.484 & 0.0275 & 0.421 & 0.585 \\ \hline \(M_{9}\) & 0.0325 & 0.0012 & 0.027 & 0.0373 \\ \hline \(M_{10}\) & 0.0381 & 0.009 & 0.0209 & 0.089 \\ \hline \(M_{11}\) & 0.0525 & 0.0521 & 0.016 & 0.266 \\ \hline \(M_{12}\) & 0.9303 & 0.0332 & 0.8047 & 0.999 \\ \hline \(M_{13}\) & 508.306 & 268.197 & 11.184 & 1248.655 \\ \hline \hline \end{tabular}
\end{table} TABLE III: Statistical analysis (mean, standard deviation, std, minimum and maximum values) for numerical features (5 parameters and 8 metrics).
the line of best fit. Root mean square error is often used to evaluate the results of experiments in climatology, forecasting, and regression analysis. RMSE is given by
\[RMSE=\sqrt{\frac{\sum_{i=1}^{N}||y(i)-\hat{y}(i)||^{2}}{N}},\]
where \(N\) represents the total count of data points, \(y(i)\) denotes the \(i\)-th measurement in the dataset, and \(\hat{y}(i)\) signifies the corresponding predictive estimation for the \(i\)-th observation. The result of this experiment is shown in Figure 1. It is evident that a choice of \(k=1\) leads to a very high RMSE. When \(k\) is set to 5, the RMSE reduces significantly, approximating a value of 67.06. Any further increment in the value of \(k\) results in a drastic increase in the RMSE. Consequently, it can be confidently inferred that \(k=5\) is the optimal choice for this particular scenario, yielding the most favourable results.
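The \(k\)-selection experiment above can be reproduced in outline as follows; this is a minimal sketch, and the split ratio and the candidate range of \(k\) are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

def select_k(X, Y, candidate_ks=range(1, 21), seed=0):
    """Pick the k with the lowest validation RMSE, mirroring the experiment above."""
    X_tr, X_va, Y_tr, Y_va = train_test_split(X, Y, test_size=0.3, random_state=seed)
    best_k, best_rmse = None, np.inf
    for k in candidate_ks:
        model = KNeighborsRegressor(n_neighbors=k).fit(X_tr, Y_tr)
        rmse = np.sqrt(mean_squared_error(Y_va, model.predict(X_va)))
        if rmse < best_rmse:
            best_k, best_rmse = k, rmse
    return best_k, best_rmse
```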
The majority of the \(p\)-values are less than 0.05, which confirms the statistical difference between the algorithms.
### _ISO validation results_
This section looks at how well the ISO algorithm works to find parameters in a blockchain. We compare ISO's performance to that of six other competitor algorithms to make the investigation meaningful. The six algorithms we compare, namely PSO, HHO, GWO, ABC, ACO, and SO, are all recent.
Here, each algorithm searches for the ideal configuration based on a target metric value. To verify a candidate, the \(k\)NN regressor receives the produced parameter vector; the closer the predicted value is to the target one, the more reliable the method. We used 20 salps for 50 iterations in this experiment, with three leaders. Table VII shows that ISO (last row) won this experiment. In the last row, where \(M_{13}=1100\), the parameter vector produced by ISO achieves 83% accuracy, while the best competitor, the classic salp technique, achieves 81% accuracy.
The development of the fitness value across iterations serves as another comparison test. Figure 2 shows this trend. The ISO curve is generally superior to all other curves. This suggests that ISO consistently outperforms the other algorithms, regardless of the metric. As a result, the evolution paints a clear picture of the algorithm's behaviour from the beginning to the end of the task.
One final experiment examines the parameter vectors predicted by ISO and its six competitors. In this experiment, we attempt to identify the parameter vector that will result in \(M_{13}=1100\). The results are shown in Table VIII. Noting that the last column (achieved value) corresponds to the value of \(M_{13}\) obtained with the resulting parameter vector, we can see that ISO has the best achieved value, which is much closer to 1100 than that of any other competitor.
## V Conclusions
The advent of blockchain technology has initiated a paradigm shift across numerous sectors due to its inherent potential and advanced capabilities. Nevertheless, the intricate and decentralized characteristics of blockchain's underlying infrastructure introduce challenges in assessing the performance of blockchain-based applications. Thus, the necessity for a dependable modeling methodology becomes paramount to aid the creation and performance evaluation of such applications. Historically, research has predominantly focused on simulation-based solutions to evaluate blockchain application performance, while the exploration of machine learning (ML) model-based techniques remains comparatively scant in this context. This study sought to bridge this gap by proposing two innovative ML-based techniques.
The first approach integrated a \(k\) nearest neighbour (\(k\)NN) and a support vector machine (SVM) to predict blockchain performance by leveraging predefined configuration parameters. The second method utilized salp swarm optimization (SO), an ML model, to identify the most advantageous blockchain configurations for achieving the desired performance benchmarks. To further refine the efficacy of SO, we incorporated rough set theory, thus formulating an Improved Salp Optimization (ISO) model. The ISO model displayed superior capabilities in generating accurate recommendations for optimal parameter configurations amidst uncertainties. Upon comparative statistical evaluation, our proposed models exhibited a competitive advantage. Specifically, the \(k\)NN model outperformed the
\begin{table}
\begin{tabular}{c c c c c c c c c|c c|c c|c c} \hline \multicolumn{9}{c|}{Parameters (\(P\))} & \multicolumn{6}{c}{Metrics (\(M\))} \\ \cline{1-15} \(P_{1}\) & \(P_{2}\) & \(P_{3}\) & \(P_{4}\) & \(P_{5}\) & \(P_{6}\) & \(P_{7}\) & \(P_{8}\) & \(P_{9}\) & \multicolumn{2}{c|}{\(M_{11}\)} & \multicolumn{2}{c|}{\(M_{12}\)} & \multicolumn{2}{c}{\(M_{13}\)} \\ & & & & & & & & & \(k\)NN & SVM & \(k\)NN & SVM & \(k\)NN & SVM \\ \hline
13 & 1 & raft & 519 & 1 & 0.064 & 0.001 & 0.083 & 1 & 0.033 & 0.024 & **0.912** & 0.99 & **569.026** & 559.99 \\ \hline
6 & 1 & raft & 682 & 1 & 0.064 & 0.001 & 0.069 & 1 & 0.035 & 0.045 & **0.92** & 0.89 & **737.72** & 744.66 \\ \hline
9 & 1 & raft & 66 & 1 & 0.064 & 0.001 & 0.070 & 1 & 0.022 & 0.055 & **0.91** & 0.84 & **72.35** & 72.44 \\ \hline
9 & 1 & raft & 450 & 1 & 0.064 & 0.001 & 0.058 & 1 & 0.020 & 0.029 & **0.93** & 0.83 & **480.88** & 489.81 \\ \hline
15 & 1 & raft & 893 & 1 & 0.064 & 0.001 & 0.072 & 1 & 0.12 & 0.19 & **0.99** & 0.74 & **754.931** & 759.899 \\ \hline
9 & 1 & raft & 440 & 1 & 0.064 & 0.001 & 0.069 & 1 & 0.026 & 0.031 & **0.940** & 0.830 & **467.88** & 476.35 \\ \hline
6 & 1 & raft & 965 & 1 & 0.064 & 0.001 & 0.065 & 1 & 0.1077 & 0.098 & **0.98** & 0.98 & **982.21** & 977.23 \\ \hline
7 & 1 & raft & 17 & 1 & 0.064 & 0.001 & 0.095 & 1 & 0.033 & 0.055 & **0.91** & 0.97 & **18.68** & 19.67 \\ \hline \end{tabular}
\end{table} TABLE V: The classification accuracy of \(k\)NN and SVM for three different metrics over 10 different parameter configurations.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \multicolumn{1}{c}{} & \multicolumn{1}{c}{Correction} & \(P_{\text{S}}\) & \(P_{\text{S}}\) & \(P_{\text{J}}\) & \(P_{\text{S}}\) & \(P_{\text{S}}\) \\ \hline \multirow{2}{*}{Features} & Fisher & 0.0001 & 0.0007 & 0.0575 & 0.1372 & 0.0166 \\ & Tukey & 0.0028 & 0.0126 & 0.4808 & 0.7532 & 0.2007 \\ \hline \end{tabular}
\end{table} TABLE VI: Post hoc \(p\)-values resulting from Friedman tests of \(k\)NN for four parameters.
Fig. 2: Fitness value compared to the number of iterations: A higher fitness value indicates quicker convergence of the algorithm.
SVM by a margin of 5%, while the ISO model demonstrated a 4% reduction in accuracy deviation relative to the standard SO model. These encouraging results underscore the potential of our proposed methodology in addressing the inherent challenges associated with evaluating the performance of blockchain-based applications. Moreover, they underline the contribution of our work to the advancement and performance evaluation of blockchain-based applications.
Notably, the utility of our models is not confined to specific algorithms, thereby enhancing their adaptability. Future research directions include investigating the scalability of our proposed methodology and its applicability to larger, more complex blockchain-based applications. Additionally, the exploration of a broader range of recent algorithms beyond SVM and \(k\)NN presents an exciting avenue for future studies.
## Acknowledgements
This work is funded in part by the EPSRC under grant number EP/V042017/1 (Scalable Circular Supply Chains for the Built Environment).
|
2305.00552 | Deep Learning-based Spatio Temporal Facial Feature Visual Speech
Recognition | In low-resource computing contexts, such as smartphones and other tiny
devices, Both deep learning and machine learning are being used in a lot of
identification systems. as authentication techniques. The transparent,
contactless, and non-invasive nature of these face recognition technologies
driven by AI has led to their meteoric rise in popularity in recent years.
While they are mostly successful, there are still methods to get inside without
permission by utilising things like pictures, masks, glasses, etc. In this
research, we present an alternate authentication process that makes use of both
facial recognition and the individual's distinctive temporal facial feature
motions while they speak a password. Because the suggested methodology allows
for a password to be specified in any language, it is not limited by language.
The suggested model attained an accuracy of 96.1% when tested on the
industry-standard MIRACL-VC1 dataset, demonstrating its efficacy as a reliable
and powerful solution. In addition to being data-efficient, the suggested
technique shows promising outcomes with as little as 10 positive video examples
for training the model. The effectiveness of the network's training is further
proved via comparisons with other combined facial recognition and lip reading
models. | Pangoth Santhosh Kumar, Garika Akshay | 2023-04-30T18:52:29Z | http://arxiv.org/abs/2305.00552v1 | # Deep Learning-based Spatio Temporal Facial Feature Visual Speech Recognition
###### Abstract
In low-resource computing contexts, such as smartphones and other small devices, both deep learning and machine learning are being used in many identification systems as authentication techniques. The transparent, contactless, and non-invasive nature of these AI-driven face recognition technologies has led to their meteoric rise in popularity in recent years. While they are mostly successful, there are still ways to gain access without permission by utilising artefacts such as pictures, masks, and glasses. In this research, we present an alternative authentication process that makes use of both facial recognition and the individual's distinctive temporal facial feature motions while they speak a password. Because the suggested methodology allows a password to be specified in any language, it is not limited by language. The suggested model attained an accuracy of 96.1% when tested on the industry-standard MIRACL-VC1 dataset, demonstrating its efficacy as a reliable and powerful solution. In addition to being data-efficient, the suggested technique shows promising outcomes with as few as 10 positive video examples for training the model. The effectiveness of the network's training is further demonstrated via comparisons with other combined facial recognition and lip reading models.
Visual Speech Recognition, Deep Learning, LSTM, MIRACL-VC1, Lip Reading
## I Introduction
Digital systems have long used biometric authentication technologies to identify authorised users and provide them access to protected resources. Verifying a user's identification using a photograph of their face has been more popular in recent years due to the system's contact-less [1], noninvasive nature and low barrier to entry. Hand-engineered features like SIFT, LBP, and Fisher vectors formed the foundation of early image-based face recognition systems. Deep learning approaches like FaceNet, Baidu [2], and DeepID models [3] have recently eclipsed human performance in terms of accuracy. Despite this, unauthorised individuals have often been able to trick face authentication systems by utilising a picture of a legitimate user. To address this weakness, researchers are looking at models that analyse the user's lip movements as they say a word. LipNet [4], a deep learning network for lip reading, has reached 95% accuracy in sentence-level categorization. Lip reading models based on HMM, LSTM Networks, CNNs, and other neural network architectures have also been developed. However, they are too dependent on lip movement and lip characteristics and struggle to adapt to varied lighting situations [5].
In order to solve the security and privacy issues, a novel solution has been developed that combines deep facial recognition models with photos without the requirement for audio. In this research, we present an authentication model that records the dynamic patterns of a person's face when they say a certain phrase. The facial characteristics were captured using a VGGNet model [6] dubbed VGGFace [7] that had been pre-trained on faces; the sequence of features was then processed through numerous LSTM layers, and the model learned to predict whether a genuine user was saying their chosen password or not. The selected password is not given to the model, just videos of it being said. This means that if pronounced continuously, it may be any alphabetic or numeric sequence. To evaluate the efficacy of the proposed model on real-world data, it was benchmarked using the publicly accessible standard dataset MIRACL-VC1 [8].
Attackers tend to get around such security measures by utilising a picture of the real user. To address this weakness, researchers are looking at models that analyse the user's lip movements as they say a word. Sentence-level classification accuracy of 95% has been attained by deep learning models for lip reading like LipNet [9]. Lip reading models based on HMMs, LSTM networks, CNNs, and other neural network architectures have also been developed. However, they are too dependent on lip movement and lip characteristics and struggle to adapt to varied lighting situations. In order to address the security and privacy issues, a novel solution has been developed that combines deep facial recognition models with images without the requirement for audio. In this research, we present an authentication model that records the dynamic patterns of a person's face when they say a certain phrase. To do this, we first used a VGGNet model [10] dubbed VGGFace [11], pre-trained on face images, to learn facial features from the input frames; we then fed the resulting sequence of features through numerous LSTM layers, and the model learned to predict whether or not a genuine user was reciting their chosen password. The selected password is not given to the model, just videos of it being said. This means that it may be any alphabetic or numeric sequence, provided it is pronounced continuously. To evaluate how well the proposed model performs on real-world data and requirements, a second dataset was built, consisting of videos captured by smartphone cameras in a variety of environments. The model was then benchmarked against the publicly available standard dataset, MIRACL-VC1 [2]. This compiled dataset was also used to test the system.
The fundamental contribution of this work is a secure video-based face authentication system that utilises a video of a
person speaking a password to detect and prevent imposter scenarios, such as i) the same person pronouncing an alternative phrase (Same Person case) or ii) a different individual attempting to obtain access (Different Person case). This model's independence from a particular language and domain is another important contribution. As far as we are aware, no such system has previously been put into place. The remainder of this paper is organised as follows. Section 2 reviews recent research on lip reading and authentication, as well as the current limitations that call for new developments. The proposed methodology is described in Section 3, followed by an evaluation and validation of the model in Section 4, and a summary and list of references in Section 5.
## II Related works
Several widely used models for facial recognition based on deep learning have been presented recently [3]. Near-perfect results have been achieved on benchmark datasets by the FaceNet [12] and DeepID models [13][9]. The performance of these facial recognition systems, which have been trained on millions of photos, has exceeded that of humans. When employed as authentication systems, however, they remain vulnerable to assaults. This section presents the current state of the art efforts in this area and identifies the few holes that have been found in the performance of these systems.
[9] investigated lip reading methods for word categorization and lip password authentication. Traditional algorithms like FCM and LCACM are employed for lip localization in an image, although they are not the only ones that have been tried. They found that, at the time, Gaussian Mixture Models (GMMs) and Hidden Markov Models (HMMs) were the most popular classifiers for visual speech detection. Based only on the behavioural biometrics of lip movements, [6] suggested a lip password verification model using multi-boosted HMMs. They used an algorithm that segments the lips to retrieve visual sequences of the mouth area, then applied multi-boosted HMMs on those subunits to determine a decision boundary. However, users were limited to passwords consisting entirely of digits, resulting in an equal error rate (EER) of 4.06%. According to the work of [10], a CNN trained on pictures of the mouth can be used to predict phonemes in visual speech recognition systems based on deep learning. For the purpose of the word recognition task, the CNN outputs were treated as sequences, and an HMM+GMM observation model was used. Their results showed that visual characteristics learned using a CNN generalised across domains and performed better than more standard methods.
[14] suggest LSTM-based lip reading. Using Histogram of Oriented Gradients (HOG) features, the LSTM classified words from sequences of mouth images, outperforming SVM and HMM classifiers. [15] built a hybrid CNN-LSTM model for word classification on the same MIRACL-VC1 dataset as this work. CNN+LSTM, CNN, and SVM classifiers were compared; the CNN method performed best for visual speech recognition (61.2%), while the CNN+LSTM model may improve with extra training. LipNet, created by [16], accurately categorises sentence-level lip reading. Spatio-temporal convolutions [17] and recurrent neural networks helped them outperform human lip readers. In recent times, vision-transformer-based architectures [18][19] have been performing better than convolutional neural network architectures in computer vision, and these works also point out the performance of vision transformers on lip reading.
Through a discussion of past research and state-of-the-art models, several shortcomings in facial-recognition-based biometric systems were uncovered. Like any technology, face recognition systems may be exploited and have security weaknesses. Lip password models are comparatively inaccurate, and their passwords must consist solely of digits. Most models are limited to the language they were trained on. This paper develops a language- and domain-neutral methodology to alleviate these limitations. The next section explains the proposed model.
## III Research Methodology
Our proposed model is similar to AuthNet [20] and employs the two-stage convolutional neural network architecture shown in Fig.3. At each frame, we extract key face traits using a
Fig. 1: Proposed Architecture built on VGGFace
VGGFace model that has already been trained [11]. A tiny network of stacked LSTM layers is used to recognise and learn the behaviours of these characteristics. After training, this network can tell whether the person-word combination in a test case is the same as the one that was trained for.
There are 10 speakers in the MIRACL-VC1 Words dataset, and they each say 10 words 10 times. This is broken down into sub-sections for ease of testing and validation. To test the model's resilience against the Different Person case (mentioned above), 5 speakers, or \(5\times 10\times 10=500\) utterances, were arbitrarily put aside as unseen data. To evaluate the model's performance in the Same Person case, we put aside 3 words from each of the remaining 5 speakers (\(5\times 3\times 10=150\) utterances). As a result, 650 specimens were removed from consideration. This leaves \(5\times 7\times 10=350\) utterances, 10 of which are positive (right speaker, correct word), 60 are negative (right speaker, wrong word), and 280 are from wrong speakers. Oversampling the 10 positive samples results in 100 more positive samples, for a grand total of \(350+100=450\) instances. This is then divided into a train and test set with a 70:30 split; the 650 instances that were previously reserved are added to the test set as negative samples to account for the aforementioned impostor scenarios. This method is used to guarantee that we are working with information that the model has never seen before. Ten samples of the correct person and correct word (oversampled to 110), ninety samples of the correct person saying a wrong word, ninety samples of a wrong person saying the correct word, and eight hundred and ten samples of a wrong person saying a wrong word add up to a total of one thousand utterances (10 speakers uttering 10 words 10 times each).
The model was trained in an iterative way on all such conceivable combinations and evaluated against unseen cases to offer a full analysis of the performance and show the generalizability and resilience of the system. Therefore, there are 35 potential person-word combinations generated by repeating the training process for the 5 randomly selected speakers who each pronounce 7 unique words.
### _Data preprocessing._
In the MIRACL-VC1 dataset, each testing and training sample is made up of 5-15 photographs of the individual saying a password (see Fig.2). Only the colour images of the words have been used for training [20]. Pictures of people are first sent through a Haar cascade face detector [8] before they are cropped. The Haar cascade detector, which is part of OpenCV, is often used to identify people and their various facial features. It may be used for basic object identification as well as recognising faces. The four primary phases of the strategy are selecting Haar features, creating integral images for faster processing, training using Adaboost, and lastly feeding the data through a sequence of cascade classifiers. Padding each sequence with blank white frames ensures that all samples have the same number of frames. In order for the trained VGGNet model to handle the photos, they are downscaled to a size of 224 x 224. Identical procedures were used for the manually created dataset. Before being fed to the VGGFace framework, the frames are sampled at an even frame rate, padded to keep the number of timesteps constant, and scaled to a resolution of 224 by 224.
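A minimal sketch of this preprocessing pipeline with OpenCV is given below; the target sequence length of 20 frames and the helper name are assumptions for illustration, not the authors' released code.

```python
import cv2
import numpy as np

FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_frames(frames, target_len=20, size=(224, 224)):
    """Detect and crop the face in every frame, resize to 224x224,
    and pad the sequence with blank (white) frames up to target_len."""
    processed = []
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = FACE_CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) == 0:
            continue                          # skip frames without a detected face
        x, y, w, h = faces[0]
        face = cv2.resize(frame[y:y + h, x:x + w], size)
        processed.append(face)
    while len(processed) < target_len:        # pad with white frames
        processed.append(np.full((*size, 3), 255, dtype=np.uint8))
    return np.stack(processed[:target_len])
```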
### _VGGFace_
The pre-trained VGGFace model [21], made up of a deep series of convolutional layers trained on hundreds of thousands of photos of celebrity faces, is applied to each image, yielding a feature vector of 2,622 dimensions per image. To prepare the data for input into the LSTM layers, this procedure is repeated for each individual and each phrase (as shown in Fig. 1). Each sample therefore consists of 2,622 features over 20 timesteps. This is a binary classification problem, since the data are labelled with a 1 for the correct person-word combination and a 0 for all others. Oversampling was used to address the resulting class imbalance.
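For concreteness, the per-frame feature extraction could be sketched as follows, assuming the third-party keras_vggface package is used to load the pre-trained VGGFace weights; the paper does not specify which library was used, so this choice is an assumption.

```python
from keras_vggface.vggface import VGGFace
from keras_vggface.utils import preprocess_input

# Pre-trained VGGFace backbone; with the top layers included, it yields a
# 2,622-dimensional output per face image, used here as the frame-level feature.
feature_extractor = VGGFace(model="vgg16", include_top=True)

def extract_sequence_features(frames):
    """frames: array of shape (timesteps, 224, 224, 3) in RGB, float-convertible."""
    x = preprocess_input(frames.astype("float32"), version=1)
    return feature_extractor.predict(x)        # shape: (timesteps, 2622)
```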
### _LSTM network_
When dealing with sequential data, LSTM networks have shown remarkable success in a broad range of application fields. Here, we use a network of 4 stacked LSTM layers, each processing the sequence of 20 timesteps in order, followed by an output sigmoid layer that provides probabilistic predictions. After being trained, this model can decide whether or not a given test video contains the required person-word combination before granting access to a verified user.
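A minimal Keras sketch of such a classifier is shown below; the hidden-layer sizes are assumptions, while the 20-timestep input of 2,622-dimensional features, the sigmoid output, and the Adam/binary cross-entropy training configuration follow Sections III and IV.

```python
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import Adam

def build_classifier(timesteps=20, feat_dim=2622):
    """Four stacked LSTM layers over the VGGFace feature sequence,
    followed by a sigmoid unit for the binary accept/reject decision."""
    model = Sequential([
        LSTM(256, return_sequences=True, input_shape=(timesteps, feat_dim)),
        LSTM(256, return_sequences=True),
        LSTM(128, return_sequences=True),
        LSTM(64),                              # final LSTM layer returns a single vector
        Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer=Adam(learning_rate=0.001),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model
```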
## IV Experiment Results
In order to demonstrate that the proposed strategy is effective, a number of tests were carried out. An i7-8640U central processing unit and 8 gigabytes of RAM were used to train the model. Both the conventional MIRACL-VC1 dataset and a constructed dataset of videos, each of which is 2 seconds in duration, were used in the evaluation of the model. The Realme X3 Superzoom (4k, 64MP), the OnePlus Nord (4k, 32MP), and the Apple iPhone 13 (2k, 16MP) were utilised to record the clips. They were shot in a variety of locations with variable lighting and backdrops. Table.I summarises each dataset's key statistics. The average time for the suggested model to process a person-word pair was 154.2 seconds.
The purpose of the suggested pipeline is to perform a binary classification job; the pipeline is also intended to accept as
Fig. 2: Data Preprocessing Pipeline
input a sequence of pictures with a resolution of 640 by 480 pixels, which are arranged in the order of the timesteps. After that, the photos were processed using the pretrained version of the VGGFace model, as shown in Fig.2. Following this, the generated feature vectors were input into the LSTM network for the purposes of training and assessing it, as shown in Fig.3. The proposed model was trained utilising the binary crossentropy (BCE) loss function (derived as per Eq.(1)) and the Adam optimizer, with an initial learning rate of 0.001 over the course of 60 epochs and a batch size of 75 [20].
\[\mathbf{BCE}=-\frac{1}{n}\sum_{i=1}^{n}\left(y_{i}\cdot\log\left(\hat{y}_{i} \right)+\left(1-y_{i}\right)\cdot\log\left(1-\hat{y}_{i}\right)\right) \tag{1}\]
## V Performance Metrics
Many different standard measures were utilised to assess the performance of the proposed model and demonstrate its effectiveness. The authorization ratio of the framework, i.e., the probability that an authorised user will be permitted access, corresponds to the sensitivity of the framework, which may be calculated using Eq.(2). The rejection ratio of a system is its specificity, i.e., the probability that an unwanted attacker will be denied entry to the system (as shown by Eq.(3)). The accuracy represents the overall efficacy of the authentication system and is used to evaluate the model's performance (through Eq.(4)); the degree of accuracy achieved is indicative of its effectiveness.
\[\text{Recall}\ =\frac{TP}{TP+FN} \tag{2}\]
\[\text{Specificity}\ =\frac{TN}{TN+FP} \tag{3}\]
\[\text{Accuracy}\ =\frac{TP+TN}{TP+TN+FP+FN} \tag{4}\]
True Positive and True Negative samples are indicated by the acronyms TP and TN, respectively, while False Positive and False Negative samples are indicated by the abbreviations FP and FN. Two additional metrics were used while examining the different components of the proposed model. The Equal Error Rate (EER) is attained at the operating point where the probability of a legitimate user being refused access is equal to the probability of an intruder acquiring entrance. It is the point at which the False Acceptance Rate (FAR) is equal to
Fig. 3: Proposed Architecture built on LSTM Network
the False Rejection Rate (FRR). The FAR is calculated using Eq.(5), while the FRR is computed using Eq.(6). The EER is a standardised measure used for comparing different biometric systems. The Receiver Operating Characteristic (ROC) curve was used to determine the EER since it is less susceptible to scaling modifications. The ROC curve has been plotted and is depicted in Fig.6.
\[\mathbf{FAR}=\frac{FP}{TP+TN+FP+FN} \tag{5}\]
\[\mathbf{FRR}=\frac{FN}{TP+TN+FP+FN} \tag{6}\]
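For clarity, Eqs. (2)-(6) can be computed directly from the raw confusion counts, as in the following minimal sketch; the definitions are taken exactly as written above.

```python
def authentication_metrics(tp, tn, fp, fn):
    """Metrics of Eqs. (2)-(6) computed from raw confusion counts."""
    total = tp + tn + fp + fn
    return {
        "sensitivity": tp / (tp + fn),          # Eq. (2): authorization ratio
        "specificity": tn / (tn + fp),          # Eq. (3): rejection ratio
        "accuracy":    (tp + tn) / total,       # Eq. (4)
        "FAR":         fp / total,              # Eq. (5), as defined in the text
        "FRR":         fn / total,              # Eq. (6), as defined in the text
    }
```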
The ROC curve shows how the True Positive Rate (TPR) and the False Positive Rate (FPR, or FAR) change in relation to one another as the threshold is varied. The ROC curve may be seen as a cost-benefit analysis when making decisions. It is also utilised to compute the decision threshold, ensuring that the pipeline obtains the greatest number of true positives possible for each false positive it encounters. This ensures that the greatest number of legitimate accesses are granted by the system while minimising the possibility of any unauthorised access being granted. Finding the intersection of the ROC curve with the line \(y=1-x\) gives us the equal error point. The Receiver Operating Characteristic (ROC) curve is also used in the computation of the Area Under the ROC Curve (AUC) [20]. The AUC measures how likely it is that the model would assign a higher score to a hypothetically positive example than to a hypothetically negative one. The AUC is a metric that measures how confident a model is in its decision boundaries and does not need a threshold.
## VI Results and Analysis
To measure the efficacy of our proposed methodology, we built a cross-validation dataset that included three types of imposter cases for any given person-word pair: the same speaker using a different word, a different speaker using the same word, and a different speaker using a different word. The test speakers were selected at random from a pool of people the model had never seen during training.
The model's capacity to generalise to new person-word pairings is also rigorously evaluated by selecting a set of terms that are wholly novel yet are all uttered by the same individual. Table.III reports the proposed model's performance in terms of the different evaluation metrics, and Fig.5 displays the associated confusion matrix.
\[\cos(\theta)=\frac{\mathbf{A}\cdot\mathbf{B}}{\|\mathbf{A}\|\|\mathbf{B}\|}=\frac{\sum_{i=1}^{n}A_{i}B_{i}}{\sqrt{\sum_{i=1}^{n}A_{i}^{2}}\sqrt{\sum_{i=1}^{n}B_{i}^{2}}} \tag{7}\]
The differences between the feature vectors were calculated using the cosine difference, as per Eq. (7). The cosine difference serves to compare the feature vectors acquired by the VGGFace model across speakers and words, or even across time for the same speaker, and demonstrates that there is a substantial variation between them. This variation in feature vectors between individuals and words provides the empirical rationale for how facial characteristics vary when people recite their passwords. Therefore, when the pre-trained VGGFace model extracts a feature vector for each image, the LSTM network is able to capture substantial variations across images. This is seen in Fig.4, and it emphasises how the suggested model can deal with imposters saying the same password, enabling it to function as an open password system.
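A minimal sketch of this comparison is given below; treating the cosine difference as \(1-\cos(\theta)\) of Eq. (7) is an assumption about the exact quantity plotted.

```python
import numpy as np

def cosine_difference(a, b):
    """1 - cos(theta) between two VGGFace feature vectors (cf. Eq. (7));
    larger values indicate more dissimilar frames, words, or speakers."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return 1.0 - cos
```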
The system's high sensitivity of 0.96, as shown in Table.III, indicates that it incorrectly identifies an input user-password combination just 2 times for every 1000 attempts. It's an excellent gauge of our ability to distinguish false positives. The model's high specificity shows how well it can identify different types of imposter attacks. The varying degrees of wrong specificity for each imposter circumstance are shown in Table.IV. The data shows that although the system is successful in preventing attacks from a new user almost all the time (about 96 percent of the time), it is only able to correctly recognise an erroneous password being provided around 9 out of every 10 times. This proves the model can serve as a robust defensive mechanism against attacks from malicious actors, ensuring a high level of personal privacy is preserved. Fig.6 shows the ROC curve, from which we can calculate that the EER is 0.023 by selecting the point at which the TPR is 0.963 and the FPR is 0.023. The equal error rate of a pipeline, which should be reasonably low, may be used as a proxy for its ability to reduce the amount of false acceptances and incorrect
Fig. 4: Variations in future extraction
rejections. The pipeline seems to be rather confident in its prediction, with an area under the curve (AUC) of 0.96.
We find that the model performs similarly on the MIRACL-VC1 dataset developed in the lab as it does on the collected dataset acquired from videos shot by a smartphone camera. This displays the system's flexibility in dealing with pictures and lighting conditions seen in the actual world. This suggests that the system may inadvertently pick up patterns that are unique to each user, and that the results are not just the result of overfitting to data obtained in a lab.
As evidence of the effectiveness of the training, we compare our model to a two-tiered system consisting of various state-of-the-art models in face recognition and lip reading (Fig.3).
The state-of-the-art FaceNet model [12] was used as a benchmark since it claims an accuracy of 99.76 percent. A lip reading model with an accuracy of 61.2% was presented by [15]. The models of [14] and [13], which achieved 65.4% and 86.6% accuracy, respectively, were the previous state-of-the-art word-level lip reading models. The proposed model's efficacy has also been measured against that of LipNet [16].
Table.II displays the results and comparisons, demonstrating that the trained model achieves the same performance as the sum of state-of-the-art models while protecting against any imposter assaults or faults present in the latter.
## VII Conclusion and Future Work
To verify people using their facial movements over time, we developed a deep neural network that combines a convolutional neural network (CNN) and a recurrent neural network (RNN). Our model was tested on the MIRACL-VC1 dataset, and it was shown to be 96.1% accurate there. Cross-validation was used to verify the dataset's usefulness and demonstrate that it is not language-specific. We also showed that our system
Fig. 5: Learning Curves of Training and Validation Accuracy and Loss for 30 Epochs and its Confusion Matrix
Fig. 6: ROC Curve for the MIRACL-V1 dataset
is robust to variations in background, lighting, and video quality, making it particularly well-suited to mobile and low-resource smart devices. Our methodology might be used as an open credential system, in which the danger of disclosing the password is eliminated, provided it can accurately reject the same password being uttered by a different person. Since our method only needs temporal facial movements captured from speech videos and not information about the speaker's dialect, it is not hindered by language barriers. Our method showed exceptional efficiency in terms of data requirements, needing just 100 positive video examples for training. Currently, each sample takes about 10 seconds to test, but with some optimisation, that time could be cut in half. Optimising this testing time remains a substantial challenge in the development of an authentication system for mobile devices and personal computers.
|
2307.16549 | DiffProsody: Diffusion-based Latent Prosody Generation for Expressive
Speech Synthesis with Prosody Conditional Adversarial Training | Expressive text-to-speech systems have undergone significant advancements
owing to prosody modeling, but conventional methods can still be improved.
Traditional approaches have relied on the autoregressive method to predict the
quantized prosody vector; however, it suffers from the issues of long-term
dependency and slow inference. This study proposes a novel approach called
DiffProsody in which expressive speech is synthesized using a diffusion-based
latent prosody generator and prosody conditional adversarial training. Our
findings confirm the effectiveness of our prosody generator in generating a
prosody vector. Furthermore, our prosody conditional discriminator
significantly improves the quality of the generated speech by accurately
emulating prosody. We use denoising diffusion generative adversarial networks
to improve the prosody generation speed. Consequently, DiffProsody is capable
of generating prosody 16 times faster than the conventional diffusion model.
The superior performance of our proposed method has been demonstrated via
experiments. | Hyung-Seok Oh, Sang-Hoon Lee, Seong-Whan Lee | 2023-07-31T10:28:45Z | http://arxiv.org/abs/2307.16549v1 | DiffProsody: Diffusion-based Latent Prosody Generation for Expressive Speech Synthesis with Prosody Conditional Adversarial Training
###### Abstract
Expressive text-to-speech systems have undergone significant advancements owing to prosody modeling, but conventional methods can still be improved. Traditional approaches have relied on the autoregressive method to predict the quantized prosody vector; however, it suffers from the issues of long-term dependency and slow inference. This study proposes a novel approach called DiffProsody in which expressive speech is synthesized using a diffusion-based latent prosody generator and prosody conditional adversarial training. Our findings confirm the effectiveness of our prosody generator in generating a prosody vector. Furthermore, our prosody conditional discriminator significantly improves the quality of the generated speech by accurately emulating prosody. We use denoising diffusion generative adversarial networks to improve the prosody generation speed. Consequently, DiffProsody is capable of generating prosody 16 times faster than the conventional diffusion model. The superior performance of our proposed method has been demonstrated via experiments.
text-to-speech, speech synthesis, denoising diffusion model, prosody modeling, generative adversarial networks
## I Introduction
Recent advancements in neural text-to-speech (TTS) models have significantly enhanced the naturalness of synthetic speech. In several studies [1, 2, 3, 4, 5, 6, 7], prosody modeling has been leveraged to synthesize speech that closely resembles human expression. Prosody, which encompasses various speech properties, such as pitch, energy, and duration, plays a crucial role in the synthesis of expressive speech.
In some studies [8, 9], reference encoders have been used to extract prosody vectors for prosody modeling. A global style token (GST) [10] is an unsupervised style modeling method that uses learnable tokens to model and control various styles. Meta-StyleSpeech [11] proposes the application of style vectors extracted using a reference encoder through a style-adaptive layer norm. Progressive variational autoencoder TTS [12] presents a method for gradual style adaptation. A zero-shot method for speech synthesis that comprises the use of a normalization architecture, speaker encoder, and feedforward transformer-based architecture [13] was proposed. Despite the intuitive and effective nature of using a reference encoder, these methods cannot reflect the details of prosody in their synthetic speech without ground-truth (GT) prosody information.
Recently, methods for inferring prosody from text in the absence of reference audio have been developed [14, 15]. FastPitch [16], for instance, synthesizes speech under text and fundamental frequency conditions. FastSpeech 2 [3] aims to generate natural, human-like speech by using extracted prosody features, such as pitch, energy, and duration, through an external tool and introduce a variance adaptor module that predicts these features. Some studies [17, 18] have proposed hierarchical models through the design of prosody features at both coarse and fine-grained levels. However, the separate modeling of prosodic features may yield unnatural results owing to their inherent correlation.
Some studies have predicted a unified prosody vector, thus enhancing the representation of prosody, given the interdependence of prosody features. Text-predicted GST [19] is a method for modeling prosody without reference audio by predicting the weight of the style token from the input text. [20] proposed a method for paragraph-based prosody modeling by introducing a paragraph encoder. Gaussian-mixture-model-based phone-level prosody modelling [21] is a method for sampling reference prosody from Gaussian components. [22] proposed a method for modeling prosody using style, perception, and frame-level reconstruction loss. There are also studies in which prosody is modeled using pre-trained language models [23, 24, 25, 26]. ProsoSpeech [27] models prosody with vector quantization (VQ) using large amounts of data and predicts the index of the codebook using an autoregressive (AR) prosody predictor. However, when predicting prosody vectors, an AR prosody predictor encounters challenges related to long-term dependencies.
To address these issues, we propose DiffProsody, a novel approach that generates expressive speech by employing a diffusion-based latent prosody generator (DLPG) and prosody conditional adversarial training.
The primary contributions of this work are as follows:
* We propose a diffusion-based latent prosody modeling method that can generate high-quality latent prosody representations, thereby enhancing the expressiveness of synthetic speech. Furthermore, we adopted denoising diffusion generative adversarial networks (DDGANs) to reduce the number of timesteps, resulting in speeds that
were 2.48x and 16x faster than those of the AR model and denoising diffusion probabilistic model (DDPM) [28], respectively.
* We propose prosody conditional adversarial training to ensure an accurate reflection of prosody using the TTS module. A significant improvement in smoothness, attributable to vector quantization, was observed in the generated speech.
* Objective and subjective evaluations demonstrated that the proposed method outperforms comparative models.
The implementation1 of the proposed method and audio samples2 for various datasets, such as VCTK3 and LibriTTS [29], are available online. Footnote 1: [https://github.com/hschol0306/Diff/Prosody](https://github.com/hschol0306/Diff/Prosody)
Footnote 2: [https://pmml-lab-speech-team.github.io/demo/Diff/Prosody/](https://pmml-lab-speech-team.github.io/demo/Diff/Prosody/)
Footnote 3: [https://datashare.ed.ac.uk/handle/10283/2651](https://datashare.ed.ac.uk/handle/10283/2651)
## II Related works
### _Non-autoregressive text-to-speech_
Traditional TTS models function autoregressively. This implies that the spectrogram generates one frame at a time, with each frame conditioned on the preceding frames. Despite the high-quality speech that this approach can produce, it has a drawback in terms of speed owing to the sequential nature of the generation process. To address this issue, non-autoregressive TTS (NAR-TTS) models have been proposed as an alternative for parallel generation. These models have the advantage of simultaneously generating the entire spectrogram, thus resulting in a significant acceleration of the speech synthesis. FastSpeech [2] and FastSpeech 2 [3] serve as examples of NAR-TTS models that can synthesize speech at a much faster rate than their AR counterparts while maintaining a comparable level of quality. For parallel generation, these models require phoneme-level durations. FastSpeech uses an AR teacher model to obtain durations through knowledge distillation. The phoneme-level input is scaled to the frame-level using a length regulator, and a transformer-based network [30] is used to generate the entire utterance at once. FastSpeech 2 addresses some of the disadvantages of FastSpeech by extracting the phoneme duration from forced alignment as the training target instead of relying on the attention map of the AR teacher model and introducing more variation information in speech as conditional inputs. In contrast to these models that use an external aligner, [31, 32, 33, 34] are parallel TTS models that use an internal aligner to model duration. These parallel-generation models exhibit faster and more robust generation than the AR models. In this study, we adopted a transformer-based NAR model with a simple structure to focus on prosody modeling.
### _Generative adversarial networks_
Generative adversarial networks (GANs) [35] are generative models in which generative and discriminative networks compete against each other. The objective of the generative network is to create samples that closely resemble the true data distribution, whereas the discriminative network strives to differentiate between the data sampled from the true and generated distributions.
These two networks play a minimax game with the following value function \(V(D,G)\):
\[\min_{G}\max_{D}V(D,G)=\mathbb{E}_{\mathbf{x}\sim p_{data}(\mathbf{ x})}[\log{(D(\mathbf{x}))}]\\ +\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}(\mathbf{z})}[\log{(1-D (G(\mathbf{z})))}], \tag{1}\]
where \(G\) represents the generative network, and \(D\) represents the discriminative network. During training, \(G\) is optimized to minimize \(\log{(1-D(G(\mathbf{z})))}\) so that its samples are judged as real by \(D\), whereas \(D\) is optimized to maximize \(\log{(D(\mathbf{x}))}\) and \(\log{(1-D(G(\mathbf{z})))}\), i.e., to learn the likelihood of real data. A conditional generation model is obtained by introducing a condition \(c\) into both \(G\) and \(D\). In this study, we incorporated prosody features as conditions in the adversarial training to generate expressive speech.
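To make the roles of the two players in Eq. (1) concrete, the following minimal PyTorch sketch performs one alternating training step. The tiny linear networks, optimizer settings, and batch size are placeholder assumptions for illustration only and do not correspond to any model described in this paper.

```python
import torch
import torch.nn as nn

# Placeholder networks standing in for an arbitrary generator/discriminator pair.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))          # z -> sample
D = nn.Sequential(nn.Linear(8, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))   # sample -> logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()  # binary cross-entropy on logits

def train_step(x_real: torch.Tensor) -> None:
    z = torch.randn(x_real.size(0), 16)
    x_fake = G(z)
    ones = torch.ones(x_real.size(0), 1)
    zeros = torch.zeros(x_real.size(0), 1)

    # D maximizes log D(x) + log(1 - D(G(z))), i.e. minimizes the two BCE terms below.
    loss_d = bce(D(x_real), ones) + bce(D(x_fake.detach()), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # G minimizes log(1 - D(G(z))); the common non-saturating variant instead
    # maximizes log D(G(z)), which the BCE term with "real" labels realizes.
    loss_g = bce(D(x_fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

train_step(torch.randn(4, 8))
```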
### _Denoising diffusion models_
The denoising diffusion model is a generative model that gradually collapses data into noise and generates data from noise. The processes of collapsing and denoising data are called the forward and reverse processes, respectively. The forward process gradually collapses data \(\mathbf{x}_{0}\) into noise over the \(T\)-step, with a predefined variance schedule \(\beta_{t}\).
\[q(\mathbf{x}_{1:T}|\mathbf{x}_{0})=\prod_{t=1}^{T}q(\mathbf{x}_{t}|\mathbf{x} _{t-1}), \tag{2}\]
where \(q(\mathbf{x}_{t}|\mathbf{x}_{t-1}):=\mathcal{N}(\mathbf{x}_{t};\sqrt{1-\beta_ {t}}\mathbf{x}_{t-1},\beta_{t}I)\). The reverse process is defined as follows:
\[p_{\theta}(\mathbf{x}_{0:T})=p(\mathbf{x}_{T})\prod_{t=1}^{T}p_{\theta}( \mathbf{x}_{t-1}|\mathbf{x}_{t}). \tag{3}\]
The reverse process is driven by a denoising model parameterized by \(\theta\). The denoising model is optimized with a variational bound on the negative log-likelihood.
\[\mathbb{E}[-\log p_{\theta}(\mathbf{x}_{0})]\leq\mathbb{E}_{q}[- \log\frac{p_{\theta}(\mathbf{x}_{0:T})}{q(\mathbf{x}_{1:T}|x_{0})}]\\ =\mathbb{E}_{q}[-\log p(\mathbf{x}_{T})-\sum_{t\geq 1}\log \frac{p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})}{q(\mathbf{x}_{t}|\mathbf{x }_{t-1})}]:=L. \tag{4}\]
In the DDPM [28], the denoising distribution \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) is assumed to be a Gaussian distribution. It has been demonstrated that the diffusion model can generate a diverse range of complex distributions, provided that a sufficient number of iterations is performed. However, the Gaussian assumption on the denoising distribution is only valid when each denoising step is small, which implies that a considerable number of timesteps is required. The DDGAN [36] was proposed to reduce the number of sampling steps by modeling the denoising distribution as a non-Gaussian multimodal distribution. In contrast to the denoising network of the DDPM, which predicts noise, it predicts \(\mathbf{x}_{0}\) with a generator \(G_{\theta}\) that models an implicit distribution. In the DDGAN, the conditional probability \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) is defined as follows:
\[p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t}):=\int p_{\theta}( \mathbf{x}_{0}|\mathbf{x}_{t})q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{0})d \mathbf{x}_{0}\\ =\int p(\mathbf{z})q(\mathbf{x}_{t-1}|\mathbf{x}_{t},\mathbf{x}_{ 0}=G_{\theta}(\mathbf{x}_{t},\mathbf{z},t))d\mathbf{z}, \tag{5}\]
where \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) represents the implicit distribution imposed by the generator \(G_{\theta}(\mathbf{x},\mathbf{z},t)\), which outputs \(\mathbf{x}_{0}\) given \(\mathbf{x}_{t}\) and the latent variable \(\mathbf{z}\sim p(\mathbf{z}):=\mathcal{N}(\mathbf{z};0,\mathbf{I})\). In the DDGAN, the denoising distribution \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) is modeled as a complex multimodal distribution, in contrast to the unimodal distribution in the DDPM. Leveraging this more expressive \(p_{\theta}(\mathbf{x}_{t-1}|\mathbf{x}_{t})\) makes it possible to sample \(\mathbf{x}_{0}\) in only a few timesteps. We adopt this framework to reduce the number of sampling timesteps while retaining the generative capability of diffusion models.
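As a concrete illustration of the forward process in Eq. (2) and the reverse step behind Eq. (5), the sketch below samples \(\mathbf{x}_{t}\) in closed form and then draws \(\mathbf{x}_{t-1}\) from the DDPM posterior around a predicted \(\mathbf{x}_{0}\). The linear beta schedule, the four timesteps, and the dimensions are illustrative assumptions rather than the paper's exact configuration.

```python
import torch

T = 4
betas = torch.linspace(1e-4, 0.02, T)     # illustrative linear schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def q_sample(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    noise = torch.randn_like(x0)
    return alpha_bars[t].sqrt() * x0 + (1 - alpha_bars[t]).sqrt() * noise

def posterior_sample(x0_pred: torch.Tensor, xt: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_{t-1} ~ q(x_{t-1} | x_t, x_0 = x0_pred), as used by a DDGAN-style reverse step."""
    a_bar_prev = alpha_bars[t - 1] if t > 0 else torch.tensor(1.0)
    coef_x0 = a_bar_prev.sqrt() * betas[t] / (1 - alpha_bars[t])
    coef_xt = alphas[t].sqrt() * (1 - a_bar_prev) / (1 - alpha_bars[t])
    mean = coef_x0 * x0_pred + coef_xt * xt
    var = (1 - a_bar_prev) / (1 - alpha_bars[t]) * betas[t]
    return mean if t == 0 else mean + var.sqrt() * torch.randn_like(xt)

x0 = torch.randn(2, 192)                         # e.g. word-level latent prosody vectors
xt = q_sample(x0, t=3)
x_prev = posterior_sample(x0_pred=x0, xt=xt, t=3)  # a generator's x0 prediction would replace x0 here
```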
### _Prosody modeling_
Although the pronunciation capabilities of TTS models have advanced significantly, they still fail to replicate the naturalness inherent in human speech. Various studies have proposed prosody modeling methods to address this limitation. One such method is the reference-based approach [37, 37, 8, 9, 10, 11], a type of expressive speech synthesis that extracts styles from reference audio. This approach is particularly beneficial for style transfer, but can produce unnatural results when the text and reference audio do not align well. Other methods directly model prosodic attributes such as pitch, energy, and duration [38, 39, 16]. These methods offer explainability and controllability because they model the prosody features directly. In ProsoSpeech [27], the authors argue that prosodic features are interdependent and that modeling them separately can result in unnatural outcomes. To address this, they proposed modeling prosody as a latent prosody vector (LPV) and introduced an AR prosody predictor to obtain the LPV. In this study, we adopt a similar latent-vector approach to model prosody. The use of pre-trained language models has also been suggested [23, 24, 25, 26]; these approaches employ models pre-trained on large datasets, such as BERT [40] or GPT-3 [41].
## III DiffProsody
The proposed method, called DiffProsody, aims to enhance speech synthesis by incorporating a diffusion-based latent prosody generator (DLPG) and prosody conditional adversarial training. The overall structure and process of DiffProsody are presented in Figure 1. In the first stage, we trained a TTS module and a prosody encoder using a text sequence and a reference Mel-spectrogram as inputs. The prosody conditional discriminator evaluates the prosody vector from the prosody encoder and the Mel-spectrogram from the TTS module to provide feedback on their quality. In the second stage, we train a DLPG to sample a prosody vector that corresponds to the input text and speaker. During inference, the TTS module synthesizes speech without relying on a reference Mel-spectrogram. Instead, it uses the output of a DLPG. This facilitates the generation of expressive speech that accurately reflects the desired prosody.
Fig. 1: Framework of DiffProsody. (a) Overall architecture including TTS and prosody modeling with prosody conditional adversarial training; (b) Prosody modeling by vector quantization with prosody encoder and diffusion-based latent prosody generator; (c) Text encoder that models text at the phoneme-level and word-level; (d) Prosody encoder that models the word-level target prosody; (e) Prosody conditional discriminator for adversarial training. DP represents a duration predictor, and LR represents a length regulator. In the first stage, the TTS and prosody encoder are trained jointly, and in the second stage, a diffusion-based latent prosody generator (DLPG) is trained using the output of the pre-trained prosody encoder as a target. In inference, the TTS module synthesizes speech conditioned on the prosody vector generated by DLPG.
### _Text-to-speech module_
The TTS module is designed to transform text into Mel-spectrograms using speaker and prosody vectors as conditions. The overall structure of the model is presented in Figure 1a. The TTS module comprises a text encoder and a decoder. The text encoder processes the text at both the phoneme and word levels, as illustrated in Figure 1c. The input text, denoted as \(\mathbf{x}_{txt}\), is converted into a text hidden representation \(\mathbf{h}_{txt}\) by the phoneme encoder \(E_{p}\) and the word encoder \(E_{w}\). \(E_{p}\) takes the phoneme-level text \(\mathbf{x}_{ph}\) as input, and \(E_{w}\) takes the word-level text \(\mathbf{x}_{wd}\). The \(\mathbf{h}_{txt}\) is then obtained as the element-wise sum of the outputs of \(E_{p}(\mathbf{x}_{ph})\) and \(E_{w}(\mathbf{x}_{wd})\) expanded to the phoneme-level.
\[\mathbf{h}_{txt}=E_{p}(\mathbf{x}_{ph})+expand(E_{w}(\mathbf{x}_{wd})), \tag{6}\]
where \(expand\) is an operation that expands the word-level features to the phoneme-level. Obtaining the quantized prosody vector \(\mathbf{z}_{pros}\) involves using \(\mathbf{h}_{txt}\) and speaker hidden representation \(\mathbf{h}_{spk}\) as inputs for the prosody module. In addition, \(\mathbf{h}_{spk}\) is acquired using a pre-trained speaker encoder. We use Resemblyzer4, an open-source model trained with generalized end-to-end loss (GE2E) [42], to extract \(\mathbf{h}_{spk}\).
Footnote 4: [https://github.com/resemble-ai/Resemblyzer](https://github.com/resemble-ai/Resemblyzer)
During the first stage of training, a prosody encoder is employed, which receives the target Mel-spectrogram. In the inference, \(\mathbf{z}^{\prime}_{pros}\) is obtained by inputting \(\mathbf{h}_{txt}\) and \(\mathbf{h}_{spk}\) into a DLPG, and this is performed without a reference Mel-spectrogram. Finally, the information related to the text, speaker, and prosody is combined by expanding the latent vectors \(\mathbf{h}_{txt}\), \(\mathbf{h}_{spk}\), and \(\mathbf{z}_{pros}\) to the phoneme-level and then performing an element-wise summation.
\[\mathbf{h}_{total}=\mathbf{h}_{txt}+\mathbf{h}_{spk}+\mathbf{z}_{pros}. \tag{7}\]
The phoneme duration is modeled using the duration predictor \(DP\). The goal of the \(DP\) is to predict the phoneme duration at the frame-level based on the input variable \(\mathbf{h}_{total}\).
\[dur^{\prime}=DP(\mathbf{h}_{total}). \tag{8}\]
In addition, there is a length regulator \(LR\) that expands the input variable to the frame-level using the phoneme duration \(dur\). The expanded \(\mathbf{h}_{total}\) is then transformed to Mel-spectrogram \(\mathbf{y}^{\prime}\) by \(D_{mel}\).
\[\mathbf{y}^{\prime}=D_{mel}(LR(\mathbf{h}_{total},dur)). \tag{9}\]
For TTS modeling, we use two types of losses: the mean square error (MSE) and structural similarity index (SSIM) loss. These losses aid in accurately modeling the TTS. For the duration modeling, we use the MSE loss.
\[\mathcal{L}_{rec}=\mathcal{L}_{MSE}(\mathbf{y},\mathbf{y}^{\prime})+\mathcal{ L}_{SSIM}(\mathbf{y},\mathbf{y}^{\prime}). \tag{10}\]
\[\mathcal{L}_{dur}=\mathcal{L}_{MSE}(dur,dur^{\prime}). \tag{11}\]
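The following sketch traces Eqs. (7)-(9) and (11) at the tensor level: phoneme-, speaker-, and prosody-level representations are summed, a duration predictor estimates per-phoneme lengths, and a length regulator expands the sum to the frame level before Mel decoding. The single linear layers, the toy durations, the already-expanded prosody vector, and the omission of the SSIM term in Eq. (10) are simplifying assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

d_model, n_mels = 192, 80

duration_predictor = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 1))
mel_decoder = nn.Linear(d_model, n_mels)   # stand-in for the transformer decoder D_mel

def length_regulate(h: torch.Tensor, dur: torch.Tensor) -> torch.Tensor:
    """Expand phoneme-level features (L, d) to the frame level using integer durations (L,)."""
    return torch.repeat_interleave(h, dur, dim=0)

L = 6                                                  # number of phonemes
h_txt = torch.randn(L, d_model)                        # phoneme-level text representation
h_spk = torch.randn(1, d_model).expand(L, d_model)     # speaker vector broadcast to phonemes
z_pros = torch.randn(L, d_model)                       # quantized prosody, assumed already expanded

h_total = h_txt + h_spk + z_pros                       # Eq. (7)
dur_pred = duration_predictor(h_total).squeeze(-1)     # Eq. (8)
dur_gt = torch.tensor([3, 5, 2, 4, 6, 3])              # ground-truth durations used during training
mel = mel_decoder(length_regulate(h_total, dur_gt))    # Eq. (9): shape (sum(dur), 80)

mse = nn.MSELoss()
loss_dur = mse(dur_pred, dur_gt.float())               # Eq. (11); the SSIM term of Eq. (10) is omitted
```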
### _Prosody module_
Figure 1b presents the prosody module, which includes a prosody encoder \(E_{pros}\) that derives a prosody vector from a reference Mel-spectrogram, a DLPG that produces a prosody vector using text and speaker hidden states, and a codebook \(Z=\{\mathbf{z}_{k}\}_{k=1}^{K}\in\mathbb{R}^{K\times d_{z}}\), where \(K\) represents the size of the codebook and \(d_{z}\) is the dimension of the codes. During the training of \(E_{pros}\), instead of a full-band Mel-spectrogram, we used a low-frequency band Mel-spectrogram to facilitate the disentanglement of prosody from linguistic content, as in ProsoSpeech [27]. Figure 1d presents the structure of \(E_{pros}\), which comprises two convolutional stacks and a word-level pooling layer. To extract the target prosody, \(E_{pros}\) uses the lowest \(N\) bins of the target Mel-spectrogram \(\mathbf{y}_{[0:N]}\), along with the \(\mathbf{h}_{txt}\) and \(\mathbf{h}_{spk}\), as its inputs. The output of this process is a prosody vector, \(\mathbf{h}_{pros}\in\mathbb{R}^{L\times d_{z}}\), where \(L\) is the word-level length of the input text.
\[\mathbf{h}_{pros}=E_{pros}(\mathbf{y}_{[0:N]},\mathbf{h}_{txt},\mathbf{h}_{spk }). \tag{12}\]
During the inference stage, the prosody vector \(\mathbf{h}^{\prime}_{pros}\) is obtained using the prosody generator trained in the second stage.
\[\mathbf{h}^{\prime}_{pros}=DLPG(\mathbf{h}_{txt},\mathbf{h}_{spk}). \tag{13}\]
Fig. 2: Training a diffusion-based latent prosody generator. We adopt the design of DDGANs [36] to shorten the diffusion timestep. The generator \(G_{\theta}\) takes the speaker hidden representation \(\mathbf{h}_{spk}\), text hidden representation \(\mathbf{h}_{txt}\), timestep \(t\), and noisy data \(\mathbf{x}_{t}\) as input to generate \(\mathbf{x}^{\prime}_{0}\), and the discriminator \(D_{\phi}\) determines which of \(\mathbf{x}^{\prime}_{t-1}\), obtained by posterior sampling on \(\mathbf{x}^{\prime}_{0}\), and \(\mathbf{x}_{t-1}\), obtained by the forward process on \(\mathbf{x}_{0}\), is compatible with \(\mathbf{x}_{t}\) at timestep \(t\).
The \(DLPG\) process is described in section III-D. To obtain the discrete prosody token sequence \(\mathbf{z}_{pros}\in\mathbb{R}^{L\times d_{z}}\), the vector quantization layer \(Z\) maps each prosody vector \(\mathbf{h}_{pros}^{i}\in\mathbb{R}^{d_{z}}\) to the nearest element of the codebook entry \(\mathbf{z}_{k}\in\mathbb{R}^{d_{z}}\).
\[\mathbf{z}_{pros}^{i}=\operatorname*{arg\,min}_{\mathbf{z}_{k}\in Z}|| \mathbf{h}_{pros}^{i}-\mathbf{z}_{k}||_{2}\text{ for }i=1\text{ to }L, \tag{14}\]
where \(\mathbf{z}_{pros}^{i}\) is \(i\)-th element of \(\mathbf{z}_{pros}\). In the first stage, the TTS module is trained jointly with the codebook \(Z\) and prosody encoder \(E_{pros}\).
\[\mathcal{L}_{vq}=||sg[\mathbf{h}_{pros}]-\mathbf{z}_{pros}||_{2}^{2}+\beta|| \mathbf{h}_{pros}-sg[\mathbf{z}_{pros}]||_{2}^{2}, \tag{15}\]
where \(sg[\cdot]\) denotes the stop-gradient operation. Moreover, we employ an exponential moving average (EMA) [43] to enhance the learning efficiency by applying it to codebook updates.
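A minimal sketch of the codebook lookup in Eq. (14) and the VQ objective in Eq. (15) is given below. The codebook size and code dimension follow the implementation details reported later (128 and 192), while \(\beta\), the mean reduction, and the straight-through gradient trick are standard VQ-VAE conventions assumed here; the EMA codebook update used in the paper is omitted.

```python
import torch

K, d_z, beta = 128, 192, 0.25
codebook = torch.randn(K, d_z)

def quantize(h_pros: torch.Tensor):
    """Map each word-level prosody vector (L, d_z) to its nearest codebook entry."""
    dists = torch.cdist(h_pros, codebook)               # (L, K) pairwise L2 distances
    idx = dists.argmin(dim=1)                            # Eq. (14)
    z = codebook[idx]
    # Eq. (15), mean-reduced: codebook term + commitment term (sg[.] = detach)
    loss_vq = ((h_pros.detach() - z) ** 2).mean() + beta * ((h_pros - z.detach()) ** 2).mean()
    # Straight-through estimator so gradients still reach the prosody encoder.
    z_st = h_pros + (z - h_pros).detach()
    return z_st, idx, loss_vq

z_pros, idx, loss_vq = quantize(torch.randn(7, d_z))    # 7 words in the utterance
```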
### _Prosody conditional adversarial training_
For our prosody adversarial training, we develop prosody conditional discriminators (PCDs) that handle inputs of varying lengths. This design is inspired by multi-length window discriminators [44]. The PCD structure is presented in Figure 1e. The PCD is designed to accept a Mel-spectrogram \(y\) and a quantized prosody vector \(\mathbf{z}_{pros}\) as inputs, and its role is to determine whether these input features are original or generated. The PCD comprises two lightweight convolutional neural networks (CNNs) and fully connected layers. One of the CNNs is designed to receive only the Mel-spectrogram, whereas the other is designed to receive a combination of \(\mathbf{z}_{pros}\) and \(y\). To match the corresponding PCD, the length of each Mel-spectrogram and the extended \(\mathbf{z}_{pros}\) are randomly clipped. For our objective function, we adopt the least square GAN loss [45]:
\[\mathcal{L}_{D}=\sum_{i}[\mathbb{E}[(PCD^{i}(\mathbf{y}^{\prime},\mathbf{z}_{pros}))^{2}]\\ +\mathbb{E}[(PCD^{i}(\mathbf{y},\mathbf{z}_{pros})-1)^{2}]], \tag{16}\]
\[\mathcal{L}_{G}=\sum_{i}\mathbb{E}[(PCD^{i}(\mathbf{y}^{\prime},\mathbf{z}_{ pros})-1)^{2}], \tag{17}\]
where \(\mathcal{L}_{D}\) denotes the training goal of the discriminators and \(\mathcal{L}_{G}\) represents the feedback on the TTS module. The final object \(\mathcal{L}_{TTS}\) of the TTS module is as follows:
\[\mathcal{L}_{TTS}=\mathcal{L}_{rec}+\mathcal{L}_{dur}+\mathcal{L}_{vq}+\lambda _{1}\mathcal{L}_{G}, \tag{18}\]
where \(\lambda_{1}\) corresponds to the weight of the adversarial loss.
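To make Eqs. (16)-(18) concrete, the sketch below evaluates the least-squares discriminator and generator terms for a single prosody conditional discriminator. The small Conv1d network is a placeholder for the PCD of Fig. 1e, a single window length replaces the multi-length set, and all shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PCD(nn.Module):
    """Toy stand-in for one prosody conditional discriminator."""
    def __init__(self, n_mels: int = 80, d_z: int = 192):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_mels + d_z, 64, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv1d(64, 1, kernel_size=3, padding=1))
    def forward(self, mel: torch.Tensor, z_pros: torch.Tensor) -> torch.Tensor:
        # mel: (B, n_mels, T); z_pros expanded to the frame level: (B, d_z, T)
        return self.net(torch.cat([mel, z_pros], dim=1)).mean(dim=(1, 2))

pcd = PCD()
mel_real, mel_fake = torch.randn(2, 80, 128), torch.randn(2, 80, 128)
z_pros = torch.randn(2, 192, 128)

loss_d = (pcd(mel_fake.detach(), z_pros) ** 2).mean() + \
         ((pcd(mel_real, z_pros) - 1) ** 2).mean()       # Eq. (16)
loss_g = ((pcd(mel_fake, z_pros) - 1) ** 2).mean()       # Eq. (17), added to L_TTS in Eq. (18)
```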
### _Diffusion-based latent prosody generator_
We propose a new module called the DLPG, which leverages the powerful generative capabilities of diffusion models. In addition, we introduce a DDGAN framework [36], which enables faster sampling by reducing the number of required timesteps. Figure 2 presents the training process for the DLPG. During training, the DLPG aims to generate target \(\mathbf{h}_{pros}\), which is extracted from the prosody encoder trained in the first stage. The DLPG is trained to produce \(\mathbf{h}_{pros}^{\prime}\) based on the \(\mathbf{h}_{spk}\) and \(\mathbf{h}_{txt}\). In the diffusion model, we set \(\mathbf{x}_{0}\) as the target \(\mathbf{h}_{pros}\). The DLPG generator \(G_{\theta}\) directly generates \(\mathbf{x}_{0}^{\prime}\).
\[\mathbf{x}_{0}^{\prime}=G_{\theta}(\mathbf{x}_{t},t,\mathbf{h}_{spk},\mathbf{ h}_{txt}), \tag{19}\]
where \(t\) is timestep of diffusion process. To ensure adversarial training, \(\mathbf{x}_{t-1}^{\prime}\) is derived from \(\mathbf{x}_{t}\) and \(\mathbf{x}_{0}^{\prime}\) using posterior sampling \(q(\mathbf{x}_{t-1}^{\prime}|\mathbf{x}_{t},\mathbf{x}_{0}^{\prime})\). Subsequently, a time-dependent discriminator \(D_{\phi}\) determines the compatibility of \(\mathbf{x}_{t-1}\) (obtained from the forward processing of \(\mathbf{x}_{0}\)) and \(\mathbf{x}_{t-1}^{\prime}\) (generated through posterior sampling of \(\mathbf{x}_{0}^{\prime}\)) with respect to \(t\) and \(\mathbf{x}_{t}\), conditioned on \(\mathbf{h}_{spk}\) and \(\mathbf{h}_{txt}\). The objective function of \(G_{\theta}\) is then defined as follows:
\[\mathcal{L}_{G_{\theta}}^{adv}=\sum_{t\geq 1}\mathbb{E}[(D_{\phi}(\mathbf{x}_{t-1} ^{\prime},\mathbf{x}_{t},t,\mathbf{h}_{txt},\mathbf{h}_{spk})-1)^{2}], \tag{20}\]
\[\mathcal{L}_{G_{\theta}}^{rec}=L_{MAE}(\mathbf{x}_{0},\mathbf{x}^{\prime}_{0}), \tag{21}\]
where \(\mathcal{L}_{G_{\theta}}^{adv}\) corresponds to the adversarial loss, and \(\mathcal{L}_{G_{\theta}}^{rec}\) denotes the reconstruction loss of \(G_{\theta}\). The total generator loss \(\mathcal{L}_{G_{\theta}}\) is expressed as follows:
\[\mathcal{L}_{G_{\theta}}=\mathcal{L}_{G_{\theta}}^{rec}+\lambda_{2}\mathcal{L }_{G_{\theta}}^{adv}, \tag{22}\]
where \(\lambda_{2}\) is the weight of the adversarial loss \(\mathcal{L}_{G_{\theta}}^{adv}\). The objective function of the \(D_{\phi}\) is as follows:
\[\mathcal{L}_{D_{\phi}}=\sum_{t\geq 1}[\mathbb{E}[D_{\phi}( \mathbf{x}_{t-1}^{\prime},\mathbf{x}_{t},t,\mathbf{h}_{txt},\mathbf{h}_{spk}) ^{2}]\\ +\mathbb{E}[(D_{\phi}(\mathbf{x}_{t-1},\mathbf{x}_{t},t,\mathbf{ h}_{txt},\mathbf{h}_{spk})-1)^{2}]]. \tag{23}\]
The DLPG leverages the DDGAN framework to achieve stable and high-quality results in only a few timesteps. This process involves the \(G_{\theta}\), which iteratively generates \(\mathbf{x}_{0}^{\prime}\)\(T\) times during the inference. We set \(\mathbf{x}_{T}\) to follow a normal distribution. The \(\mathbf{h}_{pros}^{\prime}\) is obtained as the final \(\mathbf{x}_{0}^{\prime}\) of the reverse process. The final prosody vector \(\mathbf{z}_{pros}\) is then derived through the vector quantization of \(\mathbf{h}_{pros}^{\prime}\).
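One DLPG training step (Eqs. (19)-(23)) can be sketched as follows: the generator predicts \(\mathbf{x}^{\prime}_{0}\) from the noisy latent and the conditions, the fake \(\mathbf{x}^{\prime}_{t-1}\) is drawn from the DDPM posterior around that prediction, and a time-dependent discriminator scores real and fake pairs. The linear networks, the pooled condition vector, the beta schedule, and the time embedding are placeholder assumptions; only the loss structure mirrors the equations above.

```python
import torch
import torch.nn as nn

T, d_z, d_cond = 4, 192, 384
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

G = nn.Linear(d_z + d_cond + 1, d_z)      # stand-in for G_theta(x_t, t, h_spk, h_txt)
D = nn.Linear(2 * d_z + d_cond + 1, 1)    # stand-in for D_phi(x_{t-1}, x_t, t, cond)
mae = nn.L1Loss()
lambda2 = 0.05                             # adversarial weight value reported in the implementation details

def q_xt(x0, t):                           # forward process q(x_t | x_0)
    return alpha_bars[t].sqrt() * x0 + (1 - alpha_bars[t]).sqrt() * torch.randn_like(x0)

def posterior(x0, xt, t):                  # q(x_{t-1} | x_t, x_0), t >= 1
    a_prev = alpha_bars[t - 1]
    mean = a_prev.sqrt() * betas[t] / (1 - alpha_bars[t]) * x0 \
         + alphas[t].sqrt() * (1 - a_prev) / (1 - alpha_bars[t]) * xt
    var = (1 - a_prev) / (1 - alpha_bars[t]) * betas[t]
    return mean + var.sqrt() * torch.randn_like(xt)

x0 = torch.randn(8, d_z)                   # target h_pros from the frozen prosody encoder
cond = torch.randn(8, d_cond)              # pooled h_txt / h_spk conditions (assumption)
t = 2                                      # a sampled timestep with 1 <= t < T
t_emb = torch.full((8, 1), t / (T - 1))

xt = q_xt(x0, t)
x0_pred = G(torch.cat([xt, cond, t_emb], dim=1))     # Eq. (19)
x_prev_fake = posterior(x0_pred, xt, t)
x_prev_real = q_xt(x0, t - 1)                        # x_{t-1} from the forward process

def score(x_prev):
    return D(torch.cat([x_prev, xt, cond, t_emb], dim=1))

loss_D = (score(x_prev_fake.detach()) ** 2).mean() + ((score(x_prev_real) - 1) ** 2).mean()  # Eq. (23)
loss_G = mae(x0_pred, x0) + lambda2 * ((score(x_prev_fake) - 1) ** 2).mean()                 # Eqs. (20)-(22)
```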
### _Inference_
The step-by-step process of generating a Mel-spectrogram using the trained TTS module and the DLPG is as follows; a data-flow sketch of the same pipeline is given after the list.
1. The \(\mathbf{h}_{txt}\) is extracted from the text encoder, and \(\mathbf{h}_{spk}\) is extracted from the pre-trained speaker encoder.
2. The DLPG generates a \(\mathbf{h}_{pros}^{\prime}\) with \(\mathbf{h}_{txt}\) and \(\mathbf{h}_{spk}\) as inputs.
3. \(\mathbf{h}_{pros}^{\prime}\) is mapped to a codebook, denoted as \(Z\), in the VQ layer to obtain the prosody vector, which is denoted as \(\mathbf{z}_{pros}\).
4. The decoder \(D_{mel}\) generates a Mel-spectrogram \(\mathbf{y}^{\prime}\) using \(\mathbf{h}_{txt}\), \(\mathbf{h}_{spk}\), and \(\mathbf{z}_{pros}\). This process involves expanding \(\mathbf{h}_{txt}\), \(\mathbf{h}_{spk}\), and \(\mathbf{z}_{pros}\) to the frame-level; the phoneme duration is predicted by the duration predictor.
5. The Mel-spectrogram \(\mathbf{y}^{\prime}\) is converted to a raw waveform using a pre-trained vocoder.
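The sketch below strings the five steps together at the data-flow level. Every callable is a placeholder for the corresponding trained module (text and speaker encoders, DLPG, VQ layer, duration predictor, decoder, and vocoder), and the toy lambdas in the invocation only illustrate the expected shapes; none of them reflect the actual trained components.

```python
import torch

def synthesize(text_encoder, speaker_encoder, dlpg, vq, duration_predictor,
               length_regulate, mel_decoder, vocoder, phonemes, speaker_ref):
    h_txt = text_encoder(phonemes)                       # step 1
    h_spk = speaker_encoder(speaker_ref)
    h_pros = dlpg(h_txt, h_spk)                          # step 2: reverse diffusion sampling
    z_pros = vq(h_pros)                                  # step 3: nearest-codebook lookup
    h_total = h_txt + h_spk + z_pros                     # step 4: combine and decode
    dur = duration_predictor(h_total)
    mel = mel_decoder(length_regulate(h_total, dur))
    return vocoder(mel)                                  # step 5: e.g. a HiFi-GAN vocoder

# Toy invocation with random stand-ins, purely to show the call signature.
L, d = 6, 192
wav = synthesize(
    text_encoder=lambda ph: torch.randn(L, d),
    speaker_encoder=lambda ref: torch.randn(1, d).expand(L, d),
    dlpg=lambda ht, hs: torch.randn(L, d),
    vq=lambda h: h,
    duration_predictor=lambda h: torch.full((L,), 4, dtype=torch.long),
    length_regulate=lambda h, dur: torch.repeat_interleave(h, dur, dim=0),
    mel_decoder=lambda h: torch.randn(h.size(0), 80),
    vocoder=lambda mel: mel,
    phonemes=None, speaker_ref=None)
```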
## IV Experimental result and discussion
### _Experimental setup_
We conducted experiments using the VCTK dataset, a multi-speaker English dataset consisting of approximately 44,200 audio clips recorded by 109 speakers, each reading roughly 400 sentences. A total of 2,180 audio clips (20 randomly selected sentences per speaker) constituted the test set. Furthermore, 545 audio clips (five randomly selected sentences per speaker) were used as the validation set, and the remainder were used as the training set. We sampled audio at 22,050 Hz and then transformed it into an 80-bin Mel-spectrogram using an STFT with a window length of 1,024 and a hop size of 256. The text was converted into phoneme sequences using a grapheme-to-phoneme tool5. We extracted the phoneme durations using the Montreal Forced Aligner [46] tool. Furthermore, we used the AdamW optimizer [47] with \(\beta_{1}=0.9\) and \(\beta_{2}=0.98\). The learning rates for the TTS and latent prosody generator training were set to \(5\times 10^{-4}\) and \(2\times 10^{-4}\), respectively. Throughout the training process, the batch size was 48. The TTS module and prosody encoder were updated for 160k steps, and the latent prosody generator was updated for 320k steps. In the experiments, all audio was synthesized using the official implementation of the HiFi-GAN6 [48] and a pre-trained model. We trained the TTS module and prosody encoder for approximately 16 h and the DLPG for 7 h using a single NVIDIA RTX A6000 GPU.
Footnote 5: [https://github.com/Kyubyong/g2p](https://github.com/Kyubyong/g2p)
Footnote 6: [https://github.com/jik876/hit-gan](https://github.com/jik876/hit-gan)
### _Implementation details_
The number of layers, hidden size, filter size, and kernel size of the feed-forward transformer blocks of the phoneme encoder, word encoder, and decoder were set to 4, 192, 384, and 5, respectively. The extracted speaker embedding was projected onto 192 dimensions, and the dimension of the prosody vector was 192. The structure of the prosody encoder follows that of ProsoSpeech [27]. For VQ, we set the codebook size and code dimension to 128 and 192, respectively, and updated the codebook using EMA with a decay rate of 0.998. The codebook was initialized with the centers of k-means clustering after 20k steps of TTS training. The number of Mel-spectrogram bins used in the prosody encoder, \(N\), was set to 20. We set multiple PCDs to receive different input sizes, such as [49, 50, 51]. Each PCD comprises two 2D convolution stacks and three fully connected layers. The convolution stacks in the PCD consist of three 2D convolutions with LeakyReLU and BatchNorm, and a linear layer, with multi-length window sizes of [64, 128, 32]. The latent prosody generator consists of 20 residual blocks with a hidden size of 384. The prosody discriminator is configured with four convolution layers and a hidden dimension of 384. We investigated \(\lambda_{1}\) and \(\lambda_{2}\) between 0.001 and 1.0 and set \(\lambda_{1}\) and \(\lambda_{2}\) to 0.01 and 0.05, respectively. The number of timesteps used in training and inference was set to four.
### _Comparative studies_
We developed a prosody model from previous works, our proposed model, and the model of the ablation study for performing comparative experiments. All the models produced an 80-bin Mel-spectrogram and synthesized speech using the same vocoder.
#### Iv-C1 Gt
GT audio is human-recorded.
#### Iv-C2 GT (vocoded)
The generated audio was obtained by converting the GT Mel-spectrogram using HiFi-GAN V1 [48].
#### Iv-C3 FastSpeech 2
FastSpeech 2 [3] is a speech-synthesis model that directly predicts prosody features (pitch and energy). We trained FastSpeech 2 using an open-source implementation7. For a fair comparison, we used a text encoder operating at the phoneme and word levels.
Footnote 7: [https://github.com/NATSpeech/NATSpeech](https://github.com/NATSpeech/NATSpeech)
#### Iv-C4 ProsoSpeech
ProsoSpeech [27] is a speech synthesis model that models prosody features as latent prosody vectors and predicts them using an AR predictor. We implemented the model by following the hyperparameters in the study and used the model that provided the best performance.
#### Iv-C5 DiffProsody
The proposed model was implemented using a text encoder and decoder with the same structure as FastSpeech 2 and ProsoSpeech. We used a prosody encoder with the same structure as that of ProsoSpeech. In the first stage, a prosody conditional discriminator was added. In the second stage, a DLPG was used instead of an AR predictor.
#### Iv-C6 DiffProsody (AR)
DiffProsody (AR) is a variant of DiffProsody trained with an AR prosody predictor to estimate prosody vectors. The predictor has the same structure as the ProsoSpeech prosody predictor.
#### Iv-C7 DiffProsody (DDPM)
DiffProsody (DDPM) is a variant of DiffProsody trained with the DDPM framework to estimate prosody vectors. Its denoising network has the same structure as DiffProsody's diffusion-based latent prosody generator.
#### Iv-C8 DiffProsody (w/o PCD)
DiffProsody (w/o PCD) is a variant of DiffProsody trained without the prosody conditional discriminator.
#### Iv-C9 DiffProsody (w/o VQ)
DiffProsody (w/o VQ) is a variant of DiffProsody trained without the VQ layer in the prosody encoder.
#### Iv-C10 DiffProsody N
DiffProsody \(N\) is a variant of DiffProsody whose prosody encoder is trained with the lowest \(N\) Mel-bins as input.
### _Subjective metrics_
A subjective assessment was conducted to confirm the effectiveness of the proposed method. To measure naturalness, we used the mean opinion score (MOS). For this evaluation, we employed Amazon Mechanical Turk (MTurk), a crowdsourcing service, to gather ratings from 20 native American English speakers. MOS was assessed on a 5-point scale, and confidence intervals were calculated at the 95% level. For the evaluation, 100 samples were randomly selected from the test set.
### _Objective metrics_
For the objective evaluation, we calculated the equal error rate (EER) using a pre-trained speaker verification model8 [52]. We used pre-trained wav2vec 2.0 [53] to compute the character error rate (CER) and word error rate (WER).
Footnote 8: [https://github.com/clovaai/voxceleb_trainer](https://github.com/clovaai/voxceleb_trainer)
For the prosodic evaluation, we computed the average differences in utterance duration (DDUR) [54, 55], pitch error (RMSE\({}_{f_{0}}\)) (in cents), periodicity error (RMSE\({}_{period}\)) and F1 score of the voiced/unvoiced classification (F1\({}_{v/uv}\)). We used torchcrepe9 to extract the pitch and periodicity features for evaluation. In addition, we measured the Kullback-Leibler (KL) divergence of log f0 and log energy to compare the distributions of the prosody features in the generated audio. Finally, we calculated the real-time factor (RTF) to compare the generation speeds.
Footnote 9: [https://github.com/maxmorrison/torchcrepe](https://github.com/maxmorrison/torchcrepe)
#### Iii-E1 Average differences in the utterance duration (DDUR)
We obtained \(DDUR\) by calculating the mean absolute error of the difference in the duration of each utterance.
\[DDUR=\frac{1}{N}\sum_{i=1}^{N}|dur_{i}-dur^{\prime}_{i}|, \tag{24}\]
where \(dur_{i}\) denotes the duration of the \(i\)-th GT utterance and \(dur^{\prime}_{i}\) denotes the duration of the \(i\)-th generated utterance.
#### Iii-E2 Pitch error
To measure the pitch error RMSE\({}_{f_{0}}\), we aligned the pitch contours extracted in hertz using dynamic time warping (DTW) [56] and calculated the root mean square error in cents, where a deviation is defined as \(1200\log_{2}(y/y^{\prime})\) for the pitch \(y\) of the GT speech and the pitch \(y^{\prime}\) of the generated speech. The error was measured only over the frames in which both the GT and the generated speech were voiced.
\[RMSE_{f_{0}}=\sqrt{\frac{1}{T}\sum_{i=1}^{T}(1200\log_{2}(y_{i}/y^{\prime}_{i} ))^{2}}. \tag{25}\]
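Assuming the two pitch tracks have already been DTW-aligned and restricted to frames voiced in both signals, Eq. (25) reduces to a few lines of NumPy; the example values below are arbitrary.

```python
import numpy as np

def rmse_f0_cents(f0_ref: np.ndarray, f0_gen: np.ndarray) -> float:
    """RMSE in cents over frames voiced in both aligned pitch tracks (0 = unvoiced)."""
    voiced = (f0_ref > 0) & (f0_gen > 0)
    cents = 1200.0 * np.log2(f0_ref[voiced] / f0_gen[voiced])
    return float(np.sqrt(np.mean(cents ** 2)))

print(rmse_f0_cents(np.array([220.0, 0.0, 233.1, 246.9]),
                    np.array([218.0, 110.0, 240.0, 0.0])))
```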
#### Iii-E3 Periodicity error
We measured the periodicity error RMSE\({}_{period}\) as the root mean square error between the periodicities \(\psi\) aligned by DTW.
\[RMSE_{period}=\sqrt{\frac{1}{T}\sum_{i=1}^{T}(\psi_{i}-\psi^{\prime}_{i})^{2}}, \tag{26}\]
where \(\psi_{i}\) means the \(i\)-th periodicity value.
#### Iii-E4 F1 score of voiced/unvoiced
We obtained voiced/unvoiced flags from the DTW-aligned pitches and calculated the F1 score (F1\({}_{v/uv}\)) between them. We defined a match between the GT voiced flag \(v\) and the generated voiced flag \(v^{\prime}\) as a true positive (TP), a match between the GT unvoiced flag \(uv\) and the generated voiced flag \(v^{\prime}\) as a false positive (FP), and a match between the GT voiced flag \(v\) and the generated unvoiced flag \(uv^{\prime}\) as a false negative (FN).
\[TP=\sum_{i}^{n}[v_{i}=v^{\prime}_{i}], \tag{27}\]
\[FP=\sum_{i}^{n}[uv_{i}=v^{\prime}_{i}], \tag{28}\]
\[FN=\sum_{i}^{n}[v_{i}=uv^{\prime}_{i}], \tag{29}\]
where \(n\) is the length of the sequence, and \([a_{i}=b_{i}]\) is a function that returns 1 if the \(i\)-th element has the same value, and 0 otherwise. The precision and recall are defined as follows:
\[precision=\frac{TP}{TP+FP},recall=\frac{TP}{TP+FN}. \tag{30}\]
We then calculated the F1 score as follows:
\[F1_{v/uv}=\frac{2}{recall^{-1}+precision^{-1}}. \tag{31}\]
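Eqs. (27)-(31) amount to counting frame-level agreements on aligned voicing flags; a small NumPy sketch with arbitrary example flags follows.

```python
import numpy as np

def f1_vuv(v_ref: np.ndarray, v_gen: np.ndarray) -> float:
    """F1 score of voiced/unvoiced decisions on aligned frame-level flags (True = voiced)."""
    tp = np.sum(v_ref & v_gen)          # both voiced
    fp = np.sum(~v_ref & v_gen)         # generated voiced, reference unvoiced
    fn = np.sum(v_ref & ~v_gen)         # reference voiced, generated unvoiced
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return float(2 * precision * recall / (precision + recall))

flags_ref = np.array([True, True, False, True, False])
flags_gen = np.array([True, False, False, True, True])
print(f1_vuv(flags_ref, flags_gen))     # tp=2, fp=1, fn=1 -> F1 = 2/3
```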
#### Iii-E5 KL divergence of log f0 / log energy
We also measured the KL divergence to analyze the pitch and energy distributions. We first extracted the pitch and energy on a log scale. We then binned the entire range into 100 bins and applied kernel density estimation [57] to obtain smoothed distributions, over which the KL divergence was computed. The KL divergence for a feature \(x\) is defined as follows:
\[KLD_{x}=\frac{1}{N}\sum_{i=1}^{N}KLD\big{(}KDE(x_{i}),KDE(x^{\prime}_{i})\big{)}, \tag{32}\]
where \(N\) is the number of bins, \(x_{i}\) is the probability in the \(i\)-th bin of the distribution of \(x\), \(KDE\) is the kernel density estimator function, and \(KLD\) is the KL divergence calculation.
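One plausible reading of Eq. (32), smoothing each empirical log-f0 (or log-energy) distribution with a Gaussian KDE, evaluating both densities on a common 100-point grid, renormalizing, and accumulating \(p\log(p/q)\), is sketched below; the grid construction and the random example data are assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kde_kl(feat_ref: np.ndarray, feat_gen: np.ndarray, n_bins: int = 100) -> float:
    """KL divergence between KDE-smoothed feature distributions evaluated on a shared grid."""
    grid = np.linspace(min(feat_ref.min(), feat_gen.min()),
                       max(feat_ref.max(), feat_gen.max()), n_bins)
    p = gaussian_kde(feat_ref)(grid)
    q = gaussian_kde(feat_gen)(grid)
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

log_f0_ref = np.log(np.random.uniform(80, 300, size=500))
log_f0_gen = np.log(np.random.uniform(90, 280, size=500))
print(kde_kl(log_f0_ref, log_f0_gen))
```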
### _Evaluation results_
Table I lists the MOS results and objective evaluations. The results demonstrate that the proposed model DiffProsody surpasses the other models in terms of both subjective and objective metrics. In particular, our model achieved a superior MOS for the subjective metrics, with a p-value of less than 0.05. It also exhibited outstanding performance in terms of the DDUR, EER, RMSE\({}_{f_{0}}\), RMSE\({}_{period}\), and F1\({}_{v/uv}\), which suggests that the speech generated by DiffProsody closely resembles the target prosody. Furthermore, our model displayed a lower WER and CER, thus indicating its capability for synthesizing speech with a more accurate pronunciation.
Figure 3 presents the Mel-spectrogram and pitch contour of speech from each model. The red box in the figure indicates that the DiffProsody model was more similar to the GT. To further evaluate the prosody of the generated speech, we examined pitch and energy distributions.
Figure 4 presents the histogram distribution for log f0 (pitch), and Figure 5 presents the histogram distribution for log energy. In both the figures, the blue bars represent the distribution of the GT features, and the orange bars represent the distribution of the generated features. The results indicate that the proposed model aligns more closely with the GT distribution than the other models. Table II lists the KL
divergence values for comparison. ProsoSpeech exhibited a better performance in f0 than FastSpeech 2, but not in the case of energy. However, DiffProsody outperformed the comparison model in terms of f0 and energy.
### _Latent prosody generation method_
In Table III, DiffProsody (AR) is a model that uses an AR prosody predictor. DiffProsody (DDPM) is a model that uses the DDPM (100 timesteps). DiffProsody is a model that uses the DDGAN (four timesteps). The AR predictor has the same structure as the prosody predictor in ProsoSpeech but does not use context encoders, and the denoising network in the DDPM has the same structure as the generator \(G_{\theta}\) in the DDGAN. We also conducted a 7-point comparative mean opinion score (CMOS) evaluation to compare the latent prosody generation methods and measured the RTF to compare generation speeds. The DDGAN achieved \(-0.172\) and \(+0.015\) CMOS compared with the AR and DDPM, respectively. The objective metrics showed that the DDPM and DDGAN outperformed the AR predictor. The DDGAN achieved results nearly identical to those of the DDPM on all metrics. The experimental results showed that the diffusion models performed better than the AR models, as reported in [58]. Furthermore, the DDGAN can generate prosody vectors of DDPM-level quality in only four timesteps. According to the RTF results, the DDGAN generates prosody 2.7\(\times\) and 16\(\times\) faster than the AR predictor and the DDPM, respectively.
Figure 6 presents the traces of the WER, CER, and EER results for the speech generated by the DLPG with the prosody vector generated by each denoising iteration. The red and blue lines represent the results obtained using the DDPM and DDGAN, respectively. We observed that the DDGAN has a larger error rate than the DDPM in the early stages and that the midpoint and final stages of each model have almost the same value. This implies that the DDGAN compresses the timesteps of the DDPM by modeling the denoising process as a multimodal non-Gaussian distribution.
Fig. 3: Comparison of the visualized spectrogram and pitch contour. The red box indicates that the proposed model is more similar to the GT.
Fig. 4: Histogram visualization of log f0, where the blue bars represent the GT distribution and orange bars represent the generated distribution. The distribution of the proposed model overlaps to a greater extent with the GT distribution than the other comparison models.
Fig. 5: Histogram visualization of log energy, where the blue bars represent the GT distribution and orange bars represent the generated distribution. The proposed model's distribution has a greater overlap with the GT distribution compared to the other models being compared.
### _Prosody conditional adversarial training_
We conducted a CMOS evaluation to assess the effectiveness of the PCD. The CMOS values measure the degree of preference for PCD-trained models in comparison with a reference model. Our evaluation involved three models: DiffProsody, which uses the PCD and the diffusion-based generator; DiffProsody without the PCD (referred to as DiffProsody (w/o PCD)), which uses the diffusion-based generator; and ProsoSpeech, which uses an AR prosody predictor without the PCD. The results presented in Table IV clearly indicate that models trained with the PCD are preferred over those trained without it. Furthermore, by comparing the CMOS results of DiffProsody (w/o PCD) and ProsoSpeech, we can infer that the diffusion-based method is preferable to the method employed by ProsoSpeech. To provide a more comprehensive analysis, we also report the objective metrics for DiffProsody (w/o PCD) in Table III. These results demonstrate that DiffProsody outperformed DiffProsody (w/o PCD) in all aspects. However, DiffProsody (w/o PCD) still received a better objective evaluation than ProsoSpeech. In addition, when comparing DiffProsody (AR) with ProsoSpeech, DiffProsody (AR) consistently achieved higher scores on the majority of the objective evaluations. These findings validate that both the PCD and the DLPG significantly improve model performance.
### _Prosody encoder evaluation_
In this section, the effectiveness of the prosody encoder is evaluated. We focus on two main aspects: the impact of the number of Mel-bins used in the reference Mel-spectrogram and the role of the vector quantization layer. Detailed results of these evaluations are presented in the following subsections.
#### Iv-I1 Impact of the number of Mel-bins
We conducted an experiment to compare the performance of DiffProsody when trained using various numbers of Mel-bins. The results, including the EER, CER, and WER scores, are presented in Table V. We examined the performance of 10, 20, 30, 40, 60, and 80 (full-band) Mel-bins. The findings indicated that as the number of bins exceeded 20 (baseline), the EER tended to increase, while no significant difference was observed in terms of the CER and WER. It should be noted that the model with 10 bins outperformed the baseline (20 bins) in terms of the WER but yielded higher EER results. Figure 7 illustrates the Mel-spectrogram at the initial iteration (noise) during the diffusion timestep of the DLPG, which was trained using various numbers of Mel-bins. As the number of Mel-bins used for the training was increased, the results of the Mel-spectrogram progressively smoothened and eventually collapsed. This phenomenon occurs because the large amount of information in the reference forces the model to reconstruct the Mel-spectrogram by leveraging the prosody vectors. Through this experiment, we found that, as \(N\) increases, linguistic information becomes increasingly entangled. Consequently, it is reasonable to employ 20 Mel-bins as the input to the prosody encoder for realizing effective prosody modeling.
#### Iv-I2 Vector quantization layer analysis
Figure 8 presents a Mel-spectrogram trace synthesized using the prosody vector generated for each diffusion step of the DLPG. We compared two versions of DiffProsody: one trained without the vector quantization layer (Figure 8a; defined as DiffProsody (w/o VQ)) and the other trained with the vector quantization layer (Figure 8b; defined as DiffProsody). In the case of DiffProsody (w/o VQ), the early steps exhibited a completely distorted Mel-spectrogram, but there was a significant recovery in the middle steps. Conversely, the initial step of DiffProsody exhibits a smoothed but slightly distorted Mel-spectrogram that gradually returns to its original state over the subsequent steps. This phenomenon is also related to prosody disentangling, and we confirmed that the prosody disentangling failed in DiffProsody (w/o VQ). This experiment demonstrated that vector quantization plays a crucial role in effective prosody disentangling. The last row of Table III presents the objective evaluation results for DiffProsody (w/o VQ). DiffProsody (w/o VQ) performed worse for all the objective measurements. These results provide an objective assessment of how the failure to properly disentangle prosody affects the overall performance of the system.
Fig. 6: Comparison of objective evaluation results based on diffusion timesteps when using the DDPM and DDGAN framework in DLPG. The blue line is the result for the DDGAN and the red line is the result for the DDPM.
## V Conclusion
In this study, a novel method called DiffProsody is proposed, the aim of which is to synthesize high-quality expressive speech. Through prosody conditional adversarial training, we observed significant improvements in speech quality, with a more pronounced display of expressive prosody. In addition, our DLPG successfully generated expressive prosody. Our proposed method outperformed comparative models in producing accurate and expressive prosody, as evidenced by the prosody evaluation metrics. Moreover, our method demonstrated superior pronunciation accuracy, as indicated by the CER and WER evaluations. The KL divergence and histogram analyses further support the claim that DiffProsody yields a more accurate prosody distribution than the other models. Furthermore, by introducing the DDGAN, we successfully reduced the sampling time while maintaining the expected performance.
## VI Future works
Despite the importance of vector quantization for disentangling, it has been observed that this approach can negatively affect model performance. This problem is expected to be addressed with the introduction of methods such as residual vector quantizers [59, 60, 61]. Moreover, we acknowledge the limitations of attempting to model prosody using the TTS dataset. It has been suggested that using a language model pre-trained on a large dataset, such as HuBERT [62], could result in significant improvements. In the future, we plan to extend the latent diffusion method to include controllable emotional prosody modeling [63].
|
2302.14580 | Effect Size Estimation in Linear Mixed Models | In this note, we reconsider Cohen's effect size measure $f^2$ under linear
mixed models and demonstrate its application by employing an artificially
generated data set. It is shown how $f^2$ can be computed with the statistical
software environment R using lme4 without the need for specification and
computation of a coefficient of determination. | Jürgen Groß, Annette Möller | 2023-02-28T14:03:15Z | http://arxiv.org/abs/2302.14580v2 | # Effect size estimation in linear mixed models
###### Abstract.
In this note, we reconsider Cohen's effect size measure \(f^{2}\) under linear mixed models and demonstrate its application by employing an artificially generated data set. It is shown how \(f^{2}\) can be computed with the statistical software environment R using lme4.
Key words and phrases: Hypothesis testing, effect size, Cohen's f2, linear regression, linear mixed model, multivariate normal distribution.
2010 Mathematics Subject Classification: 62J05, 62J20, 62F03.
Support of the second author by the Helmholtz Association's pilot project "Uncertainty Quantification" is gratefully acknowledged.
well known package lme4, see Bates et al. (2015), for fitting linear mixed models. Our explanations are illustrated on the basis of an artificially generated data set.
## 2. Linear Mixed Model
Consider a linear mixed model (LMM) described by
\[\boldsymbol{y}=\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{Z}\boldsymbol{u}+ \boldsymbol{e}\;, \tag{1}\]
where \(\boldsymbol{y}\) is an \(n\times 1\) observable random vector. It is assumed that the \(n\times p\) model matrix \(\boldsymbol{X}\) of full column rank \(p\) can be partitioned as
\[\boldsymbol{X}=(\boldsymbol{1}_{n}:\boldsymbol{X}_{1}:\boldsymbol{X}_{2})\;, \tag{2}\]
where \(\boldsymbol{1}_{n}\) denotes the \(n\times 1\) vector of ones, while the \(n\times p_{1}\) and \(n\times p_{2}\) matrices \(\boldsymbol{X}_{1}\) and \(\boldsymbol{X}_{2}\) contain the values of \(p_{1}+p_{2}=p-1\) regressors. The \(p\times 1\) vector \(\boldsymbol{\beta}\) comprises \(p\) unknown parameters \(\beta_{0},\beta_{1},\ldots,\beta_{p-1}\) addressed as fixed effects. The \(n\times q\) matrix \(\boldsymbol{Z}\) contains the values of independent variables associated with an \(q\times 1\) vector \(\boldsymbol{u}\) of unobservable random effects. It is assumed that \(\boldsymbol{u}\) has expectation \(\boldsymbol{0}_{q}\) (the \(q\times 1\) vector of zeroes) and variance covariance matrix \(\operatorname{Cov}(\boldsymbol{u})=\sigma^{2}\boldsymbol{D}\) with unknown parameter \(\sigma^{2}>0\) and \(q\times q\) matrix \(\boldsymbol{D}\), which may depend on further unknown parameters. For the \(n\times 1\) vector \(\boldsymbol{e}\) of unobservable random errors it is assumed that \(\operatorname{E}(\boldsymbol{e})=\boldsymbol{0}_{n}\) and \(\operatorname{Cov}(\boldsymbol{e})=\sigma^{2}\boldsymbol{T}\), where the \(n\times n\) positive definite matrix \(\boldsymbol{T}\) may also depend on unknown parameters. Moreover, the assumption \(\operatorname{Cov}(\boldsymbol{u},\boldsymbol{e})=\boldsymbol{0}_{q,n}\) (the \(q\times n\) matrix of zeroes) implies
\[\operatorname{Cov}(\boldsymbol{y})=\sigma^{2}\boldsymbol{V},\quad\boldsymbol{ V}=\boldsymbol{Z}\boldsymbol{D}\boldsymbol{Z}^{T}+\boldsymbol{T}\;. \tag{3}\]
Then the above model may also be represented by the triplet \(\{\boldsymbol{y},\boldsymbol{X}\boldsymbol{\beta},\sigma^{2}\boldsymbol{V}\}\) and can be considered as a special case of the general Gauss-Markov model, see e.g. Gross (2004). As a matter of fact, formulas useful under model (1) carry over from a classical regression model \(\boldsymbol{Q}^{-1}\boldsymbol{y}=\boldsymbol{Q}^{-1}\boldsymbol{X}\boldsymbol{\beta}+\boldsymbol{\varepsilon}\) with \(\operatorname{E}(\boldsymbol{\varepsilon})=\boldsymbol{0}_{n}\) and \(\operatorname{Cov}(\boldsymbol{\varepsilon})=\sigma^{2}\boldsymbol{I}_{n}\), see Christensen (2020, Sect 2.7). Here \(\boldsymbol{Q}\) denotes some nonsingular matrix satisfying \(\boldsymbol{Q}\boldsymbol{Q}^{T}=\boldsymbol{V}\). Let us assume for the moment that \(\boldsymbol{V}\) is completely known. Then
\[\widehat{\boldsymbol{\beta}}=(\boldsymbol{X}^{T}\boldsymbol{V}^{-1}\boldsymbol {X})^{-1}\boldsymbol{X}^{T}\boldsymbol{V}^{-1}\boldsymbol{y} \tag{4}\]
is the best linear unbiased estimator for \(\boldsymbol{\beta}\), and
\[\widehat{\sigma}^{2}=\frac{1}{\nu}(\boldsymbol{y}-\boldsymbol{X}\widehat{ \boldsymbol{\beta}})^{T}\boldsymbol{V}^{-1}(\boldsymbol{y}-\boldsymbol{X} \widehat{\boldsymbol{\beta}}),\quad\nu=n-p\;, \tag{5}\]
is the usual unbiased estimator for \(\sigma^{2}\). Consider the linear hypothesis \(H_{0}:\boldsymbol{R}\boldsymbol{\beta}=\boldsymbol{r}\) versus \(H_{1}:\boldsymbol{R}\boldsymbol{\beta}\neq\boldsymbol{r}\) for a given \(r\times p\) matrix \(\boldsymbol{R}\) of full row rank and a given \(r\times 1\) vector \(\mathbf{r}\). The corresponding \(F\) statistic in model (1) is
\[F=\frac{(\boldsymbol{R}\widehat{\boldsymbol{\beta}}-\boldsymbol{r})^{T}( \boldsymbol{R}\boldsymbol{B}\boldsymbol{R}^{T})^{-1}(\boldsymbol{R}\widehat{ \boldsymbol{\beta}}-\boldsymbol{r})}{r\widehat{\sigma}^{2}}=\frac{( \boldsymbol{R}\widehat{\boldsymbol{\beta}}-\boldsymbol{r})^{T}(\boldsymbol{R} \boldsymbol{B}\boldsymbol{R}^{T})^{-1}(\boldsymbol{R}\widehat{\boldsymbol{ \beta}}-\boldsymbol{r})}{(\boldsymbol{y}-\boldsymbol{X}\widehat{\boldsymbol{ \beta}})^{T}\boldsymbol{V}^{-1}(\boldsymbol{y}-\boldsymbol{X}\widehat{ \boldsymbol{\beta}})}\cdot\frac{\nu}{r}\;, \tag{6}\]
where
\[\operatorname{Cov}(\widehat{\boldsymbol{\beta}})=\sigma^{2}\boldsymbol{B}, \quad\boldsymbol{B}=(\boldsymbol{X}^{T}\boldsymbol{V}^{-1}\boldsymbol{X})^{-1 }\;. \tag{7}\]
Under multivariate normality the statistic \(F\) follows a \(F_{r,\nu}\) distribution provided \(H_{0}\) holds true.
### Effect Size
Suppose that we are interested in the effect of the independent variables represented by the model matrix \(\mathbf{X}_{1}\), given the variables represented by \(\mathbf{X}_{2}\). The corresponding linear hypothesis reads \(\mathbf{R}_{1}\mathbf{\beta}=\mathbf{0}_{p_{1}}\) with \(\mathbf{R}_{1}=(\mathbf{0}_{p_{1}}:\mathbf{I}_{p_{1}}:\mathbf{0}_{p_{1},p_{2}})\). Hence, by reasoning similar to Cohen (1988), an appropriate measure for the size of the effect based on the above \(F\) statistic is provided by
\[f^{2}=\frac{(\mathbf{R}_{1}\widehat{\mathbf{\beta}})^{T}(\mathbf{R}_{1}\mathbf{B}\mathbf{R}_{1}^{T })^{-1}(\mathbf{R}_{1}\widehat{\mathbf{\beta}})}{(\mathbf{y}-\mathbf{X}\widehat{\mathbf{\beta}})^{T }\mathbf{V}^{-1}(\mathbf{y}-\mathbf{X}\widehat{\mathbf{\beta}})}\;, \tag{8}\]
thereby removing the factor \(\nu/r\) from the \(F\) statistic. This generalizes Cohen's \(f^{2}\) in the sense that if \(\mathbf{D}=\mathbf{0}_{q,q}\), then the unobservable random vector \(\mathbf{u}\) equals \(\mathbf{0}_{q}\) with probability one, the LMM reduces to the usual linear model of fixed effects only, and \(f^{2}\) becomes identical to the measure provided by formula (9.2.1) in Cohen (1988).
We note that it is also possible to compute \(f^{2}\) as
\[f^{2}=\frac{R_{A,B}^{2}-R_{A}^{2}}{1-R_{A,B}^{2}} \tag{9}\]
for appropriately defined coefficients of determination \(R_{A,B}^{2}\) derived under the full model and \(R_{A}^{2}\) derived under a reduced LMM assuming that variables represented by \(\mathbf{X}_{1}\) are not present at all. Such a formula is the basis for the widely applied computational procedure suggested by Selya et al. (2012).
### Operational Effect Size
Formula (8) for \(f^{2}\) is only operational when \(\mathbf{V}\) is completely known, a condition not met in practical applications. According to Harville (1977) a simple LMM is the ordinary mixed and random effects ANOVA model, also referred to as traditional variance component model, see Christensen (2019, Chapt. 5). Under this model, the variance-covariance matrix of \(\mathbf{y}\) depends on a total of \(m\) variance components. The \(n\times q\) matrix \(\mathbf{Z}\) is partitioned as \(\mathbf{Z}=(\mathbf{Z}_{1}:\cdots:\mathbf{Z}_{m-1})\), where each \(n\times q_{i}\) matrix \(\mathbf{Z}_{i}\) is the design matrix corresponding to a qualitative variable with a certain number of levels. In such a case one may assume
\[\mathbf{D}=\operatorname{diag}\big{[}(\sigma_{1}^{2}/\sigma^{2})\mathbf{I}_{q_{1}}, \ldots,(\sigma_{m-1}^{2}/\sigma^{2})\mathbf{I}_{q_{m-1}}\big{]} \tag{10}\]
where \(\sigma^{2}>0\) and \(\sigma_{1}^{2},\ldots\sigma_{m-1}^{2}\geq 0\) are \(m\) unknown variance components. Then
\[\mathbf{V}=\mathbf{I}_{n}+\sum_{i=1}^{m-1}(\sigma_{i}^{2}/\sigma^{2})\mathbf{Z}_{i}\mathbf{Z}_ {i}^{T}\;. \tag{11}\]
If \(\sigma_{i}^{2}=0\) for some \(i\), then the corresponding \(q_{i}\times 1\) random effect vector \(\mathbf{u}_{i}\) equals \(\mathbf{0}_{q_{i}}\) with probability one. Such a model, and also more general ones, may be fitted in R with the package lme4, see Bates et al. (2015). From the fitting procedure it is possible to obtain an estimate for the variance-covariance matrix
\[\widehat{\operatorname{Cov}}(\widehat{\mathbf{\beta}})=\widehat{\sigma}^{2} \widehat{\mathbf{B}} \tag{12}\]
of the estimated fixed effects parameter vector \(\mathbf{\beta}\). Then a corresponding estimate for \(f^{2}\) is given as
\[f^{2}=\frac{1}{\nu}(\mathbf{R}_{1}\widehat{\mathbf{\beta}})^{T}(\mathbf{R}_{1}\widehat{ \operatorname{Cov}}(\widehat{\mathbf{\beta}})\mathbf{R}_{1}^{T})^{-1}(\mathbf{R}_{1} \widehat{\mathbf{\beta}})\;. \tag{13}\]
The application of this formula is illustrated in the following section.
## 3. Example
In the following we discuss the performance of measure \(f^{2}\) for an artificially generated data set of \(n=1000\) observations intended to further illustrate some computational aspects. For the sake of simplicity our model consists of two independent variables \(X_{1}\) (categorical/binary) and \(X_{2}\) (quantitative) associated with fixed effects and one variable \(Z\) (categorical) associated with random effects. However, the same principles apply when \(X_{1}\), \(X_{2}\), and \(Z\) are extended to possible sets of variables containing more than one element. Our setting corresponds to the above mentioned variance component model with \(p_{1}=p_{2}=1\) and \(m=2\).
### Variable of Interest
Figure 1 shows a discernible location difference with respect to the distribution of the response variable \(Y\) in the two groups indicated by the binary variable \(X_{1}\). The Welch two-sample \(t\) statistic for the null hypothesis of no difference in group means reads \(|t|=6.0751\) implying a highly significant result. A corresponding effect size measure is Cohen's \(d\) which may be computed from R package effectsize, see Ben-Shachar et al. (2020), as \(|d|=0.4122\). From Cohen, values \(|d|=0.2\), \(|d|=0.5\) and \(|d|=0.8\) indicate a small, medium and large effect, respectively.
### Additional Fixed Effects
From Figure 2 one may conclude, however, that the difference in the two groups may to some extent be explained by the variable \(X_{2}\), since there is a tendency for larger values of \(X_{2}\) to come along with larger values of \(Y\) and observations from group 1 of variable \(X_{1}\). Therefore one might be interested in the size of the effect of \(X_{1}\) when \(X_{2}\) is held constant. This can be achieved by considering a regression model with \(X_{1}\) and \(X_{2}\) as independent variables and deriving the measure \(f^{2}\) as explained in (Cohen, 1988, Sect. 9). From package effectsize one gets \(f^{2}=0.0017767\). Here, values \(f^{2}=0.02\), \(f^{2}=0.15\) and \(f^{2}=0.35\) are supposed to indicate a small, medium and large effect, respectively.
Recently, Gross and Moller (2023) considered a generalized version \(d_{*}\) of \(d\) as an effect size measure for a binary variable \(X_{1}\) given further variables. It may be computed from \(f^{2}\) as
\[d_{*}=\sqrt{f^{2}(n-2-w)\gamma}\;, \tag{14}\]
Figure 1. Distribution of response variable \(Y\) over all \(n=1000\) observations (histogram) and within two groups of sizes \(n_{1}=687\) and \(n_{2}=313\) indicated by 0 and 1 (boxplots)
where in our analysis \(w=1\) is the number of additional independent variables incorporated in the model and \(\sigma^{2}\gamma\) is the variance of the regression coefficient for \(X_{1}\). For our data \(\gamma=0.0065821\), yielding \(d_{*}=0.108\) and thus confirming a less than small effect.
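Plugging the quantities reported above into (14) gives the stated value; the short check below only reproduces that arithmetic.

```python
import numpy as np

f2, n, w, gamma = 0.0017767, 1000, 1, 0.0065821   # values reported in the text
d_star = np.sqrt(f2 * (n - 2 - w) * gamma)        # formula (14)
print(round(d_star, 3))                           # 0.108
```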
### Additional Fixed and Random Effects
As a next step one may take the categorical variable \(Z\), admitting 15 groups in our data set, into account. From Figure 3 it is seen that there is a tendency for \(Z\) groups with larger means of the response variable \(Y\) to contain less observations marked as 1 (referring to variable \(X_{1}\)) than \(Z\) groups with smaller means of \(Y\). However, since observations from group 1 were earlier seen to reveal on average larger \(Y\) values than observations from group 0, holding \(Z\) constant is expected to contribute in favor of an effect of \(X_{1}\) again.
Figure 2. Scatter plot of \(n=1000\) pairs \((X_{2},Y)\) with group markings according to \(X_{1}\)
Figure 3. Scatterplot of \((\#\{X_{1}=1\},\overline{Y})\) for the 15 groups of \(Z\), where \(\#\{X_{1}=1\}\) denotes the number of observations with \(X_{1}=1\)
As noted before, the variable \(Z\) is meant to be associated with a random effects vector \(\boldsymbol{u}\). The corresponding design matrix \(\boldsymbol{Z}\) has 15 columns where each row contains 0's and a single 1, indicating the membership of the observations to the respective \(Z\) variable group. There are two variance components, the overall \(\sigma^{2}>0\) and the random effects variance denoted by \(\sigma_{u}^{2}\geq 0\) here. The model may also be reparameterized via an unknown \(k\geq 0\) by setting \(\sigma_{u}^{2}=k\sigma^{2}\). This gives variance-covariance matrix
\[\mathrm{Cov}(y)=\sigma^{2}\mathbf{V},\quad\mathbf{V}=k\mathbf{Z}\mathbf{Z}^{T}+\mathbf{I}_{m}\;, \tag{15}\]
and the LMM reduces to the usual fixed effects regression model in case \(k=0\). For \(\mathbf{V}\) from (15) one may compute the effect size \(f^{2}\) for \(X_{1}\) as a function of \(k\) from formula (8) when all variables \(X_{1}\), \(X_{2}\) and \(Z\) are included in the LMM formulation. Figure 4 shows \(f^{2}\) for different choices of \(k\). As expected, the effect size is larger when both, \(X_{2}\) and \(Z\) are incorporated into the model compared to the case when only \(X_{2}\) is employed, corresponding to the choice \(k=0\).
The operational version of \(f^{2}\) from (13) can easily be computed after fitting the model with the function lmer from package lme4 as fit <- lmer(Y ~ 1 + X1 + X2 + (1|Z)). The estimated variance components are \(\widehat{\sigma}^{2}=393.4455\) and \(\sigma_{u}^{2}=180.4234\), giving \(\widehat{k}=\sigma_{u}^{2}/\widehat{\sigma}^{2}=0.4586\) as an estimate of \(k\), see the dashed vertical line in Figure 4. Then the vector \(\widehat{\beta}\) is obtained from fixef(fit) and \(\widehat{\mathrm{Cov}}(\widehat{\mathbf{\beta}})\) is obtained from vcov(fit). Using \(\mathbf{R}_{1}=(0,\,1,\,0)\) and \(\nu=n-p=997\), formula (13) yields \(f^{2}=0.0946626\), indicating a small but not medium effect size of \(X_{1}\) when \(X_{2}\) and \(Z\) are held constant.
### Coefficient of Determination
Finally, we confirm that \(f^{2}\) may also be obtained from formula (9). For this, we define
\[R_{AB}^{2}=\frac{(r/\nu)F}{1+(r/\nu)F},\quad r=p-1,\,\nu=n-p\;, \tag{16}\]
which is the proposed \(R^{2}\) for linear mixed models by Edwards et al. (2008, Eq. (19)). Here, \(F\) is the \(F\) statistic from (6) for testing the hypothesis \(H_{0}:\mathbf{R}\mathbf{\beta}=\mathbf{0}_{p-1}\) with \(\mathbf{R}=(\mathbf{0}_{p-1}:\mathbf{I}_{p-1})\). The relationship between \(F\) and \(R_{AB}^{2}\) from (16) may be established similar to Example 4.8 from Seber and Lee (2003). For our data \(\mathbf{R}=(\mathbf{0}_{2}:\mathbf{I}_{2})\), \(r=2\), and \(\nu=n-p=997\). The measure \(R_{A}^{2}\) is defined in the same way, but for a reduced model with all variables present except for \(X_{1}\). For this reduced model we have \(\mathbf{R}=(0,\,1)\), \(\mathbf{r}=(0)\), \(r=1\), \(\nu=n-p_{2}-1=998\). This results in
\[f^{2}=\frac{R_{A,B}^{2}-R_{A}^{2}}{1-R_{A,B}^{2}}=\frac{0.1539263-0.0738348}{1-0.1539263}=0.0946626\;, \tag{17}\]
which is identical to the value computed above.
Figure 4. Values of \(f^{2}\) from a LMM fit depending on \(k\)
|
2309.10494 | On the amenable subalgebras of group von Neumann algebras | We approach the study of sub-von Neumann algebras of the group von Neumann
algebra $L\Gamma$ for countable groups $\Gamma$ from a dynamical perspective.
It is shown that $L(\Gamma)$ admits a maximal invariant amenable subalgebra.
The notion of invariant probability measures (IRAs) on the space of
sub-algebras is introduced, analogous to the concept of Invariant Random
Subgroups. And it is shown that amenable IRAs are supported on the maximal
amenable invariant sub-algebra. | Tattwamasi Amrutam, Yair Hartman, Hanna Oppelmayer | 2023-09-19T10:09:30Z | http://arxiv.org/abs/2309.10494v2 | # On the amenable subalgebras of group von Neumann algebras
###### Abstract.
We approach the study of sub-von Neumann algebras of the group von Neumann algebra \(L(\Gamma)\) for countable groups \(\Gamma\) from a dynamical perspective. It is shown that \(L(\Gamma)\) admits a maximal invariant amenable subalgebra. The notion of Invariant Random Subalgebras (IRAs), invariant probability measures on the space of subalgebras, is introduced, analogous to the concept of Invariant Random Subgroups, and it is shown that amenable IRAs are supported on the maximal amenable invariant subalgebra.
###### Contents
* 1 Introduction and the statement of main results
* 1.1 Invariant Random Subalgebras
* 1.2 Organization of the paper
* 1.3 Acknowledgements
* 2 Preliminaries
* 2.1 Topology on the subalgebras of a von Neumann algebra
* 2.2 Amenable von Neumann algebras
* 2.3 Furstenberg boundary
* 3 Maximal amenable \(\Gamma\)-invariant subalgebra
* 4 Amenable Invariant Random Algebras
* 4.1 Normal closure of IRAs
* 4.2 Closedness of amenable subalgebras
* 4.3 Uppersemicontinuity of the Hype-map
* 4.4 Amenable IRAs and amenable Radical
## 1. Introduction and the statement of main results
The study of the space \(\operatorname{Sub}(\Gamma)\) of all closed subgroups of a given group \(\Gamma\) - _the Chabauty space_ - is nowadays a central theme in geometric group theory and ergodic theory.
Interestingly, although this space was already defined in 1950, it experienced a dramatic revival over the last two decades. During the same era, Effros [1] defined a topology on the collection of von Neumann subalgebras \(\operatorname{SA}(\mathcal{M})\) of a given von Neumann algebra \(\mathcal{M}\), that we refer to as the _Effros-Marechal topology_.
In this paper, we focus on the study of \(\operatorname{SA}(L(\Gamma))\) for a countable discrete group \(\Gamma\) from a dynamical perspective: we consider \(\operatorname{SA}(L(\Gamma))\) as a \(\Gamma\)-space. Indeed, the natural conjugation action \(\Gamma\curvearrowright\operatorname{SA}(L(\Gamma))\) is continuous with respect to the Effros-Marechal topology. The space \(\operatorname{SA}(L(\Gamma))\) extends \(\operatorname{Sub}(\Gamma)\) (see Proposition 4.1) and hence is a natural object to study from the group-theoretic point of view. We find that, as a \(\Gamma\)-space, it is somewhat more structured than the collection of subalgebras of a general \(\mathbb{B}(\mathcal{H})\) (see, e.g., [13]).
The collection of amenable subgroups has received special attention in the sense that it reveals information about the group \(\Gamma\), e.g., whether \(\Gamma\) has the unique trace property or is \(C^{*}\)-simple. Most of our focus in this paper is devoted to the study of the collection of all amenable subalgebras \(\operatorname{SA}_{\operatorname{am}}(L(\Gamma))\) of \(L(\Gamma)\).
We show that \(\operatorname{SA}_{\operatorname{am}}(L(\Gamma))\) is closed (see Proposition 4.7). This result is in contrast with the case of \(\operatorname{SA}_{\operatorname{am}}(\mathbb{B}(\mathcal{H}))\), as shown in [13, Theorem 5.4]. Proposition 4.7 is similar in spirit to the closedness of \(\operatorname{Sub}_{\operatorname{am}}(\Gamma)\) for a discrete group \(\Gamma\) (for general locally compact groups this is still an open problem; cf. [10]).
Our first main result asserts the existence of a maximal amenable invariant subalgebra of \(L(\Gamma)\):
_Theorem_ A. Let \(\Gamma\) be a discrete countable group and let \(\operatorname{Rad}(\Gamma)\) denote the maximal normal amenable subgroup of \(\Gamma\). Then, \(L(\operatorname{Rad}(\Gamma))\) is the maximal amenable \(\Gamma\)-invariant subalgebra of \(L(\Gamma)\).
We remark that the classical proof of the existence of \(\operatorname{Rad}(\Gamma)\) does not clearly generalize to subalgebras. In our proof, we use two main tools. The first is Furman's [14] beautiful characterization of \(\operatorname{Rad}(\Gamma)\) as the kernel of the action on the Furstenberg boundary. The other tool is singularity (cf. Lemma 3.3), which has been exploited in the past for various rigidity results (see, e.g., [17, 18, 19, 20]). Moreover, our result answers [14, Conjecture 5.9] in the positive.
### Invariant Random Subalgebras
A significant part of the study of the Chabauty space is devoted to invariant probability measures, i.e., probability measures on \(\operatorname{Sub}(\Gamma)\) that are invariant under the conjugation action. These are called Invariant Random Subgroups (IRSs), introduced in [1]. We refer the reader to the references in [11, 1, 15, 16] for more details. We specifically mention the result of [1] proving that IRSs supported on amenable subgroups are almost surely contained in the amenable radical.
This paper introduces the notion of Invariant Random Subalgebras (IRAs) in parallel to that of IRSs.
**Definition 1.1**.: A Borel probability measure on \(\operatorname{SA}(L(\Gamma))\) that is invariant under the \(\Gamma\)-conjugation action is called an _IRA_, short for _Invariant Random Sub-Algebra_.
Our second main theorem generalizes the result of [1] in the context of IRAs.
_Theorem_ B.: Let \(\mu\) be an \(IRA\) on \(\operatorname{SA}(L(\Gamma))\) for a countable group \(\Gamma\). If \(\mu\)-almost every \(\mathcal{M}\in\operatorname{SA}(L(\Gamma))\) is amenable, then \(\mu\)-a.e \(\mathcal{M}\leq L(\operatorname{Rad}(\Gamma))\).
In particular, for groups \(\Gamma\) with trivial amenable radical, there are no amenable IRAs except for \(\delta_{\{\mathbb{C}\}}\).
### Organization of the paper
In addition to this section, there are three other sections. We begin by recalling some definitions and proving some preparatory results in Section 2, which we put to use in later parts. In Section 3, after establishing a singularity-type result in the form of Lemma 3.3, we proceed to prove Theorem A. In Section 4, we show that the push-forward of an IRS by the natural subalgebra map gives rise to an IRA (see Proposition 4.1). We also provide an example of an IRA which is not induced from an IRS (see Example 4.3). After introducing the notion of the normal closure of an IRA, we devote the final subsections to the proof of Theorem B.
### Acknowledgements
The authors thank Mehrdad Kalantar, Yongle Jiang and Chris Phillips for many helpful discussions and for sharing their insights. They also thank Mehrdad Kalantar, Ionut Chifan and
Yongle Jiang for reading through a near complete draft of the manuscript and for pointing out numerous typos and inaccuracies which improved the exposition greatly.
This work has received funding from the European Research Council (ERC) under the European Union's Seventh Framework Programme (FP7-2007-2013) (Grant agreement No. 101078193) and by the Israel Science Foundation (ISF 1175/18).
H.O. is partially supported by Early Stage Funding - Vice-rector for Research, Universitat Innsbruck, and the AIANI fellowship.
## 2. Preliminaries
In this section, we briefly recall the notions of the Effros-Marechal topology, amenable von Neumann algebras, and the Furstenberg boundary, along with some other elementary observations for later use.
### Topology on the subalgebras of a von Neumann algebra
Let \(\mathcal{N}\) be a von Neumann algebra with separable predual. We consider the set of all sub-von Neumann algebras \(\mathrm{SA}(\mathcal{N})\) of \(\mathcal{N}\) and equip it with the Effros-Marechal topology. We denote by \(so^{*}\) the strong-* operator topology on \(\mathcal{N}\) and by \(wo\) the weak operator topology.
**Definition 2.1**.: [1, Definition 2.2] Let \(\mathcal{M}_{n}\in\mathrm{SA}(\mathcal{N})\) for \(n\in\mathbb{N}\). We set
\[\liminf_{n\to\infty}\mathcal{M}_{n}:=\{x\in\mathcal{N}\mid\exists(x_{n})_{n \in\mathbb{N}}\in l^{\infty}(\mathbb{N},\mathcal{M}_{n})\,:\,\mathrm{so}^{*} \text{-}\lim_{n\to\infty}x_{n}=x\},\]
and
\[\limsup_{n\to\infty}\mathcal{M}_{n}:=\langle\{x\in\mathcal{N}\mid\exists(x_{n })_{n\in\mathbb{N}}\in l^{\infty}(\mathbb{N},\mathcal{M}_{n})\,:\,\mathrm{wo }\text{-}\lim_{n\to\infty}x_{n}=x\}\rangle\]
where \(\langle\cdot\rangle\) means the von Neumann algebra generated by the set. The Effros-Marechal topology is defined such that
\[\mathcal{M}_{n}\to\mathcal{M}\quad\text{iff}\quad\liminf_{n\to\infty} \mathcal{M}_{n}=\limsup_{n\to\infty}\mathcal{M}_{n}=\mathcal{M}.\]
It is well known that this topology gives a standard Borel structure on \(\mathrm{SA}(\mathcal{N})\) (see [10] and [11]).
Let us state the following fact [15, Corollary 2.12], which we will need later. Let \(\mathcal{M}_{n},\mathcal{M}\in\mathrm{SA}(\mathcal{N})\), \(n\in\mathbb{N}\), for a finite, separable von Neumann algebra \(\mathcal{N}\). If \(\mathcal{M}_{n}\to\mathcal{M}\) in Effros-Marechal topology, then
\[\mathbb{E}_{\mathcal{M}_{n}}(x)\xrightarrow{\mathrm{so}^{*}}\mathbb{E}_{ \mathcal{M}}(x),\ \forall x\in\mathcal{N}, \tag{1}\]
where \(\mathbb{E}_{\mathcal{M}}:\mathcal{N}\longrightarrow\mathcal{M}\) denotes the canonical conditional expectation.
### Amenable von Neumann algebras
Let \((\mathcal{M},\tau)\subset\mathbb{B}(\mathcal{H})\) be a tracial von Neumann algebra on a separable Hilbert space \(\mathcal{H}\).
**Definition 2.2**.: An \((\mathcal{M},\tau)\)-_hypertrace_ is a state \(\phi\) on \(\mathbb{B}(\mathcal{H})\) such that \(\phi|_{\mathcal{M}}=\tau\) and \(\phi(Tm)=\phi(mT),\ \forall m\in\mathcal{M},\ \forall T\in\mathbb{B}(\mathcal{H})\).
We denote by \(\operatorname{Hyp}_{\tau}(\mathcal{M})\) the set of all \((\mathcal{M},\tau)\)-hypertraces.
**Definition 2.3**.: [1] A tracial von Neumann algebra \((\mathcal{M},\tau)\) is called _amenable_ (also referred to as injective) if \(\operatorname{Hyp}_{\tau}(\mathcal{M})\neq\emptyset\).
From [11, Theorem 5.2], it follows that the collection of all amenable von Neumann subalgebras of \(\mathcal{N}\), denoted by \(\operatorname{SA}_{am}(\mathcal{N})\), is a \(G_{\delta}\)-set in the Effros-Marechal topology, and thus in particular Borel measurable w.r.t. this topology. The following proposition holds the key to Theorem B. We denote by \(S(\mathcal{N})\) the collection of all states on \(\mathcal{N}\).
**Proposition 2.4**.: _Let \((\mathcal{M},\tau)\) be a finite von Neumann algebra with a separable predual. Let \(\mathcal{N}\leq\mathcal{M}\) be an amenable sub-von Neumann algebra of \(\mathcal{M}\). Then, there exists an \(\mathcal{N}\)-hyperstate \(\varphi\in S(\mathbb{B}(L^{2}(\mathcal{M},\tau)))\) such that \(\varphi|_{\mathcal{M}}=\tau\)._
Proof.: Let \(\mathcal{N}\leq\mathcal{M}\) be an amenable von Neumann subalgebra. Since \(\mathcal{N}\) is amenable, \(\mathcal{N}\) is approximately finite dimensional (see [10, Theorem 6]). Hence, we can find an increasing sequence of unital finite-dimensional von Neumann subalgebras \(\{Q_{n}\}\) of \(\mathcal{N}\) such that \(\mathcal{N}=(\cup_{n}Q_{n})^{\prime\prime}\). Now, let us consider
\[\mathcal{C}=\left\{\varphi\in S(\mathbb{B}(L^{2}(\mathcal{M},\tau))):\varphi|_{\mathcal{M}}=\tau\right\}.\]
Clearly, \(\mathcal{C}\) is weak\({}^{*}\)-compact. Let us note that the unitary group \(\mathcal{U}(Q_{n})\) acts on \(\mathcal{C}\) by conjugation. Indeed, for \(u\in\mathcal{U}(Q_{n})\) and \(\varphi\in\mathcal{C}\), \(u.\varphi\in S(\mathbb{B}(L^{2}(\mathcal{M},\tau)))\). Moreover, for \(x\in\mathcal{M}\),
\[u.\varphi(x)=\varphi(uxu^{*})=\tau(uxu^{*})=\tau(x).\]
Since \(Q_{n}\) is finite-dimensional, the action \(\mathcal{U}(Q_{n})\curvearrowright\mathcal{C}\) is continuous, and since the compact group \(\mathcal{U}(Q_{n})\) is amenable, we obtain a \(\mathcal{U}(Q_{n})\)-fixed point in \(\mathcal{C}\), which we call \(\varphi_{n}\). Note that \(\varphi_{n}|_{\mathcal{M}}=\tau\). Let \(\psi=\lim_{n\to\omega}\varphi_{n}\) for some \(\omega\in\beta\mathbb{N}\setminus\mathbb{N}\). Then, clearly, \(\psi|_{\mathcal{M}}=\tau\). We now show that \(\psi\) is \(\mathcal{N}\)-central, i.e., \(\psi(xT)=\psi(Tx)\) for all \(x\in\mathcal{N}\) and \(T\in\mathbb{B}(L^{2}(\mathcal{M},\tau))\). Fix \(0\neq T\in\mathbb{B}(L^{2}(\mathcal{M},\tau))\). Let us observe that \(\varphi_{n}(uT)=\varphi_{n}(Tu)\) for all \(u\in\mathcal{U}(Q_{n})\). Letting \(n\to\omega\), we see that \(\psi(uT)=\psi(Tu)\) for all \(u\in\mathcal{U}(Q_{n})\) and for every \(n\in\mathbb{N}\). Since the linear span of the unitary elements is norm dense inside a finite-dimensional algebra, it follows that
\[\psi(xT)=\psi(Tx),\forall x\in\cup_{n}Q_{n}. \tag{2}\]
Now, fix an arbitrary element \(y\in\mathcal{N}\) and let \(\epsilon>0\) be given. We can find an element \(x\in\cup_{n}Q_{n}\) such that \(\|y-x\|_{\tau}<\frac{\epsilon}{2\|T\|}\). Now, using the triangle inequality along with equation (2), we see that
\[|\psi(yT)-\psi(Ty)|\] \[\leq|\psi(yT)-\psi(xT)|+|\psi(xT)-\psi(Tx)|+|\psi(Tx)-\psi(Ty)|\] \[=|\psi(yT)-\psi(xT)|+|\psi(Tx)-\psi(Ty)|\]
Appealing to the Cauchy-Schwarz inequality, we see that
\[|\psi(yT)-\psi(xT)|\leq\|y-x\|_{\tau}\|T\|<\frac{\epsilon}{2}.\]
Similarly, \(|\psi(Tx)-\psi(Ty)|<\frac{\epsilon}{2}\). Putting these together, we see that \(|\psi(yT)-\psi(Ty)|<\epsilon\). Since \(\epsilon>0\) is arbitrary, the claim follows.
### Furstenberg boundary
Furstenberg [14] introduced the notion of "topological boundary". One purpose is to relate lattices in semisimple Lie groups to their ambient groups (e.g., \(SL_{n}(\mathbb{Z})\) in \(SL_{n}(\mathbb{R})\)). We make this notion precise below.
Let \(\Gamma\) be a discrete countable group and \(X\) be a \(\Gamma\)-space, i.e., \(X\) is a compact Hausdorff space and \(\Gamma\curvearrowright X\) by homeomorphisms. The action \(\Gamma\curvearrowright X\) is called a boundary action if \(\{\delta_{x}:x\in X\}\subset\overline{\Gamma\nu}^{\text{weak}^{*}}\) for every \(\nu\in\operatorname{Prob}(X)\). If \(\Gamma\curvearrowright X\) is a boundary action, then \(X\) is called a \(\Gamma\)-boundary. Note that this action is always minimal.
**Proposition 2.5**.: _[_14_, 14_]_ _The Furstenberg boundary of \(\Gamma\), \(\partial_{F}\Gamma\) is a \(\Gamma\) boundary which is universal in the sense that every other \(\Gamma\)-boundary \(Y\) is a \(\Gamma\)-equivariant continuous image of \(\partial_{F}\Gamma\)._
The fact that such a space exists follows from a standard product argument involving the representatives of all boundaries (see, e.g., [14, 14]). Moreover, \(\partial_{F}\Gamma\) is unique up to \(\Gamma\)-equivariant homeomorphism. Over the years, this notion has been used to study the properties of groups. For example, Kalantar and Kennedy [13] gave a dynamical characterization of \(C^{*}\)-simplicity in terms of the action \(\Gamma\curvearrowright\partial_{F}\Gamma\) on the Furstenberg boundary \(\partial_{F}\Gamma\). It is also known that the group \(\Gamma\) is amenable if and only if the associated Furstenberg boundary \(\partial_{F}\Gamma\) is trivial (see, e.g., [1, Theorem 3.1, Chapter 3]). More generally, Furman [14, Proposition 7] showed that \(\operatorname{Rad}(\Gamma)\) is exactly the kernel of the action \(\Gamma\curvearrowright\partial_{F}\Gamma\) (see also [1, Proposition 2.8]). It is this latter criterion that we exploit for our purposes.
## 3. Maximal amenable \(\Gamma\)-invariant subalgebra
Let \(\Gamma\) be a discrete countable group. It follows from Day's result that there is a unique maximal normal amenable subgroup of \(\Gamma\), called the amenable radical. We denote it by \(\text{Rad}(\Gamma)\). By \(L(\Gamma)\) we denote the group von Neumann algebra, i.e., \(L(\Gamma)=\overline{\text{span}(\lambda(\Gamma))}^{so}\), where \(\lambda\) denotes the left regular representation. It then immediately follows that \(L(\text{Rad}(\Gamma))\) is a \(\Gamma\)-invariant amenable von Neumann subalgebra of \(L(\Gamma)\). We show that \(L(\text{Rad}(\Gamma))\) is also the largest \(\Gamma\)-invariant amenable subalgebra of \(L(\Gamma)\).
**Proposition 3.1**.: _Let \(\Gamma\) be a discrete countable group. Let \(\mathcal{M}\leq L(\Gamma)\) be a \(\Gamma\)-invariant amenable subalgebra. Then, \(\mathcal{M}\subset L(\text{Rad}(\Gamma))\). In particular, \(L(\text{Rad}(\Gamma))\) is the largest amenable \(\Gamma\)-invariant subalgebra of \(L(\Gamma)\)._
The classical argument (see for example [10]) in the setting of groups does not have an obvious modification to encompass the von Neumann setup. To prove our result, we resort to the dynamical characterization of the amenable radical in terms of the action \(\Gamma\curvearrowright\partial_{F}\Gamma\). Let \(\tau_{0}\) denote the canonical trace on \(L(\Gamma)\).
**Definition 3.2**.: Let \(\Gamma\) be a discrete countable group. Let \(\mathcal{A}\subset\mathbb{B}(\ell^{2}(\Gamma))\) be a unital \(C^{*}\)-algebra and \(\mathcal{M}\leq L(\Gamma)\) be a von Neumann subalgebra. A state \(\varphi\in S(\mathcal{A})\) is called \((\mathcal{M},\tau_{0})\)_-invariant_ if there exists an \(\mathcal{M}\)-hyperstate \(\psi\in S(\mathbb{B}(\ell^{2}(\Gamma)))\) such that \(\psi|_{\mathcal{A}}=\varphi\) and \(\psi|_{L(\Gamma)}=\tau_{0}\). We denote the collection of such states by \(S^{\mathcal{M}}_{\tau_{0}}(\mathcal{A})\).
Let \(X\) be a minimal \(\Gamma\)-space. We can view \(C(X)\) as an algebra of multiplication operators inside \(\mathbb{B}(\ell^{2}(\Gamma))\). Fix \(x_{0}\in X\). For \(f\in C(X)\), the map \(M(f):\ell^{2}(\Gamma)\to\ell^{2}(\Gamma)\) defined by \(M(f)(\delta_{t})=f(t.x_{0})\delta_{t}\) is linear and bounded. Since the action \(\Gamma\curvearrowright X\) is minimal, we see that \(\|M(f)\|=\|f\|_{\infty}\) for every \(f\in C(X)\). Therefore, we can identify \(C(X)\) with its image \(M(C(X))\) inside \(\mathbb{B}(\ell^{2}(\Gamma))\). We note that this embedding is not canonical. However, it is enough to fix one embedding for our purposes.
**Lemma 3.3**.: _Let \(\Gamma\) be a discrete group and \(\mathcal{M}\leq L(\Gamma)\) be an amenable von Neumann subalgebra. Let \(s\in\Gamma\setminus\{e\}\). Assume that there exists a minimal \(\Gamma\)-space \(X\) and an embedding of \(C(X)\leq\mathbb{B}(\ell^{2}(\Gamma))\) (possibly depending on s) such that the following hold: There exists \(x\in X\) such that_
1. \(sx\neq x\)_, and_
2. \(\delta_{x}\in S^{\mathcal{M}}_{\tau_{0}}(C(X))\)
_Then, \(\tau_{0}(a\lambda(s)^{*})=0\) for all \(a\in\mathcal{M}\)._
Proof.: Let \(\mathcal{M}\leq L(\Gamma)\) be an amenable von Neumann algebra. Let \(s\in\Gamma\setminus\{e\}\). Assume a minimal \(\Gamma\)-space \(X\) satisfies the above conditions and let \(x\in X\) be such that \(sx\neq x\). Since \(\delta_{x}\in S^{\mathcal{M}}_{\tau_{0}}(C(X))\), there exists a \(\mathcal{M}\)-hyperstate \(\varphi\) such that \(\varphi|_{C(X)}=\delta_{x}\) and \(\varphi|_{L(\Gamma)}=\tau_{0}\). Choose \(f\in C(X)\) such that \(f(x)=1\) and \(f(sx)=0\) with \(0\leq f\leq 1\). One can view \(C(X)\) as a subset of \(\mathbb{B}(\ell^{2}(\Gamma))\). Now, since \(C(X)\) falls in the multiplicative domain of \(\varphi\), we see that for any \(a\in\mathcal{M}\),
\[\tau_{0}(a\lambda(s)^{*}) =\varphi(a\lambda(s)^{*})\] \[=\varphi\left(a\lambda(s)^{*}f\right)\] \[=\varphi\left(a(s^{-1}.f)\lambda(s)^{*}\right)\] \[=\varphi\left(a\sqrt{(s^{-1}.f)}\sqrt{(s^{-1}.f)}\lambda(s)^{*} \right).\]
Now, using the Cauchy-Schwarz inequality, we see that
\[|\varphi(a\lambda(s)^{*})|\] \[=\left|\varphi\left(a\sqrt{(s^{-1}.f)}\sqrt{(s^{-1}.f)}\lambda(s )^{*}\right)\right|\] \[\leq\sqrt{\varphi(a(s^{-1}.f)a^{*})}\sqrt{\varphi(\lambda(s)(s^{- 1}.f)\lambda(s)^{*})}\] \[=\sqrt{\varphi((s^{-1}.f)a^{*}a)}\sqrt{\varphi(f)} (\varphi\text{ is a $\mathcal{M}$-hyperstate})\] \[=\sqrt{(s^{-1}.f)(x)\varphi(a^{*}a)}\sqrt{\varphi(f)} (\varphi|_{C(X)}=\delta_{x})\] \[=\sqrt{f(sx)\varphi(a^{*}a)}\sqrt{\varphi(f)}\] \[=0\]
The claim follows.
Proof of Proposition 3.1.: Suppose that \(\mathcal{M}\) is an amenable \(\Gamma\)-invariant subalgebra of \(L(\Gamma)\). Let us consider \(\operatorname{Hype}_{\tau_{0}}(\mathcal{M})\), the collection of all \((\mathcal{M},\tau_{0})\) -hyperstates whose restrictions to \(L(\Gamma)\) is the canonical trace \(\tau_{0}\). Due to Proposition 2.4, this is a non-empty set. Let \(\partial_{F}\Gamma\) denote the Furstenberg boundary of the group \(\Gamma\) and fix an embedding of \(C(\partial_{F}\Gamma)\) inside \(\mathbb{B}(\ell^{2}(\Gamma))\). The group \(\Gamma\) acts on \(S(\mathbb{B}(\ell^{2}(\Gamma)))\) by \((\gamma.\phi)(T)=\phi(\lambda(\gamma)T\lambda(\gamma^{-1}))\) for \(\phi\in S(\mathbb{B}(\ell^{2}(\Gamma)))\), \(\gamma\in\Gamma\), \(T\in\mathbb{B}(\ell^{2}(\Gamma))\). We note that \(\operatorname{Hype}_{\tau_{0}}(\mathcal{M})|_{C(\partial_{F}\Gamma)}\) is a \(\Gamma\)-invariant weak\({}^{*}\)-closed convex subset of \(\operatorname{Prob}(\partial_{F}\Gamma)\). Using the irreducibility of \(\operatorname{Prob}(\partial_{F}\Gamma)\), we obtain that \(S^{\mathcal{M}}_{\tau_{0}}(C(\partial_{F}\Gamma))=\operatorname{Prob}( \partial_{F}\Gamma)\). This, in particular, says that every probability measure \(\nu\) on \(\partial_{F}\Gamma\) can be realized as \(\varphi|_{C(\partial_{F}\Gamma)}\) for some \(\varphi\in\operatorname{Hype}_{\tau_{0}}(\mathcal{M})\). We now proceed to show that \(\mathcal{M}\subset L(\operatorname{Rad}(\Gamma))\). To
do so, it is enough to show that \(\tau_{0}(a\lambda(s)^{*})=0\) for all \(a\in\mathcal{M}\) and \(s\in\Gamma\setminus(\operatorname{Rad}(\Gamma))\). Let \(a\in\mathcal{M}\) and choose \(s\in\Gamma\setminus\operatorname{Rad}(\Gamma)\). Since \(\operatorname{Rad}(\Gamma)=\operatorname{Ker}\left(\Gamma\curvearrowright \partial_{F}\Gamma\right)\), there exists a point \(x\in\partial_{F}\Gamma\) such that \(sx\neq x\). It now follows from Lemma 3.3 that \(\tau_{0}(a\lambda(s)^{*})=0\). Since \(s\in\Gamma\setminus\operatorname{Rad}(\Gamma)\) is arbitrary, we see that \(\mathcal{M}\subset L\left(\operatorname{Rad}(\Gamma)\right)\).
**Corollary 3.4**.: _Let \(\Gamma\) be a countable group with trivial amenable radical. Then every \(\Gamma\)-invariant von Neumann subalgebra \(\mathcal{M}\leq L(\Gamma)\) is a subfactor._
Proof.: Let \(\mathcal{M}\leq L(\Gamma)\) be a \(\Gamma\)-invariant subalgebra. Then \(\mathcal{Z}(\mathcal{M})\), the center of \(\mathcal{M}\), is a \(\Gamma\)-invariant, amenable subalgebra. It now follows from Proposition 3.1 that \(\mathcal{Z}(\mathcal{M})\leq L(\operatorname{Rad}(\Gamma))=\mathbb{C}\).
## 4. Amenable Invariant Random Algebras
Using the following proposition, we see that IRAs of \(L(\Gamma)\) naturally extend IRSs.
**Proposition 4.1**.: _Let \(\Gamma\) be a countable discrete group and \(\text{Sub}(\Gamma)\), the collection of all subgroups of \(\Gamma\). Let \(\lambda\in\text{IRS}(\Gamma)\). Consider the map \(L:\text{Sub}(\Gamma)\to\text{SA}(L(\Gamma))\) defined by \(H\mapsto L(H),\ H\in\text{Sub}(\Gamma)\). \(L\) is continuous with respect to the Chabauty topology on \(\text{Sub}(\Gamma)\) and Effros-Marechal topology on \(\text{SA}(L(\Gamma))\)._
Proof.: Let \(H_{n}\in\text{Sub}(\Gamma)\) be such that \(H_{n}\to H\) in the Chabauty topology. We show \(L(H_{n})\to L(H)\) in the Effros-Marechal topology. Let \(x\in L(H)\) be given and choose \(\epsilon>0\). Then, we can find \(h_{1},h_{2},\ldots,h_{m}\in H\) and \(c_{1},c_{2},\ldots,c_{m}\in\mathbb{C}\) such that \(\|x-\sum_{i=1}^{m}c_{i}\lambda(h_{i})\|_{2}<\epsilon\). We remark that the \(\|.\|_{2}\)-norm is with respect to the canonical trace \(\tau_{0}\) on \(L(\Gamma)\). Since \(H_{n}\to H\), there exists a \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\), \(h_{i}\in H_{n}\) for all \(i=1,2,\ldots,m\). Consequently, it follows that \(\sum_{i=1}^{m}c_{i}\lambda(h_{i})\in L(H_{n})\) for all \(n\geq n_{0}\). Hence, for all \(n\geq n_{0}\), we can find \(x_{n}=\sum_{i=1}^{n}c_{i}\lambda(h_{i})\in L(H_{n})\) such that \(x_{n}\xrightarrow{\text{so}}x\). Using the Uniform boundedness principle (or, Kaplansky density theorem) we can assume \(\|x_{n}\|\leq\|x\|\). On the ball \(B(0,\|x\|)\), the so- and so\({}^{*}\)-topology coincide. So, \(x_{n}\xrightarrow{\text{so}^{*}}x\). Therefore, \(x\in\liminf_{n}L(H_{n})\).
Now, let \(x\in\limsup_{n}L(H_{n})\) be such that we can find \(x_{n}\in L(H_{n})\) with \(x_{n}\xrightarrow{\text{so}}x\). Let \(g\in\Gamma\setminus H\). Since \(H_{n}\to H\), there exists a subsequence \(\{n_{k}\}_{k}\) such that \(g\not\in H_{n_{k}}\) for all \(k\in\mathbb{N}\). Let \(\tau_{0}\) denote the canonical trace on \(L(\Gamma)\). Since \(\tau_{0}(\cdot)=\langle(\cdot)\delta_{e},\delta_{e}\rangle\), \(\tau_{0}\) is wo-continuous (see [1, Proposition 2.1.1]). Since \(x_{n_{k}}\lambda(g^{-1})\xrightarrow{\text{so}}x\lambda(g^{-1})\), it
follows that \(\tau_{0}\left(x_{n_{k}}\lambda(g^{-1})\right)\xrightarrow{k\to\infty}\tau_{0} \left(x\lambda(g^{-1})\right)\). Since \(g\not\in H_{n_{k}}\) and \(x_{n_{k}}\in L(H_{n_{k}})\), for every \(k\in\mathbb{N}\), we see that
\[\tau_{0}\left(x_{n_{k}}\lambda(g^{-1})\right)=\tau_{0}\left(\mathbb{E}_{H_{n_{k}}}(x_{n_{k}}\lambda(g^{-1}))\right)=\tau_{0}\left(x_{n_{k}}\mathbb{E}_{H_{n_{k}}}(\lambda(g^{-1}))\right)=0,\]
where \(\mathbb{E}_{H}:L(\Gamma)\longrightarrow L(H)\) denotes the canonical conditional expectation (cf. [1]). This implies that \(\tau_{0}(x\lambda(g^{-1}))=0\) for all \(g\in\Gamma\setminus H\), and hence that \(x\in L(H)\). Therefore,
\[\limsup_{n}L(H_{n})\] \[=\left\langle\left\{x\in L(H)\ |\ \exists(x_{n})_{n\in\mathbb{N}} \in l^{\infty}(\mathbb{N},L(H_{n}))\,:\,\text{wo-}\lim_{n\to\infty}x_{n}=x \right\}\right\rangle\] \[\leq L(H).\]
Consequently, we see that
\[L(H)\leq\liminf_{n}L(H_{n})\leq\limsup_{n}L(H_{n})\leq L(H).\]
It now follows from Definition 2.1 that \(\lim_{n}L(H_{n})=L(H)\) in the Effros-Marechal topology, which in turn implies that the map \(L\) is continuous. This completes the proof.
**Example 4.2** (IRAs coming from IRSs).: Let \(\lambda\in IRS(\Gamma)\). It follows from Proposition 4.1 that the map \(L:\operatorname{Sub}(\Gamma)\to\operatorname{SA}(L(\Gamma))\) is continuous, hence measurable. Since \(L\) is equivariant, \(L_{*}\lambda\in\operatorname{IRA}(L(\Gamma))\).
It is not true that every IRA is of this form; we construct such an example below. The results above are a proper generalization of [1], since there are IRAs that are supported on collections of sub-von Neumann algebras that are not of the form \(L(K)\) for any subgroup \(K\) of \(\Gamma\).
**Example 4.3** (IRAs not coming from IRSs).: Consider the lamplighter group
\[\Gamma=\operatorname{LL}(\mathbb{Z},\mathbb{Z}/3\mathbb{Z})=\mathbb{Z}\ltimes \operatorname{Lamps}\]
with \(\operatorname{Lamps}:=\{f:\mathbb{Z}\longrightarrow\mathbb{Z}/3\mathbb{Z}\) with \(|\operatorname{supp}(f)|<\infty\}\). For any \(S\subseteq\mathbb{Z}\) set
\[\operatorname{Lamps}_{S}:=\{f\in\operatorname{Lamps}\,:\,\operatorname{ supp}(f)\subseteq S\}.\]
Observe that \(\Lambda_{S}:=\{0\}\ltimes\operatorname{Lamps}_{S}\) is a group with \((0,f)\cdot(0,h)=(0,f+h)\) for any \(f,h\in\operatorname{Lamps}_{S}\). Let \(\sigma:\mathbb{Z}/3\mathbb{Z}\longrightarrow\mathbb{Z}/3\mathbb{Z}\) be the non-trivial automorphism. Then, the set
\[\mathcal{M}_{S}:=\overline{\{\sum_{f\in\operatorname{Lamps}_{S}}a_{f}\lambda( (0,f))\ :\ a_{f}=a_{\sigma\circ f},\ a_{f}\in\mathbb{C}\}}^{\text{wo}}\]
is a non-trivial von Neumann sub-algebra of \(L(\Lambda_{S})\); moreover, the collection \(\{\mathcal{M}_{S}:S\subset\mathbb{Z}\}\) is \(\Gamma\)-invariant, as we verify further below. Let us first check that \(\mathcal{M}_{S}\) is indeed a \(*\)-subalgebra: let
\[x:=\sum_{f\in\operatorname{Lamps}_{S}}a_{f}\lambda((0,f))\in\mathcal{M}_{S},\]
then
\[x^{*}=\sum_{f\in\operatorname{Lamps}_{S}}\overline{a_{f}}\lambda((0,-f))\ \in \mathcal{M}_{S},\]
because the coefficient at the position \((0,-f)\) is \(c_{-f}:=\overline{a_{f}}=\overline{a_{\sigma(f)}}=c_{-\sigma(f)}=c_{\sigma(-f)}\), since by assumption \(a_{f}=a_{\sigma(f)}\). Further, for \(y=\sum\limits_{h\in\operatorname{Lamps}_{S}}\overline{b_{h}}\lambda((0,h))\ \in \mathcal{M}_{S}\), we see that
\[xy=\sum_{h,f\in\operatorname{Lamps}_{S}}a_{f}b_{h}\lambda((0,f+h))\ \in \mathcal{M}_{S},\]
due to \(d_{f+h}:=a_{f}b_{h}=a_{\sigma(f)}b_{\sigma(h)}=d_{\sigma(f)+\sigma(h)}=d_{ \sigma(f+h)}\).
Now, as pointed out in [1, Example 3.5], for \(\sigma\neq id\) there is no subgroup \(K\) of \(\Gamma\) such that \(\mathcal{M}_{S}\) can be written as \(L(K)\). Indeed, if we had \(\mathcal{M}_{S}=L(K)\), then for any \((0,f)\in K\) we would obtain, e.g., \(5\lambda(0,-f)+\lambda(0,f)\in\mathcal{M}_{S}\); but by construction \(a_{f}=a_{-f}\), which is a contradiction.
Moreover, the \(\Gamma\)-invariance of the collection \(\{\mathcal{M}_{S}:S\subset\mathbb{Z}\}\) follows from the \(\Gamma\)-equivariance of the map \(\psi\) verified below.
Let us consider the map
\[\psi:\{0,1\}^{\mathbb{Z}}\longrightarrow\operatorname{SA}(L(\Gamma)),\ S \mapsto\mathcal{M}_{S}.\]
Then, \(\psi\) is Borel measurable w.r.t. Effros-Marechal topology on \(\operatorname{SA}(L(\Gamma))\): For \(S\in\{0,1\}^{\mathbb{Z}}\), let \(H_{S}:=\{(0,f)\ :\ \operatorname{supp}(f)\subseteq S\}\). We first observe that the map \(\operatorname{Sub}:\{0,1\}^{\mathbb{Z}}\rightarrow\operatorname{Sub}(\Gamma ),\ S\mapsto H_{S}\) is a continuous map with respect to the Chabauty topology. Furthermore, it follows from Proposition 4.1 above that the map \(L:\operatorname{Sub}(\Gamma)\rightarrow\operatorname{SA}(L(\Gamma)),\ H_{S} \mapsto L(H_{S})\) is continuous. Let \(j:\operatorname{SA}(L(\Gamma))\rightarrow\operatorname{SA}(L(\Gamma)) \times\operatorname{SA}(L(\Gamma))\) be given by \(j(\mathcal{M})=(\mathcal{M},\mathcal{M}_{\mathbb{Z}})\). We equip \(\operatorname{SA}(L(\Gamma))\times\operatorname{SA}(L(\Gamma))\) with the product topology. \(j\) is continuous with respect to the Effros-Marechal topology. It follows from [1, Corollary 2] that the map
\[\operatorname{Ints}:\operatorname{SA}(L(\Gamma))\times\operatorname{SA}(L( \Gamma))\longrightarrow\operatorname{SA}(L(\Gamma)),(\mathcal{M},\mathcal{N}) \mapsto\mathcal{M}\cap\mathcal{N}\]
is Borel measurable w.r.t. the Effros-Marechal topology. Therefore,
\[\operatorname{Ints}\circ j\circ L\circ\operatorname{Sub}:\{0,1\}^{\mathbb{Z }}\rightarrow\operatorname{SA}(L(\Gamma)),\ S\mapsto\mathcal{M}_{\mathbb{Z}} \cap L(H_{S})\]
being a composition of measurable maps is measurable. Let us now observe that
\[\mathcal{M}_{\mathbb{Z}}\cap L(H_{S})\] \[=\overline{\{\sum_{f\in\text{Lamps}}a_{f}\lambda((0,f))\ :\ a_{f}=a_{\sigma(f)},\ a_{f}\in\mathbb{C}\}\cap\text{span}(\lambda(H_{S}))}^{ \text{wo}}\] \[=\mathcal{M}_{S}.\]
Therefore, the map \(\psi=\text{Ints}\circ j\circ L\circ\text{Sub}\) is measurable.
Moreover, \(\psi\) is \(\Gamma\)-equivariant, where \(\Gamma\) acts by shifting on \(\{0,1\}^{\mathbb{Z}}\): The map \(L\) is clearly \(\Gamma\)-equivariant. Since \(\mathcal{M}_{\mathbb{Z}}\) is a \(\Gamma\)-invariant subalgebra, the maps \(\text{Ints}\) and \(j\) are \(\Gamma\)-equivariant. Therefore, it is enough to show that the map \(\text{Sub}\) is \(\Gamma\)-equivariant. Let \(g=(m,h)\in\Gamma\), then \(g\cdot S=(m,0)S\) shifts the position of the ones in the set \(S\) to the left by \(m\in\mathbb{Z}\). So,
\[H_{gS}=\{(0,f)\ :\ \text{supp}(f)\subseteq gS\}=\{(0,f)\ :\ \text{supp}(g^{-1}f) \subseteq S\}\]
\[=\{(0,mf)\ :\ \text{supp}(f)\subseteq S\}=gH_{S}g^{-1}\]
since \(g(0,f)g^{-1}=(m-m,h+mf-h)=(0,mf)\) for \(g=(m,h)\).
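For the reader's convenience, we spell out the semidirect-product law implicitly used in this computation; the explicit form below is our reconstruction of the convention behind the example (with \(mf\) denoting the function \(f\) shifted by \(m\)), not a formula quoted from it.

\[(m,h)\cdot(m^{\prime},h^{\prime})=(m+m^{\prime},\,h+mh^{\prime}),\qquad(m,h)^{-1}=(-m,\,-(-m)h),\]

so that

\[(m,h)(0,f)(m,h)^{-1}=(m,\,h+mf)\cdot(-m,\,-(-m)h)=(m-m,\,h+mf-h)=(0,\,mf).\]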
Therefore, we can push forward any \(\Gamma\)-invariant probability measure \(\kappa\) on \(\{0,1\}^{\mathbb{Z}}\), e.g. the Bernoulli measure, to obtain an IRA on \(\text{SA}(L(\Gamma))\) via the map \(\psi\). Observe that by the above, \(\psi_{*}\kappa\) cannot be purely supported on von Neumann sub-algebras of the form \(L(K)\) for some subgroup \(K\) of \(\Gamma\) since it lives on sub-algebras of the form \(\mathcal{M}_{S}\) for \(S\in\{0,1\}^{\mathbb{Z}}\).
### Normal closure of IRAs
In [16] the so-called _normal closure_ of an IRS was introduced. We mimic this concept in the von Neumann algebra setting for IRAs.
Let \(\mathcal{M}\) be a von Neumann algebra on a separable Hilbert space, on which \(\Gamma\) acts.
**Lemma 4.4**.: _Let \(\mu\in\text{IRA}(\mathcal{M})\). Then there exists a unique minimal sub-von Neumann algebra \(\mathcal{N}\in\text{SA}(\mathcal{M})\) such that \(\mu(\text{SA}(\mathcal{N}))=1\). Moreover, this von Neumann algebra \(\mathcal{N}\) is \(\Gamma\)-invariant. We denote it by \(\overline{\langle\mu\rangle}\) and call it the normal closure of \(\mu\)._
Proof.: We follow the argument outlined in [11, Lemma 2.3] and adapt it to the von Neumann setting. Let us denote
\[\mathcal{W}=\left\{\tilde{\mathcal{M}}\in\text{SA}(\mathcal{M}):\mu(\text{SA}(\tilde{\mathcal{M}}))=1\right\}.\]
Let
\[\mathcal{N}:=\bigcap_{\tilde{\mathcal{M}}\in\mathcal{W}}\tilde{\mathcal{M}}.\]
Clearly, \(\mathcal{N}\) is again a von Neumann algebra, and it is \(\Gamma\)-invariant since \(\mu\) is \(\Gamma\)-invariant by assumption. It is the unique minimal such algebra, since any algebra \(\tilde{\mathcal{M}}\) fulfilling \(\mu(\operatorname{SA}(\tilde{\mathcal{M}}))=1\) contains \(\mathcal{N}\). It remains to show that \(\mu(\operatorname{SA}(\mathcal{N}))=1\). We now observe that
\[\operatorname{SA}(\mathcal{N})=\cap_{\tilde{\mathcal{M}}\in\mathcal{W}} \operatorname{SA}(\tilde{\mathcal{M}}).\]
Now, let \(\mathcal{K}\in\operatorname{SA}(\mathcal{M})\setminus\operatorname{SA}( \mathcal{N})\). This means that there exists \(\mathcal{M}_{\mathcal{K}}\in\mathcal{W}\) such that \(\mathcal{K}\not\in\operatorname{SA}(\mathcal{M}_{\mathcal{K}})\). Since \(\operatorname{SA}(\mathcal{M}_{\mathcal{K}})\) is closed, we can find \(\epsilon_{\mathcal{K}}>0\) such that
\[B(\mathcal{K},\epsilon_{\mathcal{K}})\cap\operatorname{SA}(\mathcal{M}_{ \mathcal{K}})=\emptyset. \tag{3}\]
In particular, we see that
\[\operatorname{SA}(\mathcal{M})\setminus\operatorname{SA}(\mathcal{N})=\bigcup _{\mathcal{K}\in\operatorname{SA}(\mathcal{M})\setminus\operatorname{SA}( \mathcal{N})}B(\mathcal{K},\epsilon_{\mathcal{K}})\]
The space \(\operatorname{SA}(\mathcal{M})\) is a separable metric space, hence second countable, and therefore hereditarily Lindelöf. Since \(\operatorname{SA}(\mathcal{N})\) is closed in the Effros-Marechal topology ([13, Theorem 2.8]), \(\operatorname{SA}(\mathcal{M})\setminus\operatorname{SA}(\mathcal{N})\) is open. In a hereditarily Lindelöf space, every open set is Lindelöf. Therefore, we can find \(\mathcal{K}_{n}\in\operatorname{SA}(\mathcal{M})\setminus\operatorname{SA}(\mathcal{N})\) and \(\epsilon_{\mathcal{K}_{n}}>0\) such that
\[\operatorname{SA}(\mathcal{M})\setminus\operatorname{SA}(\mathcal{N})=\bigcup _{n}B(\mathcal{K}_{n},\epsilon_{\mathcal{K}_{n}}) \tag{4}\]
Moreover, using equation (3), we see that \(\operatorname{SA}(\mathcal{M}_{\mathcal{K}_{n}})\subseteq\left(B(\mathcal{K}_{n},\epsilon_{\mathcal{K}_{n}})\right)^{c}\) for each \(n\in\mathbb{N}\). Therefore,
\[\bigcap_{n}\operatorname{SA}(\mathcal{M}_{\mathcal{K}_{n}})\subseteq\bigcap_{n}\left(B(\mathcal{K}_{n},\epsilon_{\mathcal{K}_{n}})\right)^{c} \tag{5}\]
Putting together equations (4) and (5), we see that
\[\bigcap_{n}\operatorname{SA}(\mathcal{M}_{\mathcal{K}_{n}})\subseteq\bigcap_{n}\left(B(\mathcal{K}_{n},\epsilon_{\mathcal{K}_{n}})\right)^{c}\subseteq\operatorname{SA}(\mathcal{N})\]
Therefore,
\[\mu(\operatorname{SA}(\mathcal{N}))\geq\lim_{m\to\infty}\mu(\bigcap_{n=1}^{m} \operatorname{SA}(\mathcal{M}_{\mathcal{K}_{n}}))=1,\]
since \(\mu\left(\operatorname{SA}(\mathcal{M}_{\mathcal{K}_{n}})^{c}\right)=0\) by construction.
_Remark 4.5_.: Let \(\mathcal{Z}=L\Gamma\) and \(\mu\in IRA(\mathcal{Z})\). Let
\[H_{0}=\{g\in\Gamma\ :\ \mu(\{\mathcal{M}\ni\lambda(g)\})>0\}.\]
If \(H\) is the subgroup of \(\Gamma\) generated by \(H_{0}\), then, \(\overline{\langle\mu\rangle}\supseteq LH\). Indeed, let us write for simplicity \(\{\mathcal{M}\ni m\}:=\{\mathcal{M}\in\mathrm{SA}(L\Gamma)\,:\,m\in\mathcal{M}\}\) for \(m\in L\Gamma\) and first verify that this is a measurable set. It is closed in the Effros-Marechal topology: If \(\mathcal{M}_{\alpha}\in\{\mathcal{M}\ni m\}\) with \(\mathcal{M}_{\alpha}\to\mathcal{M}\), then by Definition 2.1 the elements in \(\mathcal{M}\) are precisely the so\({}^{*}\)-limit points of elements in \(\mathcal{M}_{\alpha}\), in particular the constant sequence \(x_{\alpha}=m\) converges and thus \(m\in\mathcal{M}\). Assume there exists some \(g\in\Gamma\) such that \(\lambda(g)\notin\overline{\langle\mu\rangle}\). Then clearly, \(\mathrm{SA}(\overline{\langle\mu\rangle})\cap\{\mathcal{M}\ni\lambda(g)\}=\emptyset\). Since \(\mu(\mathrm{SA}(\overline{\langle\mu\rangle}))=1\) we thus obtain \(\mu(\{\mathcal{M}\ni\lambda(g)\})=0\). Therefore \(g\notin H_{0}\). Thus \(LH\subseteq\overline{\langle\mu\rangle}\).
_Remark 4.6_.: Let us recall that \(\Gamma\) is said to have the Invariant Subalgebra Rigidity property (ISR-property) if every \(\Gamma\)-invariant subalgebra \(\mathcal{M}\leq L(\Gamma)\) is of the form \(L(N)\) for some normal subgroup \(N\triangleleft\Gamma\). This property was studied in the context of higher rank lattices in [1, 1] and in the context of hyperbolic groups in [1, 2].
If \(\Gamma\) has the ISR-property, then the von Neumann algebra \(\overline{\langle\mu\rangle}\) is of the form \(L(N)\) for some normal subgroup \(N\) of \(\Gamma\) with \(N\geq H\) as in Remark 4.5 above.
### Closedness of amenable subalgebras
**Proposition 4.7**.: _Let \((\mathcal{M},\tau)\) be a finite von Neumann algebra with a separable predual. Then, the collection of amenable subalgebras of \(\mathcal{M}\) is closed in the Effros-Marechal topology._
Proof.: Let \(\mathcal{M}_{n}\) be a sequence of amenable subalgebras of \(\mathcal{M}\) such that \(\mathcal{M}_{n}\to\tilde{\mathcal{M}}\), and let \(\varphi_{\mathcal{M}_{n}}\) be \(\mathcal{M}_{n}\)-hyperstates on \(\mathbb{B}(L^{2}(\mathcal{M},\tau))\) such that \(\varphi_{\mathcal{M}_{n}}|_{\mathcal{M}}=\tau\); their existence is guaranteed by Proposition 2.4. We shall denote \(\varphi_{\mathcal{M}_{n}}\) by \(\varphi_{n}\) for ease of notation. Then, there exists a subsequence \((n_{k})\) and \(\varphi\in S(\mathbb{B}(L^{2}(\mathcal{M},\tau)))\) such that
\[\varphi_{n_{k}}\xrightarrow{\mathrm{weak}^{*}}\varphi.\]
We shall show that \(\varphi\in\mathrm{Hype}_{\tau}(\tilde{\mathcal{M}})\), and this shall complete the proof. Clearly, \(\varphi|_{\mathcal{M}}=\tau\). Let \(x\in\tilde{\mathcal{M}}\) be given and \(0\neq T\in\mathbb{B}(L^{2}(\mathcal{M},\tau))\) be fixed. It follows from Definition 2.1 that there exist \(x_{n}\in\mathcal{M}_{n}\) such that \(x_{n}\xrightarrow[n\to\infty]{\text{so}^{*}}x\). In particular, \(\|x_{n_{k}}-x\|_{\tau}\xrightarrow[k\to\infty]{}0\). Let \(\epsilon>0\) be given.
We can find \(k_{0}\in\mathbb{N}\) such that for all \(k,j\geq k_{0}\),
\[\|x_{n_{k}}-x\|_{\tau}<\frac{\epsilon}{6\|T\|}\text{ and }\|x_{n_{k}}-x_{n_{j}}\|_{ \tau}<\frac{\epsilon}{6\|T\|}.\]
Since \(\varphi_{n_{k}}\to\varphi\) in the weak\({}^{*}\)-topology, we can find \(j\geq k_{0}\) such that
\[\begin{split}\left\|\varphi_{n_{j}}\left(x_{n_{k_{0}}}T\right)- \varphi\left(x_{n_{k_{0}}}T\right)\right\|&<\frac{\epsilon}{6}, \\ \left\|\varphi_{n_{j}}\left(Tx_{n_{k_{0}}}\right)-\varphi\left(Tx_ {n_{k_{0}}}\right)\right\|&<\frac{\epsilon}{6}.\end{split} \tag{6}\]
Now, using the triangle inequality, we see that
\[\begin{split}&\|\varphi(xT-Tx)\|\\ &\leq\left\|\varphi(xT)-\varphi(x_{n_{k_{0}}}T)\right\|+\left\| \varphi(x_{n_{k_{0}}}T)-\varphi_{n_{j}}(x_{n_{k_{0}}}T)\right\|\\ &+\left\|\varphi_{n_{j}}(x_{n_{k_{0}}}T)-\varphi_{n_{j}}(x_{n_{j} }T)\right\|\\ &+\left\|\varphi_{n_{j}}(x_{n_{j}}T)-\varphi_{n_{j}}(Tx_{n_{j}}) \right\|+\left\|\varphi_{n_{j}}(Tx_{n_{j}})-\varphi_{n_{j}}(Tx_{n_{k_{0}}}) \right\|\\ &+\left\|\varphi_{n_{j}}(Tx_{n_{k_{0}}})-\varphi(Tx_{n_{k_{0}}}) \right\|+\left\|\varphi(Tx_{n_{k_{0}}})-\varphi(Tx)\right\|\end{split}\]
Using the Cauchy-Schwarz inequality, we observe that
\[\begin{split}\left\|\varphi(xT)-\varphi(x_{n_{k_{0}}}T)\right\|& =\left\|\varphi\left((x-x_{n_{k_{0}}})T\right)\right\|\\ &\leq\sqrt{\varphi\left((x-x_{n_{k_{0}}})(x^{*}-x_{n_{k_{0}}}^{*} )\right)}\sqrt{\varphi(T^{*}T)}\end{split}\]
Since \(x,x_{n_{k_{0}}}\in\mathcal{M}\) and \(\varphi|_{\mathcal{M}}=\tau\), we see that
\[\sqrt{\varphi\left((x-x_{n_{k_{0}}})(x^{*}-x_{n_{k_{0}}}^{*})\right)}=\|x-x_ {n_{k_{0}}}\|_{\tau}.\]
Moreover, \(\sqrt{\varphi(T^{*}T)}\leq\|T\|\). Hence,
\[\left\|\varphi(xT)-\varphi(x_{n_{k_{0}}}T)\right\|\leq\|x-x_{n_{k_{0}}}\|_{ \tau}\|T\|<\frac{\epsilon}{6}.\]
Similarly,
\[\begin{split}\left\|\varphi(Tx)-\varphi(Tx_{n_{k_{0}}})\right\|& \leq\sqrt{\varphi\left((x^{*}-x_{n_{k_{0}}}^{*})(x-x_{n_{k_{0}}}) \right)}\sqrt{\varphi(TT^{*})}\\ &\leq\|x-x_{n_{k_{0}}}\|_{\tau}\|T\|<\frac{\epsilon}{6}.\end{split}\]
Also, since \(\varphi_{n_{j}}|_{\mathcal{M}}=\tau\), we see that
\[\left\|\varphi_{n_{j}}(x_{n_{k_{0}}}T)-\varphi_{n_{j}}(x_{n_{j}}T)\right\|\]
\[\leq\sqrt{\varphi_{n_{j}}\left((x_{n_{k_{0}}}-x_{n_{j}})(x_{n_{k_{0}} }^{*}-x_{n_{j}}^{*})\right)}\sqrt{\varphi_{n_{j}}(T^{*}T)}\] \[\leq\|x_{n_{k_{0}}}-x_{n_{j}}\|_{\tau}\|T\|<\frac{\epsilon}{6}.\]
Since \(\varphi_{n_{j}}\) is a hyperstate for \(\mathcal{M}_{n_{j}}\), \(\varphi_{n_{j}}(x_{n_{j}}T)=\varphi_{n_{j}}(Tx_{n_{j}})\). Putting all of these together, we see that \(\|\varphi(xT-Tx)\|<\epsilon\) for every \(\epsilon>0\). As a result, \(\varphi\in\text{Hype}_{\tau}(\tilde{\mathcal{M}})\).
### Uppersemicontinuity of the Hype-map
Let \(\mathcal{M}\leq L(\Gamma)\) be an amenable von Neumann algebra. Then \(\text{Hype}(\mathcal{M})\), the collection of all \(\mathcal{M}\)-hyperstates \(\varphi_{\mathcal{M}}\) on \(\mathbb{B}(\ell^{2}(\Gamma))\), is a non-empty, weak\({}^{*}\)-compact, convex set. Let us now fix a metrizable and minimal \(\Gamma\)-space \(X\). Let
\[\text{Hyp}_{X}=\left\{\text{Hype}(\mathcal{M})|_{C(X)}:\mathcal{M}\text{ is an amenable subalgebra of }L(\Gamma)\right\}\]
We want to give a convex structure to \(\text{Hyp}_{X}\), and we do this by following [1]. Since \(X\) is metrizable and compact, \(C(X)\) is separable. Let \(\{f_{n}\}_{n=1}^{\infty}\) be a dense subset of the unit ball of \(C(X)\). For each \(n\in\mathbb{N}\), we define
\[f_{n}^{+}(\text{Hype}(\mathcal{M})|_{C(X)})=\sup_{\varphi\in\text{Hype}(\mathcal{M})}\operatorname{Re}\big(\varphi(f_{n})\big).\]
We can then define the map
\[f:\text{Hyp}_{X}\to\prod_{n}[0,1],\ \text{Hype}(\mathcal{M})|_{C(X)}\to \left(f_{n}^{+}\left(\text{Hype}(\mathcal{M})|_{C(X)}\right)\right)_{n\in \mathbb{N}} \tag{7}\]
Applying the Hahn-Banach separation theorem implies that the map \(f\) is injective. The topology on \(\text{Hyp}_{X}\) is then induced from the product topology on \(\prod_{n}[0,1]\). Moreover, the convex structure of \(\prod_{n}[0,1]\) carries over to \(\text{Hyp}_{X}\) as described in [1].
The action of \(\Gamma\) on \(\text{S}(\mathbb{B}(\ell^{2}(\Gamma)))\) is defined as
\[(g\cdot\phi)(x):=\phi(\lambda(g^{-1})x\lambda(g)),\ g\in\Gamma,\ \phi\in\text{S}(\mathbb{B}(\ell^{2}(\Gamma))),\ x\in\mathbb{B}(\ell^{2}(\Gamma)).\]
Let \(\text{Hype}_{\tau_{0}}(\mathcal{M})\) denote the collection of all those hyperstates whose restriction to \(L(\Gamma)\) is the canonical trace \(\tau_{0}\). Note that this set is non-empty thanks to Proposition 2.4.
**Lemma 4.8**.: _The map_
\[\text{Hyp}:\left\{\mathcal{M}\in\text{SA}(L(\Gamma))\ :\ \text{amenable} \right\}\longrightarrow\left\{\text{Hype}_{\tau_{0}}(\mathcal{M})\right\}\]
_is \(\Gamma\)-equivariant._
Proof.: To verify \(\Gamma\)-equivariance, it suffices to show that
\[g\cdot\operatorname{Hyp}(\mathcal{M})\subseteq\operatorname{Hyp}(g\cdot \mathcal{M}),\ \forall g\in\Gamma,\ \forall\mathcal{M}\in\operatorname{SA}(\mathcal{Z})\text{ amenable},\]
because the reverse inclusion follows by replacing \(\mathcal{M}\) with \(g^{-1}\cdot\mathcal{M}\). Let \(g\in\Gamma\), \(\mathcal{M}\in\operatorname{SA}(\mathcal{Z})\) amenable, and \(\phi\in\operatorname{Hyp}(\mathcal{M})\). We shall show that \(g\cdot\phi\in\operatorname{Hyp}(g\cdot\mathcal{M})\). Recall that \(g\cdot\phi(x):=\phi(\lambda(g^{-1})x\lambda(g))\) and \(g\cdot\mathcal{M}=\lambda(g)\mathcal{M}\lambda(g^{-1})\). For any \(m\in\mathcal{M}\) and \(T\in\mathbb{B}(\ell^{2}(\Gamma))\), we have
\[(g\cdot\phi)(\lambda(g)m\lambda(g^{-1})T) =\phi(\lambda(g^{-1})\lambda(g)m\lambda(g^{-1})T\lambda(g))\] \[=\phi(m\lambda(g^{-1})T\lambda(g))\] \[=\phi(\lambda(g^{-1})T\lambda(g)m)\] \[=\phi(\lambda(g^{-1})T\lambda(g)m\lambda(g^{-1})\lambda(g))\] \[=(g\cdot\phi)(T\lambda(g)m\lambda(g^{-1})).\]
Note that the last equality follows since \(\lambda(g^{-1})T\lambda(g)\in\mathbb{B}(\ell^{2}(\Gamma))\) and \(\phi\) is a hyperstate for \(\mathcal{M}\). Moreover, \((g\cdot\phi)_{|_{g\cdot\mathcal{M}}}=\phi_{|_{\mathcal{M}}}=\tau_{0}.\) Therefore \(g\cdot\phi\) is a hyperstate for \(g\cdot\mathcal{M}\), as was to be shown.
Let \(X\) be a metrizable minimal \(\Gamma\)-space.
**Proposition 4.9**.: _The map_
\[\text{Hyp}:\{\mathcal{M}\in\text{SA}(L(\Gamma))\,:\ \text{ amenable}\}\longrightarrow\big{\{}\text{Hyp}_{\tau_{0}}(\mathcal{M})|_{C(X)}\big{\}}\]
_is upper semi-continuous._
Proof.: Let \(\{f_{n}\}_{n}\) be a countable dense subset of \(C(X)\). It is enough to prove that \(\mathcal{M}\to f_{n}^{+}(\operatorname{Hyp}_{\tau_{0}}(\mathcal{M})|_{C(X)})\) is upper semi-continuous for each \(n\in\mathbb{N}\). Now, fix \(n_{0}\in\mathbb{N}\). We can find \(\varphi_{\mathcal{M}}\in\operatorname{Hyp}_{\tau_{0}}(\mathcal{M})\) such that
\[f_{n_{0}}^{+}(\operatorname{Hyp}_{\tau_{0}}(\mathcal{M})|_{C(X)})=\operatorname {Re}(\varphi_{\mathcal{M}}|_{C(X)}(f_{n_{0}}))\]
due to the compactness of \(\operatorname{Hyp}_{\tau_{0}}(\mathcal{M})\). Let \(\mathcal{M}_{n}\) be a sequence of amenable subalgebras of \(L(\Gamma)\) such that \(\mathcal{M}_{n}\to\mathcal{M}\), and let \(\varphi_{\mathcal{M}_{n}}\in\operatorname{Hyp}_{\tau_{0}}(\mathcal{M}_{n})\) be chosen as above for each \(n\in\mathbb{N}\). We shall denote \(\varphi_{\mathcal{M}_{n}}\) by \(\varphi_{n}\) for ease of notation. Then, there exists a subsequence \((n_{k})\) and \(\varphi\in S(\mathbb{B}(\ell^{2}(\Gamma)))\) such that
\[\varphi_{n_{k}}\xrightarrow{\text{weak}^{*}}\varphi.\]
It follows from Proposition 4.7 that \(\varphi\in\operatorname{Hyp}_{\tau_{0}}(\mathcal{M})\). Then, we see that
\[f_{n_{0}}^{+}(\operatorname{Hyp}_{\tau_{0}}(\mathcal{M}))\geq\operatorname{ Re}(\varphi|_{C(X)}(f_{n_{0}}))=\lim_{k}\operatorname{Re}(\varphi_{n_{k}}|_{C(X)}(f _{n_{0}}))\]
Hence, it follows that
\[f_{n_{0}}^{+}(\operatorname{Hype}_{\tau_{0}}(\mathcal{M}))\geq\limsup_{n}f_{n_{0} }^{+}(\operatorname{Hype}_{\tau_{0}}(\mathcal{M}_{n})),\]
thereby implying that the map \(\mathcal{M}\to f_{n}^{+}(\operatorname{Hype}_{\tau_{0}}(\mathcal{M})|_{C(X)})\) is upper semi-continuous.
### Amenable IRAs and amenable Radical
Proof of Theorem B.: Let \(\mu\) be an amenable IRA on \(L(\Gamma)\). We shall show that \(\mathcal{M}\leq L(\operatorname{Rad}(\Gamma))\) for \(\mu\)-a.e. \(\mathcal{M}\leq L(\Gamma)\). Let \(s\in\Gamma\setminus\operatorname{Rad}(\Gamma)\). Using [23, Proposition 7(ii)], we can find a metrizable \(\Gamma\)-boundary \(X_{s}\) such that \(s\not\in\operatorname{Ker}(\Gamma\curvearrowright X_{s})\). Consider the map
\[\operatorname{Hyp}:\left\{\mathcal{M}\in\operatorname{SA}(L(\Gamma))\,:\,\, \text{amenable}\right\}\longrightarrow\left\{\operatorname{Hype}_{\tau_{0}}( \mathcal{M})|_{C(X_{s})}\right\}.\]
By Lemma 4.8 and Proposition 4.9, this map is \(\Gamma\)-equivariant and measurable. We now push forward \(\mu\) to obtain an invariant probability measure on a subset of \(\mathcal{CC}(\operatorname{Prob}(X_{s}))\), the space of all compact, convex subsets of \(\operatorname{Prob}(X_{s})\cong S(C(X_{s}))\). Since \(\operatorname{Prob}(X_{s})\) is irreducible, it follows from [1, Lemma 2.3] that \((\operatorname{Hyp})_{*}\mu=\delta_{\operatorname{Prob}(X_{s})}\). Hence,
\[\text{for $\mu$-a.e $\mathcal{M}\in\operatorname{SA}(L(\Gamma))\,:\,\, \operatorname{Hyp}_{\tau_{0}}(\mathcal{M})|_{C(X_{s})}=\operatorname{Prob}(X_{ s})$}.\]
In particular, every \(\nu\in\operatorname{Prob}(X_{s})\) is \((\mathcal{M},\tau_{0})\)-invariant for \(\mu\)-a.e. \(\mathcal{M}\). Let \(V_{s}\subset\operatorname{SA}(L(\Gamma))\) be a co-null measurable set such that every \(\nu\in\operatorname{Prob}(X_{s})\) is \((\mathcal{M},\tau_{0})\)-invariant for every \(\mathcal{M}\in V_{s}\), i.e.,
\[\operatorname{Prob}(X_{s})=S_{\tau_{0}}^{\mathcal{M}}(C(X_{s})),\,\,\forall \mathcal{M}\in V_{s}.\]
Fix \(\mathcal{M}\in V_{s}\). Since \(s\not\in\operatorname{Ker}(\Gamma\curvearrowright X_{s})\), we can find an element \(x_{s}\in X_{s}\) such that \(sx_{s}\neq x_{s}\). Using Lemma 3.3, we see that
\[\tau_{0}(a\lambda(s)^{*})=0,\forall a\in\mathcal{M},\,\,\forall \mathcal{M}\in V_{s}.\]
Let \(V=\cap_{s\in\Gamma\setminus\operatorname{Rad}(\Gamma)}V_{s}\) be the measurable co-null set obtained by taking the countable intersection of the co-null measurable sets \(V_{s}\). Let \(\mathcal{M}\in V\). We see that \(\tau_{0}(a\lambda(s)^{*})=0\) for all \(a\in\mathcal{M}\) and for all \(s\in\Gamma\setminus\operatorname{Rad}(\Gamma)\). Consequently, \(\mathcal{M}\subset L(\operatorname{Rad}(\Gamma))\) for all \(\mathcal{M}\in V\). Therefore, \(\overline{\langle\mu\rangle}\subset L(\operatorname{Rad}(\Gamma))\). Since \(\left(L(\operatorname{Rad}(\Gamma)),\tau_{0}|_{\operatorname{Rad}(\Gamma)}\right)\) is a tracial von Neumann algebra, \(\overline{\langle\mu\rangle}\) is in the image of a conditional expectation \(\mathbb{E}_{\overline{\langle\mu\rangle}}:L(\operatorname{Rad}(\Gamma)) \rightarrow\overline{\langle\mu\rangle}\) (see [1, Theorem 9.1.2]). Since amenability passes to subalgebras in the image of a conditional expectation, it follows that \(\overline{\langle\mu\rangle}\) is amenable. |
2309.12283 | Performance Conditioning for Diffusion-Based Multi-Instrument Music
Synthesis | Generating multi-instrument music from symbolic music representations is an
important task in Music Information Retrieval (MIR). A central but still
largely unsolved problem in this context is musically and acoustically informed
control in the generation process. As the main contribution of this work, we
propose enhancing control of multi-instrument synthesis by conditioning a
generative model on a specific performance and recording environment, thus
allowing for better guidance of timbre and style. Building on state-of-the-art
diffusion-based music generative models, we introduce performance conditioning
- a simple tool indicating the generative model to synthesize music with style
and timbre of specific instruments taken from specific performances. Our
prototype is evaluated using uncurated performances with diverse
instrumentation and achieves state-of-the-art FAD realism scores while allowing
novel timbre and style control. Our project page, including samples and
demonstrations, is available at benadar293.github.io/midipm | Ben Maman, Johannes Zeitler, Meinard Müller, Amit H. Bermano | 2023-09-21T17:44:57Z | http://arxiv.org/abs/2309.12283v1 | # Performance Conditioning for Diffusion-Based Multi-Instrument Music Synthesis
###### Abstract
Generating multi-instrument music from symbolic music representations is an important task in Music Information Retrieval (MIR). A central but still largely unsolved problem in this context is musically and acoustically informed control in the generation process. As the main contribution of this work, we propose enhancing control of multi-instrument synthesis by conditioning a generative model on a specific performance and recording environment, thus allowing for better guidance of timbre and style. Building on state-of-the-art diffusion-based music generative models, we introduce _performance conditioning_ - a simple tool indicating the generative model to synthesize music with style and timbre of specific instruments taken from specific performances. Our prototype is evaluated using uncurated performances with diverse instrumentation and achieves state-of-the-art FAD realism scores while allowing novel timbre and style control. Our project page, including samples and demonstrations, is available at benadar293.github.io/midipm.
Ben Maman\({}^{\star}\), Johannes Zeitler\({}^{\dagger}\), Meinard Müller\({}^{\dagger}\), Amit H. Bermano\({}^{\star}\)
\({}^{\star}\) Tel Aviv University, \({}^{\dagger}\) International Audio Laboratories Erlangen, Germany
Keywords: Multi-Instrument Synthesis, Diffusion
Footnote †: The International Audio Laboratories Erlangen are a joint institution of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and the Fraunhofer Institute for Integrated Circuits IIS.
## 1 Introduction
Multi-Instrument Music Synthesis is the task of generating audio from MIDI files, emulating specific instruments played with desired notes and timbre. It is a novel task in Music Information Retrieval (MIR), attracting increasing attention in recent years, with applications in music creation and production for all proficiency levels. It comprises several challenges, a central one of which is control over the style of the synthesized pieces.
Since physically modeling and specifying the different sound phenomena (e.g., vibrato, intensity, echo, and specific instrument timbre) for generation is infeasible, current approaches avoid the flat and unrealistic sound of traditional concatenative synthesizers and naturally infer the aspects of generation from data, typically using denoising diffusion probabilistic models (DDPMs) [1, 2]. However, the subtle nuances of expression are often lost in the generation process, resulting in less realistic audio and in other phenomena such as instrument drift, where the same instrument is not rendered coherently in different parts of the generated piece.
In this work, we introduce a mechanism that specifically tackles style control: in addition to conditioning the model on notes and instruments, our synthesizer is also conditioned on the performance, enabling generation with performance-specific characteristics, such as timbre, style, and recording environment. This _performance conditioning_ not only increases realism but also allows the sound of, e.g., a specific guitar to be reproduced in a specific acoustic environment.
To give a concrete example, our approach enables reproducing the sound of the guitar in a 1975 recording of Segovia playing Albeniz's Capriccio Catalan, now playing another piece, such as Jobim's Felicidad. To the best of our knowledge, our work is the first to offer this capability in a multi-instrument setting.
Performance conditioning is integrated, like the iteration parameter \(t\) of the diffusion process, using FiLM layers [3], and a sampling-overlapping technique ensures smooth transitions between generated segments, together providing coherent generation of long sequences.
Through extensive evaluation with established score metrics [4], e.g., Frechet Audio Distance (FAD), we show that our concept enhances perceptual similarity to the desired performance while improving realism.
## 2 Related Work
Audio synthesis in current literature can be done auto-regressively, where models directly construct a waveform sample-by-sample [5, 6, 7]. Another approach, which we take, operates in the spectral domain. This requires a subsequent step to convert the generated spectral representation (STFT, mel) into a waveform, but it is computationally more efficient.
For a data-driven approach, large amounts of labeled data are required, i.e., paired datasets of audio recordings and their corresponding time-aligned transcriptions. While such datasets exist for instruments such as the piano thanks to special equipment (e.g., Diskavier), this is not the case for other instruments. Thus, previous works mainly focus on the generation of piano performances, monophonic (single-voice) music, or music produced by a concatenative synthesizer [1].
Table 1 provides a summary of existing methods. [8] uses a U-Net to synthesize solo violin, cello, and flute performances, requiring a separate model for each instrument. [9] uses a Transformer architecture to synthesize solo violin or piano. Both works produce only monophonic and single-instrument music (i.e., only one note played by a specific instrument is synthesized at any given time).
[10] learns a parametric model of a musical performance, synthesizing from controls such as intensity, vibrato, etc. Although promising, the main drawback is the requirement of elaborate performance controls at inference time (vibrato etc.), which requires much effort even from a skilled musician. Instead, we rely on the diffusion model to generate such aspects of expression. In addition, [10] only operate on monophonic single-instrument data, similar to former works, due to a lack of high-quality polyphonic training data.
In multi-instrument synthesis, Hawthorne et al. [1] use a T5 Transformer-based diffusion model. While this method is promising and produces high-fidelity audio, it has several limitations: It does
not have control over the performance characteristics and style (e.g., type of organ, recording environment) and produces less realistic sound, as shown on our supplemental website with audio examples at benadar293.github.io/midipm.
## 3 Method
An overview of our method is depicted in Figure 1. We seek to enhance the generation quality and control of an off-the-shelf diffusion-based music generator using performance conditioning. Hence, starting from a dataset \(\mathcal{D}=\{(a_{i},m_{i},p_{i})\}_{i=1}^{N}\), comprising audio performances \(a_{i}\), their symbolic MIDI annotation \(m_{i}\), and information regarding the identity of the performing ensemble and recording environment \(p_{i}\), we train a music synthesizer using state-of-the-art architectures (see Section 3.1), infused with performance conditioning.
As previously mentioned, we choose to operate in the spectral domain, using the mel-spectogram representation, mainly for computational purposes. We postulate our method can be adapted to larger spectral representations (e.g., STFT), or the waveform domain, at significantly higher computational costs. To convert mel-spectrograms into audio, we rely on the state-of-the-art GAN-based Soundstream vocoder [12] (which is the same vocoder used by Hawthorne et al. [1]). We represent a performance condition ID \(p_{i}\) as an integer number, where recordings performed by the same ensemble in the same recording environment are assigned the same number. Each condition can represent a single recording of a few minutes (e.g., Segovia playing Albeniz's Capriccio Catalan on the guitar) in the training set, or a set of recordings of several hours (e.g., of Beethoven's concertos for piano and orchestra performed by Mitsuko Uchida and The Orchestra of The Bavarian Radio). Performance conditioning is implemented using FiLM layers [3], a popular technique used to condition the denoising diffusion process on the iteration parameter \(t\) (see Section 3.2 for more details).
During training, the synthesizer is trained to reconstruct performances, based on their MIDI representation and corresponding performance condition ID. During inference, new unseen MIDI performances are combined with different performance conditions that correspond to the performances seen during training to yield note control from the MIDI representation, and style and timbre from the performance conditioning.
Finally, to generate longer sequences and ensure their seamless transitions between generated intervals, we adapt an overlapping technique, borrowed from visual generation (Section 3.3).
### Architecture
We experiment with two architectures: A U-Net originally used for images [13], and a T5 Transformer [1].
We adapt the **U-Net** to spectrogram synthesis by using 1D convolution, attention, and group normalization rather than 2D, regarding the frequencies as channels. This is more expressive than spatial 2D convolutions, as it captures interactions between distant frequencies, inherent in spectrograms (partial frequencies). It is a common practice in spectrogram synthesis (e.g. [8] use a 1D U-Net without diffusion).
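As a rough sketch of this design choice (our own illustrative code, not the authors' implementation), a 1D convolutional block that treats the mel bins as channels might look as follows; every kernel then mixes all frequencies at each time step:

```python
import torch
import torch.nn as nn

class MelResBlock1D(nn.Module):
    """Toy residual block on mel-spectrograms of shape (batch, n_mels, frames):
    frequencies act as channels, so each 1D convolution couples distant partials."""
    def __init__(self, n_mels=128, hidden=256):
        super().__init__()
        self.block = nn.Sequential(
            nn.GroupNorm(8, n_mels),
            nn.SiLU(),
            nn.Conv1d(n_mels, hidden, kernel_size=3, padding=1),
            nn.GroupNorm(8, hidden),
            nn.SiLU(),
            nn.Conv1d(hidden, n_mels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.block(x)

# Example: y = MelResBlock1D()(torch.randn(4, 128, 256))  # output has the input's shape
```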
The **T5**, borrowed from Hawthorne et al. [1], comprises a transformer encoder, encoding the MIDI condition, and a transformer decoder, processing the noise itself. It receives the encoded MIDI through cross-attention layers.
As can be seen in Section 4, the SOTA transformer-based architecture indeed achieves slightly better results, but the U-Net architecture requires significantly less training time.
### Conditioning
For **Performance Conditioning**, we apply FiLM layers. FiLM layers [3] apply learned conditional affine transformations on network features. They are suitable for multi-modal cases, where a model learns many similar tasks simultaneously. They enable the different tasks to share most parameters while maintaining flexibility.
We apply them by predicting an affine transformation for each block of the network (T5 or U-Net), using MLPs. For the U-Net and T5 decoder, we concatenate the diffusion timestep representation with the performance ID representation. We observed that also conditioning the T5 note encoder on performance using FiLM layers produced better results than conditioning the T5 decoder alone.
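As a concrete illustration, the following minimal PyTorch-style sketch (our own; module and parameter names such as `PerformanceFiLM` are illustrative and not taken from the paper) shows how a FiLM layer can modulate a block's features from the concatenated timestep and performance-ID embeddings:

```python
import torch
import torch.nn as nn

class PerformanceFiLM(nn.Module):
    """Predicts a per-channel scale (gamma) and shift (beta) from the
    concatenated diffusion-timestep embedding and performance-ID embedding."""
    def __init__(self, num_performances, cond_dim, num_channels):
        super().__init__()
        self.perf_embed = nn.Embedding(num_performances, cond_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * cond_dim, 4 * cond_dim),
            nn.SiLU(),
            nn.Linear(4 * cond_dim, 2 * num_channels),
        )

    def forward(self, h, t_embed, perf_id):
        # h: (batch, channels, frames) features of one U-Net / T5 block
        cond = torch.cat([t_embed, self.perf_embed(perf_id)], dim=-1)
        gamma, beta = self.mlp(cond).chunk(2, dim=-1)
        # broadcast the learned affine transformation over the time axis
        return gamma.unsqueeze(-1) * h + beta.unsqueeze(-1)
```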
For **Note Conditioning**, we insert the conditioning notes into the encoder in the T5 model, and into the input layer in the U-Net. The note input contains the information of note _onset_ (beginning time), _instrument_, and approximate _duration_.
### Temporal Coherency & Smooth Transitions
We generate long performances of several minutes, by segments of \(\sim\)5 seconds each (dictated by memory constraints). For smooth transitions between segments, we generate the segments with short overlaps, smoothly interpolating between consecutive segments, in each step of the sampling process. We use an interpolation coefficient linearly decreasing from \(1\) to \(0\) along the overlap, in the predicted sample \(x_{0}=\frac{x_{t}-\sigma_{t}\epsilon}{\alpha_{t}}\) of each step, derived from the predicted noise \(\epsilon\). Borrowed from motion generation [14, 15], this is an effective and convenient approach, performed solely at the sampling stage, and requiring no additional training components, contrary to [1].
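A minimal NumPy sketch of the blending step (our own illustration; the array shapes and the function name are assumptions) applied to the predicted \(x_{0}\) of two consecutive segments at a given sampling step:

```python
import numpy as np

def blend_overlap(x0_prev, x0_next, overlap):
    """Blend two consecutive segment predictions of shape (n_mels, frames)
    over `overlap` frames, with a weight on the previous segment that
    decreases linearly from 1 to 0 along the overlap."""
    w = np.linspace(1.0, 0.0, overlap)
    blended = w * x0_prev[:, -overlap:] + (1.0 - w) * x0_next[:, :overlap]
    x0_prev, x0_next = x0_prev.copy(), x0_next.copy()
    x0_prev[:, -overlap:] = blended
    x0_next[:, :overlap] = blended
    return x0_prev, x0_next
```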
### Diffusion Process
We use a cosine noise schedule, with \(T=1000\) training steps, and \(T=250\) sampling steps. We use **Classifier-Free Guidance (CFG)** to control the conditioning strength, for both performance and notes - we train with condition dropout of probability \(0.1\), for notes and performance independently, and sample with guidance weights of \(1.25\) for both, which gave the best overall results after a parameter search.
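One plausible way to combine the two guidance signals at sampling time is sketched below (our own schematic; the paper does not spell out the exact composition, and `model` as well as the convention of `None` marking a dropped condition are assumptions):

```python
def cfg_noise(model, x_t, t, notes, perf, w_notes=1.25, w_perf=1.25):
    """Schematic classifier-free guidance over note and performance conditions.
    model(x_t, t, notes, perf) predicts the noise; during training each
    condition is independently replaced by None with probability 0.1."""
    eps_uncond = model(x_t, t, None, None)
    eps_notes = model(x_t, t, notes, None)
    eps_full = model(x_t, t, notes, perf)
    return (eps_uncond
            + w_notes * (eps_notes - eps_uncond)
            + w_perf * (eps_full - eps_notes))
```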
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline & **Multi** & **Perf.** & **Symph.** & **Data** & **Real\%** \\ \hline [7] & ✗ & ✓ & ✗ & \(\sim\)140H & 100\% \\ \hline [8] & ✗ & ✗ & ✗ & \(\sim\)1H & 100\% \\ \hline [11] & ✗ & ✗ & ✗ & \(\sim\)1H & 0\% \\ \hline [9] & ✗ & ✗ & ✗ & \(\sim\)1H & 100\% \\ \hline [10] & ✗ & ✗ & ✗ & \(\sim\)3H & 100\% \\ \hline [1] & ✓ & ✗ & ✗ & \(\sim\)1500H & \(\sim 2\%\) \\ \hline Ours & ✓ & ✓ & ✓ & \(\sim\)58H & 100\% \\ \hline \end{tabular}
\end{table}
Table 1: Overview of previous work indicating the ability to render multiple instruments simultaneously (**Multi**), reproducing specific performance style (**Perf.**), generating orchestral symphonies (**Symph.**), **Data** size, and ratio of real vs. synthetic data used for training (**Real%**).
## 4 Experiments and Evaluation
In the following section, we first give an overview of the datasets used. Next, we present experimental results showing the potential of our proposed method in correctly synthesizing MIDI files, as well as the effect of performance conditioning on reproducing a desired timbre and style. Lastly, we briefly describe the listening examples which can be found on our supplemental website.
### Datasets
We train on 197 performances of western classical music, comprising 19 instruments (including symphonies, chamber music, solo and other instrumentations), totaling 58:06:07 hours. The data consists of performances from YouTube [16] and Musopen [17], with corresponding MIDI transcriptions from www.kunstderfuge.com, aligned as proposed by Maman and Bermano [18]. Following [18], we augment the data by pitch-shifting up to \(\pm 2\) semitones. We label the data with performance IDs by assigning numerical indices to the different performances, where typically the same index is given to an entire set of recordings (e.g., a CD box with Beethoven's Piano Trios recorded by the same ensemble in the same studio).
We evaluate our models with 58 MIDI performances of western classical pieces of a total duration of 5:09:30 hours, none of which appear in the train set, but containing the same instruments. For each test MIDI, we randomly sample 3 conditioning performances for synthesis. For example, the test MIDI can be of Mozart's 40th symphony and the condition can be the performance of the Berlin Philharmonic Orchestra playing Brahms' Haydn Variations. See our website for the complete ensemble distribution of the data.
### Frechet Audio Distance (FAD)
To evaluate the fidelity and resemblance of our generated performances to the conditioning performances, we use the **Frechet Audio Distance (FAD)**[4] - a perceptual score with origins in computer vision [19]. FAD is based on large DNN-based models such as TRILL [20], trained on large real-world datasets to predict embedding vectors from snippets of input audio. The assumption is that perceptually similar audio snippets yield closely spaced embedding vectors. To compute FAD between two datasets (e.g., a set \(\mathcal{D}_{1}\) of MIDI files synthesized with a specific performance condition, and a set \(\mathcal{D}_{2}\) of real recordings of the same performance), the mean vectors \(\mu_{1,2}\) and the covariance matrices \(\Sigma_{1,2}\) are computed over all embedding vectors generated from the respective datasets. The FAD is then defined as:
\[\mathrm{FAD}(\mathcal{D}_{1},\mathcal{D}_{2})=\left|\mu_{1}-\mu_{2}\right|^{2 }+\mathrm{tr}\left(\Sigma_{1}+\Sigma_{2}-2(\Sigma_{1}\Sigma_{2})^{1/2}\right)\,.\]
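Given two sets of embedding vectors, the score can be computed directly from this formula; a minimal NumPy/SciPy sketch (our own, assuming row-wise embedding arrays):

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_audio_distance(emb1, emb2):
    """FAD between two embedding sets of shape (num_snippets, dim)."""
    mu1, mu2 = emb1.mean(axis=0), emb2.mean(axis=0)
    sigma1 = np.cov(emb1, rowvar=False)
    sigma2 = np.cov(emb2, rowvar=False)
    covmean = np.real(sqrtm(sigma1 @ sigma2))  # drop tiny imaginary parts from numerics
    return float(np.sum((mu1 - mu2) ** 2)
                 + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```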
Kilgour et al. [4] show that FAD correlates with human perception and that increasing distortions increase the FAD. We use two models as backbones for FAD, also used by Hawthorne et al. [1]: TRILL [20] (5.9 embeddings/sec.), and VGGish [21] (1 embedding/sec.). We measure FAD in two ways, differing in the choice of the compared datasets: **All** (Section 4.2.1), and **Group** (Sect. 4.2.2).
#### 4.2.1 All-FAD - General Quality
To measure quality and fidelity, we measure FAD comparing the entire evaluation set to the entire train set. This measures the general similarity of the synthesized performances to real performances, rather than resemblance to a specific performance. We refer to this metric as **All-FAD** (Table 2). While this metric is important to measure fidelity, the main metric for evaluating performance conditioning is the **Group-FAD** (Section 4.2.2, Table 3).
#### 4.2.2 Group-FAD - Resemblance to Target Performance
Figure 2 shows a t-SNE visualization of the train set's TRILL embedding distribution. Each point represents the mean embedding of an audio track (e.g., a movement in a symphony), and each color represents a recording, comprising multiple such tracks. It can be seen that tracks of the same performance form close clusters.
Following this insight, we define the **Group-FAD** metric (Table 3): To measure how well our synthesized performances resemble the target conditioning performance in timbre, room acoustics, etc., we compute FAD comparing each performance synthesized with a performance condition \(p\), to the subset of the train set corresponding to \(p\). Furthermore, we use this metric to classify our generated performances, according to Group-FAD-nearest over all training performances.
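The classification step can then be expressed as a nearest-neighbor search over training performances; a small sketch reusing the `frechet_audio_distance` function above (the function name and dictionary layout are our own assumptions):

```python
def classify_by_group_fad(gen_embs, train_groups):
    """Return the ID of the training performance whose embeddings are closest
    (in FAD) to the embeddings of a generated performance.
    train_groups: dict mapping performance ID -> embedding array."""
    return min(train_groups,
               key=lambda pid: frechet_audio_distance(gen_embs, train_groups[pid]))
```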
Figure 1: Overview of our proposed diffusion-based synthesis model. Performance conditioning (determining the style, recording environment, and specific timbre) is done through FiLM layers at each block, which can be applied both to a T5 transformer and to a U-Net. The performance condition ID is inserted at each layer by concatenation with the diffusion timestep.
### Correct Reproduction of Notes and Instruments
To evaluate whether our models render the MIDI files correctly, i.e., whether the generated audio files contain the right notes at the right time, played by the correct instruments, we use **transcription** metrics: We measure the transcription accuracy of the synthesized performances using a transcriber (as in [7, 18]) trained on the same data as the synthesizer. We compare the note events specified by the input MIDI to the transcription of the synthesized performance and measure the F1 score for **note** (pitch + onset within 50ms) and **note-with-instrument** (also correct instrument).
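A simplified sketch of the note-level matching (our own greedy implementation; the actual evaluation may differ in details such as the matching strategy):

```python
def note_f1(ref_notes, est_notes, onset_tol=0.05):
    """Greedy note-level F1: a reference (pitch, onset) is matched if an unused
    estimated note has the same pitch and an onset within onset_tol seconds.
    For the note-with-instrument score, include the instrument in the key."""
    used, hits = set(), 0
    for pitch, onset in ref_notes:
        for j, (p, o) in enumerate(est_notes):
            if j not in used and p == pitch and abs(o - onset) <= onset_tol:
                used.add(j)
                hits += 1
                break
    precision = hits / max(len(est_notes), 1)
    recall = hits / max(len(ref_notes), 1)
    return 2 * precision * recall / max(precision + recall, 1e-9)
```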
### Results
Results are reported in Tables 2 (All-FAD and transcription), and 3 (Group-FAD). Note that for FAD, the smaller the value, the greater the similarity (as desired). For the T5 model, for example, in Table 2, the TRILL-based All-FAD is 0.12 without performance conditioning and improves to 0.09 when performance conditioning is used. Similar tendencies (with the exception of U-Net and VGGish) can also be observed for the other models and the VGGish-based FAD.
Next, we look at the transcription accuracy, indicating whether the synthesized performances actually realize the notes specified by the input MIDI files. As shown in Table 2, the T5 model reaches an accuracy of 63\(\%\) (note-level), which is of reasonable magnitude when considering the complexity of highly polyphonic orchestral music. This is slightly higher than w/o performance conditioning; however, the more significant impact of performance conditioning on transcription is on the note-with-instrument level, which reaches 47\(\%\) (w/) compared to 36\(\%\) (w/o), indicating that instrument identity is better preserved with performance conditioning. For the U-Net, performance conditioning does not significantly impact transcription.
With the All-FAD and transcription metrics, we discussed the quality of our model's generated performances, in terms of general similarity to real performances, and producing the desired notes. We now discuss Group-FAD, to evaluate performance conditioning, which is the main focus of our work. It can be seen in Table 3 that performance conditioning consistently improves Group-FAD. For example, the VGGish-based Group-FAD drops from 7.1 (w/o) to 5.1 (w/) for the U-Net.
To get another perspective on the potential of our conditioning strategy, we performed a classification experiment. For each test performance synthesized with a conditioning performance \(p\), we search for the performance in the train set that is its nearest neighbor in terms of Group-FAD. As shown in Table 3, the top-1 classification accuracy is only 36\(\%\) for the model T5 without conditioning. When using conditioning, it increases dramatically to 68\(\%\). Similar improvements can also be observed for the U-Net, and when looking at top-3 accuracy values. These results suggest that conditioning helps adapt to the specific timbre and room acoustics of a performance.
When comparing the actual FAD values in Tables 2 and 3, one can see that the All-FAD values are lower than the Group-FAD values. We attribute this to the fact that the mean vectors and covariance matrices for All-FAD are computed over significantly larger evaluation and reference datasets than for Group-FAD and therefore exhibit smaller statistical fluctuations, resulting in an overall lower FAD score.
While the quantitative analysis in this section indicates that performance conditioning indeed improves perceptual similarity in music synthesis, it cannot replace listening to the actual generated samples. Therefore, in order to assess the potential of performance conditioning in the context of diffusion-based music synthesis, we strongly encourage the reader to listen to the samples provided on our supplemental website (benadar293.github.io/midipm). We provide comparisons of MIDI files sonified with a simple concatenative synthesizer, the baseline approach [1], and our proposed method. Furthermore, we show the conditioning effect when synthesizing the same MIDI file with a variety of different performance conditions. Among others, we render Bach's 8th Invention on eight different harpsichords, and Beethoven's Pastoral Symphony with four different orchestras and recording environments.
## 5 Discussion
We presented a framework for training neural synthesizers on real performances, using diffusion models conditioned on notes and a performing style. We demonstrated that the latter condition both improves realism of multi-instrument performances of classical music, including orchestral symphonies, and adapts to the specific characteristics of a given performance, such as timbre and recording environment. Important future work includes extension to other genres, such as jazz, ethnic, pop music, and even human singing. Another important direction is exploring other spectral domains, such as STFT, CQT, etc. Yet another direction involves human speech - we believe a unified diffusion-based framework for music and speech is possible, by providing additional textual or phonemic conditions.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{} & \multicolumn{4}{c|}{All-FAD\(\downarrow\)} & \multicolumn{2}{c|}{Transcription} \\ \cline{2-7} & \multicolumn{2}{c|}{VGGish} & \multicolumn{2}{c|}{TRILL} & \multicolumn{2}{c|}{N/N+I\(\uparrow\)} \\ \hline P Con. & w/o & w/ & w/o & w/ & w/o & w/ \\ \hline T5 & 3.9 & 3.5 & 0.12 & **0.09** & 62/38\(\%\) & **63/47\(\%\)** \\ U-Net & **3.4** & 3.9 & 0.12 & 0.11 & 63/47\(\%\) & 62/46\(\%\) \\ \hline \end{tabular}
\end{table}
Table 2: Results for All-FAD, and transcription accuracy for note (N) and note-with-instrument (N+I). For each metric, the best result is bold, and the next-best is underlined.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{} & \multicolumn{4}{c|}{Group-FAD\(\downarrow\)} & \multicolumn{2}{c|}{Perf. Acc.\(\%\)} \\ \cline{2-7} & \multicolumn{2}{c|}{VGGish} & \multicolumn{2}{c|}{TRILL} & \multicolumn{2}{c|}{Top-1/3\(\uparrow\)} \\ \hline P Con. & w/o & w/ & w/o & w/ & w/o & w/ \\ \hline T5 & 5.8 & 5.5 & 0.43 & 0.35 & 36/60\(\%\) & **68/90\(\%\)** \\ U-Net & 7.1 & **5.1** & 0.5 & **0.33** & 14/30\(\%\) & 56/73\(\%\) \\ \hline \end{tabular}
\end{table}
Table 3: Results for Group-FAD, and performance classification accuracy with TRILL FAD-nearest (Acc.%). The best result in each metric is bold, and the next-best is underlined.
Figure 2: T-SNE Visualization of the TRILL embedding space. Points represent audio tracks, and colors represent complete recordings of specific performance IDs, comprising multiple such tracks. |
2309.14194 | Uniqueness of solutions to some classes of anisotropic and isotropic
curvature problems | In this paper, we apply various methods to establish the uniqueness of
solutions to some classes of anisotropic and isotropic curvature problems.
Firstly, by employing integral formulas derived by S. S. Chern \cite{Ch59}, we
obtain the uniqueness of smooth admissible solutions to a class of
Orlicz-(Christoffel)-Minkowski problems. Secondly, inspired by Simon's
uniqueness result \cite{Si67}, we then prove that the only smooth strictly
convex solution to the following isotropic curvature problem
\begin{equation}\label{ab-1}
\left(\frac{P_k(W)}{P_l(W)}\right)^{\frac{1}{k-l}}=\psi(u,r)\quad \text{on}\
\mathbb{S}^n \end{equation} must be an origin-centred sphere, where
$W=(\nabla^2 u+u g_0)$, $\partial_1\psi\ge 0,\partial_2\psi\ge 0$ and at least
one of these inequalities is strict. As an application, we establish the
uniqueness of solutions to the isotropic Gaussian-Minkowski problem. Finally,
we derive the uniqueness result for the following isotropic $L_p$ dual
Minkowski problem \begin{equation}\label{ab-2}
u^{1-p} r^{q-n-1}\det(W)=1\quad \text{on}\ \mathbb{S}^n, \end{equation} where
$-n-1<p\le -1$ and $n+1\le q\le
n+\frac{1}{2}+\sqrt{\frac{1}{4}-\frac{(1+p)(n+1+p)}{n(n+2)}}$. This result
utilizes the method developed by Ivaki and Milman \cite{IM23} and generalizes a
result due to Brendle, Choi and Daskalopoulos \cite{BCD17}. | Haizhong Li, Yao Wan | 2023-09-25T14:53:45Z | http://arxiv.org/abs/2309.14194v2 | # Uniqueness of solutions to some classes of anisotropic and isotropic curvature problems
###### Abstract.
In this paper, we apply various methods to establish the uniqueness of solutions to some classes of anisotropic and isotropic curvature problems. Firstly, by employing integral formulas derived by S. S. Chern [20], we obtain the uniqueness of smooth admissible solutions to a class of Orlicz-(Christoffel)-Minkowski problems. Secondly, inspired by Simon's uniqueness result [60], we then prove that the only smooth strictly convex solution to the following isotropic curvature problem
\[\left(\frac{P_{k}(W)}{P_{l}(W)}\right)^{\frac{1}{k-l}}=\psi(u,r)\quad\text{on } \mathbb{S}^{n} \tag{0.1}\]
must be an origin-centred sphere, where \(W=(\nabla^{2}u+ug_{0})\), \(\partial_{1}\psi\geq 0,\partial_{2}\psi\geq 0\) and at least one of these inequalities is strict. As an application, we establish the uniqueness of solutions to the isotropic Gaussian-Minkowski problem. Finally, we derive the uniqueness result for the following isotropic \(L_{p}\) dual Minkowski problem
\[u^{1-p}rq^{q-n-1}\det(W)=1\quad\text{on }\mathbb{S}^{n}, \tag{0.2}\]
where \(-n-1<p\leq-1\) and \(n+1\leq q\leq n+\frac{1}{2}+\sqrt{\frac{1}{4}-\frac{(1+p)(n+1+p)}{n(n+2)}}\). This result utilizes the method developed by Ivaki and Milman [41] and generalizes a result due to Brendle, Choi and Daskalopoulos [10].
Key words and phrases: Uniqueness, \(L_{p}\) dual Minkowski problem, Orlicz-Minkowski problem, Orlicz-Christoffel-Minkowski problem, anisotropic curvature problem, isotropic curvature problem
###### Contents
* 1 Introduction
* 2 Preliminaries
[MISSING_PAGE_POST]
where \(\tilde{\nu}\) is the unit outer normal vector of \(\tilde{M}_{t}=\tilde{X}(\mathbb{S}^{n},t)\). We call \(\tilde{X}:\mathbb{S}^{n}\to\mathbb{R}^{n+1}\) a _homothetic self-similar solution_ to the flow (1.6) if it satisfies
\[c\Psi(u,r)\left(\frac{P_{k}}{P_{l}}\right)^{\frac{1}{k-l}}(W)=1\quad\text{on }\mathbb{S}^{n}, \tag{1.7}\]
where \(c\) is a positive constant, \(u\) is the support function and \(r\) is the radial function of \(\tilde{M}=\tilde{X}(\mathbb{S}^{n})\). When \(l=0\) and \(\partial_{2}\psi=0\), (1.7) reduces to the isotropic case of (1.2). Moreover, there are two important cases for (1.7): First, when \(l=0,\ k=n\) and \(\Psi(s,t)=s^{1-p}t^{q-n-1}\), (1.7) becomes the isotropic \(L_{p}\) dual Minkowski problem, corresponding to the following equation
\[u^{1-p}r^{q-n-1}\det(W)=1\quad\text{on }\mathbb{S}^{n}. \tag{1.8}\]
Second, when \(l=0,\ k=n\) and \(\Psi(s,t)=s^{1-p}e^{-\frac{t^{2}}{2}}\), (1.7) becomes the isotropic \(L_{p}\) Gaussian-Minkowski problem, corresponding to the following equation
\[u^{1-p}e^{-\frac{r^{2}}{2}}\det(W)=1\quad\text{on }\mathbb{S}^{n}. \tag{1.9}\]
We are also concerned with the uniqueness of solutions to isotropic curvature problems. When \(\Psi(u,r)=u^{\alpha}\), we refer to [17, 23, 24] regarding the uniqueness of solutions to (1.7).
### Uniqueness of solutions to Orlicz-(Christoffel)-Minkowski problems
In this paper, we always assume that all hypersurfaces \(M^{n}\) are smooth closed hypersurfaces in \(\mathbb{R}^{n+1}\) with strictly positive Gauss curvature that bound a convex body \(K\) containing the origin in its interior. Denote by \(u\) the support function of \(K\), and by \(r\) the radial function of \(K\). We write the Hessian matrix of \(u\) with respect to \((\mathbb{S}^{n},g_{0},\nabla)\) as \(W=(\nabla^{2}u+ug_{0})\).
Let \(\varphi:(0,\infty)\to(0,\infty)\) be a continuous function. We consider the following Orlicz-Christoffel-Minkowski problem:
\[\varphi(u)P_{k}(W)=f\quad\text{on }\mathbb{S}^{n}. \tag{1.10}\]
A solution \(u\) to (1.10) is called _admissible_ if \(W=(\nabla^{2}u+ug_{0})\in\Gamma_{k}\), where \(\operatorname{Sym}(n)\) is the space of \(n\times n\) symmetric matrices and \(\Gamma_{k}=\{A\in\operatorname{Sym}(n):\ P_{i}(A)>0,\ i=1,\dots,k\}\).
Using integral formulas, S. S. Chern [20] proved the classical Alexandrov-Fenchel-Jessen theorem, which establishes the uniqueness of solutions to (1.10) with \(\varphi\equiv 1\). Furthermore, we obtain the following uniqueness result by utilizing integral formulas.
**Theorem 1.1**.: _Let \(2\leq k\leq n\) be an integer. Let \(\varphi:(0,\infty)\to(0,\infty)\) be a continuous function that satisfies_
\[(s\varphi^{\frac{1}{k}}(s)-t\varphi^{\frac{1}{k}}(t))(\varphi^{\frac{k-1}{k}} (s)-\varphi^{\frac{k-1}{k}}(t))<0, \tag{1.11}\]
_for any \(t>s>0\). Then, for any positive function \(f\in C(\mathbb{S}^{n})\), the smooth admissible solution to (1.10) is unique._
In particular, when considering the \(L_{p}\) Christoffel-Minkowski problem (1.5), the uniqueness of admissible solutions was established by Guan-Ma-Trudinger-Zhu [28] for \(p=1\), by Hu-Ma-Shen [33] for \(p\geq k+1\), and Guan-Xia [29] for \(1<p<k+1\). In the isotropic case, Chen [16] proved the uniqueness of strictly convex solutions to (1.5) with \(f\equiv 1\) for \(1>p>1-k\), while McCoy [52] established the uniqueness for \(p=1-k\). As an application of Theorem 1.1, by choosing \(\varphi(s)=s^{1-p}\), we provide a new proof for the following uniqueness result.
**Corollary 1.1** ([29]).: _Let \(2\leq k\leq n\) be an integer. Suppose \(1<p<k+1\) is a real number. Then, for any positive function \(f\in C(\mathbb{S}^{n})\), the smooth admissible solution to (1.5) is unique._
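For the choice \(\varphi(s)=s^{1-p}\), condition (1.11) can be checked directly: for any \(t>s>0\),
\[s\varphi^{\frac{1}{k}}(s)-t\varphi^{\frac{1}{k}}(t)=s^{\frac{k+1-p}{k}}-t^{\frac{k+1-p}{k}}<0\iff p<k+1,\qquad\varphi^{\frac{k-1}{k}}(s)-\varphi^{\frac{k-1}{k}}(t)=s^{\frac{(1-p)(k-1)}{k}}-t^{\frac{(1-p)(k-1)}{k}}>0\iff p>1,\]
since \(k\geq 2\); hence the product in (1.11) is negative for all \(t>s>0\) precisely when \(1<p<k+1\).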
Here are some more examples of functions that satisfy the condition (1.11):
1. \(\varphi(s)=\mu s^{\alpha}+s^{\beta}\), where \(0\geq\alpha>\beta>-k\) and \(\mu\geq 0\).
2. \(\varphi(s)=(\mu s^{\alpha}+s^{\beta})^{-1}\), where \(0\leq\alpha<\beta<k\) and \(\mu\geq 0\).
3. \(\varphi(s)=e^{-s}\) with the assumption \(0<u<k\).
4. \(\varphi(s)=s^{1-p_{1}}\), if \(0<s\leq 1\); \(\varphi(s)=s^{1-p_{2}}\), if \(s>1\), where \(1<p_{1},p_{2}<k+1\).
### Uniqueness of solutions to a class of isotropic curvature problems
Let \(\psi:(0,\infty)\times(0,\infty)\to(0,\infty)\) be a continuous function, and \(l,k\) be two integers with \(0\leq l<k\leq n\). We consider the following isotropic curvature problem:
\[\left(\frac{P_{k}(W)}{P_{l}(W)}\right)^{\frac{1}{k-l}}=\psi(u,r)\quad\text{on }\mathbb{S}^{n}. \tag{1.12}\]
After giving new integral formulas involving the radial function, we obtain the following theorem:
**Theorem 1.2**.: _Let \(n\geq 2\) and \(0\leq l<k\leq n\). Suppose \(\psi:(0,\infty)\times(0,\infty)\to(0,\infty)\) is a continuous function, and there exists a \(C^{1}\) decreasing function \(\eta:(0,\infty)\to(0,\infty)\) satisfying_
\[\left(s\psi(1,1)-\psi(s,t)\right)\left(\eta(1)\psi^{k-l-1}(1,1)-\eta(t)\psi^{k -l-1}(s,t)\right)\leq 0, \tag{1.13}\]
_for any \(t>s>0\). Then the smooth strictly convex solution \(M\) to (1.12) must be a sphere. Moreover, if in addition \(\eta^{\prime}(t)<0\) for any \(t>0\), then \(M\) is an origin-centred sphere._
Let \(\psi(u,r)=u^{\alpha}\phi(r)\), we consider the following equation
\[\left(\frac{P_{k}(W)}{P_{l}(W)}\right)^{\frac{1}{k-l}}=u^{\alpha}\phi(r)\quad \text{on }\mathbb{S}^{n}. \tag{1.14}\]
Note that if \(0\leq\alpha<1\) and \(\phi\) is increasing, then \(\psi(u,r)\) satisfies the condition (1.13) with \(\eta(t)=\phi^{-\frac{k-l-1}{1-\alpha}}(t)\). Therefore, we obtain
**Corollary 1.2**.: _Let \(n\geq 2\) and \(0\leq l<k\leq n\). Suppose \(0\leq\alpha<1\) is a real number, and \(\phi:(0,\infty)\to(0,\infty)\) is a \(C^{1}\) increasing function. Then the smooth strictly convex solution \(M\) to (1.14) must be a sphere. Moreover, if in addition \(\phi^{\prime}(t)>0\) for any \(t>0\), then \(M\) is an origin-centred sphere._
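The claim preceding Corollary 1.2 can be verified directly. Writing \(m=k-l-1\geq 0\) and \(\psi(s,t)=s^{\alpha}\phi(t)\), the two factors in (1.13) become
\[s\psi(1,1)-\psi(s,t)=s^{\alpha}\big(s^{1-\alpha}\phi(1)-\phi(t)\big),\qquad\eta(1)\psi^{m}(1,1)-\eta(t)\psi^{m}(s,t)=\phi(1)^{-\frac{\alpha m}{1-\alpha}}-\big(s^{1-\alpha}\big)^{\frac{\alpha m}{1-\alpha}}\phi(t)^{-\frac{\alpha m}{1-\alpha}}.\]
Since raising to the nonnegative power \(\frac{\alpha m}{1-\alpha}\) preserves inequalities, the first factor is nonnegative exactly when \(s^{1-\alpha}\phi(1)\geq\phi(t)\), in which case the second factor is nonpositive (and vice versa), so their product is \(\leq 0\); moreover, \(\eta\) is decreasing because \(\phi\) is increasing and \(0\leq\alpha<1\).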
On the other hand, Simon [60] studied (1.12) when \(\partial_{2}\psi=0\) and proved some uniqueness theorems, including the classical Alexandrov-Fenchel-Jessen theorem. Inspired by Simon's result, we obtain the following uniqueness theorem for (1.12):
**Theorem 1.3**.: _Let \(n\geq 2\) and \(0\leq l<k\leq n\). Suppose \(\psi:(0,\infty)\times(0,\infty)\to(0,\infty)\) is a \(C^{1}\) function with \(\partial_{1}\psi\geq 0,\ \partial_{2}\psi\geq 0\) and at least one of these inequalities is strict. Then the smooth strictly convex solution \(M\) to (1.12) must be an origin-centred sphere._
When \(l=0\), we consider the following isotropic curvature problem:
\[P_{k}(W)=\psi(u,r)\quad\text{on }\mathbb{S}^{n}. \tag{1.15}\]
Then the following corollary is a direct consequence of Theorem 1.3.
**Corollary 1.3**.: _Let \(n\geq 2\) and \(1\leq k\leq n\). Suppose \(\psi:(0,\infty)\times(0,\infty)\to(0,\infty)\) is a \(C^{1}\) function with \(\partial_{1}\psi\geq 0,\ \partial_{2}\psi\geq 0\) and at least one of these inequalities is strict. Then the smooth strictly convex solution \(M\) to (1.15) must be an origin-centred sphere._
In particular, when \(l=0\) and \(k=n\), we provide a new proof of the uniqueness result obtained by Ivaki [40] for a large class of Minkowski type problems in the case \(n\geq 2\).
**Corollary 1.4** ([40]).: _Let \(n\geq 2\). Suppose \(\psi:(0,\infty)\times(0,\infty)\to(0,\infty)\) is a \(C^{1}\) function with \(\partial_{1}\psi\geq 0,\ \partial_{2}\psi\geq 0\) and at least one of these inequalities is strict. Then the smooth strictly convex solution \(M\) to the following isotropic Minkowski type problem_
\[\det(W)=\psi(u,r)\quad\text{on }\mathbb{S}^{n} \tag{1.16}\]
_must be an origin-centred sphere._
As applications, we can immediately deduce the following corollaries:
**Corollary 1.5**.: _Let \(n\geq 2\) and \(1\leq k\leq n\). Suppose \(p\geq 1,\ q\leq k+1\) and at least one of these inequalities is strict. Then the smooth strictly convex solution \(M\) to the following isotropic \(L_{p}\) dual Christoffel-Minkowski problem_
\[u^{1-p}r^{q-k-1}P_{k}(W)=1\quad\text{on }\mathbb{S}^{n} \tag{1.17}\]
_must be an origin-centred sphere._
_Remark 1.1_.: When \(k=n\), Corollary 1.5 reduces to the uniqueness theorem of the isotropic \(L_{p}\) dual Minkowski problem (1.8) for \(p>1\) and \(q\leq n+1\), as shown by Chen-Li [15].
**Corollary 1.6**.: _Let \(n\geq 2\) and \(1\leq k\leq n\). Suppose \(p\geq 1\) is a real number. Then the smooth strictly convex solution \(M\) to the following isotropic \(L_{p}\) Gaussian Christoffel-Minkowski problem_
\[u^{1-p}e^{-\frac{r^{2}}{2}}P_{k}(W)=1\quad\text{on }\mathbb{S}^{n} \tag{1.18}\]
_must be an origin-centred sphere._
_Remark 1.2_.: Corollary 1.6 provides an affirmative answer to the conjecture proposed by Chen-Hu-Liu-Zhao [18], and another proof can be found in [40].
### Uniqueness of solutions to \(L_{p}\) dual Minkowski problems
The \(L_{p}\) dual Minkowski problem was posed by Lutwak, Yang and Zhang in [51], and unifies the \(L_{p}\) Minkowski problem and the dual Minkowski problem. The \(L_{p}\) dual Minkowski problem has been intensively studied by many authors, see e.g. [8, 13, 14, 15, 37, 47, 48, 58]. In the smooth case, given a positive continuous function \(f\) on \(\mathbb{S}^{n}\), the \(L_{p}\) dual Minkowski problem becomes the following Monge-Ampere type equation:
\[u^{1-p}r^{q-n-1}\det(W)=f\quad\text{on }\mathbb{S}^{n}. \tag{1.19}\]
For a class of convex bodies that are close to the ball, by applying integral formulas involving the radial function, we obtain the uniqueness of solutions to \(L_{p}\) dual Minkowski problems for \(p=2\) and \(q\leq n+1\).
**Theorem 1.4**.: _Let \(n\geq 2\). Suppose \(p=2\) and \(q\leq n+1\). Then, for any positive function \(f\in C(\mathbb{S}^{n})\), the smooth strictly convex solution \(M\) to (1.19) satisfying \(\frac{r}{u}\leq\sqrt{2}\) is unique, except when \(p=q\), in which case it is unique up to scaling._
_Remark 1.3_.: If a convex body \(K\) satisfies \(RB^{n+1}\subseteq K\subseteq\sqrt{2}RB^{n+1}\) for some \(R>0\), where \(B^{n+1}\) denotes the Euclidean unit ball in \(\mathbb{R}^{n+1}\), then its boundary \(M=\partial K\) satisfies \(\frac{r}{u}\leq\sqrt{2}\). Consequently, the uniqueness of the \(L_{2}\) dual Minkowski problem with \(q\leq n+1\) in \(\mathbb{R}^{n+1}\) is guaranteed for these convex bodies.
When \(f\equiv 1\) and \(q=n+1\), (1.19) reduces to the isotropic \(L_{p}\) Minkowski problem:
\[u^{1-p}\det(W)=1\quad\text{on }\mathbb{S}^{n} \tag{1.20}\]
It has been established, due to the works of Lutwak [50], Andrews [2], Andrews-Guan-Ni [5], Choi-Daskalopoulos [21], as well as Brendle-Choi-Daskalopoulos [10], that for \(p>-n-1\), the only solutions \(M\) to (1.20) are origin-centered spheres, while for \(p=-n-1\), the solutions \(M\) are origin-centered ellipsoids. Moreover, Saroglou [56] obtained uniqueness results for a class of isotropic Orlicz-Minkowski problems, including (1.20) for \(p>-n-1\). By using a local version of the Brunn-Minkowski inequality, Ivaki and Milman [41] provided a new proof of the uniqueness of solutions to (1.20) for \(-n-1\leq p\leq-1\) and \(p=0\).
As for the uniqueness of solutions to the isotropic \(L_{p}\) dual Minkowski problem (1.8), it is known that the solution is unique when \(q<p\)[37] and is unique up to scaling when \(p=q\)[15], which can be obtained by the strong maximum principle. When \(1<p<q\leq n+1\) or \(-n-1\leq p<q<-1\), Chen-Li [15] showed that the solution must be an origin-centred sphere. In the planar case \(n=1\), we [47] provided a complete classification of solutions to (1.8). When \(-n-1\leq p<q\leq\min\{n+1,n+1+p\}\), Chen-Huang-Zhao [13] proved that the origin-symmetric solution must be an origin-centred sphere. Furthermore, Ivaki and Milman [41] proved the uniqueness of origin-symmetric solutions to (1.8) for \(p\geq-n-1\) and \(q\leq n+1\) with at least one of these being strict.
In this paper, by selecting test functions in a local version of the Brunn-Minkowski inequality, we obtain some new uniqueness results for the isotropic \(L_{p}\) dual Minkowski problem.
**Theorem 1.5**.: _Let \(n\geq 1\). Suppose either_
1. \(p\geq-n-1,\ q\leq n-1+2\sqrt{1+\frac{n+1+p}{n+2}}\) _and at least one of these is strict, or_
2. \(q\leq n+1,\ p\geq-n+1-2\sqrt{1+\frac{n+1-q}{n+2}}\) _and at least one of these is strict._
_Then the smooth strictly convex origin-symmetric solution \(M\) to (1.8) must be an origin-centred sphere._
_Remark 1.4_.: In fact, the uniqueness of origin-symmetric solutions to the isotropic \(L_{p}\) Christoffel-Minkowski problem (1.17) was also investigated in [41]. By modifying the test functions in [41, Lemma 3.1] for \(k<n\), we obtain some results similar to Theorem 1.5, see Appendix A.
When \(p\geq-n-1\) and \(q\leq n+1\), Theorem 1.5 reduces to the following result of Ivaki and Milman.
**Corollary 1.7** ([41]).: _Let \(n\geq 1\). Suppose \(p\geq-n-1\), \(q\leq n+1\) and at least one of these inequalities is strict. Then the smooth strictly convex origin-symmetric solution \(M\) to the isotropic \(L_{p}\) dual Minkowski problem (1.8) must be an origin-centred sphere._
In the planar case \(n=1\), combining Theorem 1.5 and [47, Theorem 1.2] immediately yields the following corollary.
**Corollary 1.8**.: _Let \(n=1\). If \(q>2\), \(p+3<q\leq 2\sqrt{\frac{5+p}{3}}\); or \(p<-2\), \(-2\sqrt{\frac{5-q}{3}}\leq p<q-3\), then the embedded solution to the planar isotropic \(L_{p}\) dual Minkowski problem is unique._
As for the case without the origin-symmetric assumption, we have the following result.
**Theorem 1.6**.: _Let \(n\geq 1\). Suppose either_
1. \(-n-1<p\leq-1\)_,_ \(n+1\leq q\leq n+\frac{1}{2}+\sqrt{\frac{1}{4}-\frac{(1+p)(n+1+p)}{n(n+2)}}\)_, or_
2. \(1\leq q<n+1\)_,_ \(-n-\frac{1}{2}-\sqrt{\frac{1}{4}-\frac{(1-q)(n+1-q)}{n(n+2)}}\leq p\leq-n-1\)_._
_Then the smooth strictly convex solution \(M\) to (1.8) must be an origin-centred sphere._
When \(q=n+1\), Theorem 1.6 reduces to the following uniqueness result for the isotropic \(L_{p}\) Minkowski problem (1.20) within the range \(-n-1<p\leq-1\), originally established by Brendle, Choi and Daskalopoulos [10].
**Corollary 1.9** ([10, 41, 56]).: _Let \(n\geq 1\). Suppose \(-n-1<p\leq-1\). Then the smooth strictly convex solution \(M\) to the isotropic \(L_{p}\) Minkowski problem (1.20) must be an origin-centred sphere._
The paper is organized as follows. In Section 2, we collect some basic concepts and techniques. In Section 3, we prove Theorem 1.1, Theorem 1.2 and Theorem 1.4 by use of integral formulas. In Section 4, inspired by Simon's result [60], we give the proof of Theorem 1.3. In Section 5, using the method developed by Ivaki and Milman [41], we prove Theorem 1.5 and Theorem 1.6. In Appendix A, we provide a uniqueness result of origin-symmetric solutions to the isotropic \(L_{p}\) dual Christoffel-Minkowski problem.
**Acknowledgments.** The work was supported by NSFC Grant No. 11831005.
## 2. Preliminaries
### Inequalities of elementary symmetric functions
We first review some properties of elementary symmetric functions, see e.g. [12, 25, 54].
**Definition 2.1**.: For an integer \(k=1,\ldots,n\) and a point \(\lambda=(\lambda_{1},\ldots,\lambda_{n})\in\mathbb{R}^{n}\), the _normalized \(k\)-th elementary symmetric function_\(P_{k}(\lambda)\) is defined by
\[P_{k}(\lambda)=\binom{n}{k}^{-1}\sigma_{k}(\lambda)=\binom{n}{k}^{-1}\sum_{1 \leq i_{1}<\cdots<i_{k}\leq n}\lambda_{i_{1}}\cdots\lambda_{i_{k}}.\]
It is convenient to set \(P_{0}(\lambda)=1\) and \(P_{k}(\lambda)=0\) for \(k>n\).
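For instance, when \(n=3\),
\[P_{1}(\lambda)=\frac{\lambda_{1}+\lambda_{2}+\lambda_{3}}{3},\qquad P_{2}(\lambda)=\frac{\lambda_{1}\lambda_{2}+\lambda_{1}\lambda_{3}+\lambda_{2}\lambda_{3}}{3},\qquad P_{3}(\lambda)=\lambda_{1}\lambda_{2}\lambda_{3},\]
so \(P_{k}\) is the \(k\)-th elementary symmetric polynomial divided by the number of its terms.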
The definition can be extended to symmetric matrices. Let \(A\in\mathrm{Sym}(n)\) be an \(n\times n\) symmetric matrix and \(\lambda=\lambda(A)\) be the eigenvalues of \(A\). Set \(P_{k}(A)=P_{k}(\lambda(A))\). We have
\[P_{k}(A)= \frac{(n-k)!}{n!}\delta_{i_{1},\dots,i_{k}}^{j_{1},\dots,j_{k}}A_{i_ {1}j_{1}}\cdots A_{i_{k}j_{k}},\quad k=1,\dots,n,\]
where \(\delta_{i_{1},\dots,i_{k}}^{j_{1},\dots,j_{k}}=\det(\delta_{i_{s}}^{j_{t}})_{k \times k}\) denotes the generalized Kronecker delta.
**Lemma 2.2**.: _If \(\lambda\in\Gamma_{k}=\{\lambda\in\mathbb{R}^{n}:\ P_{i}(\lambda)>0,\ i=1, \dots,k\}\), we have the following Newton-MacLaurin inequality_
\[P_{k}(\lambda)P_{l-1}(\lambda)\leq P_{l}(\lambda)P_{k-1}(\lambda),\quad 1 \leq l<k. \tag{2.1}\]
_Equality holds if and only if \(\lambda_{1}=\dots=\lambda_{n}\)._
Furthermore, we state the following properties which can be found in [54, Theorem 2.1].
**Lemma 2.3** ([54]).: _Suppose that \(\alpha,\beta\in(0,1)\) and \(k,l\in\mathbb{N}\) are numbers such that_
\[\alpha+\beta=1\quad\text{and}\quad k\alpha+l\beta\in\{0,\dots,n\}.\]
_If \(\lambda\in\Gamma_{k}\cap\Gamma_{l}\), we have_
\[P_{k\alpha+l\beta}(\lambda)\geq P_{k}^{\alpha}(\lambda)P_{l}^{\beta}(\lambda). \tag{2.2}\]
_Equality holds if and only if \(\lambda_{1}=\dots=\lambda_{n}\)._
**Definition 2.4**.: For an integer \(k=1,\dots,n\) and \(k\) points \(\lambda^{(1)},\dots,\lambda^{(k)}\in\mathbb{R}^{n}\), the _normalized \(k\)-th mixed elementary symmetric function_\(P_{k}(\lambda^{(1)},\dots,\lambda^{(k)})\) is defined by
\[P_{k}(\lambda^{(1)},\dots,\lambda^{(k)})=\binom{n}{k}^{-1}\sigma_{k}(\lambda^ {(1)},\dots,\lambda^{(k)})=\binom{n}{k}^{-1}\sum_{1\leq i_{1}<\dots<i_{k}\leq n }\lambda_{i_{1}}^{(1)}\cdots\lambda_{i_{k}}^{(k)}.\]
The definition can also be extended to symmetric matrices. Let \(A^{1},\dots,A^{k}\in\mathrm{Sym}(n)\) be \(n\times n\) symmetric matrices and \(\lambda^{(i)}=\lambda(A^{i})\) be the eigenvalues of \(A^{i}\) for \(i=1,\dots,k\). Set \(P_{k}(A^{1},\dots,A^{k})=P_{k}(\lambda^{(1)},\dots,\lambda^{(k)})\). We have
\[P_{k}(A^{1},\dots,A^{k})=\frac{(n-k)!}{n!}\delta_{i_{1},\dots,i_{k}}^{j_{1}, \dots,j_{k}}A_{i_{1}j_{1}}^{1}\cdots A_{i_{k}j_{k}}^{k}.\]
It is obvious that
\[P_{k}(\lambda,\dots,\lambda)=P_{k}(\lambda)\quad\text{and}\quad P_{n}( \lambda^{(1)},\dots,\lambda^{(k)},I,\dots,I)=P_{k}(\lambda^{(1)},\dots, \lambda^{(k)}),\]
equivalently,
\[P_{k}(A,\dots,A)=P_{k}(A)\quad\text{and}\quad P_{n}(A^{1},\dots,A^{k},Id, \dots,Id)=P_{k}(A^{1},\dots,A^{k}),\]
where \(I=(1,\dots,1)\in\mathbb{R}^{n}\), and \(Id\) is the \(n\times n\) identity matrix.
Since \(P_{k}\) is hyperbolic and complete, we have the following lemma, see [25, Theorem 5].
**Lemma 2.5** ([25]).: _For any \(A^{i}\in\Gamma_{k},\ i=1,\dots,k\), we have_
\[P_{k}(A^{1},\dots,A^{k})\geq P_{k}(A^{1})^{\frac{1}{k}}\cdots P_{k}(A^{k})^{ \frac{1}{k}}. \tag{2.3}\]
_Equality holds if and only if these \(k\) matrices are pairwise proportional._
In particular, define \(P_{k,l}(\lambda,\bar{\lambda})=P_{k+l}(\underbrace{\lambda,\dots,\lambda}_{k}, \underbrace{\bar{\lambda},\dots,\bar{\lambda}}_{l})\). Then we have
**Lemma 2.6**.: _For any \(\lambda,\bar{\lambda}\in\Gamma_{k+l}\),_
\[P_{k,l}(\lambda,\bar{\lambda})\geq P_{k+l}(\lambda)^{\frac{k}{k+l}}P_{k+l}(\bar{ \lambda})^{\frac{l}{k+l}}, \tag{2.4}\]
_Equality holds if and only if \(\lambda\) and \(\bar{\lambda}\) are proportional._
### \(L_{k}\) operator
Let \(x:M^{n}\to\mathbb{R}^{n+1}\) be a hypersurface, and choose a unit normal vector \(e_{n+1}\) such that \(\{e_{1},\ldots,e_{n},e_{n+1}\}\) is a local orthonormal basis. Let \((h_{ij})\) be the second fundamental form and \(\sigma_{k}(h_{ij})\) be the \(k\)-th mean curvature of \(M\). Denote by \(u=-\langle x,e_{n+1}\rangle\) the support function and by \(P_{k}(h_{ij})={n\choose k}^{-1}\sigma_{k}(h_{ij})\) the normalized \(k\)-th mean curvature of \(M\).
**Definition 2.7**.: For a hypersurface \(x:M^{n}\to\mathbb{R}^{n+1}\), the \(k\)_-th Newton tensor_ is defined by
\[T^{k}_{ij}=\sigma_{k}(h_{ij})\delta_{ij}-\sigma_{k-1}(h_{ij})h_{ij}+\ldots+(-1)^{k}\sum_{i_{1},\ldots,i_{k-1}}h_{ii_{1}}h_{i_{1}i_{2}}\cdots h_{i_{k-1}j},\quad k=0,\ldots,n-1.\]
We have a second-order differential operator \(L_{k}\) defined by
\[L_{k}f=\sum_{i,j}T^{k}_{ij}f_{ij},\quad\forall\ f\in C^{\infty}(M),\ 0\leq k \leq n-1.\]
\(L_{k}\) operator satisfies the following properties, see [55, 32]:
**Proposition 2.8**.:
1. \(T^{k}_{ij}=T^{k}_{ji},\ \sum_{j}T^{k}_{ij,j}=0\) _and_ \(T^{k}_{ij}=\frac{1}{k!}\delta^{j_{1},\ldots,j_{k},j}_{i_{1},\ldots,i_{k},i}h_{ i_{1}j_{1}}\cdots h_{i_{k}j_{k}}\)_._
2. \(L_{k}\) _operator is elliptic and self-adjoint for a strictly convex hypersurface_ \(M\)_._
3. _We have_ (2.5) \[\frac{1}{2}L_{k}(|x|^{2})=\sum T^{k}_{ij}(\langle x,e_{i}\rangle)_{j}=(n-k){n \choose k}(P_{k}(h_{ij})-uP_{k+1}(h_{ij})).\]
### Integral formulas for hypersurfaces
In this subsection, we review some integral formulas derived by S. S. Chern [20] and provide some new integral formulas that contain radial functions.
Let \(x:M^{n}\to\mathbb{R}^{n+1}\) and \(\bar{x}:\bar{M}^{n}\to\mathbb{R}^{n+1}\) be two closed strictly convex smooth hypersurfaces such that the normal vectors of \(M\) at \(x\) and of \(\bar{M}\) at \(\bar{x}\) are the same. Let \(\{e_{1},\ldots,e_{n},e_{n+1}\}\) be an orthonormal frame in \(\mathbb{R}^{n+1}\) such that \(e_{n+1}\) is the unit inner normal vector of \(M\) at \(x\), and also the unit inner normal vector of \(\bar{M}\) at \(\bar{x}\). We will denote the following ranges of indices:
\[1\leq A,B,\ldots\leq n+1;\quad 1\leq i,j,\ldots\leq n.\]
Let \(\{\theta_{A}\}\) be the dual frame field of \(\{e_{A}\}\), and denote \(\omega_{A}=\left.\theta_{A}\right|_{M}\) and \(\bar{\omega}_{A}=\left.\theta_{A}\right|_{\bar{M}}\). Restricting \(\theta_{A}\) to the hypersurface \(M\), we have \(\theta_{i}=\omega_{i}\) and \(\theta_{n+1}=0\). We have
\[dx=\sum_{i}\omega_{i}e_{i},\quad de_{n+1}=\sum_{i}\omega_{n+1,i}e_{i},\]
where \(\omega_{i,n+1}=\sum\limits_{j}h_{ij}\omega_{j}\). Let \(x=\sum\limits_{A}\tau_{A}e_{A}\), \((h^{ik})=(h_{ik})^{-1}\), and denote by \(u=-\langle x,e_{n+1}\rangle\) the support function and by \(r=\langle x,x\rangle^{\frac{1}{2}}\) the radial function of \(M\), then we obtain
\[\tau_{n+1}=-u,\quad r^{2}=\sum_{i}\tau_{i}^{2}+u^{2},\quad dr=\frac{1}{r}\sum_ {i}\tau_{i}\omega_{i}. \tag{2.6}\]
We assign notations \(\bar{\omega}_{n+1,i},\ \bar{h}_{ij},\ \bar{h}^{ij},\ \bar{\tau}_{A},\ \bar{u}\) and \(\bar{r}\) for hypersurface \(\bar{M}\) as well.
Notice that \(\omega_{n+1,i}=\langle de_{n+1},e_{i}\rangle=\bar{\omega}_{n+1,i}\). Let \(a_{ij}=\sum\limits_{k}h^{ik}\bar{h}_{kj}\), we have
\[\omega_{i}=\sum\limits_{j}a_{ij}\bar{\omega}_{j}. \tag{2.7}\]
Consider the differential form of degree \(n-1\):
\[\Phi_{0}=(x,\bar{x},\underbrace{d\bar{x},\ldots,d\bar{x}}_{n-1}), \tag{2.8}\]
where \((\cdot,\ldots,\cdot)\) is a determinant of order \(n+1\), whose columns are the components of the respective vectors or vector-valued differential forms, with the convention that in the expansion of the determinant the multiplication of differential forms is in the sense of exterior multiplication. Then applying exterior differentiation gives
\[\begin{split} d\Phi_{0}&=(x,d\bar{x},\underbrace{ d\bar{x},\ldots,d\bar{x}}_{n-1})+(dx,\bar{x},\underbrace{d\bar{x},\ldots,d\bar{x}}_{n-1}) \\ &=(\sum\limits_{i}\tau_{i}e_{i}-ue_{n+1},\sum\limits_{i}\bar{ \omega}_{i}e_{i},\underbrace{\sum\limits_{i}\bar{\omega}_{i}e_{i},\ldots,\sum \limits_{i}\bar{\omega}_{i}e_{i}}_{n-1})\\ &\quad+(\sum\limits_{i}\omega_{i}e_{i},\sum\limits_{i}\bar{\tau}_ {i}e_{i}-\bar{u}e_{n+1},\underbrace{\sum\limits_{i}\bar{\omega}_{i}e_{i},\ldots, \sum\limits_{i}\bar{\omega}_{i}e_{i}}_{n-1})\\ &=-(-1)^{n}u\sum\delta^{1,\ldots,n}_{i_{1},\ldots,i_{n}}\bar{ \omega}_{i_{1}}\wedge\ldots\wedge\bar{\omega}_{i_{n}}+(-1)^{n}\bar{u}\sum \delta^{1,\ldots,n}_{i_{1},\ldots,i_{n}}\omega_{i_{1}}\wedge\bar{\omega}_{i_{2 }}\wedge\ldots\wedge\bar{\omega}_{i_{n}}\\ &=(-1)^{n}n!(-u+\bar{u}P_{1}(a_{ij}))d\bar{\mu},\end{split} \tag{2.9}\]
where \(d\bar{\mu}=\bar{\omega}_{1}\wedge\ldots\wedge\bar{\omega}_{n}\). Since \(M\) and \(\bar{M}\) are closed and strictly convex, we can identify \(M,\bar{M}\) with \(\mathbb{S}^{n}\) by use of Gauss maps. Then previously mentioned functions and differential forms are now defined on \(\mathbb{S}^{n}\). Therefore, integrating (2.9) over \(\mathbb{S}^{n}\) yields
\[\int_{\mathbb{S}^{n}}(\bar{u}P_{1}(a_{ij})-u)d\bar{\mu}=0. \tag{2.10}\]
Similarly, we consider a family of differential forms as follows:
\[\Phi_{k}=(x,\bar{x},\underbrace{dx,\ldots,dx}_{k},\underbrace{d\bar{x},\ldots,d\bar{x}}_{n-k-1}),\quad k=1,\ldots,n-1. \tag{2.11}\]
Applying exterior differentiation gives
\[d\Phi_{k}=(-1)^{n}n!(\bar{u}P_{k+1}(a_{ij})-uP_{k}(a_{ij}))d\bar{\mu}. \tag{2.12}\]
Then integrating (2.12) over \(\mathbb{S}^{n}\) yields
\[\int_{\mathbb{S}^{n}}(\bar{u}P_{k+1}(a_{ij})-uP_{k}(a_{ij}))d\bar{\mu}=0. \tag{2.13}\]
In conclusion, we obtain the following integral formulas:
**Lemma 2.9** ([20]).: _Let \(x:M^{n}\to\mathbb{R}^{n+1}\) and \(\bar{x}:\bar{M}^{n}\to\mathbb{R}^{n+1}\) be two closed smooth strictly convex hypersurfaces. Then, for \(k=0,\ldots,n-1\)_
\[\int_{\mathbb{S}^{n}}(\bar{u}P_{k+1}(a_{ij})-uP_{k}(a_{ij}))d\bar{ \mu} =0, \tag{2.15}\] \[\int_{\mathbb{S}^{n}}(uP_{k+1}(a^{ij})-\bar{u}P_{k}(a^{ij}))d\mu =0, \tag{2.14}\]
_where \(a_{ij}=\sum\limits_{s}h^{is}\bar{h}_{sj}\) and \(a^{ij}=\sum\limits_{s}\bar{h}^{is}h_{sj}\)._
In particular, when \(\bar{M}\) is the unit sphere \(\mathbb{S}^{n}\), we have \(\bar{u}=1\) and \((a^{ij})=(h_{ij})\). Note that the eigenvalues of \((h_{ij})\) are the principal curvatures \(\{\kappa_{i}\}\) of \(M\), then \(P_{k}(a^{ij})\) is the normalized \(k\)-th mean curvature \(P_{k}(\kappa)\) of \(M\). Hence, (2.15) reduces to Minkowski formulas:
\[\int_{M}(uP_{k+1}(\kappa)-P_{k}(\kappa))d\mu=0,\quad k=0,\ldots,n-1. \tag{2.16}\]
Next, we consider the following differential forms:
\[\Phi_{k,l}=(x,\bar{x},\underbrace{dx,\ldots,dx}_{k},\underbrace{de_{n+1}, \ldots,de_{n+1}}_{n-k-l-1},\underbrace{d\bar{x},\ldots,d\bar{x}}_{l}),\quad 0 \leq k+l\leq n-1. \tag{2.17}\]
Denote by \(d\sigma\) the standard area element of \(\mathbb{S}^{n}\), it follows that \(d\sigma=\omega_{1,n+1}\wedge\ldots\wedge\omega_{n,n+1}\), and then
\[d\mu=\det(h^{ij})d\sigma,\quad d\bar{\mu}=\det(\bar{h}^{ij})d\sigma.\]
Applying exterior differentiation to (2.17) gives
\[d\Phi_{k,l} =-(-1)^{k+l+1}(n-k-l-1)!u\sum\delta_{i_{1},\ldots i_{k+l+1}}^{j_{1 },\ldots j_{k+l+1}}h^{i_{1}j_{1}}\cdots h^{i_{k}j_{k}}\bar{h}^{i_{k+1}j_{k+1}} \cdots\bar{h}^{i_{k+l+1}j_{k+l+1}}d\sigma\] \[\quad+(-1)^{k+l+1}(n-k-l-1)!\bar{u}\sum\delta_{i_{1},\ldots i_{k+ l+1}}^{j_{1},\ldots j_{k+l+1}}h^{i_{1}j_{1}}\cdots h^{i_{k+1}j_{k+1}}\bar{h}^{i_{k+ 2}j_{k+2}}\cdots\bar{h}^{i_{k+l+1}j_{k+l+1}}d\sigma\] \[=(-1)^{k+l+1}n!\left(\bar{u}P_{k+1,l}(h^{ij},\bar{h}^{ij})-uP_{k, l+1}(h^{ij},\bar{h}^{ij})\right)d\sigma. \tag{2.18}\]
Hence, integrating (2.18) over \(\mathbb{S}^{n}\) yields
**Lemma 2.10** ([20]).: _Let \(x:M^{n}\to\mathbb{R}^{n+1}\) and \(\bar{x}:\bar{M}^{n}\to\mathbb{R}^{n+1}\) be two closed smooth strictly convex hypersurfaces. Then, for \(0\leq k+l\leq n-1\)_
\[\int_{\mathbb{S}^{n}}\left(\bar{u}P_{k+1,l}(h^{ij},\bar{h}^{ij})-uP_{k,l+1}(h ^{ij},\bar{h}^{ij})\right)d\sigma=0. \tag{2.19}\]
Generally, by applying the approach in [28], we can generalize integral formulas (2.19) to the cases where \(u\) and \(\bar{u}\) are \(C^{2}\) functions.
**Lemma 2.11**.: _Let \(u\) and \(\bar{u}\) be two \(C^{2}\) functions on \(\mathbb{S}^{n}\). Denote \((h^{ij})=(u_{ij}+u\delta_{ij})\) and \((\bar{h}^{ij})=(\bar{u}_{ij}+\bar{u}\delta_{ij})\). Then, for \(0\leq k+l\leq n-1\)_
\[\int_{\mathbb{S}^{n}}\left(\bar{u}P_{k,l}(h^{ij},\bar{h}^{ij})-P_{k,l+1}(h^{ij},\bar{h}^{ij})\right)d\sigma=0, \tag{2.20}\]
\[\int_{\mathbb{S}^{n}}\left(uP_{k,l}(h^{ij},\bar{h}^{ij})-P_{k+1,l}(h^{ij},\bar{ h}^{ij})\right)d\sigma=0, \tag{2.21}\]
_and_
\[\int_{\mathbb{S}^{n}}\left(\bar{u}P_{k+1,l}(h^{ij},\bar{h}^{ij})-uP_{k,l+1}(h^{ij},\bar{h}^{ij})\right)d\sigma=0. \tag{2.22}\]
Finally, we derive some new integral formulas that contain radial functions.
Let \(\eta:(0,+\infty)\to(0,+\infty)\) be a \(C^{1}\) function. For any \(k=1,\ldots,n-1\), we calculate the exterior differentiation of \(\eta(\bar{r})\Phi_{k}\):
\[d(\eta(\bar{r})\Phi_{k})=\eta(\bar{r})d\Phi_{k}+\frac{\eta^{\prime}(\bar{r})}{ \bar{r}}\sum_{i}\bar{\tau}_{i}\bar{\omega}_{i}\wedge\Phi_{k}.\]
Using (2.6), (2.7) and (2.11), we have
\[\Phi_{k} =(\sum_{i}\tau_{i}e_{i}-ue_{n+1},\sum_{i}\bar{\tau}_{i}e_{i}-\bar{u}e_{n+1},\underbrace{\sum_{i}\omega_{i}e_{i},\ldots,\sum_{i}\omega_{i}e_{i}}_{k},\underbrace{\sum_{i}\bar{\omega}_{i}e_{i},\ldots,\sum_{i}\bar{\omega}_{i}e_{i}}_{n-k-1})\] \[=(-1)^{n}\sum\delta^{1,\ldots,n}_{i_{1},\ldots,i_{n}}(-u\bar{\tau}_{i_{1}}+\bar{u}\tau_{i_{1}})\underbrace{\omega_{i_{2}}\wedge\ldots\wedge\omega_{i_{k+1}}}_{k}\wedge\underbrace{\bar{\omega}_{i_{k+2}}\wedge\ldots\wedge\bar{\omega}_{i_{n}}}_{n-k-1}\] \[=(-1)^{n}\sum\delta^{1,\ldots,n}_{i_{1},\ldots,i_{n}}(-u\bar{\tau}_{i_{1}}+\bar{u}\tau_{i_{1}})a_{i_{2}j_{2}}\cdots a_{i_{k+1}j_{k+1}}\underbrace{\bar{\omega}_{j_{2}}\wedge\ldots\wedge\bar{\omega}_{j_{k+1}}}_{k}\wedge\underbrace{\bar{\omega}_{i_{k+2}}\wedge\ldots\wedge\bar{\omega}_{i_{n}}}_{n-k-1}\] \[=(-1)^{n}\sum\delta^{1,2,\ldots,k+1,k+2,\ldots,n}_{i_{1},j_{2},\ldots,j_{k+1},i_{k+2},\ldots,i_{n}}(\bar{u}\tau_{i_{1}}-u\bar{\tau}_{i_{1}})a_{i_{2}j_{2}}\cdots a_{i_{k+1}j_{k+1}}\underbrace{\bar{\omega}_{i_{2}}\wedge\ldots\wedge\bar{\omega}_{i_{n}}}_{n-1}.\]
Combining this with (2.12), we obtain
\[d(\eta(\bar{r})\Phi_{k}) =(-1)^{n}n!\eta(\bar{r})(\bar{u}P_{k+1}(a_{ij})-uP_{k}(a_{ij}))d \bar{\mu}\] \[\quad+(-1)^{n}\frac{\eta^{\prime}(\bar{r})}{\bar{r}}\sum\delta^{ 1,2,\ldots,k+1,k+2,\ldots,n}_{i_{1},j_{2}\ldots,j_{k+1},i_{k+2},\ldots,i_{n}} \bar{\tau}_{i}(\bar{u}\tau_{i_{1}}-u\bar{\tau}_{i_{1}})a_{i_{2}j_{2}}\cdots a _{i_{k+1}j_{k+1}}\bar{\omega}_{i}\wedge\underbrace{\bar{\omega}_{i_{2}}\wedge \ldots\wedge\bar{\omega}_{i_{n}}}_{n-1}\] \[=(-1)^{n}n!\eta(\bar{r})(\bar{u}P_{k+1}(a_{ij})-uP_{k}(a_{ij}))d \bar{\mu}\] \[\quad+(-1)^{n}(n-k-1)!\frac{\eta^{\prime}(\bar{r})}{\bar{r}}\sum \delta^{i_{1},j_{2},\ldots,j_{k+1}}_{i_{1},i_{2}\ldots,i_{k+1}}(\bar{u}\bar{ \tau}_{i_{1}}\tau_{i_{1}}-u\bar{\tau}_{i_{1}}^{2})a_{i_{2}j_{2}}\cdots a_{i_{k+ 1}j_{k+1}}d\bar{\mu}. \tag{2.23}\]
When \(k=0\), applying exterior differentiation of \(\eta(\bar{r})\Phi_{0}\) gives
\[d(\eta(\bar{r})\Phi_{0})=(-1)^{n}n!\eta(\bar{r})(\bar{u}P_{1}(a_{ij})-u)d\bar {\mu}+(-1)^{n}(n-1)!\frac{\eta^{\prime}(\bar{r})}{\bar{r}}\sum_{i}(\bar{u} \bar{\tau}_{i_{1}}\tau_{i_{1}}-u\bar{\tau}_{i_{1}}^{2})d\bar{\mu}. \tag{2.24}\]
Therefore, integrating (2.23) and (2.24) over \(\mathbb{S}^{n}\) yields
**Lemma 2.12**.: _Let \(x:M^{n}\to\mathbb{R}^{n+1}\) and \(\bar{x}:\bar{M}^{n}\to\mathbb{R}^{n+1}\) be two closed smooth strictly convex hypersurfaces. Then, for \(k=0,\ldots,n-1\)_
\[\begin{split}&\int_{\mathbb{S}^{n}}\eta(\bar{r})(\bar{u}P_{k+1}(a_{ ij})-uP_{k}(a_{ij}))d\bar{\mu}\\ =&\frac{(n-k-1)!}{n!}\int_{\mathbb{S}^{n}}\frac{\eta ^{\prime}(\bar{r})}{\bar{r}}\sum\delta^{i_{1},j_{2},\ldots,j_{k+1}}_{i_{1},i_{ 2}\ldots,i_{k+1}}(u\bar{\tau}_{i_{1}}^{2}-\bar{u}\bar{\tau}_{i_{1}}\tau_{i_{1}} )a_{i_{2}j_{2}}\cdots a_{i_{k+1}j_{k+1}}d\bar{\mu},\end{split} \tag{2.25}\]
_and_
\[\begin{split}&\int_{\mathbb{S}^{n}}\eta(r)(uP_{k+1}(a^{ij})-\bar{u}P _{k}(a^{ij}))d\mu\\ =&\frac{(n-k-1)!}{n!}\int_{\mathbb{S}^{n}}\frac{\eta^ {\prime}(r)}{r}\sum\delta^{i_{1},j_{2},\ldots,j_{k+1}}_{i_{1},i_{2},\ldots,i_{k +1}}(\bar{u}\tau_{i_{1}}^{2}-u\tau_{i_{1}}\bar{\tau}_{i_{1}})a^{i_{2}j_{2}} \cdots a^{i_{k+1}j_{k+1}}d\mu,\end{split} \tag{2.26}\]
_where the terms \(a_{i_{2}j_{2}}\cdots a_{i_{k+1}j_{k+1}}\) and \(a^{i_{2}j_{2}}\cdots a^{i_{k+1}j_{k+1}}\) on the right hand sides vanish when \(k=0\)._
Moreover, when \(\bar{M}\) is the unit sphere \(\mathbb{S}^{n}\), then \(\bar{r}=\bar{u}=1\), \(\bar{\tau}_{i}=0\) and \((a^{ij})=(h_{ij})\). Using (2.26), we obtain
**Lemma 2.13**.: _Let \(x:M^{n}\to\mathbb{R}^{n+1}\) be a closed smooth strictly convex hypersurface. Then, for \(k=0,\ldots,n-1\)_
\[\begin{split}&\int_{\mathbb{S}^{n}}\eta(r)(uP_{k+1}(h_{ij})-P_{k}(h_ {ij}))d\mu\\ =&\frac{(n-k-1)!}{n!}\int_{\mathbb{S}^{n}}\frac{\eta ^{\prime}(r)}{r}\sum\delta^{i_{1},j_{2},\ldots,j_{k+1}}_{i_{1},i_{2},\ldots,i_ {k+1}}\tau_{i_{1}}^{2}h_{i_{2}j_{2}}\cdots h_{i_{k+1}j_{k+1}}d\mu,\end{split} \tag{2.27}\]
_where the term \(h_{i_{2}j_{2}}\cdots h_{i_{k+1}j_{k+1}}\) on the right hand side vanishes when \(k=0\)._
Note that when \(\eta\equiv 1\), Lemma 2.12 reduces to Lemma 2.9, and Lemma 2.13 reduces to Minkowski formulas (2.16).
### Spectral estimate
In this subsection, we will use the same notation as in subsection 2.3. Let \((\mathbb{S}^{n},g_{0},\nabla)\) denote the unit sphere equipped with its standard round metric and Levi-Civita connection. For a smooth closed strictly convex hypersurface \(x:M^{n}\to\mathbb{R}^{n+1}\), denote by \(\nu:M^{n}\to\mathbb{S}^{n}\) the Gauss map of \(M\). For convenience, we denote \(\sigma_{k}=\sigma_{k}(h^{ij})\) for \(1\leq k\leq n\) and \(dV_{n}=u\sigma_{n}d\sigma\).
The following lemma is the local version of the Alexandrov-Fenchel inequality and can also be regarded as a spectral interpretation of the Brunn-Minkowski inequality. For more details, refer to [1, 4, 41, 46, 53].
**Lemma 2.14** ([1, 4]).: _Let \(f\in C^{2}(\mathbb{S}^{n})\) with \(\int_{\mathbb{S}^{n}}fu\sigma_{n}d\sigma=0\). Then we have_
\[n\int_{\mathbb{S}^{n}}f^{2}u\sigma_{n}d\sigma\leq\int_{\mathbb{S}^{n}}u^{2} \sigma_{n}^{ij}\nabla_{i}f\nabla_{j}fd\sigma. \tag{2.28}\]
_Equality holds if and only if for some vector \(v\in\mathbb{R}^{n+1}\) we have_
\[f(z)=\langle\frac{z}{u(z)},v\rangle,\quad\forall\ z\in\mathbb{S}^{n}.\]
It is well-known (see e.g. [57]) that the inverse Gauss map \(X=\nu^{-1}:\mathbb{S}^{n}\to M\) is given by
\[X(z)=u(z)z+\nabla u(z),\quad\forall\ z\in\mathbb{S}^{n}.\]
By substituting some test functions in (2.28), we obtain a key inequality as follows:
**Lemma 2.15**.: _Assume that \(\alpha\in\mathbb{R}\). Then we have_
\[\begin{split} n\int_{\mathbb{S}^{n}}|X|^{2\alpha+2}dV_{n}& \leq n\frac{|\int_{\mathbb{S}^{n}}|X|^{\alpha}XdV_{n}|^{2}}{\int_{ \mathbb{S}^{n}}dV_{n}}+\int_{\mathbb{S}^{n}}|X|^{2\alpha}u(\Delta u+nu)dV_{n} \\ &\quad+(\alpha^{2}+2\alpha)\int_{\mathbb{S}^{n}}|X|^{2\alpha-1}u \langle\nabla u,\nabla|X|\rangle dV_{n}.\end{split} \tag{2.29}\]
Proof.: Let \(\{E_{l}\}_{l=1}^{n+1}\) be an orthonormal basis of \(\mathbb{R}^{n+1}\). Suppose \(\{e_{i}\}_{i=1}^{n}\) is a local orthonormal frame for \(\mathbb{S}^{n}\) such that \((u_{ij}+u\delta_{ij})(z_{0})=\lambda_{i}(z_{0})\delta_{ij}\). Inspired by the construction of Ivaki and Milman [41], for \(l=1,\ldots,n+1\), we define the functions \(f_{l}:\mathbb{S}^{n}\to\mathbb{R}\)
\[f_{l}(z)=|X(z)|^{\alpha}\langle X(z),E_{l}\rangle-\frac{\int_{\mathbb{S}^{n}} |X(z)|^{\alpha}\langle X(z),E_{l}\rangle dV_{n}}{\int_{\mathbb{S}^{n}}dV_{n}}. \tag{2.30}\]
Since \(\int_{\mathbb{S}^{n}}f_{l}dV_{n}=0\) for \(1\leq l\leq n+1\), by applying Lemma 2.14 to \(f_{l}\) and summing over \(l\), we have
\[n\sum_{l}\int_{\mathbb{S}^{n}}f_{l}^{2}dV_{n}=n\left[\int_{\mathbb{S}^{n}}|X|^ {2\alpha+2}dV_{n}-\frac{|\int_{\mathbb{S}^{n}}|X|^{\alpha}XdV_{n}|^{2}}{\int_{ \mathbb{S}^{n}}dV_{n}}\right]\leq\sum_{l}\int_{\mathbb{S}^{n}}u^{2}\sigma_{n} ^{ij}\nabla_{i}f_{l}\nabla_{j}f_{l}d\sigma.\]
It follows that \(\nabla_{i}X=\sum_{j}(u_{ij}+u\delta_{ij})e_{j}=\lambda_{i}e_{i}\) at \(z_{0}\). Then \(\langle e_{i},X\rangle=u_{i}\) and \(\sum_{i}\lambda_{i}\langle e_{i},X\rangle^{2}=|X|\langle\nabla u,\nabla|X|\rangle\) at \(z_{0}\). Using \(\frac{\partial\sigma_{n}}{\partial\lambda_{i}}=\frac{\sigma_{n}}{\lambda_{i}}\) and \(\sum_{i}\frac{\partial\sigma_{n}}{\partial\lambda_{i}}\lambda_{i}^{2}=\sigma_{1}\sigma_{n}\), we have
\[\sum_{l,i,j}\sigma_{n}^{ij}\nabla_{i}f_{l}\nabla_{j}f_{l} =\sum_{l,i}\frac{\partial\sigma_{n}}{\partial\lambda_{i}}(\nabla _{i}(|X|^{\alpha})\langle X,E_{l}\rangle+|X|^{\alpha}\langle\nabla_{i}X,E_{l} \rangle)^{2}\] \[=\sum_{l,i}\frac{\partial\sigma_{n}}{\partial\lambda_{i}}(\alpha |X|^{\alpha-2}\langle\lambda_{i}e_{i},X\rangle\langle X,E_{l}\rangle+|X|^{ \alpha}\langle\lambda_{i}e_{i},E_{l}\rangle)^{2}\] \[=\sum_{i}\frac{\partial\sigma_{n}}{\partial\lambda_{i}}\lambda_{i} ^{2}(|X|^{2\alpha}+(\alpha^{2}+2\alpha)|X|^{2\alpha-2}\langle e_{i},X\rangle^{2})\] \[=|X|^{2\alpha}\sigma_{1}\sigma_{n}+(\alpha^{2}+2\alpha)|X|^{2 \alpha-1}\langle\nabla u,\nabla|X|\rangle\sigma_{n}.\]
Therefore, we obtain
\[n\left[\int_{\mathbb{S}^{n}}|X|^{2\alpha+2}dV_{n}-\frac{|\int_{ \mathbb{S}^{n}}|X|^{\alpha}XdV_{n}|^{2}}{\int_{\mathbb{S}^{n}}dV_{n}}\right]\] \[\leq \int_{\mathbb{S}^{n}}u^{2}\left(|X|^{2\alpha}\sigma_{1}+(\alpha^ {2}+2\alpha)|X|^{2\alpha-1}\langle\nabla u,\nabla|X|\rangle\right)\sigma_{n}d\sigma\] \[= \int_{\mathbb{S}^{n}}u\left(|X|^{2\alpha}(\Delta u+nu)+(\alpha^{2 }+2\alpha)|X|^{2\alpha-1}\langle\nabla u,\nabla|X|\rangle\right)dV_{n}.\]
This completes the proof of Lemma 2.15.
_Remark 2.1_.: In the case \(\alpha=0\), Lemma 2.15 reduces to [41, Lemma 3.2] for \(k=n\).
## 3. Proofs of Theorem 1.1, Theorem 1.2 and Theorem 1.4
Proof of Theorem 1.1.: Suppose there exist two admissible solutions \(M\) and \(\bar{M}\) to (1.10), i.e.
\[\varphi(u)P_{k}(W)=\varphi(\bar{u})P_{k}(\bar{W})=f\quad\text{on }\mathbb{S}^{n}, \tag{3.1}\]
where \(W=(\nabla^{2}u+ug_{0})\) and \(\bar{W}=(\nabla^{2}\bar{u}+\bar{u}g_{0})\).
Denote \(P_{k,l}=P_{k,l}(W,\bar{W})\). Since \(W,\bar{W}\in\Gamma_{k}\), it follows from (3.1) and Lemma 2.6 that
\[P_{k-1,1}\geq P_{k0}^{\frac{k-1}{k}}P_{0k}^{\frac{1}{k}}=\left(\frac{\varphi( \bar{u})}{\varphi(u)}\right)^{\frac{k-1}{k}}P_{0k},\quad P_{1,k-1}\geq P_{k0}^ {\frac{1}{k}}P_{0k}^{\frac{k-1}{k}}=\left(\frac{\varphi(u)}{\varphi(\bar{u})} \right)^{\frac{k-1}{k}}P_{k0}.\]
For \(2\leq k\leq n\), by using integral formulas (2.22) and the assumption (1.11), we get
\[\begin{split} 0&=\int_{\mathbb{S}^{n}}u(P_{0k}-P_{k-1,1} )d\sigma+\int_{\mathbb{S}^{n}}\bar{u}(P_{k0}-P_{1,k-1})d\sigma\\ &\leq\int_{\mathbb{S}^{n}}\left[u\left(1-\left(\frac{\varphi( \bar{u})}{\varphi(u)}\right)^{\frac{k-1}{k}}\right)P_{0k}+\bar{u}\left(1-\left( \frac{\varphi(u)}{\varphi(\bar{u})}\right)^{\frac{k-1}{k}}\right)P_{k0} \right]d\sigma\\ &=\int_{\mathbb{S}^{n}}\left(1-\left(\frac{\varphi(\bar{u})}{ \varphi(u)}\right)^{\frac{k-1}{k}}\right)\left(1-\frac{\bar{u}}{u}\left(\frac {\varphi(\bar{u})}{\varphi(u)}\right)^{\frac{1}{k}}\right)uP_{0k}d\sigma\\ &\leq 0.\end{split} \tag{3.2}\]
Hence the inequalities in (3.2) are actually equalities. It follows that \(W,\bar{W}\) are pairwise proportional and \(u=\bar{u}\). Therefore, we obtain \(M=\bar{M}\).
When \(\varphi\) is \(C^{1}\)-smooth, we can reduce the condition (1.11) to a characterization of \((\log\varphi)^{\prime}\), and then obtain the following corollary.
**Corollary 3.1**.: _Let \(2\leq k\leq n\) be an integer. Let \(\varphi:(0,\infty)\to(0,\infty)\) be a \(C^{1}\) function that satisfies_
\[-\frac{k}{s}<(\log\varphi)^{\prime}(s)<0, \tag{3.3}\]
_for any \(s>0\). Then, for any positive function \(f\in C(\mathbb{S}^{n})\), the smooth admissible solution to (1.10) is unique._
_Remark 3.1_.: Applying the maximum principle, it is clear that the conclusion of Corollary 3.1 still holds when the condition (3.3) is replaced by the condition \((\log\varphi)^{\prime}(s)<-\frac{k}{s}\).
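For example, the power function \(\varphi(s)=s^{1-p}\) has \((\log\varphi)^{\prime}(s)=\frac{1-p}{s}\), so condition (3.3) holds exactly when \(1<p<k+1\), while the complementary condition in Remark 3.1 corresponds to \(p>k+1\); in either case the solution to (1.10) with this choice of \(\varphi\), i.e. to the equation \(u^{1-p}P_{k}(W)=f\), is unique.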
Proof of Theorem 1.4.: Suppose there exist two smooth strictly convex solutions \(M\) and \(\bar{M}\) to (1.19). Denote \(a_{ij}=\sum_{k}h^{ik}\bar{h}_{kj}\) and \(P_{k,n-k}=P_{k,n-k}(W,\bar{W})\). Then we have
\[\det(a_{ij})=\frac{P_{n,0}}{P_{0,n}}=\frac{\bar{u}^{1-p}\bar{r}^{q-n-1}}{u^{1- p}r^{q-n-1}}. \tag{3.4}\]
Using the integral formulas (2.25) and (2.26) with \(k=0\), we obtain
\[\begin{split}&\int_{\mathbb{S}^{n}}\left[\eta(\bar{r})(\bar{u}P_{1,n-1} -uP_{0,n})+\eta(r)(uP_{n-1,1}-\bar{u}P_{n,0})\right]d\sigma\\ =&\frac{1}{n}\int_{\mathbb{S}^{n}}\left[\frac{\eta^{ \prime}(\bar{r})}{\bar{r}}\sum_{i}(u\bar{\tau}_{i}^{2}-\bar{u}\bar{\tau}_{i} \tau_{i})P_{0,n}+\frac{\eta^{\prime}(r)}{r}\sum_{i}(\bar{u}\tau_{i}^{2}-u\tau_ {i}\bar{\tau}_{i})P_{n,0}\right]d\sigma\\ \leq&\frac{1}{n}\int_{\mathbb{S}^{n}}\left[\frac{ \eta^{\prime}(\bar{r})}{\bar{r}}\left(u(\bar{r}^{2}-\bar{u}^{2})-\bar{u}\sqrt{ \bar{r}^{2}-\bar{u}^{2}}\sqrt{r^{2}-u^{2}}\right)P_{0,n}\right.\end{split}\]
\[\left(\bar{\xi}^{-2}\sqrt{\bar{\xi}^{2}-1}-\xi^{-2}\sqrt{\xi^{2}-1} \right)\left(\sqrt{\bar{\xi}^{2}-1}-\sqrt{\xi^{2}-1}\right)\geq 0,\]
where we used the monotonic increasing property of \(g(t)=t^{-2}\sqrt{t^{2}-1}\) for \(1\leq t\leq\sqrt{2}\).
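Indeed, a direct computation gives
\[g^{\prime}(t)=\frac{2-t^{2}}{t^{3}\sqrt{t^{2}-1}}\geq 0,\quad 1<t\leq\sqrt{2},\]
so \(g\) is non-decreasing on \([1,\sqrt{2}]\).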
Consequently, the left hand side of (3.5) is non-negative and the right hand side of (3.5) is non-positive. Then both sides must be zero. It follows that \(W=\lambda\bar{W}\) and \(\tau_{i}=\lambda^{*}\bar{\tau}_{i}\), \(1\leq i\leq n\), for some \(\lambda,\lambda^{*}>0\). If \(p\neq q\), we have \(M=\bar{M}\). If \(p=q\), we have \(M=\lambda\bar{M}\).
Proof of Theorem 1.2.: Without loss of generality, we may assume \(\psi(1,1)=1\); then \(\psi(s,t)\) and \(\eta(t)\) satisfy
\[\left(s-\psi(s,t)\right)\left(\eta(1)-\eta(t)\psi^{k-l-1}(s,t) \right)\leq 0, \tag{3.6}\]
for any \(t>s>0\). Suppose there exists a smooth strictly convex solution \(M\) to (1.12). Let \(\bar{M}\) be the unit sphere; then \(\bar{M}\) is a strictly convex solution to (1.12).
Using the integral formulas (2.25) and (2.26), we have
\[\int_{\mathbb{S}^{n}}\left[\eta(1)(P_{l+1}(W)-uP_{l}(W))+\eta(r)(uP_{ k}(W)-P_{k-1}(W))\right]d\sigma\] \[= \frac{(k-1)!}{n!}\int_{\mathbb{S}^{n}}\frac{\eta^{\prime}(r)}{r} \sum\delta_{i_{1},i_{2}\ldots,i_{n-k+1}}^{i_{1},j_{2},\ldots,j_{n-k+1}}\tau_{i _{1}}^{2}h_{i_{2}j_{2}}\cdots h_{i_{n-k+1}j_{n-k+1}}P_{n}(W)d\sigma.\]
Note that for a fixed \(z_{0}\in\mathbb{S}^{n}\), after choosing a local orthonormal frame such that \(h_{ij}=\kappa_{i}\delta_{ij}\) at \(z_{0}\), we have at \(z_{0}\)
\[\sum\delta_{i_{1},i_{2}\ldots,i_{n-k+1}}^{i_{1},j_{2},\ldots,j_{n-k+1}}\tau_{i _{1}}^{2}h_{i_{2}j_{2}}\cdots h_{i_{n-k+1}j_{n-k+1}}=\sum\tau_{i_{1}}^{2} \kappa_{i_{2}}\cdots\kappa_{i_{n-k+1}}\geq 0.\]
Then it follows from \(\eta^{\prime}(t)\leq 0\) and Lemma 2.3 that
\[0 \geq\int_{\mathbb{S}^{n}}\left[\eta(1)P_{l}(W)\left(\frac{P_{l+1} (W)}{P_{l}(W)}-u\right)+\eta(r)P_{k}(W)\left(u\frac{P_{k-1}(W)}{P_{k}(W)}-1 \right)\right]d\sigma\] \[\geq\int_{\mathbb{S}^{n}}\left[\eta(1)P_{l}(W)\left(\left(\frac{P _{k}(W)}{P_{l}(W)}\right)^{\frac{1}{k-l}}-u\right)+\eta(r)P_{k}(W)\left(u\left( \frac{P_{k}(W)}{P_{l}(W)}\right)^{-\frac{1}{k-l}}-1\right)\right]d\sigma\] \[=\int_{\mathbb{S}^{n}}\left(u-\left(\frac{P_{k}(W)}{P_{l}(W)} \right)^{\frac{1}{k-l}}\right)\left(\eta(r)\left(\frac{P_{k}(W)}{P_{l}(W)} \right)^{\frac{k-l-1}{k-l}}-\eta(1)\right)P_{l}(W)d\sigma.\]
Using (1.12) and the assumption (3.6), we obtain
\[0\geq\int_{\mathbb{S}^{n}}\left(u-\psi(u,r)\right)\left(\eta(r)\psi^{k-l-1}(u,r)-\eta(1)\right)P_{l}(W)d\sigma\geq 0. \tag{3.7}\]
Hence the inequalities in (3.7) are both equalities. It follows that \(W=\lambda I\), and then \(M\) is a sphere. Moreover, if in addition \(\eta^{\prime}(t)<0\), then we have \(\tau_{i}\equiv 0,\ 1\leq i\leq n\). Thus \(r=u\) and \(M\) is an origin-centred sphere.
## 4. Proof of Theorem 1.3
Proof of Theorem 1.3.: Suppose there exists a smooth strictly convex solution \(M\) to (1.12). Let \(\{e_{1},\ldots,e_{n},e_{n+1}\}\) be a local orthonormal frame at \(x\in M\) such that \(e_{n+1}\) is the unit inner normal vector of \(M\) and \(h_{ij}=\kappa_{i}\delta_{ij}\) at \(x\), then we have \(u_{i}=\kappa_{i}\langle x,e_{i}\rangle\) and \(r_{i}=r^{-1}\langle x,e_{i}\rangle\) at \(x\).
Denote \(\tilde{k}=n-k,\ \tilde{l}=n-l\), \(\tilde{\psi}=\psi^{-1}\) and \(P_{i}=P_{i}(\kappa)\) for \(1\leq i\leq n\). Then we have \(0\leq\tilde{k}<\tilde{l}\leq n\), \(\partial_{1}\tilde{\psi}\leq 0\) and \(\partial_{2}\tilde{\psi}\leq 0\). Note that \(P_{i}(W)=P_{i}(\kappa^{-1})=\frac{P_{n-i}(\kappa)}{P_{n}(\kappa)}\), so (1.13) is equivalent to
\[\left(\frac{P_{\tilde{l}}}{P_{\tilde{k}}}\right)^{\frac{1}{l-k}}=\tilde{\psi}( u,r). \tag{4.1}\]
When \(\tilde{k}<\tilde{l}-1\), by using Minkowski formulas (2.16), Lemma 2.3, Proposition 2.8 and (4.1), we obtain
\[0 \leq\int_{M}(P_{\tilde{l}-1}-P_{\tilde{k}}^{\frac{1}{l-k}}P_{\tilde {l}}^{\frac{\tilde{l}-\tilde{k}-1}{l-k}})d\mu+\int_{M}u(P_{\tilde{k}+1}-P_{ \tilde{k}}^{\frac{\tilde{l}-\tilde{k}-1}{l-k}}P_{\tilde{l}}^{\frac{1}{l-k}}) \left(\frac{P_{\tilde{l}}}{P_{\tilde{k}}}\right)^{\frac{\tilde{l}-\tilde{k}-1 }{l-k}}d\mu\] \[=\int_{M}(P_{\tilde{l}-1}-uP_{\tilde{l}})d\mu+\int_{M}(uP_{\tilde{ k}+1}-P_{\tilde{k}})\left(\frac{P_{\tilde{l}}}{P_{\tilde{k}}}\right)^{\frac{ \tilde{l}-\tilde{k}-1}{l-k}}d\mu\] \[\leq\int_{M}(uP_{\tilde{k}+1}-P_{\tilde{k}})\left(\frac{P_{\tilde {l}}}{P_{\tilde{k}}}\right)^{\frac{\tilde{l}-\tilde{k}-1}{l-k}}d\mu=\frac{1}{ (n-\tilde{k})\binom{n}{k}}\int_{M}T_{ij}^{\tilde{k}}\langle x,e_{i}\rangle \left(\left(\frac{P_{\tilde{l}}}{P_{\tilde{k}}}\right)^{\frac{\tilde{l}- \tilde{k}-1}{l-k}}\right)_{j}d\mu\] \[=\frac{\tilde{l}-\tilde{k}-1}{(n-\tilde{k})\binom{n}{\tilde{k}}} \int_{M}T_{ij}^{\tilde{k}}\langle x,e_{i}\rangle(\tilde{\psi}(u,r))_{j}\left( \frac{P_{\tilde{l}}}{P_{\tilde{k}}}\right)^{\frac{\tilde{l}-\tilde{k}-2}{l-k} }d\mu\] \[=\frac{\tilde{l}-\tilde{k}-1}{(n-\tilde{k})\binom{n}{\tilde{k}}} \int_{M}T_{ij}^{\tilde{k}}\langle x,e_{i}\rangle(\kappa_{j}\langle x,e_{j} \rangle\partial_{1}\tilde{\psi}+r^{-1}\langle x,e_{j}\rangle\partial_{2} \tilde{\psi})\left(\frac{P_{\tilde{l}}}{P_{\tilde{k}}}\right)^{\frac{\tilde{ l}-\tilde{k}-2}{l-k}}d\mu\] \[\leq 0. \tag{4.2}\]
When \(\tilde{k}=\tilde{l}-1\geq 1\), we obtain
\[0 \leq\int_{M}(P_{\tilde{k}}^{2}-P_{\tilde{k}-1}P_{\tilde{k}+1}) \frac{1}{P_{\tilde{k}}}d\mu=\int_{M}(P_{\tilde{k}}-uP_{\tilde{k}+1})d\mu+\int _{M}(uP_{\tilde{k}}-P_{\tilde{k}-1})\frac{P_{\tilde{k}+1}}{P_{\tilde{k}}}d\mu\] \[=\int_{M}(uP_{\tilde{k}}-P_{\tilde{k}-1})\frac{P_{\tilde{k}+1}}{P _{\tilde{k}}}d\mu=\frac{1}{(n-\tilde{k}+1)\binom{n}{\tilde{k}-1}}\int_{M}T_{ ij}^{\tilde{k}-1}\langle x,e_{i}\rangle\left(\frac{P_{\tilde{k}+1}}{P_{\tilde{k}}} \right)_{j}d\mu\] \[=\frac{1}{(n-\tilde{k}+1)\binom{n}{\tilde{k}-1}}\int_{M}T_{ij}^{ \tilde{k}-1}\langle x,e_{i}\rangle(\kappa_{j}\langle x,e_{j}\rangle\partial_{ 1}\tilde{\psi}+r^{-1}\langle x,e_{j}\rangle\partial_{2}\tilde{\psi})d\mu\] \[\leq 0. \tag{4.3}\]
When \(\tilde{k}=\tilde{l}-1=0\), since \(n\geq 2\), we obtain
\[0 \leq\int_{M}u(P_{1}^{2}-P_{2})d\mu=\int_{M}(P_{1}-uP_{2})d\mu+ \int_{M}(uP_{1}-1)P_{1}d\mu\] \[=\int_{M}(uP_{1}-1)P_{1}d\mu=\frac{1}{n}\int_{M}\delta_{ij} \langle x,e_{i}\rangle\left(P_{1}\right)_{j}d\mu\] \[=\frac{1}{n}\int_{M}\sum_{i}\langle x,e_{i}\rangle(\kappa_{i} \langle x,e_{i}\rangle\partial_{1}\tilde{\psi}+r^{-1}\langle x,e_{i}\rangle \partial_{2}\tilde{\psi})d\mu\] \[\leq 0. \tag{4.4}\]
Therefore, the above inequalities in (4.2), (4.3) and (4.4) are all equalities. It follows that \(W=\lambda I\), \(\langle x,e_{i}\rangle\equiv 0\) (\(1\leq i\leq n\)), and then \(M\) is an origin-centred sphere.
Furthermore, we study the following equation
\[\sum_{s=1}^{n-k}f_{s}(u,r)\frac{P_{k+s}(\kappa)}{P_{k}(\kappa)}+\sum_{s=1}^{n-k}g_{s}(u,r)\frac{P_{k-1+s}(\kappa)}{P_{k-1}(\kappa)}=\psi(u,r), \tag{4.5}\]
where \(\kappa\) are the principal curvatures of the hypersurface. We can prove the following uniqueness result, which generalizes the result in [60].
**Proposition 4.1**.: _Let \(n\geq 2\) and \(0<k<n\). Suppose \(\psi:(0,\infty)\times(0,\infty)\to(0,\infty)\) is a \(C^{1}\) function with \(\partial_{1}\psi\leq 0,\ \partial_{2}\psi\leq 0\) and at least one of these inequalities is strict. For \(1\leq s\leq n-k\), suppose \(f_{s},\ g_{s}:(0,\infty)\times(0,\infty)\to(0,\infty)\) are \(C^{1}\) functions with \(f_{s}+g_{s}=\phi_{s}\) and \(\partial_{1}\phi_{s}\geq 0,\ \partial_{2}\phi_{s}\geq 0\). Then the smooth strictly convex solution \(M\) to (4.5) must be an origin-centred sphere._
Proof of Proposition 4.1.: Suppose there exists a smooth strictly convex solution \(M\) to (4.5). Let \(\{e_{1},\ldots,e_{n},e_{n+1}\}\) be a local orthonormal frame at \(x\in M\) such that \(e_{n+1}\) is the unit inner normal vector of \(M\) and \(h_{ij}=\kappa_{i}\delta_{ij}\) at \(x\). Denote \(P_{i}=P_{i}(\kappa)\).
Using Minkowski formulas (2.16), Lemma 2.3, Proposition 2.8 and (4.5), we obtain
\[0 \leq\sum_{s=1}^{n-k}\int_{M}(P_{k}P_{k-1+s}-P_{k-1}P_{k+s})\left( \frac{f_{s}}{P_{k}}+u\frac{g_{s}}{P_{k-1}}\right)d\mu\] \[=\sum_{s=1}^{n-k}\int_{M}(P_{k-1+s}-uP_{k+s})(f_{s}+g_{s})d\mu- \sum_{s=1}^{n-k}\int_{M}(P_{k-1}-uP_{k})\left(\frac{f_{s}P_{k+s}}{P_{k}}+u\frac {g_{s}P_{k-1+s}}{P_{k-1}}\right)d\mu\] \[=\sum_{s=1}^{n-k}\int_{M}(P_{k-1+s}-uP_{k+s})\phi_{s}d\mu-\int_{M }(P_{k-1}-uP_{k})\psi d\mu\] \[=-\sum_{s=1}^{n-k}\frac{\int_{M}T_{ij}^{k-1+s}\langle x,e_{i} \rangle(\phi_{s})_{j}d\mu}{(n-k+1-s)\binom{n}{k-1+s}}+\frac{\int_{M}T_{ij}^{k- 1}\langle x,e_{i}\rangle(\psi)_{j}d\mu}{(n-k+1)\binom{n}{k-1}}\] \[\leq 0. \tag{4.6}\]
Hence the above inequalities in (4.6) are both equalities. It follows that \(W=\lambda I\), \(\langle x,e_{i}\rangle\equiv 0\ (1\leq i\leq n)\), and then \(M\) is an origin-centred sphere.
## 5. Proofs of Theorem 1.5 and Theorem 1.6
Proof of Theorem 1.5.: Suppose there exists a smooth, strictly convex, origin-symmetric solution \(M\) to (1.8). Then we have \(dV_{n}=u^{p}|X|^{n+1-q}d\sigma\).
Since \(M\) is origin-symmetric, then \(\int_{\mathbb{S}^{n}}|X|^{\alpha}XdV_{n}=0\) for any \(\alpha\in\mathbb{R}\). Using (2.29), we have
\[n\int_{\mathbb{S}^{n}}|X|^{2\alpha+2}dV_{n}\leq\int_{\mathbb{S}^{n}}|X|^{2 \alpha}u(\Delta u+nu)dV_{n}+(\alpha^{2}+2\alpha)\int_{\mathbb{S}^{n}}|X|^{2 \alpha-1}u\langle\nabla u,\nabla|X|\rangle dV_{n}. \tag{5.1}\]
Note that \(|X|^{2}=u^{2}+|\nabla u|^{2}\) and
\[\langle\nabla u,\nabla|X|^{2}\rangle=\sum_{j}2u_{j}(uu_{j}+\sum_{i}u_{i}u_{ij })=\sum_{i,j}2u_{i}u_{j}(u_{ij}+u\delta_{ij})\geq c|\nabla u|^{2}, \tag{5.2}\]
where \(c>0\) depends on \(M\). By integration by parts, we have
\[\int_{\mathbb{S}^{n}}|X|^{2\alpha}u\Delta udV_{n}=\int_{\mathbb{S}^{ n}}|X|^{2\alpha+n+1-q}u^{p+1}\Delta ud\sigma\] \[= -(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha+n+1-q}u^{p}|\nabla u|^{2} d\sigma-(2\alpha+n+1-q)\int_{\mathbb{S}^{n}}|X|^{2\alpha+n-q}u^{p+1}\langle \nabla u,\nabla|X|\rangle d\sigma\] \[= -(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha}|\nabla u|^{2}dV_{n}-(2 \alpha+n+1-q)\int_{\mathbb{S}^{n}}|X|^{2\alpha-1}u\langle\nabla u,\nabla|X| \rangle d\sigma. \tag{5.3}\]
Substituting (5.3) into (5.1), we have
\[(n+p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha}|\nabla u|^{2}dV_{n}\leq(\alpha^{2}+q -n-1)\int_{\mathbb{S}^{n}}|X|^{2\alpha-1}u\langle\nabla u,\nabla|X|\rangle dV _{n}. \tag{5.4}\]
When \(p\geq-n-1\) and \(q\leq n+1\) with at least one of these being strict, choosing \(\alpha=0\), it follows from (5.2) and (5.4) that \(\nabla u\equiv 0\) on \(\mathbb{S}^{n}\). Hence \(M\) is an origin-centred sphere.
When \(p>-n-1\) and \(q>n+1\), using the same notation as in the proof of Lemma 2.15, we have
\[\int_{\mathbb{S}^{n}}|X|^{2\alpha-1}u\langle\nabla u,\nabla|X| \rangle dV_{n}=\int_{\mathbb{S}^{n}}|X|^{2\alpha-2}u\sum_{i}\lambda_{i}u_{i}^{ 2}dV_{n}\] \[\leq \int_{\mathbb{S}^{n}}|X|^{2\alpha-2}u|\nabla u|^{2}(\Delta u+nu) dV_{n}. \tag{5.5}\]
Assume that \(\alpha+\frac{n+1-q}{2}\geq 0\). By (5.2), we calculate
\[\int_{\mathbb{S}^{n}}|X|^{2\alpha-2}u|\nabla u|^{2}\Delta udV_{n} =\int_{\mathbb{S}^{n}}|X|^{2\alpha+n-1-q}u^{p+1}|\nabla u|^{2}\Delta ud\sigma\] \[= -(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha+n-1-q}u^{p}|\nabla u|^{4} d\sigma-\int_{\mathbb{S}^{n}}|X|^{2\alpha+n-1-q}u^{p+1}\langle\nabla u,\nabla| \nabla u|^{2}\rangle d\sigma\] \[\quad-\left(\alpha+\frac{n-1-q}{2}\right)\int_{\mathbb{S}^{n}}|X |^{2\alpha+n-3-q}u^{p+1}|\nabla u|^{2}\langle\nabla u,\nabla|X|^{2}\rangle d\sigma\] \[= -(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha+n-1-q}u^{p}|\nabla u|^{4} d\sigma+2\int_{\mathbb{S}^{n}}|X|^{2\alpha+n-1-q}u^{p+2}|\nabla u|^{2}d\sigma\] \[\quad-\int_{\mathbb{S}^{n}}\left(|X|^{2}+\left(\alpha+\frac{n-1-q }{2}\right)|\nabla u|^{2}\right)|X|^{2\alpha+n-3-q}u^{p+1}\langle\nabla u, \nabla|X|^{2}\rangle d\sigma\] \[\leq -(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha+n-1-q}u^{p}|\nabla u|^{4} d\sigma+2\int_{\mathbb{S}^{n}}|X|^{2\alpha+n-1-q}u^{p+2}|\nabla u|^{2}d\sigma\] \[= -(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha-2}|\nabla u|^{4}dV_{n}+2 \int_{\mathbb{S}^{n}}|X|^{2\alpha-2}u^{2}|\nabla u|^{2}dV_{n}. \tag{5.6}\]
It follows from (5.4), (5.5) and (5.6) that
\[0\leq \left(-(n+p+1)-(p+1)(\alpha^{2}+q-n-1)\right)\int_{\mathbb{S}^{n}}| X|^{2\alpha-2}|\nabla u|^{4}dV_{n}\] \[+\left(-(n+p+1)+(n+2)(\alpha^{2}+q-n-1)\right)\int_{\mathbb{S}^{ n}}|X|^{2\alpha-2}u^{2}|\nabla u|^{2}dV_{n}. \tag{5.7}\]
Notice that \(-(p+1)<n+2\). We choose \(\alpha=\sqrt{n+1-q+\frac{n+p+1}{n+2}}\) according to the provided conditions for \(p\) and \(q\). This choice ensures that \(\alpha+\frac{n+1-q}{2}\geq 0\). Consequently, (5.7) becomes
\[0\leq-\frac{(n+p+3)(n+p+1)}{n+2}\int_{\mathbb{S}^{n}}|X|^{2\alpha-2}|\nabla u|^ {4}dV_{n}\leq 0. \tag{5.8}\]
Hence \(\nabla u\equiv 0\) on \(\mathbb{S}^{n}\). The proof of Case (i) is complete.
Moreover, using the duality relation of \(L_{p}\) dual Minkowski problems in [13, Theorem 7.1], we have that the solutions to (1.8) with parameters \((p,q)\) are in one-to-one correspondence with the solutions to (1.8) with parameters \((-q,-p)\). Combining this with Case (i), we deduce the proof of Case (ii). We complete the proof of Theorem 1.5.
Proof of Theorem 1.6.: Suppose there exists a smooth strictly convex solution \(M\) to (1.8). Let \(\alpha\) be a constant to be determined such that \(\alpha+\frac{n+1-q}{2}\geq 0\). By (2.29), we have
\[\begin{split} n\int_{\mathbb{S}^{n}}|X|^{2\alpha+2}dV_{n}&\leq n\frac{|\int_{\mathbb{S}^{n}}|X|^{\alpha}XdV_{n}|^{2}}{\int_{\mathbb{S}^{n}}dV_{n}}+\int_{\mathbb{S}^{n}}|X|^{2\alpha}u(\Delta u+nu)dV_{n}\\ &\quad+(\alpha^{2}+2\alpha)\int_{\mathbb{S}^{n}}|X|^{2\alpha-1}u\langle\nabla u,\nabla|X|\rangle dV_{n}.\end{split} \tag{5.9}\]
Using the same procedure as in the proof of Theorem 1.5, we can obtain (5.3), (5.5) and (5.6) again. Since \(q-n-1\geq 0\), we choose \(\alpha=q-n-1\). It follows from the divergence theorem that
\[\begin{split}\int_{\mathbb{S}^{n}}|X|^{\alpha}XdV_{n}& =\int_{\mathbb{S}^{n}}u^{p}Xd\sigma=\int_{\mathbb{S}^{n}}u^{p} \nabla ud\sigma+\int_{\mathbb{S}^{n}}u^{p+1}zd\sigma(z)\\ &=\frac{n+p+1}{n}\int_{\mathbb{S}^{n}}u^{p}\nabla ud\sigma=\frac{ n+p+1}{n}\int_{\mathbb{S}^{n}}|X|^{\alpha}\nabla udV_{n}.\end{split}\]
Combining this with the Cauchy-Schwarz inequality, we have
\[\begin{split} n\frac{|\int_{\mathbb{S}^{n}}|X|^{\alpha}XdV_{n}|^{ 2}}{\int_{\mathbb{S}^{n}}dV_{n}}\leq&\frac{(n+p+1)^{2}}{n} \frac{(\int_{\mathbb{S}^{n}}|X|^{\alpha}|\nabla u|dV_{n})^{2}}{\int_{\mathbb{S} ^{n}}dV_{n}}\\ \leq&\frac{(n+p+1)^{2}}{n}\int_{\mathbb{S}^{n}}|X|^{ 2\alpha}|\nabla u|^{2}dV_{n}.\end{split} \tag{5.10}\]
Substituting (5.3), (5.5), (5.6) and (5.10) into (5.9), we obtain
\[\begin{split} 0\leq&\left(\frac{(p+1)(n+p+1)}{n}-(p+1)(q- n-1)(q-n)\right)\int_{\mathbb{S}^{n}}|X|^{2\alpha-2}|\nabla u|^{4}dV_{n}\\ &+\left(\frac{(p+1)(n+p+1)}{n}+(n+2)(q-n-1)(q-n)\right)\int_{ \mathbb{S}^{n}}|X|^{2\alpha-2}u^{2}|\nabla u|^{2}dV_{n}.\end{split} \tag{5.11}\]
Due to \((p+1)(n+p+1)\leq 0\) and \(n+1\leq q\leq n+\frac{1}{2}+\sqrt{\frac{1}{4}-\frac{(p+1)(n+p+1)}{n(n+2)}}\), we obtain
\[\begin{split}\frac{(p+1)(n+p+1)}{n}-(p+1)(q-n-1)(q-n)\\ \leq&\frac{(p+1)(n+p+1)}{n}+(n+2)(q-n-1)(q-n)\\ \leq& 0.\end{split} \tag{5.12}\]
When \(-n-1<p<-1\), at least one of the inequalities in (5.12) is strict. It follows from (5.11) that \(\nabla u\equiv 0\) on \(\mathbb{S}^{n}\). When \(p=-1\), then \(q=n+1\) and \(\alpha=0\). In this case, (5.11) becomes an equality. It follows from the equality case of (5.10) that \(\nabla u\equiv 0\) on \(\mathbb{S}^{n}\). The proof of Case (i) is complete. Using the duality relation in [13, Theorem 7.1], we deduce the proof of Case (ii) and complete the proof of Theorem 1.6.
## Appendix A Uniqueness of the \(L_{p}\) dual Christoffel-Minkowski problem
In this appendix, by modifying the test functions in [41, Lemma 3.1] for \(k<n\), we provide a uniqueness result of origin-symmetric solutions to the isotropic \(L_{p}\) dual Christoffel-Minkowski problem. Denote \(dV_{k}=u\sigma_{k}d\sigma\). We first present a generalization of Lemma 2.15, which reduces to [41, Lemma 3.2] when \(\alpha=0\).
**Lemma A.1**.: _Let \(1\leq k\leq n\). Assume that \(\alpha\geq 0\). Then we have_
(A.1) \[\begin{split} k\int_{\mathbb{S}^{n}}|X|^{2\alpha+2}dV_{k}&\leq k\frac{|\int_{\mathbb{S}^{n}}|X|^{\alpha}XdV_{k}|^{2}}{\int_{\mathbb{S}^{n}}dV_{k}}+\int_{\mathbb{S}^{n}}|X|^{2\alpha}u\left(\sigma_{1}-(k+1)\frac{\sigma_{k+1}}{\sigma_{k}}\right)dV_{k}\\ &\quad+(\alpha^{2}+2\alpha)\int_{\mathbb{S}^{n}}|X|^{2\alpha-2}|\nabla u|^{2}u\left(\sigma_{1}-(k+1)\frac{\sigma_{k+1}}{\sigma_{k}}\right)dV_{k}.\end{split}\]
Proof.: Let \(\{E_{l}\}_{l=1}^{n+1}\) be an orthonormal basis of \(\mathbb{R}^{n+1}\). Suppose \(\{e_{i}\}_{i=1}^{n}\) is a local orthonormal frame for \(\mathbb{S}^{n}\) such that \((u_{ij}+u\delta_{ij})(z_{0})=\lambda_{i}(z_{0})\delta_{ij}\). For \(l=1,\ldots,n+1\), we define the functions \(f_{l}:\mathbb{S}^{n}\to\mathbb{R}\)
(A.2) \[f_{l}(z)=|X(z)|^{\alpha}\langle X(z),E_{l}\rangle-\frac{\int_{\mathbb{S}^{n}}| X(z)|^{\alpha}\langle X(z),E_{l}\rangle dV_{k}}{\int_{\mathbb{S}^{n}}dV_{k}}.\]
Since \(\int_{\mathbb{S}^{n}}f_{l}dV_{k}=0\) for \(1\leq l\leq n+1\), applying Lemma 2.14 to \(f_{l}\) and summing over \(l\), we have
\[k\sum_{l}\int_{\mathbb{S}^{n}}f_{l}^{2}dV_{k}=k\left[\int_{\mathbb{S}^{n}}|X| ^{2\alpha+2}dV_{k}-\frac{|\int_{\mathbb{S}^{n}}|X|^{\alpha}XdV_{k}|^{2}}{\int _{\mathbb{S}^{n}}dV_{k}}\right]\leq\sum_{l}\int_{\mathbb{S}^{n}}u^{2}\sigma_{k }^{ij}\nabla_{i}f_{l}\nabla_{j}f_{l}d\sigma.\]
Note that \(\sum_{i}\langle e_{i},X\rangle^{2}=|\nabla u|^{2}\) and \(\sum_{i}\frac{\partial\sigma_{k}}{\partial\lambda_{i}}\lambda_{i}^{2}=\sigma _{1}\sigma_{k}-(k+1)\sigma_{k+1}\). It follows from \((u_{ij}+u\delta_{ij})(z_{0})=\lambda_{i}(z_{0})\delta_{ij}\) that \(\nabla_{i}X=\sum_{j}(u_{ij}+u\delta_{ij})e_{j}=\lambda_{i}e_{i}\) at \(z_{0}\). Since \(\alpha\geq 0\), we have
\[\sum_{l,i,j}\sigma_{k}^{ij}\nabla_{i}f_{l}\nabla_{j}f_{l} =\sum_{l,i}\frac{\partial\sigma_{k}}{\partial\lambda_{i}}(\nabla_ {i}(|X|^{\alpha})\langle X,E_{l}\rangle+|X|^{\alpha}\langle\nabla_{i}X,E_{l} \rangle)^{2}\] \[=\sum_{l,i}\frac{\partial\sigma_{k}}{\partial\lambda_{i}}(\alpha |X|^{\alpha-2}\langle\lambda_{i}e_{i},X\rangle\langle X,E_{l}\rangle+|X|^{ \alpha}\langle\lambda_{i}e_{i},E_{l}\rangle)^{2}\] \[=\sum_{i}\frac{\partial\sigma_{k}}{\partial\lambda_{i}}\lambda_{i}^ {2}(|X|^{2\alpha}+(\alpha^{2}+2\alpha)|X|^{2\alpha-2}\langle e_{i},X\rangle^{2})\] \[\leq(|X|^{2\alpha}+(\alpha^{2}+2\alpha)|X|^{2\alpha-2}|\nabla u|^{ 2})(\sigma_{1}\sigma_{k}-(k+1)\sigma_{k+1}).\]
Therefore, we obtain
\[k\left[\int_{\mathbb{S}^{n}}|X|^{2\alpha+2}dV_{k}-\frac{|\int_{\mathbb{S}^{n}}|X|^{\alpha}XdV_{k}|^{2}}{\int_{\mathbb{S}^{n}}dV_{k}}\right]\]
\[\leq \int_{\mathbb{S}^{n}}u^{2}(|X|^{2\alpha}+(\alpha^{2}+2\alpha)|X|^{2 \alpha-2}|\nabla u|^{2})(\sigma_{1}\sigma_{k}-(k+1)\sigma_{k+1})d\sigma\] \[= \int_{\mathbb{S}^{n}}u(|X|^{2\alpha}+(\alpha^{2}+2\alpha)|X|^{2 \alpha-2}|\nabla u|^{2})\left(\sigma_{1}-(k+1)\frac{\sigma_{k+1}}{\sigma_{k}} \right)dV_{k}.\]
This completes the proof of Lemma A.1.
Now we consider the following isotropic \(L_{p}\) dual Christoffel-Minkowski problem
(A.3) \[u^{1-p}r^{q-k-1}\sigma_{k}(W)=1\quad\text{on }\mathbb{S}^{n}.\]
Corollary 1.5 states the uniqueness of solutions to (A.3) when \(p\geq 1\) and \(q\leq k+1\) with at least one of these inequalities being strict. As for the case with the origin-symmetric assumption, in [41, Theorem 1.6], by choosing \(\varphi(s,t)=s^{p}t^{k+1-q}\), we obtain
**Theorem A.2** ([41]).: _Let \(n\geq 2\) and \(1\leq k\leq n-1\). Suppose \(p\geq 1-k\) and \(q\leq k+1\). Then the smooth strictly convex origin-symmetric solution \(M\) to (A.3) must be an origin-centred sphere._
As an extension of the above theorem, by applying Lemma A.1 for \(k<n\), we have the following result:
**Theorem A.3**.: _Let \(n\geq 2\) and \(1\leq k\leq n-1\). Suppose \(p\geq 1-k\) and \(q\leq k+1+2\alpha_{*}\), where \(\alpha_{*}\) is the maximum value satisfying the following conditions:_
(A.4) \[\left\{\begin{array}{l}0\leq\alpha\leq 1,\\ -2\alpha(p+1)+(\alpha^{2}+2\alpha)(3p+1)\leq p+k-1,\\ 2\alpha(n+2)+(\alpha^{2}+2\alpha)(2n+k+6)\leq p+k-1.\end{array}\right.\]
_Then the smooth strictly convex origin-symmetric solution \(M\) to (A.3) must be an origin-centred sphere._
_Remark A.1_.: When \(\alpha_{*}=0\), Theorem A.3 reduces to Theorem A.2.
Proof of Theorem A.3.: Suppose there exists a smooth, strictly convex, origin-symmetric solution \(M\) to (A.3). For convenience, we denote \(\tau=q-k-1\). Then we have \(dV_{k}=u^{p}|X|^{-\tau}d\sigma\).
Since \(\int_{\mathbb{S}^{n}}|X|^{\alpha}XdV_{k}=0\) for any \(\alpha\in\mathbb{R}\), it follows from (A.1) that for \(\alpha\geq 0\)
(A.5) \[\begin{split} k\int_{\mathbb{S}^{n}}|X|^{2\alpha+2}dV_{k}& \leq\int_{\mathbb{S}^{n}}|X|^{2\alpha}u\left(\Delta u+nu-(k+1) \frac{\sigma_{k+1}}{\sigma_{k}}\right)dV_{k}\\ &\quad+(\alpha^{2}+2\alpha)\int_{\mathbb{S}^{n}}|X|^{2\alpha-2}| \nabla u|^{2}u\left(\Delta u+nu-(k+1)\frac{\sigma_{k+1}}{\sigma_{k}}\right) dV_{k}.\end{split}\]
Since \(\alpha_{*}\geq 0\) and \(\alpha_{*}-\frac{\tau}{2}\geq 0\), by using (5.2), we obtain
(A.6) \[\begin{split}&\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}}u\Delta udV_{k} =\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-\tau}u^{p+1}\Delta ud\sigma\\ =&-(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-\tau}u^ {p}|\nabla u|^{2}d\sigma-\left(\alpha_{*}-\frac{\tau}{2}\right)\int_{\mathbb{ S}^{n}}|X|^{2\alpha_{*}-\tau}u^{p+1}\langle\nabla u,\nabla|X|^{2}\rangle d \sigma\\ \leq&-(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}- \tau}u^{p}|\nabla u|^{2}d\sigma\\ =&-(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}}| \nabla u|^{2}dV_{k}\end{split}\]
and
(A.7) \[\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{2}u\Delta udV_{k} =\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-\tau-2}|\nabla u|^{2}u^{p+1}\Delta ud\sigma\] \[= -(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-\tau-2}u^{p}|\nabla u| ^{4}d\sigma-\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-\tau-2}u^{p+1}\langle\nabla u,\nabla|\nabla u|^{2}\rangle d\sigma\] \[-\left(\alpha_{*}-\frac{\tau}{2}-1\right)\int_{\mathbb{S}^{n}}|X| ^{2\alpha_{*}-\tau-4}u^{p+1}|\nabla u|^{2}\langle\nabla u,\nabla|X|^{2}\rangle d\sigma\] \[= -(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-\tau-2}u^{p}|\nabla u |^{4}d\sigma+2\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-\tau-2}u^{p+2}|\nabla u|^{ 2}d\sigma\] \[-\int_{\mathbb{S}^{n}}\left(|X|^{2}+\left(\alpha_{*}-\frac{\tau}{ 2}-1\right)|\nabla u|^{2}\right)|X|^{2\alpha_{*}-\tau-4}u^{p+1}\langle\nabla u,\nabla|X|^{2}\rangle d\sigma\] \[\leq -(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-\tau-2}u^{p}|\nabla u |^{4}d\sigma+2\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-\tau-2}u^{p+2}|\nabla u|^{ 2}d\sigma\] \[= -(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha-2}|\nabla u|^{4}dV_{k}+2 \int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}u^{2}|\nabla u|^{2}dV_{k}.\]
Suppose that \(\{e_{i}\}_{i=1}^{n}\) is a local orthonormal frame for \(\mathbb{S}^{n}\) such that \((u_{ij}+u\delta_{ij})(z_{0})=\lambda_{i}(z_{0})\delta_{ij}\), then \(\langle X,X_{i}\rangle=\langle X,\lambda_{i}e_{i}\rangle=\lambda_{i}u_{i}\). It is well-known that \(\sigma_{k+1}^{ii}=\sigma_{k}-\lambda_{i}\sigma_{k}^{ii}\) at \(z_{0}\). Then we have at \(z_{0}\)
\[0\leq\sum_{i,j}u_{i}u_{j}\sigma_{k+1}^{ij}=\sum_{i}u_{i}^{2}(\sigma_{k}- \lambda_{i}\sigma_{k}^{ii})\leq|\nabla u|^{2}\sigma_{k},\]
and
\[\sum_{i,j}\langle X,X_{i}\rangle u_{j}\sigma_{k+1}^{ij}=\sum_{i}\lambda_{i}u_{ i}^{2}(\sigma_{k}-\lambda_{i}\sigma_{k}^{ii})\leq|\nabla u|^{2}\sigma_{1} \sigma_{k}.\]
Note that \(\nabla_{i}\sigma_{k+1}^{ij}=0\) and \((k+1)\sigma_{k+1}=\sigma_{k+1}^{ij}(u_{ij}+u\delta_{ij})\). Since \(0\leq\alpha_{*}\leq 1\), by using (A.7), we calculate
(A.8) \[(k+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}}u\frac{\sigma_{k+1}}{ \sigma_{k}}dV_{k}\] \[= (k+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}}u^{2}\sigma_{k+1}d \sigma=\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}}u^{2}\sigma_{k+1}^{ij}(u_{ij}+u \delta_{ij})d\sigma\] \[= -2\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}}uu_{i}u_{j}\sigma_{k+1}^{ ij}d\sigma-2\alpha_{*}\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}u^{2}\langle X,X_{i} \rangle u_{j}\sigma_{k+1}^{ij}d\sigma+\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}}u^ {3}\sigma_{k+1}^{ij}\delta_{ij}d\sigma\] \[\geq -2\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}}|\nabla u|^{2}dV_{k}-2 \alpha_{*}\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}u|\nabla u|^{2}\sigma_{1}dV_{ k}+(n-k)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}}u^{2}dV_{k}\] \[\geq -2\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}}|\nabla u|^{2}dV_{k}+2 \alpha_{*}(p+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{4}dV_{k}\] \[-2\alpha_{*}(n+2)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}u^{2}| \nabla u|^{2}dV_{k}+(n-k)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}}u^{2}dV_{k}\]
and
(A.9) \[(k+1)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{2}u\frac{ \sigma_{k+1}}{\sigma_{k}}dV_{k}=\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u |^{2}u^{2}\sigma_{k+1}^{ij}(u_{ij}+u\delta_{ij})d\sigma\\ = -2\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{2}uu_{i}u_{j} \sigma_{k+1}^{ij}d\sigma-2(\alpha_{*}-1)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-4 }|\nabla u|^{2}u^{2}\langle X,X_{i}\rangle u_{j}\sigma_{k+1}^{ij}d\sigma\\ -\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}u^{2}(|\nabla u|^{2})_{i} u_{j}\sigma_{k+1}^{ij}d\sigma+\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u |^{2}u^{3}\sigma_{k+1}^{ij}\delta_{ij}d\sigma\\ = -2\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{2}uu_{i}u_{j }\sigma_{k+1}^{ij}d\sigma-2\int_{\mathbb{S}^{n}}\left((\alpha_{*}-1)|\nabla u |^{2}+|X|^{2}\right)|X|^{2\alpha_{*}-4}u^{2}\langle X,X_{i}\rangle u_{j} \sigma_{k+1}^{ij}d\sigma\\ +2\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}u^{3}u_{i}u_{j}\sigma_ {k+1}^{ij}d\sigma+(n-k)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{2} u^{2}dV_{k}\\ \geq -2\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{4}dV_{k}-2 \int_{\mathbb{S}^{n}}\left((\alpha_{*}-1)|\nabla u|^{2}+|X|^{2}\right)|X|^{2 \alpha_{*}-4}u|\nabla u|^{2}\sigma_{1}dV_{k}\\ +(n-k)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{2}u^{2} dV_{k}\\ \geq 2p\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{4}dV_{k} -(n+k+4)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{2}u^{2}dV_{k}.\]
Substituting (A.6), (A.7), (A.8) and (A.9) into (A.5), we obtain
(A.10) \[0\leq\left(1-k-p-2\alpha_{*}(p+1)+(\alpha_{*}^{2}+2\alpha_{*})(3p +1)\right)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{4}dV_{k}\\ +\left(1-k-p+2\alpha_{*}(n+2)+(\alpha_{*}^{2}+2\alpha_{*})(2n+k+6) \right)\int_{\mathbb{S}^{n}}|X|^{2\alpha_{*}-2}|\nabla u|^{2}u^{2}dV_{k}.\]
According to the conditions (A.4), it can be deduced that the right hand side of (A.10) is less than or equal to \(0\). Hence \(\nabla u\equiv 0\) on \(\mathbb{S}^{n}\), and then \(M\) is an origin-centred sphere. We complete the proof of Theorem A.3.
|
2305.19765 | A Bayesian Approach To Analysing Training Data Attribution In Deep
Learning | Training data attribution (TDA) techniques find influential training data for
the model's prediction on the test data of interest. They approximate the
impact of down- or up-weighting a particular training sample. While
conceptually useful, they are hardly applicable to deep models in practice,
particularly because of their sensitivity to different model initialisation. In
this paper, we introduce a Bayesian perspective on the TDA task, where the
learned model is treated as a Bayesian posterior and the TDA estimates as
random variables. From this novel viewpoint, we observe that the influence of
an individual training sample is often overshadowed by the noise stemming from
model initialisation and SGD batch composition. Based on this observation, we
argue that TDA can only be reliably used for explaining deep model predictions
that are consistently influenced by certain training data, independent of other
noise factors. Our experiments demonstrate the rarity of such noise-independent
training-test data pairs but confirm their existence. We recommend that future
researchers and practitioners trust TDA estimates only in such cases. Further,
we find a disagreement between ground truth and estimated TDA distributions and
encourage future work to study this gap. Code is provided at
https://github.com/ElisaNguyen/bayesian-tda. | Elisa Nguyen, Minjoon Seo, Seong Joon Oh | 2023-05-31T11:52:20Z | http://arxiv.org/abs/2305.19765v2 | # A Bayesian Perspective On Training Data Attribution
###### Abstract
Training data attribution (TDA) techniques find influential training data for the model's prediction on the test data of interest. They approximate the impact of down- or up-weighting a particular training sample. While conceptually useful, they are hardly applicable in practice, particularly because of their sensitivity to different model initialisation. In this paper, we introduce a Bayesian perspective on the TDA task, where the learned model is treated as a Bayesian posterior and the TDA estimates as random variables. From this novel viewpoint, we observe that the influence of an individual training sample is often overshadowed by the noise stemming from model initialisation and SGD batch composition. Based on this observation, we argue that TDA can only be reliably used for explaining model predictions that are consistently influenced by certain training data, independent of other noise factors. Our experiments demonstrate the rarity of such noise-independent training-test data pairs but confirm their existence. We recommend that future researchers and practitioners trust TDA estimates only in such cases. Further, we find a disagreement between ground truth and estimated TDA distributions and encourage future work to study this gap. Code is provided at [https://github.com/ElisaNguyen/bayesian-tda](https://github.com/ElisaNguyen/bayesian-tda).
## 1 Introduction
Understanding how machine learning models arrive at decisions is desirable for social, legal and ethical reasons, particularly for opaque deep learning models [1]. One approach to explanations is the data-centric approach of training data attribution (TDA). As the name suggests, TDA finds attributing training samples for a model decision, uncovering which part of the training data is relevant. The attribution \(\tau\) of a training sample \(z_{j}\) on another sample \(z\) is usually defined as the change of model loss \(\mathcal{L}\) on \(z\) when the model is retrained without \(z_{j}\)[2]:
\[\tau(z_{j},z):=\mathcal{L}(z;\theta_{\backslash j})-\mathcal{L}(z;\theta) \tag{1}\]
where \(\theta\) is a model trained on the entire dataset \(\mathcal{D}\) and \(\theta_{\backslash j}\) is a model trained on the same set without \(z_{j}\). Since the direct computation of Equation 1 is expensive, various TDA techniques for approximating the quantity have been proposed, such as influence functions [3] and TracIn [4]. Their approximations are often based on some form of inner product between the parameter gradients \(\nabla_{\theta}\mathcal{L}(z;\theta)\) and \(\nabla_{\theta}\mathcal{L}(z_{j};\theta)\).
Knowing how training samples attribute to a model decision provides an actionable understanding of the training data distribution, especially in cases of model error. TDA methods can identify the training samples that are most relevant to an error and therefore enable users to understand why the error occurred (e.g. due to domain mismatch of test and training data or wrongly labelled training data) [3]. Additionally, TDA gives them the tool to address the errors by e.g. changing the model directly through the training data. Even in non-erroneous cases, understanding the attributing training data may enable users affected by model decisions to contest the decisions if the attributing training data is noisy or of low quality [5].
At the same time, TDA methods, especially influence functions [3], have been criticised for their fragility when applied to deep models [6; 7; 8]. The main reasons are model complexity and the stochasticity of deep model training. While the former poses a challenge specifically for influence functions as they rely on strong convexity assumptions, the latter is a more general challenge [6; 9]. The randomness inherent to the training process does not only lead to variation in the learned model parameters but also in TDA scores, which makes them untrustworthy. Hence, K & Sogaard [9] recommend using expected TDA scores for increased stability.
We argue that solely considering the expectation is not sufficient to ensure the reliability of TDA but requires inspecting the variance, too. We introduce a Bayesian perspective on the TDA task, noticing that there is no deterministic mapping from a dataset \(\mathcal{D}\) to the corresponding model \(\theta\) for deep neural networks. The learned model depends on the initialisation and batch composition in the stochastic gradient descent (SGD) optimiser. We capture the resulting randomness via Bayesian model posterior \(p(\theta|\mathcal{D})\) over the parameter space [10; 11; 12]. In turn, the TDA estimate (Equation 1) is a random variable that depends on two posteriors, \(p(\theta|\mathcal{D})\) and \(p(\theta_{\setminus j}|\mathcal{D}_{\setminus j})\).
This viewpoint leads to a few insights into the practical usage and evaluation of TDA techniques. We confirm quantitatively that the ground-truth influence \(\tau(z_{j},z)\) is often dominated by the noise: \(\sqrt{\text{Var}(\tau)}>\mathbb{E}|\tau|\). We argue that it is practically difficult to apply any TDA technique on pairs \((z_{j},z)\) whose ground-truth attributions \(\tau(z_{j},z)\) are noisy in the first place. Likewise, any evaluation of TDA methods on such high-variance pairs would not be reliable.
Nonetheless, we are optimistic that TDA techniques are useful in practice, particularly for train-test pairs with high signal-to-noise ratios: \(\sqrt{\text{Var}(\tau)}\ll\mathbb{E}|\tau|\). We observe that such pairs are rare but consistently present in multiple experiments. We recommend that researchers and practitioners confine their usage to scenarios where the signal-to-noise ratios are expected to be large enough.
Our contributions are as follows: (1) Bayesian formulation of the training data attribution (TDA) task. (2) Observation that the ground-truth TDA values are often unreliable and highly variable. (3) Recommendation for the community to use the TDA tools only when the expected noise level is low. (4) Experimental analysis of the contributing factors to the variance of ground-truth TDA values. (5) Observation that the estimated TDA is closer to a local perturbation in the model, while ground-truth TDA is more global.
## 2 Background
We cover the background materials for the paper, including the concept, method, and evaluation of training data attribution (TDA) methods and Bayesian deep learning.
### Training data attribution (TDA)
We introduce the TDA task, a few representative TDA methods, and existing evaluation strategies.
**TDA task.** Given a model \(f_{\theta}\) parametrised by \(\theta\), a training set \(\mathcal{D}:=\{z_{1},\cdots,z_{N}\}\), and a test sample \(z\), one is interested in the impact of a training sample \(z_{j}\) on the model's behaviour on the test sample \(z\). In the TDA context, one is often interested in the counterfactual change in the loss value for \(z\) after **leave-one-out (LOO)** training, when \(z_{j}\) is excluded from the training set (Equation 1). TDA has been considered in different use cases, such as understanding the bias in word embeddings [13], fact tracing in language model outputs [14] and measuring the robustness of model predictions [5].
**TDA methods.** The conceptually most straightforward way to compute the difference due to LOO training (Equation 1) is to compute it directly. However, this is computationally expensive, as it involves re-running the learning algorithm to obtain \(\theta_{\setminus j}\) for every \(j\). This gives rise to various TDA techniques that find _approximate_ estimates \(\tau^{\prime}(z_{j},z)\) of LOO.
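To make the cost of the direct computation concrete, the following sketch spells out Equation 1 with full retraining; `train` and `per_sample_loss` are hypothetical stand-ins for the user's own training routine and per-sample loss, not functions of any particular library.

```python
from typing import Callable, Sequence

def loo_attribution(
    dataset: Sequence,                                    # training set D = [z_1, ..., z_N]
    j: int,                                               # index of the sample z_j to leave out
    z_test,                                               # test sample z of interest
    train: Callable[[Sequence], object],                  # returns a model fitted on the given data
    per_sample_loss: Callable[[object, object], float],   # L(z; theta)
) -> float:
    """Ground-truth LOO attribution: tau(z_j, z) = L(z; theta without z_j) - L(z; theta)."""
    theta = train(dataset)                                          # model trained on D
    theta_wo_j = train(list(dataset[:j]) + list(dataset[j + 1:]))   # model trained on D without z_j
    return per_sample_loss(theta_wo_j, z_test) - per_sample_loss(theta, z_test)
```

One such retraining is needed for every training index \(j\), which is exactly what the approximate methods below avoid.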
A prominent example of such approximation is the **influence function (IF)** method [3] by Koh & Liang. Under strong smoothness assumptions, they have approximated Equation 1 by:
\[\tau^{\prime}(z_{j},z):=-\nabla_{\theta}\mathcal{L}(z;\theta)^{\top}H_{\theta }^{-1}\nabla_{\theta}\mathcal{L}(z_{j};\theta) \tag{2}\]
where \(\nabla_{\theta}\mathcal{L}(z;\theta)\) and \(\nabla_{\theta}\mathcal{L}(z_{j};\theta)\) refer to the parameter gradients of \(f_{\theta}\) for \(z\) and \(z_{j}\) respectively. Recognising the difficulty of scaling up the inverse Hessian computation \(H_{\theta}^{-1}\) and the high dimensionality of operations in Equation 2, subsequent papers have proposed further approximations to speed up the computation [15; 16]. Charpiat _et al._[17] have analysed the influence of \(z_{j}\) on \(z\) by dropping the need to compute the Hessian and formulating influence as the loss change when an **additional training step (ATS)** on \(z_{j}\) is taken:
\[\tau(z_{j},z):=\mathcal{L}(z;\theta_{+j})-\mathcal{L}(z;\theta) \tag{3}\]
where \(\theta_{+j}\) is a learned model parameter with \(\mathcal{D}\) and an additional step on \(z_{j}\). They propose two approximations:
\[\textbf{Grad-Dot (GD):}\quad\tau^{\prime}(z_{j},z):=\nabla_{\theta} \mathcal{L}(z_{j};\theta)^{\top}\nabla_{\theta}\mathcal{L}(z;\theta) \tag{4}\]
\[\textbf{Grad-Cos (GC):}\quad\tau^{\prime}(z_{j},z):=\frac{\nabla_{\theta} \mathcal{L}(z_{j};\theta)}{\|\nabla_{\theta}\mathcal{L}(z_{j};\theta)\|}^{ \top}\frac{\nabla_{\theta}\mathcal{L}(z;\theta)}{\|\nabla_{\theta}\mathcal{ L}(z;\theta)\|} \tag{5}\]
This method is closely linked to **TracIn**[4] which computes the Grad-Dot not just at the end of the training, but averages the regular Grad-Dot similarities throughout the model training iterations. We note later in our analysis that within our Bayesian treatment of TDA, the TracIn method coincides conceptually with the Grad-Dot method. In our analysis, we study the sensitivity of LOO and the above TDA methods against noise.
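As a minimal illustration, the gradient-similarity estimates in Equations 4 and 5 can be computed with PyTorch as sketched below; `model` and `loss_fn` are assumed to be an arbitrary `nn.Module` and a differentiable loss, and a sample \(z\) is assumed to be a tensor pair `(x, y)`. The Hessian-based estimate of Equation 2 would additionally require an (approximate) inverse-Hessian-vector product and is omitted here.

```python
import torch
import torch.nn.functional as F

def flat_grad(model: torch.nn.Module, loss_fn, z) -> torch.Tensor:
    """Flattened parameter gradient of the loss at a single sample z = (x, y)."""
    x, y = z
    loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def grad_dot(model, loss_fn, z_train, z_test) -> float:
    # Equation 4: inner product of the two parameter gradients.
    return torch.dot(flat_grad(model, loss_fn, z_train),
                     flat_grad(model, loss_fn, z_test)).item()

def grad_cos(model, loss_fn, z_train, z_test) -> float:
    # Equation 5: cosine similarity of the two parameter gradients.
    return F.cosine_similarity(flat_grad(model, loss_fn, z_train),
                               flat_grad(model, loss_fn, z_test), dim=0).item()
```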
**TDA evaluation.** The primary aim in evaluating TDA methods is to measure how well they approximate the ground-truth LOO values. This is often done by measuring the correlation between the estimates from each TDA method and the ground-truth LOO values (Equation 1) [3; 6; 7; 9]. They use either a linear (Pearson) correlation or a rank (Spearman) correlation over _a small number_ of train-test sample pairs \((z_{j},z)\) due to the computational burden of computing the actual LOO values, especially for larger models. Usually, a few samples \(z\) are chosen for a comparison against LOO, e.g. Koh & Liang [3] report results for one \(z\) and Guo _et al._[15] for 10 samples \(z\). Others have adopted indirect evaluation metrics such as the retrieval performance of mislabelled or poisoned training data based on the TDA estimates [3; 18; 15; 9; 16]. In this work, we adopt the Pearson and Spearman correlation metrics and discuss ways to extend them when the target (LOO) and estimates (TDA) are both random variables.
### Bayesian deep learning
Bayesian machine learning treats the learned model as a posterior distribution over the parameter space, rather than a single point:
\[p(\theta|\mathcal{D})=p(\mathcal{D}|\theta)p(\theta)/p(\mathcal{D}). \tag{6}\]
Bayesian ML nicely captures the intuition that the mapping from a training set \(\mathcal{D}\) to the learned model \(p(\theta|\mathcal{D})\) is not a deterministic mapping, especially for non-convex models like deep neural networks (DNNs). Depending on the initialisation, among other factors, DNN training almost always ends up learning vastly different parameters.
The estimation of the true posterior is indeed difficult for complex models like DNNs. The field of Bayesian deep learning is dedicated to the interpretation of certain random elements in DNN training as sources of randomness for the approximated Bayesian posteriors. For example, if Dropout [19] is used for training a model, it may be used at test time to let users sample \(\theta\) from the posterior distribution \(p(\theta|\mathcal{D})\)[20]. More generally-used components like stochastic gradient descent (SGD) have also been interpreted as sources of randomness. The random walk induced by SGD iterations in the parameter space can be viewed as a Markov Chain Monte-Carlo sampler from the posterior distribution, after a slight modification of the optimisation algorithm (Stochastic Gradient Langevin Dynamics [10]). Similarly, the last few iterations of the vanilla SGD iterations may also be treated as samples from the posterior, resulting in more widely applicable Bayesian methods like Stochastic Weight Averaging (SWA) [12; 21]. Finally, the random initialisation of DNNs has also been exploited for modelling posterior randomness; training multiple versions of the same model with different initial parameters may be interpreted as samples from the posterior [11]. We show in the next section how the Bayesian viewpoint will help us model the sources of stochasticity for TDA estimates.
## 3 A Bayesian perspective on training data attribution
Training data attribution (TDA) \(\tau(z_{j},z)\) is defined as the attribution of one training sample \(z_{j}\) to another sample \(z\) in terms of how a target metric like the loss of a sample \(\mathcal{L}(z,\theta)\) changes when
the model is trained without \(z_{j}\) (Equation 1). We note here that according to the definition, we are interested in the impact of the change in the _dataset_ from \(\mathcal{D}\) to \(\mathcal{D}_{\setminus j}\), rather than the change in the model parameter. From a Bayesian perspective, a change in the training dataset leads to a shift in the posterior distribution, \(p(\theta|\mathcal{D})\to p(\theta_{\setminus j}|\mathcal{D}_{\setminus j})\), leading to the definition of TDA as a random variable:
\[\tau(z_{j},z):=\mathcal{L}(z;\mathcal{D}_{\setminus j})-\mathcal{L}(z; \mathcal{D})=\mathcal{L}(z;\theta_{\setminus j})-\mathcal{L}(z;\theta) \tag{7}\]
where \(\theta\sim p(\theta|\mathcal{D})\) and \(\theta_{\setminus j}\sim p(\theta_{\setminus j}|\mathcal{D}_{\setminus j})\). This interpretation is more natural, given the non-uniqueness of the mapping from a training dataset \(\mathcal{D}\) to the optimal model parameter \(\theta\) for general, non-convex models like DNNs.
**Sampling TDA values.** One could plug in various Bayesian DL techniques (§2.2) to compute samples of \(p(\theta|\mathcal{D})\), which can be used to get the samples of \(\tau(z_{j},z)\). In our work, we use the Stochastic Weight Averaging (SWA) [12; 21] and Deep Ensemble (DE) [11] which are applicable to a wide class of deep models. More specifically, we obtain \(T\) samples \(\theta^{(1)},\cdots,\theta^{(T)}\sim p(\theta|\mathcal{D})\) either by taking the last \(T\) model checkpoints of the SGD iterations (SWA) or by taking the last model checkpoints from \(T\) different model initialisations (DE). The same is done for the counterfactual posterior \(\theta^{(1)}_{\setminus j},\cdots,\theta^{(T)}_{\setminus j}\sim p(\theta_{\setminus j}|\mathcal{D}_{\setminus j})\).
**Statistical analysis on TDA.** The simplest statistics for the TDA \(\tau(z_{j},z)\) are the mean and variance:
\[\mathbb{E}[\tau(z_{j},z)] =\frac{1}{T}\sum_{t}\mathcal{L}(z;\theta^{(t)}_{\setminus j})- \mathcal{L}(z;\theta^{(t)}) \tag{8}\] \[\text{Var}[\tau(z_{j},z)] =\frac{1}{T^{2}}\sum_{t,t^{\prime}}\Big{(}\mathcal{L}(z;\theta^{ (t)}_{\setminus j})-\mathcal{L}(z;\theta^{(t^{\prime})})-\mathbb{E}[\tau(z_{ j},z)]\Big{)}^{2} \tag{9}\]
Our main interest lies in whether the influence of the training data \(z_{j}\) on the test data \(z\) is statistically significant and not dominated by the inherent noise. For this purpose, we design a Student t-test for quantifying the statistical significance. Our null and alternative hypotheses are:
\[H_{0}:\tau(z_{j},z)=0\qquad H_{1}:\tau(z_{j},z)\neq 0. \tag{10}\]
We consider the test statistic based on sample mean and variance:
\[t=\frac{\mathbb{E}[\tau(z_{j},z)]}{\sqrt{\text{Var}[\tau(z_{j},z)]/T}} \tag{11}\]
where \(T\) is the number of posterior samples; the corresponding two-sided p-value is computed from the Student-t distribution.
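As a sketch, the statistics above and the resulting p-value can be computed from per-checkpoint losses as follows; the arrays are assumed to hold \(\mathcal{L}(z;\theta^{(t)})\) and \(\mathcal{L}(z;\theta^{(t)}_{\setminus j})\) for \(T\) posterior samples each, and the degrees of freedom are set to \(T-1\) here.

```python
import numpy as np
from scipy import stats

def tda_test(losses_full: np.ndarray, losses_wo_j: np.ndarray):
    """Mean (Equation 8), variance (Equation 9) and two-sided p-value for H0: tau = 0."""
    T = len(losses_full)
    mean_tau = float(np.mean(losses_wo_j - losses_full))       # Equation 8
    diffs = losses_wo_j[:, None] - losses_full[None, :]        # all T x T pairings
    var_tau = float(np.mean((diffs - mean_tau) ** 2))          # Equation 9
    t_stat = mean_tau / np.sqrt(var_tau / T)                   # Equation 11
    p_value = 2.0 * stats.t.sf(abs(t_stat), df=T - 1)          # two-sided p-value
    return mean_tau, var_tau, p_value
```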
**Evaluating TDA as a random variable.** Previously, the LOO-based TDA values \(\tau(z_{j},z)\) and the estimates from various approximate TDA methods \(\tau^{\prime}(z_{j},z)\) were compared via correlation measures like Pearson or Spearman. Our treatment of those quantities as 1-D random variables poses a novel challenge for evaluation because there exists no inborn notion of ordering among 1-D random variables. We address the challenge by examining the approximation ability of TDA methods for both the first and second moments of the true TDA values \(\tau(z_{j},z)\). More specifically, we compute the Pearson and Spearman correlations of both the mean (Equation 8) and the variance (Equation 9) between the ground-truth \(\tau(z_{j},z)\) and estimated TDA \(\tau^{\prime}(z_{j},z)\) values across multiple train-test pairs \((z_{j},z)\).
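A sketch of this evaluation, assuming 1-D arrays with one entry per train-test pair for the ground-truth and estimated means and variances:

```python
from scipy.stats import pearsonr, spearmanr

def moment_correlations(loo_mean, loo_var, est_mean, est_var):
    """Pearson/Spearman agreement of the first and second moments across (z_j, z) pairs."""
    mean_pearson, _ = pearsonr(loo_mean, est_mean)
    mean_spearman, _ = spearmanr(loo_mean, est_mean)
    var_pearson, _ = pearsonr(loo_var, est_var)
    var_spearman, _ = spearmanr(loo_var, est_var)
    return {
        "mean_pearson": mean_pearson,
        "mean_spearman": mean_spearman,
        "var_pearson": var_pearson,
        "var_spearman": var_spearman,
    }
```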
## 4 Experiments
We introduce our experimental settings, present analyses on factors contributing to the reliability of TDA values, compare TDA methods, and draw suggestions on the evaluation practice of TDA.
### Implementation details
We illustrate the specific details of our implementation. See the Appendix for further information.
**TDA methods.** We study different TDA methods from a Bayesian perspective. We test the methods introduced in §2.1 for estimating TDA: **influence functions (IF)**[3], **Grad-Dot (GD)** and **Grad-Cos (GC)**[17]. We use the PyTorch implementation of IF from Guo _et al._[15] and modify it for our models. As the ground-truth target, we consider **Leave-one-out training (LOO)**[3]. For LOO, we remove \(z_{j}\) from the training set \(\mathcal{D}\) by zeroing out the weight of sample \(z_{j}\) in the loss. Additionally, we include Charpiat _et al._'s [17] notion of TDA that a training data point \(z_{j}\) attributes more if an **additional training step (ATS)** on it changes the test loss more significantly.
**Inducing randomness in posterior \(p(\theta|\mathcal{D})\).** In §2.2, we have introduced the interpretation of various elements around model training as sources of randomness for the Bayesian posterior. We summarise our methods for inducing randomness in Figure 2. We use the notion of the Deep Ensemble (DE) [11] to sample from the posterior. In a variant of DE with the initialisation as the source of randomness (**DE-Init**), we train each of \(T_{\text{DE}}\) randomly initialised parameters \(\theta_{0}^{(t)}\) on either \(\mathcal{D}\) or \(\mathcal{D}_{\setminus j}\). The resulting parameter sets, \(\theta^{(t)}\) and \(\theta_{\setminus j}^{(t)}\), are treated as samples from the respective posteriors. We also consider the batch composition in stochastic gradient descent (SGD) as the source of randomness (**DE-Batch**). In this case, we train from one initial parameter \(\theta_{0}\) with \(T_{\text{DE}}\) different random shuffles \(\pi^{(t)}\) of the training sets \(\mathcal{D}\) and \(\mathcal{D}_{\setminus j}\). This results in two sets of samples from the original and counterfactual posteriors. We increase the number of samples by taking the last \(T_{\text{SWA}}\) checkpoints as the Stochastic Weight Averaging (**SWA**) samples [12; 21]. For Grad-Dot, this coincides with the definition of TracIn [4] as we average the dot products across checkpoints. In total, we take \(T=T_{\text{DE}}\times T_{\text{SWA}}=10\times 5\) samples from \(p(\theta|\mathcal{D})\) and \(p(\theta_{\setminus j}|\mathcal{D}_{\setminus j})\) to estimate \(\tau(z_{j},z)\).
Figure 2: **Sources of randomness for Bayesian posteriors.** In each case, the training starts from initialisation \(\theta_{0}\). Depending on whether \(z_{j}\) is included in the training data, one has either samples from the original posterior \(p(\theta|\mathcal{D})\) or from the counterfactual posterior \(p(\theta_{\setminus j}|\mathcal{D}_{\setminus j})\). For deep ensemble [11], the randomness stems either from random initialisation (DE-Init) or from SGD batch composition (DE-Batch). For stochastic weight averaging (SWA) [12; 21], last few checkpoints of the training are treated as posterior samples.
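The sampling scheme in Figure 2 can be sketched as follows; `train_with_checkpoints` is a hypothetical routine that trains a single model on the given dataset with the specified initialisation and data-shuffling seeds and returns its last `n_ckpt` checkpoints.

```python
from typing import Callable, List, Sequence

def posterior_samples(
    dataset: Sequence,                                   # D or D without z_j
    train_with_checkpoints: Callable[..., List[object]],
    t_de: int = 10,                                      # ensemble members (DE)
    t_swa: int = 5,                                      # SWA checkpoints kept per member
    randomise: str = "init",                             # "init" for DE-Init, "batch" for DE-Batch
) -> List[object]:
    """Collect T = t_de * t_swa parameter samples from the posterior over the given dataset."""
    samples: List[object] = []
    for t in range(t_de):
        if randomise == "init":
            # DE-Init: new random initialisation per member, fixed batch composition.
            ckpts = train_with_checkpoints(dataset, init_seed=t, shuffle_seed=0, n_ckpt=t_swa)
        else:
            # DE-Batch: fixed initialisation, new SGD batch composition per member.
            ckpts = train_with_checkpoints(dataset, init_seed=0, shuffle_seed=t, n_ckpt=t_swa)
        samples.extend(ckpts)
    return samples
```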
**Datasets \(\mathcal{D}\).** To enable an exhaustive analysis of every train-test pair \((z_{j},z)\), we define smaller datasets. We use a variant of MNIST [22] limited to three classes (MNIST3), and CIFAR10 [23]. For MNIST3, we sample a training set of size 150 and a test set of size 900, i.e. 135,000 train-test pairs. For CIFAR10, we use training and test sets of size 500 each, i.e. 250,000 train-test pairs.
Models.We consider two types of image classifiers, convolutional neural networks (CNN, [24]) and visual transformers (ViT, [25]). For ViT variants, instead of full finetuning, we use LoRA adapter layers [26] to minimise the number of parameters being tuned. The number of trainable parameters of ViT+LoRA (597,514) is comparable to adding another layer to our CNN (225,034).
### Reliability of TDA evaluation
We assess the reliability of TDA evaluation by measuring the degrees of noise in both the ground-truth TDA (LOO) \(\tau(z_{j},z)\) and the estimated TDA \(\tau^{\prime}(z_{j},z)\). The noise level is measured with the p-value of the Student-t hypothesis testing to determine if the absolute TDA values are significantly greater than the sample noise (SS3).
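One way to read this test (our phrasing; the exact formulation is given in SS3) is as a one-sample Student-t test of the posterior samples of \(\tau\) against zero for each train-test pair:

```python
import numpy as np
from scipy import stats

def tda_pvalues(tau_samples):
    """tau_samples: array of shape (num_pairs, num_posterior_samples).
    Returns one p-value per train-test pair, testing whether the TDA value
    differs significantly from zero relative to the sample noise."""
    return np.array([stats.ttest_1samp(s, popmean=0.0).pvalue for s in tau_samples])

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.05, scale=1.0, size=(1000, 50))  # toy TDA samples
pvals = tda_pvalues(samples)
print("mean p-value:", pvals.mean(), "fraction significant:", (pvals < 0.05).mean())
```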
We report the results in Table 1. Generally, we observe that many TDA measurements, ground truth and estimates alike, are unstable, with non-significant p-values (\(>0.05\)). In particular, even the ground-truth LOO shows p-values of 0.331 on MNIST3 and 0.692 on CIFAR10 (SWA+DE-Init). In these cases, the noise effectively dominates the signal, and any evaluation that does not consider the variance in the posterior \(p(\theta|\mathcal{D})\) is likely to be misleading. This confirms the reports in [9] that TDA values are sensitive to model initialisation.
TDA methods often show similar levels of instability. For example, IF attains p-values of 0.352 and 0.575 on MNIST3 and CIFAR10, respectively, roughly matching the LOO case. Grad-Cos is an exception: it attains lower p-values than the other TDA methods (0.003 and 0.356 for MNIST3 and CIFAR10, respectively). We interpret this as an overconfident TDA estimation. Practitioners should be wary of using TDA methods that are unreasonably stable when the ground-truth TDA itself is not.
### Factors influencing the variability of TDA
Based on the observation in SS4.2 that TDA values are often dominated by noise, we delve into the factors that lead to the instability of data attributions. We inspect the contribution of model initialisation, training set size and model complexity.
Source of randomness.From a Bayesian ML perspective, the stochasticity of TDA stems from the inherent uncertainty of the learned model posterior \(p(\theta|\mathcal{D})\). We consider two sources of randomness, model initialisation (DE-Init) and SGD batch composition (DE-Batch). Results are reported in Table 1. For MNIST3 and CIFAR10, we observe that DE-Batch introduces lower levels of noise in the TDA estimates (lower p-values). Particularly on MNIST3, both LOO and the other TDA methods result in statistically significant p-values (\(<0.05\)). This implies that almost every training data point \(z_{j}\) influences every test data point \(z\) consistently across various batch compositions. We conclude that the greater source of variation in the TDA estimates is the model initialisation.
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline Data & Randomness & LOO & ATS & IF & GD & GC \\ \hline \multirow{2}{*}{MNIST3} & SWA\(+\)DE-Init & 0.331 & 0.254 & 0.352 & 0.363 & 0.003 \\ & SWA\(+\)DE-Batch & 0.025 & 0.039 & 0.000 & 0.000 & 0.000 \\ \hline \multirow{2}{*}{CIFAR10} & SWA\(+\)DE-Init & 0.692 & 0.437 & 0.575 & 0.587 & 0.356 \\ & SWA\(+\)DE-Batch & 0.487 & 0.296 & 0.484 & 0.517 & 0.236 \\ \hline \hline \end{tabular}
\end{table}
Table 1: **Stability of TDA estimates.** We report p-values for the ground-truth TDA \(\tau(z_{j},z)\) (LOO) and the estimated TDA values \(\tau^{\prime}(z_{j},z)\) (remaining four columns). The p-values are averaged across all train-test pairs \((z_{j},z)\). We use the CNN model throughout.
Figure 3: **Stability of TDA estimates per train-test pair.** Distribution of p-values for ground-truth TDA (LOO) for different experiments.
Training set size.We study how training set size is a source of noise (cf. Figure 4). We train the CNN with different-size datasets of MNIST3 and CIFAR10, where we vary the number of samples per class. To limit the effect of noise introduced by new data, we subsample the smaller datasets from the larger ones. Batches are composed differently depending on the dataset size, meaning that parameter updates are made after processing different data. The results show a tendency for high variation in TDA scores with larger datasets. This makes sense as the number of combinations for batch composition increases with dataset size. As the batching is initialised randomly during training, batches are likely to be composed of different data for larger datasets. This leads to variation in the learned model parameters, in turn affecting the reliability of TDA.
Model complexity.We study how model complexity is linked to the reliability of TDA estimates. See Table 2. We observe that, compared to a small CNN model, a large ViT model trained with LoRA results in dramatically greater p-values. For example, for LOO on MNIST3, the p-value increases from 0.025 to 0.786. A similar trend is observed for other TDA methods. This implies that the reliability of TDA estimates decreases with increasing model complexity. While we limit the number of trainable parameters in our ViT by using LoRA to be comparable to e.g. adding another layer in the CNN, the p-values computed from TDA estimates are significantly larger. Larger models exhibit a larger parameter space so that noise stemming from model initialisation or batch composition is amplified. While we fix the model initialisation and dataset size, the batch composition still varies across the model parameters \(\theta\) sampled from the posterior \(p(\theta|\mathcal{D})\) per model. As both the CNN and ViT are trained with the same sampled batch compositions, we attribute the strong increase of p-value to the model complexity.
### (Dis)agreement between TDA methods
We test the reliability of different TDA methods. Ideally, all methods approximate the ground-truth TDA (LOO). Yet the results suggest that there are substantial differences among the methods. For example, Grad-Cos is much more stable than all others. Hence, we study TDA methods with respect to both their correlation with LOO and among each other using Pearson and Spearman correlation of mean and variance of the TDA distributions, as proposed in SS3.
Figure 5 shows the correlation matrices for one experiment (all experimental results are in the Appendix). The results show that viewing TDA scores as distributions gives insights into the reliability of TDA methods: none of the tested TDA methods' expected values \(\hat{\mu}\) correlates with LOO. This implies that none of the TDA methods is a good approximation when the random factors are considered. The poor correlation of p-values with LOO indicates disagreement about which train-test pairs are considered low-noise. We conclude that none of the tested methods reliably captures the ground-truth TDA distributions.
\begin{table}
\begin{tabular}{l l c c c c c} \hline \hline Model & Data & LOO & ATS & IF & GD & GC \\ \hline CNN & MNIST3 & 0.025 & 0.039 & 0.000 & 0.000 & 0.000 \\ ViT+LoRA & MNIST3 & 0.786 & 0.573 & 0.369 & 0.365 & 0.093 \\ \hline CNN & CIFAR10 & 0.623 & 0.374 & 0.535 & 0.534 & 0.314 \\ ViT+LoRA & CIFAR10 & 0.777 & 0.766 & 0.686 & 0.686 & 0.522 \\ \hline \hline \end{tabular}
\end{table}
Table 2: **Impact of model complexity.** We report p-values for the ground-truth TDA (LOO) and the estimated TDA values, comparing the CNN and ViT+LoRA models on MNIST3 and CIFAR10. The p-values are averaged across all train-test pairs \((z_{j},z)\).
Figure 4: **Impact of training data size.** Mean p-values of TDA methods with randomness induced by SWA+DE-Init.
Interestingly, we notice stronger correlations among the remaining methods, particularly when looking at the correlations of p-values. We identify two groups based on positive correlation, i.e. ATS with IF and GD with GC. Between the two groups, there is a negative correlation, which indicates that the methods interpret the sign of the attribution differently. Between IF, GD and GC this makes sense, as there is a negative sign in the definition of IF (Equation 2) which is not present in GD and GC. Considering absolute correlation, IF and GD are strongly correlated, which shows that the dot product is a valid alternative to IF as the two produce similar score distributions. The correlation between GD and GC indicates that the normalisation of gradients does not have a strong impact on the estimated TDA.
IF, GD and GC correlate considerably with ATS, which measures how the loss of a model on \(z\) changes after one additional training step on \(z_{j}\). Practically, ATS corresponds to a single gradient update on \(z_{j}\), which is determined by the gradient of the loss at \(z_{j}\) itself. Therefore, it makes sense that the gradient-based approximation methods are close to ATS. We recognise a difference in the scope that LOO and ATS address. LOO looks at TDA globally and encapsulates the whole training, whereas ATS considers a local scope with a small model change. As IF, GD and GC correlate with ATS, we observe that they also correspond to a local change in the model, which underlines and extends the argument of [7]: there is a gap between LOO and IF, and more generally between the global and local view on TDA.
The TDA standard deviation \(\hat{\sigma}\) is noticeably well-correlated between the TDA estimators and LOO, except for GC. This implies the existence of a consistent ranking of train-test pairs with stable attribution relationships. In particular, stable train-test pairs predicted by LOO are also likely to be stable pairs for TDA methods like ATS, IF, and GD. This motivates our final analysis and recommendation for evaluation in SS4.5.
### Considerations on TDA evaluation from a Bayesian perspective
Our analysis shows that both TDA estimates and ground-truth TDA values are affected by the noise stemming from the stochastic nature of model training. Hence, the practice of comparing against such a ground truth is destined to result in fragile estimates. We propose to treat TDA estimates as random variables which allows us to look at the evaluation from a Bayesian perspective: The comparison of TDA estimates against target TDA values is a comparison of two random variables. Since it is impossible to get rid of noise, it is better to compare distributions rather than point estimates. This provides an understanding of how well methods approximate the ground-truth distribution.
We observe that p-values vary between individual train-test sample pairs \((z_{j},z)\); not all TDA estimates are equally affected by stochasticity. Interestingly, the presence of low-noise pairs is consistent across the majority of our experiments (cf. Figure 3), with varying sizes of the low-noise fraction. We find that fixing model initialisation and a small dataset size gives rise to a larger number of low-noise pairs.
Figure 5: **Correlation of TDA methods.** Pearson and Spearman correlation coefficients among ground-truth TDA and approximate TDA methods. We show correlations for TDA mean \(\hat{\mu}\), TDA standard deviation \(\hat{\sigma}\), and TDA p-values. All results are based on the setting: CNN, MNIST3, SWA+DE-Init.
We propose to focus on such low-noise pairs in TDA evaluation as their estimates are low in variance, leading to a more reliable evaluation. Identifying such pairs requires an analysis similar to this work: treating TDA values as distributions and sampling multiple times from the posterior to get an estimate of the noise. It is crucial to find low-noise pairs to base evaluations on and understand when TDA is applicable. If no low-variance pairs exist, TDA cannot be used.
## 5 Related work
We study the reliability of TDA methods and add to the existing body of work on the fragility of TDA methods. Previous studies focused primarily on IF [27; 6; 8; 9; 7]. We extend the analysis by additionally studying other TDA methods. While IFs are theoretically grounded in robust statistics, they are based on two assumptions which are not always fulfilled in the context of deep learning: twice-differentiability and strict convexity of the loss [3]. Zhang & Zhang [27] and Basu _et al._[6] point to the fragility of the influence scores due to the non-convexity of deep learning. In particular, increasing model size is connected to increased model curvature, which means that influence estimates are more fragile for larger models. They find that strong regularisation is needed to improve estimation quality. Our experiments verify the observation that fragility increases with model size, which we observe across methods. We add that sources of randomness in the training process contribute to the fragility of TDA methods with increasing model size. Furthermore, related work found that the size of the training set contributes to the fragility of influence estimates. The attribution of one sample in a large training set is marginal, so both influence estimates and ground-truth influence scores (i.e., from retraining the model) are noisy [6; 8; 9]. Through a Bayesian lens, we connect the increased fragility with increasing dataset size to batch composition as well. Not only is the attribution of a single sample in a large dataset marginal [6], but batches also have vastly different compositions in larger datasets, introducing noise. A recent work [7] states that influence functions in deep learning do not correspond to LOO and quantifies gaps in the estimation stemming from model non-linearity. K & Sogaard [9] recommend reporting expected TDA scores to increase estimation stability. This approach is closest to our work but misses the consideration of variance in TDA estimates, which we include by taking a Bayesian viewpoint.
In contrast to related work, we treat TDA values as distributions, which enables a novel perspective on the TDA task. We highlight the importance of considering the variance when studying reliability.
## 6 Conclusion
We adopt a Bayesian perspective on the training data attribution (TDA) methods to study their reliability, given the stochastic nature of deep model training. By modelling TDA scores as distributions, we find that randomness in the training process, particularly due to parameter initialisation and batch composition, translates to variation in ground-truth TDA. We empirically observe that current estimation methods, such as influence functions, model a local change in the model whereas the ground truth attribution considers a global model change. Therefore, TDA is subject to inherent variance, leading us to suggest to the community: (1) When proposing a novel TDA method, one should view TDA from a Bayesian perspective and study the TDA estimates as distributions. (2) When using TDA, one should consider the variance to understand when the estimate can be trusted.
Limitations.We perform an exhaustive analysis of TDA values \(\tau\) and the estimates \(\tau^{\prime}\) for all train-test pairs \((z_{j},z)\). Because of the considerable computational cost, we have subsampled the datasets; in practice, datasets are considerably larger. Moreover, we choose simple tasks to eliminate the need for an additional hyperparameter search for model training, as the principal focus is on studying TDA methods. We choose gradient-based TDA methods but acknowledge that there exist many more that we do not address. Hence, we encourage further study of TDA methods to fill these gaps and recommend investigating TDA from a Bayesian perspective, particularly in the low-data regime.
Broader impact.This paper contributes to the field of data-driven XAI which aims at helping humans understand the inner workings of opaque models through data-centric approaches. Our work contributes to understanding the reliability of TDA methods and rethinking their evaluation against a noisy ground truth, which could help assess when TDA is appropriate and reliable.
## Acknowledgments and Disclosure of Funding
Kay Choi has helped in designing Figures 1 and 2. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting Elisa Nguyen.
|
2309.16783 | Photonic Accelerators for Image Segmentation in Autonomous Driving and
Defect Detection | Photonic computing promises faster and more energy-efficient deep neural
network (DNN) inference than traditional digital hardware. Advances in photonic
computing can have profound impacts on applications such as autonomous driving
and defect detection that depend on fast, accurate and energy efficient
execution of image segmentation models. In this paper, we investigate image
segmentation on photonic accelerators to explore: a) the types of image
segmentation DNN architectures that are best suited for photonic accelerators,
and b) the throughput and energy efficiency of executing the different image
segmentation models on photonic accelerators, along with the trade-offs
involved therein. Specifically, we demonstrate that certain segmentation models
exhibit negligible loss in accuracy (compared to digital float32 models) when
executed on photonic accelerators, and explore the empirical reasoning for
their robustness. We also discuss techniques for recovering accuracy in the
case of models that do not perform well. Further, we compare throughput
(inferences-per-second) and energy consumption estimates for different image
segmentation workloads on photonic accelerators. We discuss the challenges and
potential optimizations that can help improve the application of photonic
accelerators to such computer vision tasks. | Lakshmi Nair, David Widemann, Brad Turcott, Nick Moore, Alexandra Wleklinski, Darius Bunandar, Ioannis Papavasileiou, Shihu Wang, Eric Logan | 2023-09-28T18:22:41Z | http://arxiv.org/abs/2309.16783v2 | # Photonic Accelerators for Image Segmentation in Autonomous Driving and Defect Detection
###### Abstract
Photonic computing promises faster and more energy-efficient deep neural network (DNN) inference than traditional digital hardware. Advances in photonic computing can have profound impacts on applications such as autonomous driving and defect detection that depend on fast, accurate and energy efficient execution of image segmentation models. In this paper, we investigate image segmentation on photonic accelerators to explore: a) the types of image segmentation DNN architectures that are best suited for photonic accelerators, and b) the throughput and energy efficiency of executing the different image segmentation models on photonic accelerators, along with the trade-offs involved therein. Specifically, we demonstrate that certain segmentation models exhibit negligible loss in accuracy (compared to digital FLOAT32 models) when executed on photonic accelerators, and explore the empirical reasoning for their robustness. We also discuss techniques for recovering accuracy in the case of models that do not perform well. Further, we compare throughput (_inferences-per-second_) and energy consumption estimates for different image segmentation workloads on photonic accelerators. We discuss the challenges and potential optimizations that can help improve the application of photonic accelerators to such computer vision tasks.
Photonic Computing, Image Segmentation, Deep Learning, Computer Vision
## I Introduction
Semantic segmentation of imagery is an important computer vision task for various applications ranging from robotics [1] and autonomous driving [2] to defect detection during manufacturing processes [3], with an increasing number of models being released for such tasks even today [4]. Over the past decade, DNNs for image segmentation have evolved from convolutional neural networks (CNNs) [5, 6, 7] to transformer-based networks that outperform CNNs on these tasks [8]. However, the sizes of these DNNs and the amount of computation required for training such models have outpaced Moore's law [9]. In order to address the problems of growing model latency and energy usage, researchers have turned to the use of photonic computing for quickly and efficiently performing matrix-vector products (MVPs), which are the primary computations in DNNs [10, 11]. While existing work has explored the use of FPGA (field-programmable gate array) based accelerators for autonomous driving, for achieving low latency [12, 13, 14], recent advances in photonic computing are yet to be explored in this domain.
In this paper, we focus on image segmentation workloads on photonic accelerators for autonomous driving and defect detection in manufacturing - two important applications that require fast, accurate and real-time image segmentation capabilities [15, 16]. For this work, we do not contribute our own accelerator design. Instead, we focus on the implications of executing image segmentation workloads on existing photonic devices, by basing our work on a recently proposed photonic compute core, called the _photo-core_[17]. While there is some variability among photonic accelerator designs, we believe that the findings of this work will offer common insights and challenges associated with executing image segmentation DNNs on photonic accelerators. Specifically, we make the following contributions: a) we analyze recent image segmentation DNNs on the photo-core, alongside techniques such as fine-tuning for accuracy recovery; b) we identify the DNN architectures that achieve good out-of-the-box (OOB) accuracy compared to float32. We further investigate the empirical reasons for the accuracy robustness of some architectures over others, in order to guide the design of future DNNs specialized for photonic accelerators; c) we investigate the relative throughput and power
Fig. 1: The relative mIoU (accuracy), memory, energy cost, and throughput of five DNN models performing image segmentation on three datasets using a photo-core (photonic compute unit). Vision transformers, like Swin (maskformer), are more accurate out-of-the-box at the cost of requiring more energy and memory than their CNN counterparts.
estimates for the image segmentation models on the photo-core to characterize the relative trade-offs for the DNNs on photonic devices. To our knowledge, this is the first work to investigate image segmentation on photonic accelerators, with the goal of providing insights for computer vision on photonic hardware.
## II Related Work
### _ML applications on accelerators_
Recently, digital machine learning accelerators have attained significant interest owing to the potential speed benefits that they offer for ML workloads. Since the primary computational component of DNNs is matrix multiplication, specialty ASICS such as Nvidia's Tensor Cores [18] and Google's TPU [19] have been developed to accelerate matrix multiplication. However, the power consumption of these systems has been increasing with each generation, and their throughput has not improved correspondingly [20]. As a result, recent approaches have proposed photonic accelerator designs that demonstrate higher throughput with lower energy profiles, offsetting the disadvantages of more traditional systems [17, 21, 22]. However, photonic designs present numerous challenges associated with noisy electro-optical components (e.g., thermal, shot noise) alongside their reduced precision that impact the performance of DNNs [23].
Prior work has looked into performing high-speed image classification [24, 25] and natural language processing (NLP) [26] on photonic accelerators. For image classification, prior work demonstrated inference speeds of under 570 ps with high accuracy, for classification of hand-written letters [24]. Their work used a clock-less processing of data thus eliminating analog-to-digital conversions to improve throughput and energy efficiency. Further work demonstrated an integrated metasystem, in contrast to conventional photonics integrated circuits, for high accuracy and throughput on the MNIST dataset [25]. For NLP, prior work demonstrated a novel design of photonic extreme learning machines (PELM) for performing sentiment analysis on the Image Movie Database [27].
For image segmentation, prior work has looked at performing real-time image segmentation on an FPGA-based accelerator [12]. They demonstrate low latency of about 3ms with less than 30% resource usage, while maintaining accuracy on the Cityscapes dataset. Prior work has also looked at the use of FPGA accelerator for performing real-time skin segmentation, demonstrating that no more than 88% of the system resources were used in the process [14]. In the context of designing more efficient image segmentation models for accelerators, prior work has developed the Fast Spatial Feature Network (FSFNet) that achieved high throughput and accuracy on the Cityscapes dataset [13]. Self-driving and defect detection applications rely heavily on their capacity to segment images quickly and accurately. Whilst most research focuses on digital accelerators, we investigate image segmentation on photonic accelerators.
### _Mitigating Quantization & Analog Noise Error_
Similar to fully digital systems, one of the key challenges of executing DNNs on photonic accelerators stem from the reduced precision supported by the electro-optical components, such as the analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) [28]. While using low precision data types can significantly increase DNN throughput, the model's accuracy degradation can be too large to accomplish the task at hand [29]. To address this, fine-tuning and post-training quantization techniques have been used to improve the accuracy of models running low precision inference [29].
Most recently, the approach of dynamically scaling vectors, such as VS-Quant [30] and Adaptive Block Floating Point (ABFP) [28], have emerged as a potential solution to maintaining accuracy at low precision. Both ABFP and VS-Quant follow a similar approach, by computing scaling factors (i.e., the maximum magnitude) over vectors of length \(n\), with VS-Quant adding an additional level of quantization for the scale factors themselves. In this paper, we explore the use of ABFP in photonic accelerators as a method of obtaining good out-of-the-box (OOB) performance for image segmentation. We do not explore the use of a second-level quantization of the scale factors themselves (we leave the scales in bfloat16), although the second-level quantization could be utilized in the context of accelerator designs for further improvements [31].
Among training methods, for both VS-Quant and ABFP, the authors have explored the use of fine-tuning to further boost model accuracy. For ABFP, the authors propose using Differential Noise Fine-tuning (DNF) as a faster alternative to the typical quantization-aware training approach. In the case of image segmentation models that do not perform well OOB, we explore the use of DNF to recover model accuracy. Note that in this case, fine-tuning is performed off-device and the fine-tuned model is deployed on the device for inference only.
### _Defect Detection_
Defect detection is used to enhance manufacturing process efficiency, and to ensure the adherence of parts to specifications. Existing methods for defect detection include deep learning for image classification [32, 33], object detection [34], or instance segmentation [35, 36, 37, 38]. Defect detection in real-time on edge devices has particularly gained significant attention for its advantages including reduced latency, improved privacy, enhanced reliability, and efficient bandwidth utilization [39]. Techniques for deploying defect detection on edge devices include approaches like transfer learning [40], model compression [41], pruning and quantization [42] and federated learning [40] to reduce the model complexity and memory usage. However, prior work has not looked at the deployment of defect detection models on photonic devices.
## III Methods
In this section, we discuss the implementation details of the photo-core [17] that we use as a reference for our experiments. We then describe the two accuracy recovery methods explored in this work, namely Adaptive Block Floating Point (ABFP) and Differential Noise Fine-tuning (DNF) [28].
### _Hardware Design_
We investigate the performance of image segmentation DNNs on photonic devices, by evaluating the accuracy, throughput
and energy implications of executing the matrix multiplication operations on the photo-core, while performing all the non-linear operations (such as activations) in high-precision digital. Within the photo-core, a weight-stationary approach to computing matrix-vector products (MVPs) is adopted. The weights and input activations of each layer are first unfolded into 2D matrices if necessary (e.g., using kn2row [43] for convolutions). Since the photo-core has a fixed size of \(n\times n\), weight matrices that are bigger/smaller than \(n\times n\) are zero-padded to a multiple of \(n\) (called _tile size_) and then tiled into \(n\times n\) sized sub-matrices. The input vectors are similarly zero-padded and tiled. Each tile or vector is then loaded onto the photo-core one-by-one. The weight tile is programmed into a Mach-Zehnder Interferometer array, and the input vector is encoded into optical signals. The MVP is then performed via photonics, by modulating the signal strength of the light. The resulting photo-current is converted to digital, via ADCs. The partial results are digitally accumulated for the final output (see Figure 2); please refer to [17] for more photo-core details.
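A minimal sketch of this padding-and-tiling step (the matrix sizes below are hypothetical; the kn2row unfolding of convolutions is omitted):

```python
import numpy as np

def tile_matrix(w, n):
    """Zero-pad a 2D weight matrix to multiples of the tile size n and split it
    into n x n sub-matrices, one per photo-core load."""
    rows = int(np.ceil(w.shape[0] / n)) * n
    cols = int(np.ceil(w.shape[1] / n)) * n
    padded = np.zeros((rows, cols), dtype=w.dtype)
    padded[:w.shape[0], :w.shape[1]] = w
    return [padded[i:i + n, j:j + n]
            for i in range(0, rows, n) for j in range(0, cols, n)]

# e.g. a 100 x 300 layer weight matrix on a 64 x 64 photo-core -> 2 x 5 = 10 tiles
tiles = tile_matrix(np.random.randn(100, 300), n=64)
print(len(tiles), tiles[0].shape)
```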
The chosen DAC precision for inputs and weights is 10 bits and 7 bits, respectively, with an 11-bit ADC at the output. Particularly, we note that the output of the GEMM operations in the photo-core is quantized at the output ADCs, which adds to the error caused by input and weight quantization. This differs from digital devices that retain higher precision in outputs [44]. In the following sections, we discuss the implications of this with methods to mitigate its impact in photonic accelerators.
### _Adaptive Block Floating Point (ABFP)_
The inference accuracy of the DNNs on the photo-core is impacted by two key factors: the presence of analog noise (e.g., shot noise, thermal noise) and the quantization noise due to the reduced weight and activation precision at the ADCs. To mitigate this, we explore the use of ABFP [28] for quantizing the weights and activations to the photo-core. ABFP reduces quantization effects by scaling vectors of length \(n\) in the weight tiles and input vectors. In addition to per-vector scaling, ABFP also uses an over-amplification factor (i.e., gain \(G\)) at the output. The use of over-amplification helps increase the signal-to-noise ratio in order to mitigate the impact of analog noise. In our experiments, we analyze this composite framework based on its overall impact in terms of accuracy, throughput and energy.
Figure 2 shows the workflow of ABFP with the photo-core. Given a weight matrix, \(N_{r}\times N_{c}\), ABFP computes the scale as the maximum values over each row of the \(n\times n\) sub-tile, i.e., one scale per row vector of length \(n\). The scales are stored in bfloat16. Instead of computing the scales over the entire matrix, the use of per-vector scales, \(S^{W}\), for each \(n\)-sized row helps mitigate the impact of quantization. Similarly, for the input vector, the scales, \(S^{X}\), are computed as the maximum over vectors of length \(n\). The weight sub-tiles and input vectors are quantized using the corresponding scales and passed into the photo-core via input and weight DACs.
### _Simulation Details_
For this work, we designed a simulator for the photo-core that captures the overall architecture shown in Figure 2 and runs on NVIDIA A100 GPUs. Specifically, non-linear operations within the DNNs are computed directly at higher GPU precision, whereas MVPs that would occur on the photo-core are digitally simulated. The photo-core MVP simulation includes modeling quantization and sources of error that stem from analog noise such as thermal noise and shot noise. These noise sources affect the overall MVP output and are modeled by adding a cumulative noise term to it. The noise is sampled from a zero-mean normal distribution with a standard deviation that reflects the effects of the cumulative analog noise. Hence, the effects of analog noise and quantization are combined in the digital simulation to model the complete electro-photonic system for DNN inference shown in Figure 2. Within the simulation, the MVP of the weight tile \(W\) and input vector \(X\) is given by:
\[Y^{q}=Q(W;S^{W},\Delta_{W})*Q(X;S^{X},\Delta_{X}) \tag{1}\]
The operator \(Q\) is a typical quantization operation defined as:
\[Q(X;S^{X},\Delta_{X})=\left\lfloor\frac{X}{S^{X}}\Delta_{X}\right\rceil \tag{2}\]
where \(\Delta_{m}=2^{b_{m}-1}-1\) for \(b_{m}\) integer bits, and \(\lfloor\cdot\rceil\) represents rounding and then clipping to \([-\Delta_{m},\Delta_{m}]\).
Fig. 2: High-level overview of applying ABFP with the photo-core unit. Input weights and activations are scaled at per-vector granularity and the quantized matrices are multiplied on the photocore. The output result is then amplified, quantized at the output ADC, and de-quantized for accumulation in digital.
We use 7-bit weights and 10-bit inputs, i.e., \(b_{W}=7\) and \(b_{X}=10\). Eqn (2) represents a typical quantization operation, where the inputs are first scaled, then rounded and clipped to the specified range [44]. The resultant partial output for a tile (see Eqn (1)) is then amplified via the application of gain \(G\) (we use \(G=4.0\)). To simulate the impact of analog noise, we add a Gaussian noise \(\mathcal{E}\in\mathcal{N}(0,\sigma)\) to the resultant output. The noisy result is then quantized at the output ADCs and further de-quantized using the corresponding weight and input scales and converted to bfloat16 as follows:
\[Y=Q(Y^{q}G+\mathcal{E};n\Delta_{X}\Delta_{W},\Delta_{Y})\frac{nS^{W}S^{X}}{G \Delta_{Y}} \tag{3}\]
Eqn (3) represents 11-bit output quantization (\(b_{Y}=11\)), and then de-quantization to bfloat16. The final output is obtained by bfloat16 accumulation over the partial tile outputs.
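A small numerical sketch of Eqns (1)-(3) for a single tile is shown below; the noise standard deviation and the default gain are placeholder values rather than our simulator's calibrated parameters, and the final bfloat16 accumulation over tiles is omitted:

```python
import numpy as np

def q(x, scale, delta):
    """Eqn (2): scale, round, then clip to [-delta, delta]."""
    return np.clip(np.rint(x / scale * delta), -delta, delta)

def abfp_mvp(w_tile, x_vec, n, gain=4.0, noise_std=0.0, b_w=7, b_x=10, b_y=11):
    d_w, d_x, d_y = (2 ** (b - 1) - 1 for b in (b_w, b_x, b_y))
    s_w = np.abs(w_tile).max(axis=1, keepdims=True)  # one scale per n-length row
    s_x = np.abs(x_vec).max()                        # one scale per input vector
    y_q = q(w_tile, s_w, d_w) @ q(x_vec, s_x, d_x)   # Eqn (1): quantized MVP
    noise = np.random.normal(0.0, noise_std, size=y_q.shape)  # analog noise term
    y = q(y_q * gain + noise, n * d_x * d_w, d_y)    # Eqn (3): output ADC
    return y * (n * s_w.squeeze() * s_x) / (gain * d_y)  # de-quantize the tile output

n = 64
w, x = np.random.randn(n, n), np.random.randn(n)
print(np.abs(abfp_mvp(w, x, n, noise_std=1.0) - w @ x).max())  # residual error
```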
### _Differential Noise Fine-tuning (DNF)_
For models that do not perform well OOB, we fine-tune them using Differential Noise Finetuning (DNF) to improve accuracy. In a one-time pre-training step, DNF computes per-layer noise distributions based on the output differences between the quantized and float32 models. During training, DNF samples noise tensors from the pre-computed noise distributions and adds the noise to the layer outputs of the float32 model. This way, the noise induced perturbations in the layer outputs enable the models to adapt their weights to quantization error and noise. In contrast to typical Quantization-Aware Training (QAT), DNF uses float32 precision in the forward pass, eliminating the need for simulated quantization operations in the forward pass thus resulting in significant speed improvements [28].
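The noise-injection idea can be sketched with forward hooks as below; the toy model and the per-layer standard deviations are placeholders (in DNF they come from the pre-computed output differences between the quantized and float32 models), and the full training procedure follows [28]:

```python
import torch

def make_noise_hooks(model, layer_noise_std):
    """Attach forward hooks that add zero-mean Gaussian noise to the outputs of
    the named layers, so the float32 forward pass sees quantization-like error."""
    handles = []
    for name, module in model.named_modules():
        if name in layer_noise_std:
            std = layer_noise_std[name]
            handles.append(module.register_forward_hook(
                lambda mod, inp, out, std=std: out + torch.randn_like(out) * std))
    return handles  # call handle.remove() on each to restore the clean model

# toy usage with placeholder per-layer noise levels
model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
hooks = make_noise_hooks(model, {"0": 0.05, "2": 0.02})
loss = model(torch.randn(4, 8)).sum()
loss.backward()  # gradients now reflect the noise-perturbed forward pass
```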
## IV Experiments
We evaluate the performance of image segmentation models for autonomous driving and defect detection on the photo-core. In particular, we evaluate: a) accuracy of different image segmentation DNNs and why certain DNNs perform better than others; b) implications of using the ABFP representation; c) implications of using over-amplification (gain), in terms of accuracy and energy consumption; d) throughput and energy comparisons of image segmentation workloads on the photo-core. We do not make any conclusions about the absolute energy consumption or throughput of the photo-core, relative to other hardware architectures. Instead, our work is focused on empirically analyzing and comparing the DNNs to evaluate their relative strengths on photonic hardware via the photo-core.
### _Datasets_
We evaluate and fine-tune the image segmentation DNNs for autonomous driving using the ADE20k [7] and Cityscapes [45] datasets, which comprise images of city scenes with 150 and 34 object classes, respectively.
For defect detection, the Corning Defect Detection dataset (not public), consists of RGB imagery from a traditional manufacturing process, involving the insertion of a cylindrical part into a molded plastic pocket (See example in Figure 3). The objective of the inspection is to verify that positions of the part and the equipment fall within the designated process limits. Pixel-level segmentation facilitates the determination of the relative position of multiple objects and it enables the identification of instances where outcomes deviate from desired specifications. In all three datasets, each pixel in an image is either labeled as one of the classes or as a background pixel.
### _Models_
Five DNNs were chosen based on their sizes and structural nature (convolutional vs. transformers)1. MobileNetv2dilated-c1-deepsup (referred to as Mobilenet) is a lightweight CNN architecture designed for efficient semantic segmentation, featuring depth-wise separable convolutions and dilated convolutions to reduce computation costs while maintaining accuracy. ResNet50dilated-ppm-deepsup (referred to as Resnet50) is a variant of ResNet that utilizes dilated convolutions and pyramid pooling to capture contextual information. HRNetv2 is a high-resolution CNN architecture employing multi-resolution fusion to combine features and capture fine-grained information. Maskformer (referred to as Swin-base or Swin-large) uses a Swin-base or Swin-large backbone with a detection transformer head for instance segmentation tasks, utilizing self-attention mechanisms [46]. We used model checkpoints that were pre-trained on the ImageNet dataset, providing a strong initialization for our work.
Footnote 1: Please see: [https://github.com/CSAILVision/semantic-segmentation-pytorch](https://github.com/CSAILVision/semantic-segmentation-pytorch) and [https://huggingface.co/facebook/maskformer-swin-large-ade](https://huggingface.co/facebook/maskformer-swin-large-ade)
### _Accuracy Metrics_
We use pixel accuracy and intersection over union (IoU) metrics. Pixel accuracy is the ratio of the number of correctly predicted pixels, over the total number of non-background pixels in the image. The Intersection over Union (IoU) metric measures the overlap between the predicted segmentation mask and the ground truth mask. IoU is calculated as the ratio of the intersection of the predicted and ground truth masks, over their
Fig. 3: An example of the Corning manufacturing process that requires precise insertion of the cylindrical part and its extension to a molded plastic pocket. Semantic segmentation is used to identify four object categories: cylindrical part, cylindrical part extension, molded plastic pocket, and cylindrical part holder. Only the four colored and underlined objects are of interest to the inspection process and are being segmented from the image and other parts. Dotted lines indicate occlusion of the cylindrical part from other components.
union. The IoU \(\in[0,1]\), where 1 indicates a perfect overlap between the predicted and ground truth masks, and 0 indicates no overlap. We measure the average IoU across the object classes in the image and then report the mean IoU (mIoU).
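A small sketch of the metric computation (one common convention; per-dataset handling of background and ignore labels may differ):

```python
import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=255):
    """Per-class IoU averaged over classes present in either mask.
    pred and target are integer label maps of the same shape."""
    valid = target != ignore_index
    ious = []
    for c in range(num_classes):
        pred_c, target_c = (pred == c) & valid, (target == c) & valid
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both masks; skip it
        ious.append(np.logical_and(pred_c, target_c).sum() / union)
    return float(np.mean(ious)) if ious else 0.0

pred = np.random.randint(0, 3, size=(64, 64))
target = np.random.randint(0, 3, size=(64, 64))
print(mean_iou(pred, target, num_classes=3))
```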
## V Results
### _Accuracy Analysis_
The performance of the image segmentation models in terms of pixel accuracy and mIoU is shown in Table I. For the Mobilenet and Resnet50 models, we see that the out-of-the-box (OOB) performance is less than 99% of float32. However, HRNetv2 and the Maskformer models (Swin-base and Swin-large) perform well OOB on all datasets, achieving \(\simeq 99\%\) of float32. We further see that in all cases, the model performance can be improved through the use of differential noise fine-tuning (DNF). DNF enables the Mobilenet and Resnet50 models to achieve significant improvements compared to their OOB performance. Overall, the Swin-large Maskformer outperforms the other networks in terms of pixel accuracy and mIoU on all the tested datasets.
**Does ABFP help with OOB model performance?** Table II displays the results comparing the accuracy of HRNetv2 and Resnet50, with and without ABFP. The utilization of ABFP leads to a notable improvement in the final mIoU, particularly for Resnet50 in comparison to HRNetv2.
**Why do HRNetv2 and Maskformer models have superior OOB results?** In Table I, we noted that some models underperformed others significantly, in spite of using ABFP. To investigate the reasoning behind this, we performed a layer sensitivity analysis of the models [44]. During layer sensitivity analysis, a single layer is quantized at a time, and the model mIoU is evaluated. Layers that result in a lower mIoU when quantized are considered to be "sensitive" to loss of precision [44]. In Figure 4, we show the results of the layer sensitivity analysis for the first four layers of each model in terms of the
Fig. 4: Analysis of layer sensitivity. Most of the mIoU drop, relative to the FP32 mIoU, is accounted for within the first four layers of the models.
drop in mIoU between the float32 model and the model with the specific layers quantized. We see that most of the drop in mIoU happens within the first few layers of the models. In particular, layer #1 of Resnet50 and layer #4 of Mobilenet exhibit increased sensitivity to quantization, i.e., they exhibit the highest mIoU drop compared to the float32 mIoU. This is consistent with observations on typical digital hardware [47]. However, for HRNetv2 and Swin-base, even the most sensitive layers only result in a small drop in mIoU.
To further understand whether the specific layers are sensitive due to input, weight or output quantization, we perform an ablation study by isolating each level of quantization as shown in Table III. Specifically, we note that _output quantization_ results in almost all of the mIoU loss. The additional quantization at the output ADC introduces a large source of error that impacts the model performance. Following up on this observation, we visualize the normalized output distributions of the most sensitive layers of the different models in Figure 5. We note that the maximum output activation value (denoted by \(s\)) varies across the models, and in particular, Mobilenet and Resnet50 models have very high outliers. A range of \(\pm\) three standard deviations of the normalized ResNet distribution utilizes only \(23\%\) of the quantization levels whereas the swin-base model uses approximately \(69\%\) of the quantization levels. This leads to poor range utilization that in turn results in poor quantization, impacting the performance of Resnet50 and Mobilenet.
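One way to read these range-utilization figures (our interpretation of the metric, not an equation from this section): after normalizing the outputs by the largest activation \(s\), the fraction of quantization levels covered by \(\pm\) three standard deviations is roughly three times the normalized standard deviation, capped at one:

```python
import numpy as np

def range_utilization(layer_outputs):
    """Fraction of the normalized range [-1, 1] covered by +/- 3 standard
    deviations after dividing by the largest activation magnitude."""
    normalized = layer_outputs / np.abs(layer_outputs).max()
    return min(1.0, 3.0 * normalized.std())

rng = np.random.default_rng(0)
# heavy-tailed outputs (large outliers) waste most of the range ...
print(range_utilization(rng.standard_t(df=2, size=100_000)))
# ... while a well-spread distribution uses far more of it
print(range_utilization(rng.uniform(-1.0, 1.0, size=100_000)))
```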
**How does gain help with OOB performance?** To better understand the impact of the chosen tile size, \(n\), and gain, particularly for the models that achieve 99% accuracy OOB (Swin-base and Swin-large Maskformer), we investigate this relationship in Figure 6. We make the following observations: a) as \(n\) reduces, mIoU generally improves, since smaller tile sizes can further mitigate the impact of outliers which is a key issue as previously described; b) the use of gain helps improve mIoU until a certain point, since the signal-to-noise ratio is correspondingly improved; c) beyond a threshold, gain causes the mIoU to drop due to saturation. Saturation of values (when amplified by the gain) causes clipping of critical activations thus impacting model accuracy. Overall, the combination of ABFP + gain with specific choices of \(n\), enable models like Swin-base and large Maskformers to achieve good OOB results.
### _Energy Usage Analysis_
The laser power of the photo-core depends on the gain \(G\), tile size \(n\), and fixed internal parameters such as MZI loss, laser efficiency and coupling loss. The key factors of interest in this work are \(n\) and \(G\), and we analyze the _relative_ scaling behavior of photo-core energy consumption for the image segmentation workloads based on the two parameters. This also allows us to abstract the remaining parameters from our analysis since
Fig. 5: Normalized distributions (in range \([-1.0,1.0]\)) of the outputs of the most sensitive layers of the different models. HRNetv2 and Swin-base have a better utilization of the range even for their most sensitive layers, thus resulting in a smaller accuracy drop compared to FP32 on the photo-core. The \(s\) in the legend is the scaling factor from the largest activation.
Fig. 6: Analysis of accuracy vs. tile size \(n\) and gain for Swin-base and large Maskformer. The use of gain helps improve accuracy by increasing the signal-to-noise ratio. However, beyond a certain point, gain causes saturation and clipping of activations, resulting in a drop in model accuracy.
Fig. 7: Analysis of the energy consumption of one batch of data for different tile sizes. HRNetv2 has lower consumption than Resnet50 at \(n=64\) (denoted by \(\blacktriangle\)) but higher at \(n=256\) (denoted by \(\star\)), due to a decrease in photo-core utilization relative to Resnet50.
they are fixed across the models (MZI loss, laser efficiency etc.). With this form of abstraction, we can group the internal laser parameters into two constants, \(\alpha\) and \(\beta\). The photo-core power \(P\) can then be summarized in terms of \(G\) and \(n\):
\[P(G,n)=\left(G\alpha^{n}+\beta\right)n \tag{4}\]
The total amount of photo-core energy required for a given DNN inference at a specified gain and tile size is then \(E(G,n)=T*P(G,n)\), where \(T\) is the total time taken to execute the DNN workload (# of MVPs \(\times\) time taken for the photo-core per MVP), plus data transfer costs for the weights to be sent and loaded onto the photo-core (set to 40ns, 10ns respectively)2. This abstraction captures the exponential relation between energy and \(n\), and the linear relation between energy and \(G\) (with a slope that is determined by \(T\) and \(n\)). Empirically, for our system, a gain of 2 increases energy consumption by \(1.4\times\) for \(n=64\) and \(1.9\times\) for \(n=128\), compared to using no overamp, indicating that the accuracy improvements from Section V-A (Figure 6) come at an energy cost.
Footnote 2: Since we focus primarily on the photocore, we exclude energy analysis of digital aspects such as quantization, and data transfer costs of inputs.
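A sketch of this relative energy model is given below; the values of \(\alpha\), \(\beta\), the per-MVP time, and the workload sizes are placeholders chosen only to illustrate the scaling behavior, not our device's calibrated parameters:

```python
ALPHA, BETA = 1.05, 2.0   # placeholder constants bundling the internal laser parameters

def photo_core_power(gain, n, alpha=ALPHA, beta=BETA):
    """Eqn (4): relative photo-core power as a function of gain G and tile size n."""
    return (gain * alpha ** n + beta) * n

def relative_energy(num_mvps, num_weight_tiles, gain, n,
                    mvp_ns=1.0, weight_transfer_ns=50.0):
    """E(G, n) = T * P(G, n): MVP time plus weight-transfer time, times power."""
    t = num_mvps * mvp_ns + num_weight_tiles * weight_transfer_ns
    return t * photo_core_power(gain, n)

# relative comparison of two hypothetical configurations of the same workload:
# halving the tile size roughly quadruples the number of tiles and MVPs
e_64 = relative_energy(num_mvps=1e6, num_weight_tiles=1e4, gain=2.0, n=64)
e_32 = relative_energy(num_mvps=4e6, num_weight_tiles=4e4, gain=2.0, n=32)
print(e_64 / e_32)
```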
Figure 7 shows how the relative energy consumption scales as a function of tile size for a single batch of data. We keep the other photo-core parameters fixed, and do not focus on the absolute energy consumption of the models. Instead, we make the following observations: a) the Swin-base and Swin-large Maskformers consume \(\simeq 2\times\) and \(\simeq 4\times\) more energy, respectively, than the CNNs; b) the energy consumption decreases up to \(n=64,128\). This is because smaller tile sizes (\(<64\)) require more MVPs and correspondingly larger run-times. Beyond \(n=128\), the \(\alpha\) parameter begins to dominate for larger tile sizes, leading to more energy consumption; c) the energy consumption of HRNetv2 is less than that of Resnet50 at \(n=64\) (shown as \(\blacktriangle\)), but higher at \(n=256\) (shown as \(\star\)). This is due to the reduced photo-core utilization of HRNetv2 compared to Resnet50 at \(n=256\). Depending on the matrix sizes in these models (which are not divisible by the tile size), larger tile sizes will lead to under-utilized tiles and higher energy consumption.
### _Throughput Analysis_
Figure 8 shows the throughput for the different DNNs for different tile sizes and a fixed batch size of 4. We see that: a) swin-base and swin-large maskformers have lower throughput compared to the CNNs. This highlights a trade-off with the maskformer models that have better OOB accuracy results, but lower throughput compared to CNNs; b) although smaller tile sizes offer accuracy improvements as shown in Section V-A (Figure 6), it results in lower throughput. Hence, there is an accuracy-vs-throughput trade-off when selecting the appropriate tile size to use for the image segmentation workloads.
## VI Discussion
In this work, we presented an analysis of image segmentation workloads for autonomous driving and defect detection on photonic hardware. We analyzed different convolutional and transformer models, and a summary of our findings is captured in Figure 1. We see that while the maskformer models achieve good OOB accuracy, they are less competitive in terms of throughput and energy consumption. In contrast, HRNetv2 (particularly _with_ DNF) presents a good trade-off in terms of energy, throughput and accuracy. Thus, _HRNetv2 is a promising candidate for image segmentation on photonic devices._ We highlight some additional key insights from this work:
1. The accuracy of image segmentation on photonic hardware is significantly impacted by the output quantization. While both ABFP and DNF are promising for improving accuracy, techniques such as parameterized activations to mitigate outliers [48], can also be helpful.
2. Accuracy degradation mainly occurs due to relatively few layers of each model. Existing approaches that execute sensitive layers at higher precision while keeping others lower [44, 49], can be beneficial here. Photonic systems should consider supporting this hybrid approach with controllable bit-precisions for sensitive layers.
3. There is a complex relationship between the photo-core tile size, gain, and DNN architectures, influencing the energy and throughput profiles. Photonic systems may be designed to dynamically adjust internal parameters such as gain to optimize energy efficiency. DNN architectures can also be re-designed as in [50], to be optimal for the specific photo-core tile size to improve throughput.
**Drawbacks and future work:** While we have looked at hardware parameters like tile size and gain, we have not analyzed other architectural aspects of the hardware in-depth, such as wall-plug efficiency of the laser, data storage or movement, and non-linearities on digital ASICs. We have also not explored other potential throughput optimizations of attention layers [51] that may improve Maskformer performance over CNNs. We explore our work in the context of the photo-core; however, we believe that our findings with ABFP, DNF, tile sizes and gain will offer insights for future work in algorithmic and hardware enhancements to photonic accelerators for ML workloads.
Fig. 8: Analysis of throughput vs. tile size \(n\). Smaller tile sizes can mitigate impact of outliers, but reduce throughput significantly compared to larger tiles.
## VII Acknowledgements
The authors would like to thank colleagues Cansu Demirkiran and Alexander Sludds for their helpful insights and discussions. We also thank our reviewers for their constructive feedback.
|
2309.16797 | Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution | Popular prompt strategies like Chain-of-Thought Prompting can dramatically
improve the reasoning abilities of Large Language Models (LLMs) in various
domains. However, such hand-crafted prompt-strategies are often sub-optimal. In
this paper, we present Promptbreeder, a general-purpose self-referential
self-improvement mechanism that evolves and adapts prompts for a given domain.
Driven by an LLM, Promptbreeder mutates a population of task-prompts, and
subsequently evaluates them for fitness on a training set. Crucially, the
mutation of these task-prompts is governed by mutation-prompts that the LLM
generates and improves throughout evolution in a self-referential way. That is,
Promptbreeder is not just improving task-prompts, but it is also improving the
mutation-prompts that improve these task-prompts. Promptbreeder outperforms
state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve
Prompting on commonly used arithmetic and commonsense reasoning benchmarks.
Furthermore, Promptbreeder is able to evolve intricate task-prompts for the
challenging problem of hate speech classification. | Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel | 2023-09-28T19:01:07Z | http://arxiv.org/abs/2309.16797v1 | # Promptbreeder:
###### Abstract
Popular prompt strategies like Chain-of-Thought Prompting can dramatically improve the reasoning abilities of Large Language Models (LLMs) in various domains. However, such hand-crafted prompt-strategies are often sub-optimal. In this paper, we present Promptbreeder, a general-purpose self-referential self-improvement mechanism that evolves and adapts prompts for a given domain. Driven by an LLM, Promptbreeder mutates a population of task-prompts, evaluates them for fitness on a training set, and repeats this process over multiple generations to evolve task-prompts. Crucially, the mutation of these task-prompts is governed by mutation-prompts that the LLM generates and improves throughout evolution in a self-referential way. That is, Promptbreeder is not just improving task-prompts, but it is also improving the mutation-prompts that improve these task-prompts. Promptbreeder outperforms state-of-the-art prompt strategies such as Chain-of-Thought and Plan-and-Solve Prompting on commonly used arithmetic and commonsense reasoning benchmarks. Furthermore, Promptbreeder is able to evolve intricate task-prompts for the challenging problem of hate speech classification.
## 1 Introduction
Prompting is central to the downstream performance of foundation models. For example, different prompt strategies1 can have a significant impact on a model's reasoning abilities (Wei et al., 2022; Nye et al., 2021; Zhou et al., 2022; Wang et al., 2022; Zhou et al., 2023; Wang et al., 2023), multi-modal processing abilities (Yang et al., 2023; Wang et al., 2023), or tool use abilities (Yao et al., 2022; Schick et al., 2023). Furthermore, prompting can improve model distillation (Wang et al., 2023; Hsieh et al., 2023) and it can be used to simulate agentic behavior (Wang et al., 2023; Park et al., 2023; Wu et al., 2023). However, these prompt strategies are manually engineered. Since the specific way a prompt is phrased can have a dramatic effect on its utility (Madaan and Yazdanbakhsh, 2022), it raises the question of whether prompt engineering can be automated. Automatic Prompt Engineer (APE, Zhou et al., 2023) attempts to address this by generating an initial distribution of prompts using another prompt that infers the problem from a number of input-output examples from the dataset. However, Zhou et al. found "diminishing returns to further selection rounds as the quality seems to stabilize after three rounds", and consequently abandoned the use of an iterative APE. We propose a solution to the problem of diminishing returns via a diversity-maintaining evolutionary algorithm for self-referential self-improvement of prompts for LLMs.
Footnote 1: See Appendix A for definitions of terminology.
Schmidhuber (1990) notes that the "program of a neural network is its weight matrix". Consequently, this "program" can be changed in a self-referential way by the neural network itself (Schmidhuber, 1993; Irie et al., 2022). Such a neural network that improves itself, as well as improving the way it improves itself, might be an important stepping stone towards open-ended self-referential self-improvement of AIs (Schmidhuber, 2003). However, self-improvement via self-referential weight matrices is costly as it requires additional parameters that modify all of the model's
parameters. Since behaviors and capabilities of LLMs are significantly influenced by the prompts that we provide to them, we can similarly think of prompts as the program of an LLM (Zhou et al., 2023). In this view, changing a prompt strategy such as the Scratchpad method (Nye et al., 2021) or Chain-of-Thought Prompting (Wei et al., 2022) corresponds to changing the "program" of the LLM. Taking this analogy further, we can use the LLM itself to change its prompts, as well as the way it changes these prompts, moving us towards a fully self-referential self-improving system grounded in LLMs.
In this paper, we introduce Promptbreeder (PB) for self-referential self-improvement of LLMs. Given a seed set of mutation-prompts (i.e. instructions to modify a task-prompt), thinking-styles (i.e. text descriptions of general cognitive heuristics), and a domain-specific problem description, PB generates variations of the task-prompts and mutation-prompts, exploiting the fact that LLMs can be prompted to act as mutation operators (Meyerson et al., 2023). Based on the fitness of the evolved task-prompts as measured on the training set, we select a subset of evolutionary units consisting of task-prompts and their associated mutation-prompts to transmit to future generations. Over multiple generations of PB, we observe prompts adapting to the domain at hand. For example, in a mathematical domain, PB evolved the task-prompt "Show all your working. II. You should use the correct mathematical notation and vocabulary, where appropriate. III. You should write your answer in full sentences and in words. IV. You should use examples to illustrate your points and prove your answers. V. Your workings out should be neat and legible" on GSM8K (see Appendix J). On a wide range of commonly used benchmarks spanning commonsense reasoning, arithmetic, and ethics, we find that PB outperforms state-of-the-art methods like Chain-of-Thought (Wei et al., 2022) and Plan-and-Solve (Wang et al., 2023b) prompting. As PB does not require any parameter updates for self-referential self-improvement, we believe this approach points to an interesting future where larger and more capable LLMs could further amplify the gains of our approach.
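The overall loop can be sketched as a simple tournament-style genetic algorithm over (task-prompt, mutation-prompt) units. In the sketch below, `llm` and `fitness` are placeholders for a call to the underlying LLM and for a training-set evaluation, and the single mutation scheme shown is a simplified stand-in for the full set of mutation operators described later in the paper:

```python
import random

def llm(prompt: str) -> str:
    """Placeholder for a call to the underlying LLM."""
    raise NotImplementedError

def fitness(task_prompt: str, train_set) -> float:
    """Placeholder: score a task-prompt by answer accuracy on a training batch."""
    raise NotImplementedError

def promptbreeder(problem_description, mutation_prompts, thinking_styles,
                  train_set, pop_size=20, generations=30):
    # initialise the population: each unit pairs a task-prompt with a mutation-prompt
    population = []
    for _ in range(pop_size):
        mp, ts = random.choice(mutation_prompts), random.choice(thinking_styles)
        population.append({"task": llm(f"{ts} {mp} INSTRUCTION: {problem_description}"),
                           "mutation": mp})
    for _ in range(generations):
        a, b = random.sample(population, 2)   # binary tournament on fitness
        winner, loser = (a, b) if fitness(a["task"], train_set) >= fitness(b["task"], train_set) else (b, a)
        # self-referential step: first mutate the mutation-prompt itself ...
        new_mp = llm(f"Please improve this instruction-mutation prompt: {winner['mutation']}")
        # ... then use it to mutate the winner's task-prompt, overwriting the loser
        loser["mutation"] = new_mp
        loser["task"] = llm(f"{new_mp} INSTRUCTION: {winner['task']}")
    return max(population, key=lambda u: fitness(u["task"], train_set))
```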
In summary, this paper makes the following main contributions: (i) we introduce Promptbreeder, a self-referential self-improvement method for LLMs that evolves prompts for a domain at hand, as well as improves the way it is evolving these prompts, (ii) we report improvements over state-of-the-art prompt strategies on a wide range of commonly used arithmetic and commonsense reasoning benchmarks, and (iii) we investigate the various self-referential components of Promptbreeder and their contribution to our results.
| Setting | Method | LLM | MultiArith* | SingleEq* | AddSub* | SVAMP* | SQA | CSQA | AQuA-RAT | GSM8K |
|---|---|---|---|---|---|---|---|---|---|---|
| Zero-shot | CoT | text-davinci-003 | (83.8) | (88.1) | (85.3) | (69.9) | (63.8) | (65.2) | (38.9) | (56.4) |
| | PoT | text-davinci-003 | (92.2) | (91.7) | (85.1) | (70.8) | – | – | (43.9) | (57.0) |
| | PS | text-davinci-003 | (87.2) | (92.2) | (88.1) | (72.0) | – | – | (42.5) | (58.2) |
| | PS+ | text-davinci-003 | (91.8) | (94.7) | (**92.2**) | (75.7) | (65.4) | (71.9) | (46.0) | (59.3) |
| | PS | PaLM 2-L | 97.7 | 90.6 | 72.4 | 83.5 | 50.0 | 77.9 | 40.2 | 59.0 |
| | PS+ | PaLM 2-L | 92.5 | 94.7 | 74.4 | 86.3 | 50.1 | 73.3 | 39.4 | 60.5 |
| | APE | PaLM 2-L | 95.8 | 82.2 | 72.2 | 73.0 | 38.4 | 67.3 | 45.7 | 77.9 |
| | OPRO | PaLM 2-L | – | – | – | – | – | – | – | 80.2 |
| | PB (ours) | PaLM 2-L | **99.7** | **96.4** | 87.8 | **90.2** | **71.8** | **85.4** | **62.2** | **83.9** |
| Few-shot | Manual-CoT | text-davinci-003 | (93.6) | (93.5) | (**91.6**) | (80.3) | (71.2) | (78.3) | (48.4) | (58.4) |
| | Auto-CoT | text-davinci-003 | (95.5) | (92.1) | (90.8) | (78.1) | – | – | (41.7) | (57.1) |
| | PB (ours) | PaLM 2-L | **100.0** | **98.9** | 87.1 | **93.7** | **80.2** | **85.9** | **64.6** | **83.5** |

Table 1: Promptbreeder (**PB**) comparison to Chain-of-Thought (**Manual-CoT**, Wei et al., 2022), Zero-shot **CoT** (Kojima et al., 2022), Program-of-Thoughts (**PoT**, Chen et al., 2022), **Auto-CoT** (Zhang et al., 2023b), **OPRO** (Yang et al., 2023a), the Automatic Prompt Engineer zero-shot prompt (**APE**, Zhou et al., 2023), and Plan-and-Solve with (**PS+**) and without (**PS**) the improved prompt (Wang et al., 2023b). Best results in the zero-shot and few-shot categories are highlighted in bold. Results in brackets are taken directly from the Plan-and-Solve paper, which uses text-davinci-003 (Brown et al., 2020); the remaining rows use PaLM 2-L (Anil et al., 2023) as the underlying LLM. For datasets with asterisks (MultiArith*, SingleEq*, AddSub*, and SVAMP*), we randomly took half of the examples for training and report accuracy on the remaining test set. See Section 4 and Appendix I for details on the prompts and datasets.
## 2 Related Work
Prompting an LLM in the right way is essential to its downstream performance (Moradi and Samwald, 2021; Madaan and Yazdanbakhsh, 2022; Zhou et al., 2023). Indeed, even the order in which prompts are presented can heavily influence LLM performance (Lu et al., 2022). A number of recent works have focused on devising better prompt strategies, or even automating such prompt engineering.
**Prompting:** Chain-of-Thought Prompting (CoT, Wei et al., 2022) is a popular prompt strategy which provides intermediate reasoning steps as few-shot prompts to an LLM, thereby significantly improving its arithmetic, commonsense, and symbolic reasoning abilities. Notably, the gains of CoT are more pronounced for stronger LLMs. This is intriguing, as it points to the possibility of increasingly capable (and potentially open-ended) self-improving mechanisms on top of adept LLMs--a hypothesis that Promptbreeder directly builds upon. Instead of few-shot CoT prompting, Kojima et al. (2022) demonstrate that LLMs can also be prompted zero-shot (e.g. "Let's think step by step") to produce their own chains of thoughts (Zero-shot CoT) that improve reasoning abilities. Self-Consistency (CoT-SC, Wang et al., 2022) extends CoT by sampling a diverse set of workings out and selecting the most consistent answer. Tree of Thoughts (ToT, Yao et al., 2023) generalizes CoT to multiple workings out that can be expanded or backtracked from. Graph of Thoughts (GoT, Besta et al., 2023) is a further generalization to arbitrary graph structures. Plan-and-Solve Prompting (PS, Wang et al., 2023b) encourages an LLM to first devise a plan to solve a problem before attempting to solve it. Similarly, Least-to-Most Prompting (Zhou et al., 2022) encourages an LLM to decompose a problem into subparts, and then to solve each part individually before synthesizing an answer. Self-Refine (Madaan et al., 2023) prompts an LLM to generate a response, to provide feedback on the response, and to finally refine the solution.
Figure 1: Overview of Promptbreeder. Given a problem description and an initial set of general “thinking-styles” and mutation-prompts, Promptbreeder generates a population of units of evolution, each unit consisting of typically two task-prompts and a mutation-prompt. We then run a standard binary tournament genetic algorithm (Harvey, 2011). To determine the fitness of a task-prompt we evaluate its performance on a random batch of training data. Over multiple generations, Promptbreeder subsequently mutates task-prompts as well as mutation-prompts using five different classes of mutation operators. The former leads to increasingly domain-adaptive task-prompts whereas the latter evolves increasingly useful mutation-prompts in a self-referential way.
In contrast to gradient-free approaches above, Soft Prompting approaches (e.g., Liu et al., 2021; Qin and Eisner, 2021; Lester et al., 2021) directly fine-tune continuous prompt representations. Huang et al. (2022) use CoT and CoT-SC on an unlabelled dataset of questions, and subsequently fine-tune an LLM based on generated solutions. Similarly, Zelikman et al. (2022) uses CoT to generate rationales and fine-tunes the LLM based on those examples and rationales that yielded the correct answer. However, as argued by Zhou et al. (2023), any approach that updates all or a portion of LLM parameters will not scale as models get bigger and, moreover, will not work with the increasing number of LLMs hidden behind an API.
All of the prompt engineering approaches above are domain agnostic but hand designed. Central to our work is the hypothesis that we could do better by employing an automated self-improvement process that can adapt prompts to a domain at hand. Auto-CoT (Zhang et al., 2023b) and Automatic-CoT (Shum et al., 2023) automatically find reasoning chains for Few-Shot CoT. Automatic Prompt Engineer (APE, Zhou et al., 2023) uses one generator-prompt to generate prompt candidates, and another mutation-prompt to mutate them. In contrast to APE, our work performs compositional task-specific initialization of mutation-prompts, subsequent online mutation of mutation-prompts, uses special mutation operators that take into account the whole population and elite history, and uses diversity-maintenance methods--all of which help avoid the problem of diminishing returns and diversity loss suffered by APE.
Concurrently to our work, Yang et al. (2023a) developed Optimization by PROmpting (OPRO), a prompt optimization method that varies prompts using a single complex mutation prompt, and evaluates newly generated prompts on a small fixed training set of problems. In contrast, Promptbreeder autonomously evolves multiple LLM-generated mutation-prompts as well as task-prompts, and evaluates fitness on random subsets from the whole training set during evolution. At the time of its release, OPRO achieved a score of 80.2% via the optimized zero-shot prompt "Take a deep breath and work on this problem step-by-step" on GSM8K. Promptbreeder surpasses this with 83.9% in the zero-shot setting with the unintuitively simple prompt "SOLUTION"--further evidence for the sensitivity of LLMs to prompts and the importance of finding effective prompts automatically. Also concurrently to our work, Guo et al. (2023) developed EvoPrompt, which uses a fixed mutation (and crossover) prompt, as well as a prompt that asks for a mutant of the difference between two parent prompts, to produce offspring prompts. EvoPrompt is initialized with a whole population of initial hand-designed, task-tailored prompts rather than a single problem description as we do. In contrast to the two approaches above, Promptbreeder uses LLMs to self-referentially improve mutation-prompts, and it is able to evolve contexts as well.
**Self-Referential Self-Improvement**: Developing an open-ended system that can improve itself as well as improving the way it is improving itself (Schmidhuber, 1993; 2003) is a long-standing open problem in AI research. Schmidhuber (1993) introduced an "introspective" neural network with a self-referential weight matrix that can modify its own weights and, thus, also modify those weights that are governing how its own weights are modified. Recently, Irie et al. (2022) proposed a more scalable self-referential weight matrix taking inspiration from fast weight programmers (Schmidhuber, 1992). Kirsch and Schmidhuber (2022) propose a self-referential meta-learning approach, combining self-referential weight matrices with ideas from Gödel Machines (Schmidhuber, 2003), i.e., to allocate more computational resources to better performing solutions. However, since these approaches directly modify parameters of a model, it is unclear how to scale them to the increasing number of parameters in modern LLMs. In contrast, for Promptbreeder the substrate of self-referential self-improvement is natural language, avoiding costly parameter updates altogether.
**Open-Endedness and LLMs**: Promptbreeder makes use of the observation by Lehman et al. (2022), Meyerson et al. (2023) and Chen et al. (2023) that LLMs are effective at generating mutations from examples. In addition, LLMs encode human notions of interestingness and can be used to automatically quantify novelty (Zhang et al., 2023a). Promptbreeder is related to Picbreeder (Secretan et al., 2008), an open-ended human-in-the-loop system that evolves increasingly interesting images. While Picbreeder explores the space of images, Promptbreeder explores the space of prompts and does so without humans in the loop. As Promptbreeder is proposing mutated prompts to itself, it is an example of a system transitioning from "learning from data" to "learning what data to learn from" (Jiang et al., 2022).
## 3 Promptbreeder
We introduce Promptbreeder, a prompt evolution system that can automatically explore prompts for a given domain and that is able to find task-prompts that improve an LLM's ability to derive answers to questions in that domain. Promptbreeder is general purpose in that the same system is able to adapt to many different domains.
Promptbreeder makes use of the observation that LLMs can be used to generate variations of input text (Lehman et al., 2022; Meyerson et al., 2023; Chen et al., 2023). Figure 1 gives an overview of our method. We are interested in evolving task-prompts. A task-prompt \(P\) is a string used to condition the context of an LLM in advance of some further input \(Q\), intended to ensure a better response than if \(Q\) had been presented in the absence of \(P\). To evaluate the fitness of each evolved task-prompt, we sample a batch of 100 Q&A pairs from the entire training set of the domain at hand.2
Footnote 2: Our prompt strategy sequentially applies two task-prompts. The first task-prompt + question produces a continuation. The continuation + second task-prompt produces the final answer.
Promptbreeder generates task-prompts according to an evolutionary algorithm. The mutation operator for this algorithm is itself an LLM, conditioned on a mutation-prompt \(M\). That is, a mutated task prompt \(P^{\prime}\) is defined by \(P^{\prime}=\operatorname{LLM}(M+P)\) where '\(+\)' corresponds to string concatenation. A variety of such mutation-prompts are described in Section 3.2.
Promptbreeder's main self-referential mechanism stems from applying the evolutionary algorithm not just to task-prompts but also to mutation-prompts. The mutation operator for this meta-level algorithm is again an LLM, now conditioned on a hyper-mutation prompt \(H\). That is, we obtain a mutated mutation-prompt \(M^{\prime}\) via \(M^{\prime}=\operatorname{LLM}(H+M)\).
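To make the two mutation levels concrete, the sketch below writes them as plain Python. It is a minimal illustration rather than the authors' implementation: `llm` is a hypothetical stand-in for a call to the underlying language model that returns a text continuation of its input.

```python
# Minimal sketch; `llm` is an assumed wrapper around a real LLM API call.
def llm(prompt: str) -> str:
    """Return a text continuation of `prompt`; replace with a real API client."""
    raise NotImplementedError

def mutate_task_prompt(task_prompt: str, mutation_prompt: str) -> str:
    # P' = LLM(M + P): first-order mutation of a task-prompt.
    return llm(mutation_prompt + " " + task_prompt)

def mutate_mutation_prompt(mutation_prompt: str, hyper_mutation_prompt: str) -> str:
    # M' = LLM(H + M): the self-referential step that mutates the mutation-prompt itself.
    return llm(hyper_mutation_prompt + " " + mutation_prompt)
```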
Given a set of "thinking styles" \(\mathcal{T}\) and a set of initial mutation-prompts \(\mathcal{M}\), as well as a domain-specific problem description \(D\), Promptbreeder initializes a population of mutated task-prompts (see Section 3.1). To clarify, a unit of evolution consists of a set of task-prompts, a mutation-prompt and in the few-shot case, a set of correct workings out (i.e. step-by-step or "chains-of-thought" reasoning steps that led to the correct answer). This means task-prompts and mutation-prompts are in 1:1 correspondence. To evolve this population, we employ a binary tournament genetic algorithm framework (Harvey, 2011): we sample two individuals from the population, we take the individual with the higher fitness, mutate it (see next section) and overwrite the loser with the mutated copy of the winner.
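The binary tournament loop can itself be sketched in a few lines. This is a simplified sketch under stated assumptions, not the actual training code: `answer_is_correct` is a hypothetical evaluator that runs the LLM with a unit's task-prompts on one Q&A pair and checks the final answer, and `mutate_task_prompt` is taken from the previous sketch.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Unit:
    task_prompts: list                           # typically two task-prompts per unit
    mutation_prompt: str
    context: list = field(default_factory=list)  # optional few-shot correct workings out

def fitness(unit, qa_batch, answer_is_correct):
    # Fraction of the batch answered correctly when conditioning on this unit's prompts.
    return sum(answer_is_correct(unit, qa) for qa in qa_batch) / len(qa_batch)

def tournament_step(population, train_set, answer_is_correct, batch_size=100):
    a, b = random.sample(population, 2)
    batch = random.sample(train_set, min(batch_size, len(train_set)))
    fa = fitness(a, batch, answer_is_correct)
    fb = fitness(b, batch, answer_is_correct)
    winner, loser = (a, b) if fa >= fb else (b, a)
    # Overwrite the loser with a mutated copy of the winner.
    loser.task_prompts = [mutate_task_prompt(p, winner.mutation_prompt)
                          for p in winner.task_prompts]
    loser.mutation_prompt = winner.mutation_prompt
```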
### Promptbreeder Initialization
To give a concrete example, consider the initialization steps used to produce the task-prompts and mutation-prompts for GSM8K (a 'grade school maths' word problem dataset). The problem description is "Solve the math word problem, giving your answer as an arabic numeral". Because Plan-and-Solve (Wang et al., 2023) uses two task-prompts we also evolve two task-prompts (plus a mutation-prompt) per unit of evolution. In order to promote diversity in the initial prompts, we generate the initial task-prompts by concatenating (for each task-prompt) a randomly drawn'mutation-prompt' (e.g. "Make a variant of the prompt.") and a randomly drawn 'thinking-style' (e.g. "Let's think step by step") to the problem description, and provide that to the LLM to produce a continuation, resulting in an initial task-prompt. We do this twice to produce the two initial task-prompts per unit. Both the mutation-prompt and the thinking-style are randomly sampled from an initial set of mutation-prompts and a set of thinking-styles (see Appendices C, D and G for the full sets). The mutation-prompt is added to the unit of evolution and so is associated with its specific task-prompt throughout the evolutionary run.
For the example above, the complete input string to the LLM to make an initial task-prompt could be "Make a variant of the prompt. Let's think step by step. INSTRUCTION: Solve the math word problem, giving your answer as an arabic numeral. INSTRUCTION MUTANT:". Note how the control strings "INSTRUCTION" and "INSTRUCTION MUTANT" are added to encourage an appropriate continuation. Table 4 in Appendix E shows examples of the initial prompts generated in this way.
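A sketch of this initialization step is given below, again assuming the hypothetical `llm` helper and the `Unit` container from the earlier sketches; it follows one reading of the procedure in which the sampled mutation-prompt is shared by both initial task-prompts of a unit.

```python
import random

def init_unit(problem_description, mutation_prompts, thinking_styles):
    mutation_prompt = random.choice(mutation_prompts)

    def make_task_prompt():
        thinking_style = random.choice(thinking_styles)
        # Control strings encourage an appropriate continuation, as described above.
        seed = (f"{mutation_prompt} {thinking_style} "
                f"INSTRUCTION: {problem_description} INSTRUCTION MUTANT:")
        return llm(seed)

    # Two task-prompts per unit; the drawn mutation-prompt stays with the unit.
    return Unit(task_prompts=[make_task_prompt(), make_task_prompt()],
                mutation_prompt=mutation_prompt)
```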
### Mutation Operators
As shown in Figure 1, there are nine operators falling into five broad classes which drive the exploration of prompt strategies. For each replication event only one of nine mutation operators is applied (we sample with uniform probability over the nine operators to decide which mutation operator to apply). The rationale for using this diverse set of operators is to enable the LLM to explore a large space of cognitive methods of linguistic self-questioning, by repeatedly changing the framing of the problem as well as retrieving mental models expressed in natural language that can help tackle a given reasoning challenge. Investigations from insight learning strongly suggest that diverse representational re-description is key to problem solving (Ollinger and Knoblich, 2009)--a principle that we attempt to recreate via self-referential self-improvement with natural language as the substrate. Figure 2 illustrates in what way Promptbreeder is self-referential (see Appendix F for a more detailed explanation).
#### 3.2.1 Direct Mutation
The simplest class of mutation operators directly generates a new task-prompt \(P^{\prime}\) from either one existing task-prompt \(P\) (first-order prompt generation) or from a general prompt that encourages free-form generation of new task-prompts, i.e. not using an existing parent (zero-order prompt generation).
**Zero-order Prompt Generation**: We generate a new task-prompt by concatenating the problem description \(D\) (e.g. "Solve the math word problem, giving your answer as an arabic numeral") with the prompt "A list of 100 hints:", which invites the LLM to come up with a new hint that could help solve a problem in the given problem domain. We extract the first generated hint as the new task-prompt. Crucially, this new task-prompt does not depend on any previously found task-prompt. Instead, it is re-generated from the problem description each time. Our rationale for including this zero-order operator is that where prompt evolution diverges, this operator allows us to generate new task-prompts closely related to the original problem description, similar to uniform re-sampling in automated curriculum learning approaches (Jiang et al., 2021; Park et al., 2023; Parker-Holder et al., 2022).
**First-order Prompt Generation**: We concatenate the mutation-prompt (red), to the parent task-prompt (blue), and pass it to the LLM to produce the mutated task-prompt. For example "Say that instruction again in another way. DON'T use any of the words in the original instruction there's good chap. INSTRUCTION: Solve the math word problem, giving your answer as an arabic numeral. INSTRUCTION MUTANT: ". This procedure is identical to the initialization method, except that a randomly sampled thinking-style string is not used. First-order prompt generation is Promptbreeder's standard asexual mutation operator, and it is the core of every genetic algorithm--taking one parental genotype (task-prompt) and applying the mutation to it (in this case influenced by the mutation-prompt).
#### 3.2.2 Estimation of Distribution Mutation
The next class of mutation operators condition not just on zero or one parent, but instead on a set of parents. As such, they may be more expressive by considering patterns in the population.
**Estimation of Distribution (EDA) Mutation**: Inspired by Hauschild and Pelikan (2011), we provide a filtered and numbered list of the current population of task-prompts to the LLM and ask it to continue this list with new task-prompts. We filter the population of prompts on the basis of BERT (Devlin et al., 2019) embedding cosine similarities between each other--an individual is not included in the list if it is more than \(0.95\) similar to any other entry in the list, thus encouraging diversity (cf. quality-diversity methods (Lehman and Stanley, 2011; Mouret and Clune, 2015)). The prompts are listed in random order and we do not give the LLM access to the fitness values of individuals in the population--we found in preliminary experiments that the LLM did not understand these fitness values3 and resorted to generating copies of entries in the list.
Footnote 3: This is contrary to recent findings by Mirchandani et al. (2023). We leave it for future work to revisit whether LLMs can interpret fitness values for improved prompt evolution.
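The diversity filter used by this operator can be sketched as follows; `embed` is a hypothetical stand-in for a BERT sentence-embedding function, and the 0.95 threshold is the one stated above.

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def filter_for_diversity(prompts, embed, threshold=0.95):
    # Drop an entry if it is more than `threshold` cosine-similar to any kept entry.
    kept, kept_vecs = [], []
    for p in prompts:
        v = embed(p)
        if all(cosine(v, w) <= threshold for w in kept_vecs):
            kept.append(p)
            kept_vecs.append(v)
    return kept
```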
**EDA Rank and Index Mutation**: This is a variant of the above in which task-prompts are listed in fitness order. Preliminary experiments showed that the LLM is more likely to generate entries that are similar to the elements appearing later in the list. This is in line with similar findings of recency effects in LLMs (Liu et al., 2023). Therefore, after filtering in the same way as before, we ordered the task-prompts in the population by ascending order of fitness. The top of the list is prefixed by the following prompt: "INSTRUCTION: " + <<mutation-prompt>> + "\n A List of Responses in descending order of score." + <<last index + 1>> + "is the best response. It resembles" + << last index>> + "more than it does (1)". Note that we have 'lied' to the LLM by telling it that the order is descending. This is because otherwise it is too biased towards producing a new entry that is too similar to the final entry. The contradiction between the ascending ordering and the statement that it is a descending ordering appears to improve the diversity of sampling. The rationale for this operator is again to represent the current distribution in such a way that high fitness and yet diverse extrapolations are suggested by the LLM.
**Lineage Based Mutation**: For each unit of evolution, we store a history of the individuals in its lineage that were the best in the population, i.e., a historical chronological list of elites. This list is provided to the LLM in chronological order (not filtered by diversity), with the heading "GENOTYPES FOUND IN ASCENDING ORDER OF QUALITY" to produce a novel prompt as continuation. The rationale for this operator is that we expect the signal of improving genotype prompts may be stronger than the signal from prompts in the current population since they provide a gradient of bad to good prompts that could be followed (assuming this signal can be used by the LLM).
#### 3.2.3 Hypermutation: Mutation of Mutation-Prompts
While the mutation operators above might already explore diverse task-prompts, a self-improving system should ideally also improve the way it is improving itself in a self-referential way. Our third class of mutation operators includes hyper-mutation operators concerned with the evolution of evolvability (Dawkins, 2003; Pigliucci, 2008; Payne and Wagner, 2019; Gajewski et al., 2019)--those which modify the search/exploration process rather than the task reward obtaining process directly.4
Footnote 4: This is similar to population based training (Jaderberg et al., 2017)—instead of applying it to hyperparameters such as learning rates, it applies to the mutation-prompts of Promptbreeder.
**Zero-order Hyper-Mutation**: We concatenate the original problem description to a randomly sampled thinking-style, and feed it to the LLM to generate a new mutation-prompt. The resulting mutation-prompt is applied to a task-prompt to make a variant of the task-prompt as in First-order Prompt Generation (see Section 3.2.1). Note that this zero-order meta-mutation operator is identical to that used during initialization. The rationale for this operator is to generate mutation operators in a way similar to initialization, while also bringing in knowledge from the set of thinking styles.
Figure 2: Overview of multiple variants of self-referential prompt evolution. In (**a**), the LLM is directly used to generate variations \(P^{\prime}\) of a prompt strategy \(P\)(cf. Meyerson et al., 2023). Using a mutation prompt \(M\), we can explicitly prompt an LLM to produce variations (**b**). By using a hyper mutation prompt \(H\), we can also evolve the mutation prompt itself, turning the system into a self-referential one (**c**). Promptbreeder (**d**) improves the diversity of evolved prompts and mutation prompts by generating an initial population of prompt strategies from a set of seed thinking-styles \(\mathcal{T}\), mutation-prompts \(\mathcal{M}\), as well as a high level description \(D\) of the problem domain.
**First-order Hyper-Mutation**: We concatenate the hyper-mutation-prompt "Please summarize and improve the following instruction:" to a mutation-prompt so that the LLM generates a new mutation-prompt. This newly generated mutation-prompt is then applied to the task-prompt of that unit (see First-Order Prompt Generation in Section 3.2.1). In this way, we can evaluate the influence of the hyper-mutation via its newly generated mutation-prompt on the quality of the evolved downstream task-prompt at once.
#### 3.2.4 Lamarckian Mutation
For this class of mutation operators we mimic a Lamarckian process. We want to use a successful phenotype (i.e. the concrete working out used to produce correct answers induced by an evolved task-prompt) to generate a new genotype (i.e. a mutant task-prompt). Several processes of this form have appeared in the literature of LLMs, e.g. STaR (Zelikman et al., 2022), APO (Pryzant et al., 2023), and APE (Zhou et al., 2023).
**Working Out to Task-Prompt**: This is a 'Lamarckian' mutation operator similar to instruction induction in APE. We give an LLM a previously generated working out that led to a correct answer via the following prompt: "I gave a friend an instruction and some advice. Here are the correct examples of his workings out + <<correct working out>> + The instruction was:". This is effectively reverse-engineering the task-prompt from a given working out. An effective example of this is shown in Appendix H. This kind of operator is critical when the problem description is absent, insufficient, or misleading.
#### 3.2.5 Prompt Crossover and Context Shuffling
Our last class of mutation operators are crossover operators and operators for shuffling the few-shot context examples present in the units of evolution.
**Prompt Crossover**: After a mutation operator is applied, with 10% chance a task-prompt is replaced with a randomly chosen task-prompt from another member of the population. This member is chosen according to fitness proportionate selection. Crossover is not applied to mutation-prompts, only to the task-prompts.
**Context Shuffling**: Promptbreeder can simultaneously evolve the task-prompts, mutation-prompts and the set of correct workings out known as the few-shot context. To achieve the latter, we fill up a few-shot context with only workings out that led to correct answers. During evaluation we provide this few-shot context before the task-prompt, providing guidance as to the form of the working out that is desired. If the few-shot context list is full, a single randomly sampled new correct working out replaces an existing working out from the list after fitness evaluation of a unit on a new set of questions. In addition, with a 10% chance we resample the whole context list with probability inverse to the maximum context list length.
## 4 Experiments
We used a population size of 50 units, evolved for typically 20-30 generations, where a generation involves forming random pairs of all individuals in the population and competing them against each other. To evaluate Promptbreeder, we use the datasets from state-of-the-art prompt strategies such as Plan-and-Solve, spanning _arithmetic reasoning_ with GSM8K (Cobbe et al., 2021), SVAMP (Patel et al., 2021), MultiArith (Roy and Roth, 2016), AddSub (Hosseini et al., 2014), AQuA-RAT (Ling et al., 2017), and SingleEq (Koncel-Kedziorski et al., 2015), _commonsense reasoning_ with CommonsenseQA (CSQA, Talmor et al., 2019) and StrategyQA (SQA, Geva et al., 2021), _instruction induction_ tasks from (Honovich et al., 2023), and _hate speech classification_ on the ETHOS dataset (Mollas et al., 2022). See Appendix I for details.
## 5 Results and Discussion
We present results of Promptbreeder (**PB**) in comparison to state-of-the-art prompt strategies on a range of commonly used reasoning benchmarks in Table 1. PB outperforms **PS+**, the best Plan-and-Solve (Wang et al., 2023) prompting technique. Note that the performance of PS+ is improved
by using PaLM 2-L (Anil et al., 2023) as the underlying LLM (PS+ with PaLM 2-L) on all datasets except AddSub compared to the text-davinci-003 results in the original paper. On all other datasets, zero-shot PB accuracy is higher than PS+, with further improvement in the few-shot case when examples of discovered solutions are included with the prompts. In Table 6 in Appendix J, we show the best evolved zero-shot prompts. The best few-shot candidates are shown in Appendix J.5 onwards. Appendix K shows few-shot results and their controls on the Instruction Induction tasks from the APE paper. To investigate the ability of Promptbreeder to evolve complex domain-specific prompts for a downstream task, we applied it to the ETHOS Hate Speech Classification problem (Mollas et al., 2022). Promptbreeder was able to evolve a prompt strategy consisting of two sequentially applied, relatively long prompts (see Appendix J.1) that scored 89% on ETHOS--an improvement over the hand-designed prompt "Determine whether a text contains hate speech", which scores only 80%. This demonstrates that Promptbreeder is capable of intricate domain-adaptation to the task at hand. Appendix B shows a typical evolutionary run and the prompts evolved, showing that unlike iterative APE, fitness continues to increase throughout the run.
We analysed the best mutation-prompts used during a run for GSM8K. Table 7 in Appendix J.3 shows the best evolved mutation prompts according to their scores (the proportion of times that, when the mutation-prompt was applied to a task-prompt in a unit, a better task-prompt was produced). Table 8 in Appendix J.4 shows, in descending order, the percentage of times that the different kinds of mutation operators resulted in an improvement when applied to a task-prompt in the population. It demonstrates that all mutation operators are important for Promptbreeder to work, including hypermutation operators which lead to self-referential self-improvement.
We measured the impact of self-referential operators on all the maths datasets and the ETHOS dataset. Details of the ablation process and its results can be found in Appendix L. Removing any self-referential operator is harmful under nearly all circumstances, the greatest benefit being the initial re-description of task-prompts upon initialization. We only found one mutation operator to be harmful for one specific task: drawing randomly from the set of mutation-prompts upon initialization hurts performance on GSM8K.
## 6 Conclusion and Future Work
We introduced Promptbreeder (PB), a self-referential self-improving system that can automatically evolve effective domain-specific prompts for a domain at hand. PB is self-referential in that it not only evolves task-prompts, but it also evolves mutation-prompts that govern the way PB modifies task-prompts. Thus, it is not only improving prompts but it also improves the way it is improving prompts.
Going forward, it could be interesting to use the LLM itself to assess and promote the diversity of generated prompts (see Zhang et al., 2023a), or to use it to determine the fitness of a whole "thought process", e.g. an N-prompt strategy where prompts are conditionally applied rather than unconditionally applied as in Promptbreeder. For example, a more complex "thought process" is to use PB in self-play mode to evolve pre-prompts for LLM-based policies that compete with each other, i.e., in a competitive Socratic5 dialog.
Footnote 5: [https://princeton-nlp.github.io/SocraticAI/](https://princeton-nlp.github.io/SocraticAI/)
PB remains limited compared to the open-endedness of human thought processes. First, the topology of prompting remains fixed (see Figure 2)--we only adapt the prompt content not the prompting algorithm itself. One interpretation of thought is that it is a reconfigurable open-ended self-prompting process. If so, how does one develop complex thought strategies? Clearly it is necessary to generate and evaluate them, and whilst a simple evolutionary process provides one framework in which a thought strategy could be evolved, our actual human experience suggests multiple overlapping hierarchical selective processes at play. Moreover, in addition to language, human thought involves intonation, imagery, etc., in a multimodal system.
We believe PB points to an exciting future where increasingly open-ended self-referential self-improvement systems can directly use language as the substrate for improvement instead of relying on any parameter updates. This is intriguing, as this approach will likely continue to scale with ever larger and more capable LLMs in the future.
#### Acknowledgments
We thank Edward Hughes and Tom Schaul for feedback on an early draft of the paper. We also thank Tom Schaul, Chengrun Yang, and Denny Zhou for fruitful discussions, as well as Gavin Buttimore, Simon Green, Keith Anderson, Joss Moore, Ollie Purkiss, John Quan, and Francesco Visin for their support in running some of the experiments.
|
2309.13837 | Backorder Prediction in Inventory Management: Classification Techniques
and Cost Considerations | This article introduces an advanced analytical approach for predicting
backorders in inventory management. Backorder refers to an order that cannot be
immediately fulfilled due to stock depletion. Multiple classification
techniques, including Balanced Bagging Classifiers, Fuzzy Logic, Variational
Autoencoder - Generative Adversarial Networks, and Multi-layer Perceptron
classifiers, are assessed in this work using performance evaluation metrics
such as ROC-AUC and PR-AUC. Moreover, this work incorporates a profit function
and misclassification costs, considering the financial implications and costs
associated with inventory management and backorder handling. The study suggests
that a combination of modeling approaches, including ensemble techniques and
VAE, can effectively address imbalanced datasets in inventory management,
emphasizing interpretability and reducing false positives and false negatives.
This research contributes to the advancement of predictive analytics and offers
valuable insights for future investigations in backorder forecasting and
inventory control optimization for decision-making. | Sarit Maitra, Sukanya Kundu | 2023-09-25T02:50:20Z | http://arxiv.org/abs/2309.13837v3 | # Backorder Prediction in Inventory Management: Classification Techniques and Cost Considerations
###### Abstract
This article introduces an advanced analytical approach for predicting backorders in inventory management. Backorder refers to an order that cannot be immediately fulfilled due to stock depletion. Multiple classification techniques, including Balanced Bagging Classifiers, Fuzzy Logic, Variational Autoencoder - Generative Adversarial Networks, and Multi-layer Perceptron classifiers, are assessed in this work using performance evaluation metrics such as ROC-AUC and PR-AUC. Moreover, this work incorporates a profit function and misclassification costs, considering the financial implications and costs associated with inventory management and backorder handling. The study suggests that a combination of modeling approaches, including ensemble techniques and VAE, can effectively address imbalanced datasets in inventory management, emphasizing interpretability and reducing false positives and false negatives. This research contributes to the advancement of predictive analytics and offers valuable insights for future investigations in backorder forecasting and inventory control optimization for decision-making.
Backorder Forecasting; Cost Sensitive; Decision Science; Inventory Management; Machine Learning; Predictive Analytics; 1
## 1 Introduction
Backorders in inventory management refer to a customer's order for a product that is temporarily out of stock, resulting in a delay in fulfilment and delivery. Backorders can be both beneficial and detrimental. Orders might be delayed due to high demand, but also because of poor planning. Customers may not have the luxury or patience to wait if a product is not immediately available. This has a negative impact on sales and client satisfaction. Existing research (e.g., [1]; [2]) has experimented with machine learning (ML) to identify products at risk of backorders. However, there are still gaps that need to be addressed to improve the accuracy, timeliness, interpretability, and real-world implementation of backorder prediction models.
Companies continually strive for balance in the management of backorders. It is a fine line to walk: too much supply raises inventory expenses, while too little supply raises the risk of customers cancelling purchases. Why not keep everything on hand at all times? Because most merchants and manufacturers carry a large number of SKUs (unique product IDs), such an approach would increase inventory costs significantly.
The challenge here is to classify and forecast severely imbalanced product classes based on historical data from inventories and supply chains and to assess their propensity to have backorders. Despite considerable research efforts ([1]; [2]; [3]; [4]) addressing backorder forecasting using artificial intelligence (AI) and machine learning (ML)-based prediction in inventory management, there are still open questions and challenges in this field. There is a lack of comprehensive studies that compare different AI-ML-based prediction systems in terms of their performance, advantages, limitations, and suitability for backorder forecasting. Moreover, while some studies look at financial consequences, the integration of cost and profit issues, such as misclassification costs and the profit function, is not thoroughly examined in the current literature. Many AI-based prediction models lack interpretability, making it difficult for decision-makers to comprehend the underlying issues driving backorders and take appropriate action. This work addresses these gaps by using highly imbalanced backorder data and advanced generative AI and machine learning (ML) techniques. The theoretical foundation of this article lies in the intersection of supply chain management, predictive analytics, and advanced AI-ML techniques, with a specific focus on backorder management and inventory system optimization.
This study considers several metrics, e.g., receiver operating characteristic area under the curve (ROC-AUC), precision-recall area under the curve (PR-AUC), Macro F1-score (harmony between precision and recall), profit maximization, and misclassification cost, as cost-sensitive approaches to developing a final model. This comprehensive evaluation helps to assess the accuracy, robustness, and cost-effectiveness of the prediction models, offering important insights for decision-makers and practitioners in selecting the best applicable solutions for their specific supply chain contexts. The findings have important implications for organizations looking to improve their supply chain analytics capabilities to achieve operational excellence. By enhancing forecasting accuracy, organizations can save costs, improve customer satisfaction through on-time delivery, mitigate the bullwhip effect, and enable proactive decision-making in changing market conditions.
## 2 Literature Review
Before delving into demand and inventory management, we attempted to comprehend the key aspects influencing supply chain performance. Researchers
discovered that supply chain structure, inventory management policy, information interchange, customer demand, forecasting method, lead time, and review period duration are important contributors in this context [5]. Our study incorporates all of these and eventually narrows down to a forecasting method that demonstrates interdependence with inventory management policy, customer demand, lead time, and review period distribution.
The stochastic demand has attracted attention from several scholars in the last decade. Their work encompasses various approaches such as parametric and non-parametric methods, artificial neural networks, FL-based techniques, and other diverse methodologies. Researchers used ML to increase the precision of backorder forecasts [6]. Their study demonstrated the value of ML, showing a 20% improvement in accuracy. The findings emphasized the flexibility, clarity, and enhanced precision that ML offers. Building on this work, few authors ([7], [8], [9], etc.) employed supervised ML techniques on the same dataset used in this research. Their study further corroborated the efficacy of ML in improving backorder prediction within SCM. They highlighted the need for continued research in exploring different algorithms, constructing a cost-sensitive learning framework, and validating performance enhancements. A recent study [8] explored the applications of AI and ML within supply chains which opens innovative avenues for further investigation on how AI and ML can be utilized in SCM.
The existing literature contains several works addressing the economic order quantity (EOQ) and economic production quantity (EPQ) models, considering the presence of backorders. For instance, a recent study conducted research on integrating the EOQ model with backorders and proposed an optimization algorithm to determine the optimal order quantity, backorder quantity, and reorder point [10]. Another study focused on the EPQ model with backorders and developed a mathematical model to optimize production quantity and backorder quantity [1]. All these studies contribute to backorder forecasting by integrating backorders into traditional inventory models, providing optimization techniques, offering practical applications, and leveraging advanced technologies. They enhance our understanding of managing backorders in inventory management, thereby enabling organizations to optimize their inventory systems, improve customer service levels, and make informed decisions.
In a different approach, a hybrid model was proposed that integrated ARIMA and ANN for backorder prediction [11]. By combining the strengths of both approaches, the hybrid model aims to improve the accuracy and robustness of backorder predictions. The impact of non-normal and autocorrelated demand on supply chain risk management was examined, and an empirical method was proposed for computing safety stock levels, showcasing improvements in cycle service level, inventory investment, and backorder volume [12]. They aim to provide a better estimation of safety stock levels, leading to improved cycle service levels, optimized inventory investment, and reduced backorder volume. A new Bayesian method was proposed based on compound Poisson distributions for demand forecasting that outperforms other methods [13]. Researchers also explored the use of deep learning techniques, specifically LSTM networks, for backorder prediction ([3], [12]). Their research highlighted the effectiveness of deep learning in capturing complex patterns and improving the accuracy of backorder forecasts. The combination of all the above methods demonstrates the advancement of backorder forecasting, contributing to improved forecasting.
Several authors (e.g., [14], [6], [1], [15], [3], [8]) have applied ML techniques to the same dataset to determine whether advanced ML techniques can improve the effectiveness of backorder forecasting in the early stages of the supply chain. Adaptable and resilient models are required to handle the complexity of massive inventory data and deliver the correct insights for decision-making ([16], [17]).
All these studies enhance our understanding of managing backorders, optimize inventory systems, improving customer service levels, and aiding decision-making. However, despite the progress made, the field of AI and ML in supply chain management, including backorder forecasting, is still in its early stages [18]. The challenges of working with big data in inventory management, such as data volume, variety, heterogeneity, and statistical biases, need to be addressed to develop adaptable and resilient models for accurate decision-making.
The literature review presented in this study serves as a foundation for developing the conceptual model displayed in Fig. 1 to further enhance backorder forecasting in supply chain management.
Fig. 1 outlines a flow diagram with the different steps and measures were taken in our study to explore and validate the effectiveness of AI and ML techniques in backorder forecasting. By operationalizing the conceptual model into a logical framework, we aim to provide a structured framework for conducting empirical research, analyzing data, and deriving actionable insights for inventory management and supply chain decision-making. Furthermore, the logical model considers cost-sensitive learning frameworks, validation in real-world implementations, and considerations of contextual factors
Fig. 1: Conceptual model
in backorder forecasting. It will also consider the financial implications and costs associated with inventory management and backorder handling by incorporating profit functions and misclassification costs into the model.
The problem statement can be hypothesized as follows: A hypothetical manufacturer has a data set that indicates whether a backorder has happened. The objective is to use predictive analytics and machine learning to reliably estimate future backorder risk and then discover the best method for inventory products with high backorder risk.
## 3 Data Analysis & Model Development
The inventory dataset used in this study has 1,048,575 entries with 8 categorical and 15 numerical variables. A unique identifier, the sku (stock-keeping unit), assigns each data point to a distinct product. SKUs assist businesses in precisely identifying and locating products, monitoring stock levels, and facilitating effective inventory management and order fulfilment operations. They are required for efficient inventory management, supply chain management, and sales analysis. This column, however, was eliminated because it provided no additional value for the purposes of this study. Table 1 shows a summary of the dataset's statistics. From the data type, we can identify the categorical columns (7 columns with object type, which excludes sku). The data file includes the past eight weeks' worth of data, which comes before the week we are attempting to forecast.
One challenge with the dataset is the imbalanced classes displayed in Fig. 2, where the majority class considerably outnumbers the minority class. We use SMOTE (synthetic minority over-sampling technique) and variational auto-encoder (VAE) to deal with unbalanced data sets, which increases modelling accuracy and efficiency. The second task is to optimize for the business case. To do this, we employ:
* Profit maximisation via classification models entails developing a profit function, optimizing the decision threshold, feature engineering, cost-sensitive learning, model selection, and continuous monitoring to ensure that the model's performance aligns with the business's financial objectives. It is a data-driven and business-focused strategy to increase profitability while considering any trade-offs and risks.
* We employ misclassification costs, a practical and business-oriented approach to optimization. This allows for informed decisions regarding model performance, model selection, and decision thresholds while considering the financial and operational aspects of the business case. The problem is viewed as a cross-sectional problem. A sketch of this cost-based threshold selection is given after this list.
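The sketch below illustrates the idea under assumed, illustrative cost figures (the per-unit profit and cost values are not taken from the dataset): the decision threshold of a classifier is swept over a grid and the value that maximizes the expected profit, given misclassification costs, is retained.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def expected_profit(y_true, y_prob, threshold,
                    profit_tp=50.0, cost_fp=10.0, cost_fn=100.0):
    # profit_tp: value of correctly flagging a backorder; cost_fp: cost of excess stocking;
    # cost_fn: lost sales / expediting for a missed backorder (all figures illustrative).
    y_pred = (y_prob >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return tp * profit_tp - fp * cost_fp - fn * cost_fn

def best_threshold(y_true, y_prob, grid=np.linspace(0.01, 0.99, 99)):
    # Pick the cut-off that maximizes the expected profit on a validation set.
    return max(grid, key=lambda t: expected_profit(y_true, y_prob, t))
```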
| Sl | Variable | Description | Data type | Mean | Median | Std dev | Maximum | Unit |
|---|---|---|---|---|---|---|---|---|
| 2 | nationalInv | Current inventory level | int64 | 489.42 | 15.00 | 28595.83 | 12334400 | unit |
| 3 | leadTime | Transit time for product | float64 | 7.84 | 8.00 | 7.04 | 52 | weeks |
| 4 | inTransitQty | Product in transit | int64 | 45.36 | 0.00 | 1390.53 | 489408 | unit |
| 5 | forecast3Month | Forecast for next 3 months | int64 | 185.22 | 0.00 | 5032.30 | 1218328 | unit |
| 6 | forecast6Month | Forecast for next 6 months | int64 | 360.88 | 0.00 | 10067.64 | 2461360 | unit |
| 7 | forecast9Month | Forecast for next 9 months | int64 | 528.91 | 0.00 | 14895.45 | 3777304 | unit |
| 8 | sales1Month | Sales revenue prior 1 month | int64 | 57.30 | 0.00 | 2067.93 | 741774 | unit |
| 9 | sales3Month | Sales revenue prior 3 months | int64 | 180.46 | 0.00 | 5263.48 | 1094112 | unit |
| 10 | sales6Month | Sales revenue prior 6 months | int64 | 352.46 | 0.00 | 9773.35 | 2146625 | unit |
| 11 | sales9Month | Sales revenue prior 9 months | int64 | 544.33 | 0.00 | 15195.65 | 3201035 | unit |
| 12 | minBank | Minimum amount to stock | int64 | 54.14 | 0.00 | 1244.24 | 313319 | unit |
| 13 | piecesPastDue | Past overdue | int64 | 3.28 | 0.00 | 299.43 | 146496 | unit |
| 14 | perf6MonthAvg | Performance last 6 months | float64 | -7.05 | 0.82 | 26.84 | 1 | N/A |
| 15 | perf12MonthAvg | Performance last 12 months | float64 | -6.62 | 0.80 | 26.14 | 1 | N/A |

Table 1: Summary Statistics
### Exploratory Data Analysis
* A thorough data analysis was done for this dataset, which helped in understanding the characteristics, patterns, and relationships within the data.
* The _nationalInv_ attribute indicates the current inventory level. The mean inventory is 489.42 units, with a wide range from 15 units to a maximum of 12,334,400 units. The median value is 15 units, indicating a possible right-skewed distribution.
* The _leadTime_ attribute represents the transit time for the product. The mean transit time is 7.84 weeks, with a median of 8 weeks. The standard deviation is 7.04 weeks, indicating some variability in transit times.
* The _inTransitQty_ attribute indicates the quantity of products currently in transit. The high standard deviation and maximum value indicate significant variation in transit quantities.
* The dataset includes variables representing sales revenue for different time periods (1 month, 3 months, 6 months, and 9 months) as well as forecasted sales for the next 3, 6, and 9 months.
* The mean and median values indicate the average and middle values of sales and forecasts. The high standard deviations suggest variability in sales performance.
* The _minBank_ attribute represents the minimum amount required to stock. The mean and median values indicate the average minimum stocking requirement, while the high maximum value suggests that some products may require a significantly higher minimum stock level.
* The _piecesPastDue_ attribute represents the quantity of products that are past-due. The mean and median values suggest a low average quantity of past due items.
* The _perf6MonthAvg_ and _perf12MonthAvg_ attributes represent the performance of products over the last 6 and 12 months, respectively. The negative mean values for both variables indicate below-average performance, while the standard deviations suggest variation in performance.
* The _localBoQty_ attribute indicates the number of stock orders overdue. The low mean and median values suggest a relatively low average amount of overdue stock orders.
* Several attributes, such as _potentialIssue_, _deckRisk_, _oeConstraint_, _ppapRisk_, _stopAutoBuy_, and _revStop_, represent risk flags associated with the products. These variables are categorical and indicate the presence or absence of specific risks.
* The _wentOnBackorder_ attribute represents whether a product went on backorder. This variable is the target variable, and further analysis is required to understand its distribution and relationship with other variables.
Figure 2: Severely imbalanced classes
Fig. 3 displays the correlation heatmap of numerical attributes. All the significant correlations observed are positive.
* \(forecast3Month\), \(forecast6Month\) and \(forecast9Month\) are strongly correlated (coefficient = 0.99).
* \(sales1Month\), \(sales3Month\), \(sales6Month\) and \(sales9Month\) are strongly correlated with each other with a degree varying from 0.82 to 0.98.
* The forecast and sales columns are correlated with each other, with coefficients ranging from 0.62 to 0.88. This is intuitive: when past sales for a product are high, the forecast for the same product in the coming months will also be higher, and vice versa.
* \(perf6MonthAvg\) and \(perf12MonthAvg\) are very highly correlated with each other (coefficient = 0.97).
* \(minBank\) (minimum amount of stock recommended) is highly correlated with sales and forecast columns as stock in inventory is directly proportional to sales.
* \(inTransitQty\) is highly correlated with the sales, forecast and \(minBank\) columns. This is intuitive: high sales of a product \(\Rightarrow\) more of that product in transit for inventory replenishment, and high sales of a product \(\Rightarrow\) a high forecast.
* \(piecesPastDue\) is weakly correlated with the sales and forecast columns; \(nationalInv\) is weakly correlated with \(minBank\) and weakly correlated with the sales columns.
Overall, the correlation matrix indicates that the number of features used to forecast whether an item will be placed in backorder may be less than the number of features in the data set. Fig. 4 displays the correlation matrix of categorical attributes.
Based on the chi-squared values, there is no strong association between any pairs of variables. There might be a relationship between \(oeConstraint\) and \(potentialIssue\). \(RevStop\) and \(oeConstraint\) may be related in some way. There might be a relationship between \(potentialIssue\) and \(revStop\). Even if the coefficients are high, the features mentioned above have the lowest scores in comparison.
Several data pre-processing steps were employed to get the raw data ready for modelling. These include replacing -99.0 in the performance columns with NaN and employing an iterative imputer to fill in the missing values. Moreover, as some of the numerical features are correlated, linear models such as Logistic Regression, Linear SVM (support vector machines), and other linear models may not perform well, since the coefficients of the separating plane change under such multicollinearity.
The values in the category columns are one hot encoded with "Yes" as 1 and "No" as 0. There are missing values in the columns \(leadTime\), \(perf6MonthAvg\), and \(perf12MonthAvg\). To fill in the missing variables, model-based imputation (iterative-imputer) was employed. Moreover, the positive skewness (possible outliers) in the data was treated in two ways, resulting in two distinct datasets to apply.
* Applied the Robust Scaler, i.e., (value - median) / (75th percentile value - 25th percentile value), to standardize the data while scaling it without letting outliers dominate.
* Applied the log transformation followed by the Standard Scaler to the columns in the dataset with positive skewness.

A sketch of both preprocessing variants is given below.
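The following is a minimal sketch of the two preprocessing variants. It assumes a pandas DataFrame `X` whose columns follow the names in Table 1 (the raw data file uses different, snake_case names), and it is illustrative rather than the exact pipeline used in the study.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.preprocessing import RobustScaler, StandardScaler

num_cols = ["nationalInv", "leadTime", "inTransitQty", "forecast3Month",
            "forecast6Month", "forecast9Month", "sales1Month", "sales3Month",
            "sales6Month", "sales9Month", "minBank", "piecesPastDue",
            "perf6MonthAvg", "perf12MonthAvg", "localBoQty"]

def preprocess(X: pd.DataFrame):
    X = X.copy()
    # The performance columns use -99.0 as a missing-value sentinel.
    X[["perf6MonthAvg", "perf12MonthAvg"]] = (
        X[["perf6MonthAvg", "perf12MonthAvg"]].replace(-99.0, np.nan))
    X[num_cols] = IterativeImputer(random_state=0).fit_transform(X[num_cols])

    # Variant 1: robust scaling, (value - median) / IQR.
    X_robust = pd.DataFrame(RobustScaler().fit_transform(X[num_cols]), columns=num_cols)

    # Variant 2: log1p on the positively skewed columns, then standard scaling.
    skewed = [c for c in num_cols if X[c].skew() > 0]
    X_log = X[num_cols].copy()
    X_log[skewed] = np.log1p(X_log[skewed].clip(lower=0))
    X_log = pd.DataFrame(StandardScaler().fit_transform(X_log), columns=num_cols)
    return X_robust, X_log
```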
Principal Component Analysis (PCA) was employed to identify major features explaining 99% variance in the dataset. Fig. 5 displays the top three features in our PCA are: \(nationalInv\), \(forecast9Month\) and \(sales9Month\).
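A sketch of this PCA step, reusing `X_robust` and `num_cols` from the preprocessing sketch above, is shown below; ranking features by their absolute loading on the leading component is one simple way to read off feature importance.

```python
import numpy as np
from sklearn.decomposition import PCA

pca = PCA(n_components=0.99)   # keep enough components to explain 99% of the variance
pca.fit(X_robust)
print("components retained:", pca.n_components_)

# Rank features by absolute loading on the first principal component.
loadings = np.abs(pca.components_[0])
top3 = sorted(zip(num_cols, loadings), key=lambda t: -t[1])[:3]
print(top3)   # the paper reports nationalInv, forecast9Month and sales9Month as the top three
```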
Fig. 4: Chi-Squared test heatmap of categorical features
Fig. 3: Correlation heatmap of numerical features
The performance metrics used in this work are: ROC-AUC, which measures how well the model differentiates between the positive and negative classes; PR-AUC, which helps in selecting an appropriate threshold that balances the trade-off between precision and recall; and the Macro F1-score, which is the average of the F1 scores of the positive and negative classes.
Besides these three primary metrics, our work reports the Matthews Correlation Coefficient (MCC), precision, recall, and Brier scores for better comparison. MCC considers true positive (TP), true negative (TN), false positive (FP), and false negative (FN) values to provide a balanced measure of classification accuracy. Considering these, our work aims to provide a comprehensive evaluation of the predictive models in terms of their performance, interpretability, and latency constraints. By not having latency constraints, the models can potentially make more accurate and informed predictions by leveraging a broader range of data and capturing any patterns or trends that may emerge over longer time periods. It allows for a more comprehensive analysis of the backorder situation and better forecasting of the likelihood of products going into backorder.
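These metrics can be computed directly with scikit-learn, as in the sketch below, where `y_test` denotes the true labels and `y_prob` the predicted backorder probabilities from any fitted model (both are assumed to exist).

```python
from sklearn.metrics import (roc_auc_score, average_precision_score, f1_score,
                             matthews_corrcoef, precision_score, recall_score,
                             brier_score_loss)

y_pred = (y_prob >= 0.5).astype(int)   # default cut-off; tuned separately via the profit function
report = {
    "ROC-AUC": roc_auc_score(y_test, y_prob),
    "PR-AUC": average_precision_score(y_test, y_prob),   # average precision as a PR-AUC summary
    "Macro F1": f1_score(y_test, y_pred, average="macro"),
    "MCC": matthews_corrcoef(y_test, y_pred),
    "Precision": precision_score(y_test, y_pred),
    "Recall": recall_score(y_test, y_pred),
    "Brier score": brier_score_loss(y_test, y_prob),
}
```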
### Hypothesis tests
We employed non-parametric Mann-Whitney and Chi-Square tests to help assess the association between each attribute and the target variable, or their differences between different groups. The results are displayed in Table 2.
The results indicate significant associations between certain attributes and backorders. Low chi-square test p-values for the attributes \(\textit{deck\_risk}\), \(\textit{oe\_constraint}\), and \(\textit{ppapRisk}\) indicated strong relationships.
All the numerical attributes displayed significant relationships with backorders, based on the Mann-Whitney U tests. These findings suggest that these attributes may play a crucial role in predicting backorders.
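The tests reported in Table 2 can be reproduced with SciPy along the lines of the sketch below, assuming a DataFrame `df` with the target column `wentOnBackorder` encoded as 0/1.

```python
import pandas as pd
from scipy.stats import mannwhitneyu, chi2_contingency

def numeric_association(df, col, target="wentOnBackorder"):
    # Mann-Whitney U test between the backorder and non-backorder groups.
    pos = df.loc[df[target] == 1, col].dropna()
    neg = df.loc[df[target] == 0, col].dropna()
    u_stat, p_value = mannwhitneyu(pos, neg, alternative="two-sided")
    return u_stat, p_value

def categorical_association(df, col, target="wentOnBackorder"):
    # Chi-square test of independence on the contingency table.
    contingency = pd.crosstab(df[col], df[target])
    chi2, p_value, dof, _ = chi2_contingency(contingency)
    return chi2, p_value
```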
### Summary of data analysis and feature engineering
We have a severely imbalanced dataset with 99.99% majority class. The problem we are trying to address is a binary classification problem where we have to predict whether or not a product will go to backorder.
The Dataset contains 15 numerical attributes all of which are highly skewed.
* \(\textit{leadTime}\) attribute comes with missing values (64518).
* \(\textit{perf6MonthAvg}\) and \(\textit{perf12MonthAvg}\) are heavily left skewed; all the remaining features are right skewed.
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**SI.** & **Attributes** & **Statistics with Significance** \\ \hline
0 & \(\textit{nationalInv}\) & U statistic: 1834036383.5, \\ & & p-value: 0.00 \\ \hline
1 & \(\textit{leadTime}\) & U statistic: 4239446959.0, \\ & & p-value: 0.00 \\ \hline
2 & \(\textit{inTransitQty}\) & U statistic: 7002852514.5, \\ & & p-value: 0.00 \\ \hline
3 & \(\textit{forecast3Month}\) & U statistic: 6905403097.0, \\ & & p-value: 0.00 \\ \hline
4 & \(\textit{forecast6Month}\) & U statistic: 6827462574.5, \\ & & p-value: 0.00 \\ \hline
5 & \(\textit{forecast9Month}\) & U statistic: 5869355615.0, \\ & & p-value: 0.00 \\ \hline
6 & \(\textit{sales1Month}\) & U statistic: 5910651312.5, \\ & & p-value: 0.00 \\ \hline
7 & \(\textit{sales3Month}\) & U statistic: 5809789939.0, \\ & & p-value: 0.00 \\ \hline
8 & \(\textit{sales6Month}\) & U statistic: 5740565459.0, \\ & & p-value: 0.00 \\ \hline
9 & \(\textit{sales9Month}\) & U statistic: 468282621.5, \\ & & p-value: 0.019 \\ \hline
10 & \(\textit{minBank}\) & U statistic: 5062287539.5, \\ & & p-value: 0.00 \\ \hline
11 & \(\textit{potentialIssue}\) & U statistic: 4193563375.0, \\ & & p-value: 0.00 \\ \hline
12 & \(\textit{piecesPastDue}\) & U statistic: 4174380247.5, \\ & & p-value: 0.00 \\ \hline
13 & \(\textit{perf6MonthAvg}\) & U statistic: 5145229552.0, \\ & & p-value: 0.00 \\ \hline
14 & \(\textit{perf12MonthAvg}\) & U statistic: 1834036383.5, \\ & & p-value: 0.00 \\ \hline
15 & \(\textit{localBoQty}\) & U statistic: 4239446959.0, \\ & & p-value: 0.00 \\ \hline
16 & \(\textit{deckRisk}\) & Test statistic: 219.903, p-value: 0.00 \\ \hline
17 & \(\textit{oeConstraint}\) & Test statistic: 28.274, p-value: 0.00 \\ \hline
18 & \(\textit{ppapRisk}\) & Test statistic: 110.458, p-value: 0.00 \\ \hline
19 & \(\textit{stopAutoBuy}\) & Test statistic: 4.564, p-value: 0.032 \\ \hline
20 & \(\textit{revStop}\) & Test statistic: 2.978, p-value: 0.084 \\ \hline \end{tabular}
\end{table}
Table 2: Statistical hypotheses tests
Figure 5: Feature importance using PCA.
* The numerical attributes have a small interquartile range, and some have negative values.
* The attributes of sales, forecast and performance are correlated.
* The dataset contains 8 categorical attributes, including \(wentToBackorder\), which is the target variable.
* Encoded categorical attributes to numerical attributes.
* Dataset split into train and test sets with an 80:20 ratio.
* Iterative imputer used for missing value imputation.
* Performed PCA to determine feature importance.
* Different feature transformations were applied to the dataset:
* Robust scaler
* Log transform
* Standard scaler
### Modelling and Evaluation
We have experimented with multiple models (five classifiers in addition to a dummy baseline) to address the complexities and challenges of inventory management systems. There is no one-size-fits-all solution, and scholars have experimented with different techniques in their work, such as distributed Random Forest and Gradient Boosting [6]; a deep neural network has also been proposed [3]. The models used here are the dummy model as the base model, the Balanced Bagging Classifier (BBC), the BBC with a variational auto-encoder, fuzzy logic with the BBC, Random Forest, and a Multilayer Perceptron. Each model was implemented with four transformed datasets (Robust Scaling; log transform with Standard Scaling; Quantile Transform). Thus, a total of 20 model-dataset combinations were trained and tested.
To start with, grid search cross-validation was employed to fine-tune the Balanced Bagging Classifier (BBC) by exploring different hyperparameters. We used Grid Search Cross-Validation (GridSearchCV), which tests alternative hyperparameter combinations: it combines grid search, which specifies a range of hyperparameter values, with cross-validation, which evaluates the model's generalization to previously unseen data. The optimum hyperparameters are chosen based on ROC AUC. Because of its systematic approach and avoidance of human biases, GridSearchCV is a popular method for hyperparameter tuning in both the scientific community and applied fields. We performed an exhaustive grid search over all combinations of parameters to optimize the internal score on the training set. Table 3 displays the pseudocode for the hyperparameter search.
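A condensed sketch of this tuning step, assuming the preprocessed training arrays `X_train` and `y_train`; the grid values are illustrative, not the exact grid of Table 3:

```python
from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.model_selection import GridSearchCV

bbc = BalancedBaggingClassifier(random_state=42)
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_samples": [0.5, 0.8, 1.0],
    "sampling_strategy": ["auto", 0.5],
}
search = GridSearchCV(bbc, param_grid, scoring="roc_auc", cv=5, n_jobs=-1)
search.fit(X_train, y_train)
print("Best ROC AUC:", search.best_score_)
print("Best hyperparameters:", search.best_params_)
```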
BBC is an ensemble learning method that combines multiple classifiers, which increases the performance and generalization of the model. The BBC combines the advantages of bagging and sampling techniques to address the issue of imbalanced datasets. This has been supported by researchers in the past [19].
Several researchers, both recently and in the past, have recommended fuzzy logic (e.g., [11], [21], [22], [23], [24], [25], [26]). The objective is to combine human-centric design with advanced machine learning algorithms. When dealing with imbalanced data, or when the problem exhibits complicated and ambiguous linkages, employing fuzzy logic in conjunction with balanced bagging can be a beneficial approach. Therefore, fuzzy logic was integrated into the BBC to efficiently handle the imprecise information that is frequently encountered in inventory management systems.
Furthermore, we leveraged the power of a deep generative model, the Variational Auto-Encoder (VAE), to experiment with our modelling approach. Researchers have claimed superior performance of the VAE compared to supervised ML techniques for imbalanced data (e.g., [27, 28]). Their work recommends the VAE, which offers great potential for mitigating the bullwhip effect and enhancing supply chain operations. We combined the supervised BBC with the unsupervised VAE to develop a powerful approach for addressing class imbalance and capturing complex feature representations. The VAE generates meaningful latent representations of the input data, which are then combined with the original features to improve the performance of the BBC. Table 4 displays the pseudocode for the VAE implementation.
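The full procedure is given as pseudocode in Table 4. A condensed, runnable sketch of the same idea is shown below; for brevity it uses a deterministic auto-encoder bottleneck in place of the full VAE (which adds a stochastic sampling step and the KL-divergence regularization of Eq. (5)), and assumes preprocessed arrays `X_train`, `X_test`, `y_train`, `y_test`:

```python
import numpy as np
from tensorflow.keras import layers, Model
from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.metrics import roc_auc_score

input_dim, latent_dim = X_train.shape[1], 10

# Encoder/decoder pair producing a low-dimensional latent representation.
inputs = layers.Input(shape=(input_dim,))
h = layers.Dense(32, activation="relu")(inputs)
z = layers.Dense(latent_dim, activation="relu", name="latent")(h)
h_dec = layers.Dense(32, activation="relu")(z)
outputs = layers.Dense(input_dim, activation="linear")(h_dec)

autoencoder = Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train, X_train, epochs=20, batch_size=32, verbose=0)

# Append the latent representation to the original features.
encoder = Model(inputs, z)
X_train_aug = np.hstack([X_train, encoder.predict(X_train)])
X_test_aug = np.hstack([X_test, encoder.predict(X_test)])

# Train the balanced bagging ensemble on the combined feature set.
bbc = BalancedBaggingClassifier(n_estimators=100, random_state=42)
bbc.fit(X_train_aug, y_train)
print("ROC AUC:", roc_auc_score(y_test, bbc.predict_proba(X_test_aug)[:, 1]))
```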
Lastly, we tried an Artificial Neural Network based Multilayer Perceptron (MLP). The MLP provides efficient computing, lowering computational time and memory utilization, which makes it a desirable tool for real-world inventory control systems ([13]; [28]).
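A brief sketch of such a baseline with scikit-learn; the layer sizes are illustrative rather than the exact architecture used here:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=200,
                    early_stopping=True, random_state=42)
mlp.fit(X_train, y_train)
print("MLP ROC AUC:", roc_auc_score(y_test, mlp.predict_proba(X_test)[:, 1]))
```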
## 4 Results & Discussions
Table 5 presents a consolidated report of all the classifiers experimented with.
* _ROCAUC_
* a) _on non-normal data_: BBC (0.9081) \(>\) VAE_BBC (0.9003) \(>\) FL_BBC (0.8759) \(>\) Dummy (0.5074).
* b) _on Log-transformed and Normalized data:_ BBC (0.9073) \(>\) VAE_BBC (0.9007) \(>\) FL_BBC (0.8615) \(>\) Dummy (0.4897).
* _PRAUC_
* a) _on non-normal data_: BBC (0.4925) \(>\) VAE_BBC (0.4841) \(>\) FL_BBC (0.4646) \(>\) Dummy (0.2640).
* b) _on Log-transformed and Normalized data:_ BBC (0.4917) \(>\) VAE_BBC (0.4847) \(>\) FL_BBC (0.4515) \(>\) Dummy (0.2456).
* _Macro F1-Score_
* a) _on non-normal data_: BBC (0.5545) \(>\) VAE_BBC (0.5532) \(>\) FL_BBC (0.5251) \(>\) Dummy (0.3407).
* b) _on Log-transformed and Normalized data:_ BBC (0.5544) \(>\) VAE_BBC (0.5524) \(>\) FL_BBC (0.5195) \(>\) Dummy (0.0160).
* _Precision_
\begin{table}
\begin{tabular}{|l|} \hline \# Data Splitting \\ X, y = SplitData(df); \\ \# Data Preprocessing \\ numeric\_features = [nationalInv’, ’leadTime’, ’inTransitQty’, ’piecesPastDue’, ’localBoQty’]; \\ categorical\_features = [potentialIssue’, ’deckRisk’, ’oceConstraint’, ’ppapRisk’, ’revStop’]; \\ \# VAE model \\ input\_dim = GetNumberOfFeatures(X); \\ latent\_dim = 10; \\ \# Encoder and decoder layers for the VAE \\ ”Encoder and decoder layers of the VAE are defined with activation functions. These layers are part of the neural network architecture used in the VAE”input\_layer = DefineInputLayer(input\_dim); encoded = \\ DefineDenseLayerWithActivation(input\_layer, 32, ’relu’); \\ encoded = DefineDenseLayerWithActivation(encoded, latent\_dim, ’relu’); \\ decoded = DefineDenseLayerWithActivation(encoded, 32, ’relu’); \\ decoded = DefineDenseLayerWithActivation(decoded, input\_dim, ’linear’); \\ \# VAE model with custom loss \\ vae = CreateVAEModel(input\_layer, decoded, CustomLossFunction); \\ \# Train VAE on the imbalanced data \\ TrainVAEModel(vae, X\_train\_processed, X\_train\_processed, epochs=20, batch\_size=32); \\ \# Encode the input data using the VAE \\ encoded\_X\_train = EncodeDataWithVAE(encoder, X\_train\_processed); \\ \# Combine the latent representations with the original features \\ combined\_X\_train = ConcatenateFeatures(X\_train\_processed, encoded\_X\_train); \\ \# Train BBC on the combined feature set \\ bbc = \\ CreateBalancedBaggingClassifier(n\_estimators=1000) \\ TrainBalancedBaggingClassifier(bbc, combined\_X\_train, y\_train); \\ \# Preprocess the test data and obtain predictions \\ X\_test\_processed = PreprocessTestData(X\_test); \\ encoded\_X\_test = EncodeDataWithVAE(encoder, X\_test\_processed); \\ combined\_X\_test = ConcatenateFeatures(X\_test\_processed, encoded\_X\_test); \\ y\_pred = PredictWithBalancedBaggingClassifier(bbc, combined\_X\_test); \\ \# Calculate ROC AUC score \\ roc\_auc = CalculateROCAUCScore(y\_test, y\_pred); \\ Print(”ROC AUC score:”, roc\_auc); \\ \end{tabular}
\end{table}
Table 4: Pseudocode for handling imbalanced data with VAE and BBC model training
* _on non-normal data_: BBC (0.0838) \(>\) VAE_BBC (0.0824) \(>\) FL_BBC (0.0601) \(>\) Dummy (0.0087).
* _on Log-transformed and Normalized data:_ BBC (0.0840) \(>\) VAE_BBC (0.0818) \(>\) FL_BBC (0.0555) \(>\) Dummy (0.0081).
* _Recall_
* _on non-normal data_: BBC (0.9005) \(>\) VAE_BBC (0.8848) \(>\) FL_BBC (0.8679) \(>\) Dummy (0.5151).
* _on Log-transformed and Normalized data:_ BBC (0.9017) \(>\) VAE_BBC (0.8865) \(>\) FL_BBC (0.8460) \(>\) Dummy (0.4786).
* _Matthews correlation coefficient_: see Table 5; the BBC again scores highest (0.2600 on non-normal data and 0.2609 on the Log-transformed and Normalized data).
For a severely imbalanced dataset, the benchmark metrics most used are Area Under the Precision-Recall Curve (PRAUC), F1-Score, Matthews Correlation Coefficient (MCC), ROC AUC, Precision at a Given Recall, and Geometric Mean ([29, 30]). Here we have used a combination of multiple metrics, which provides a more comprehensive assessment of model performance.
Based on the evaluation metrics, it appears that the BBC model on the log-transformed and normalized dataset performed well across multiple metrics, indicating its effectiveness in predicting the target variable. Additionally, the "VAE \(+\) BBC" model on non-normal data also showed promising performance. In most cases, a higher value indicates better prediction performance, except for the Brier score, where a lower value is preferred. The Brier score measures the accuracy of probabilistic predictions, and a lower score indicates better calibration and accuracy.
Figs. 1 and 2 provide valuable insights into the performance of the models. The confusion matrix (CM) in Figure 1 allows us to examine the model's predictions in detail. It shows the number of true negatives (TN) and false positives (FP) for backorder predictions.
In this case, the high number of TNs (190,429) indicates that the model performs well in accurately identifying non-backorders. This means that the model correctly identifies a significant portion of the instances where a backorder is not present, which is crucial for efficient inventory management.
The number of FP (17,506), however, indicates that there are some situations in which the model forecasts non-backorders as backorders. This can lead to unnecessary actions or costs associated with backorder handling for those specific instances.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multicolumn{10}{|c|}{**Retained original dimensions**} \\ \hline & **ROCAUC** & **PRAUC** &
\begin{tabular}{c} **Macro** \\ **F1-Score** \\ \end{tabular} & **Precision** & **Recall** & **MCC** & **Brier** \\ \hline \multicolumn{10}{|c|}{_Models on non-normal data_} \\ \hline BBC & 0.9081 & 0.4925 & 0.5545 & 0.0838 & 0.9005 & 0.2600 & 0.0623 \\ \hline FL\_BBC & 0.8759 & 0.4646 & 0.5251 & 0.0601 & 0.8679 & 0.2099 & 0.0815 \\ \hline VAE\_BBC & 0.9003 & 0.4841 & 0.5532 & 0.0824 & 0.8848 & 0.2552 & 0.0626 \\ \hline \multicolumn{10}{|c|}{_Models on Log transformed + Normalized dataset_} \\ \hline Dummy & 0.4897 & 0.2456 & 0.0160 & 0.0081 & 0.4786 & -0.0037 & 0.2500 \\ \hline BBC & 0.9073 & 0.4917 & 0.5544 & 0.0840 & 0.9017 & 0.2609 & 0.0622 \\ \hline FL\_BBC & 0.8615 & 0.4515 & 0.5195 & 0.0555 & 0.8460 & 0.1976 & 0.1310 \\ \hline VAE\_BBC & 0.9007 & 0.4847 & 0.5524 & 0.0818 & 0.8865 & 0.2544 & 0.0629 \\ \hline Random undersampler with & 0.9604 & 0.2428 & 0.5483 & 0.0785 & 0.9005 & 0.2507 & 0.0641 \\ \hline \end{tabular}
\end{table}
Table 5: Results & discussions
The presence of false negatives (180) indicates the model occasionally fails to predict backorders when they occur. This means that there are instances where the model incorrectly classifies a backorder as a non-backorder. This can result in missed opportunities to take proactive measures and prevent backorders, leading to potential disruptions in the supply chain and customer dissatisfaction. On the other hand, the presence of true positives (1600) suggests that the model can identify backorders to some extent. These instances represent the correct classification of backorders by the model.
Conversely, Fig. 2 shows a high number of TNs: the model correctly recognised a large number of examples (190,235) as non-backorders, indicating that it works well in reliably identifying non-backorders. A moderate number of FPs means the model misclassified a considerable number of instances as backorders when they were non-backorders (17,700). This suggests that the model tends to produce false positives, i.e., it may sometimes predict an item as a backorder when it is not.
A low number of FNs means the model misclassified a relatively small number of instances as non-backorders when they were backorders (202).
A moderate number of TPs means the model correctly identified a reasonable number of instances as backorders (1,578). This indicates that the model can predict backorders accurately. Overall, the model shows good performance in correctly identifying non-backorders (high TN) but has some limitations in accurately predicting backorders (moderate TP and FN). Reducing false positives and false negatives would be valuable for improving the overall accuracy of backorder predictions.
Matrix 2 performs slightly better than Matrix 1, as it has a lower number of FP and FN and a slightly higher number of TP. However, the differences between the two matrices are relatively small. This means that for Matrix 1, around 88.64% of products identified as potential backorders went on backorder, while for Matrix 2, around 90.02% of products identified as potential backorders went on backorder.
In this work, we justify using supervised learning under the presumption that the memory that is accessible is limited in relation to the size of the dataset. The contexts of big data, distributed databases, and embedded systems all apply to this fundamental paradigm. We study a simple yet powerful ensemble framework in which each individual ensemble model is constructed from a random patch of data generated by selecting random subsets of features and instances from the entire dataset. These results demonstrate that the suggested method performs on par with popular ensemble methods in terms of accuracy while simultaneously lowering memory requirements and achieving much superior performance when memory is severely limited.
### Profit function
By measuring the overall profit generated by the forecasts, we can gain insights into the financial impact of the model's predictions and make informed decisions about inventory levels, ordering quantities, and backorder management strategies. This evaluation goes beyond traditional accuracy metrics and provides a more comprehensive assessment of the model's performance from a business perspective. The profit calculation in the given formula involves the following parameters: sales revenue (over 1, 3, 6, and 9 months), \(\mathit{holdingCost}\), \(\mathit{inventoryLevel}\), \(\mathit{backorderCost}\), \(\mathit{leadTimeCost}\), \(\mathit{potentialIssueCost}\), and \(\mathit{deckRiskCost}\).
\[\mathit{Revenue}=\sum\mathit{(\mathit{sales1Month}+\mathit{sales3Month}+\mathit{ sales6Month}+\mathit{sales9Month})} \tag{1}\]
\[\mathit{Profit}=\mathit{Revenue}\text{ - }(\mathit{holdingCost}*\mathit{inventoryLevel})\text{ - }(\mathit{backorderCost}*\mathit{backorders})\text{ - }(\mathit{leadTimeCost}*\mathit{leadTime})\text{ - }(\mathit{potentialIssueCost}*\mathit{potentialIssue})\text{ - }(\mathit{deckRiskCost}*\mathit{deckRisk}) \tag{2}\]
Fig. 8: BBC-A (non-normal data)
Fig. 7: VAE \(+\) BBC model - confusion matrix and ROC curves
We performed an optimization to maximize the profit by finding the optimal values for the decision variables: \(holdingCost\), \(backorderCost\), \(leadTimeCost\), \(potentialIssueCost\), and \(deckRiskCost\). The initial guess for the decision variable values was set to \(x_{0}\). Here, we defined a constraint as \(holdingCost\geq 0\). The goal is to find the optimal values for the decision variables (costs) that maximize the profit. Results are displayed in Table 6. We used a numerical optimization method, Sequential Least Squares Quadratic Programming (SLSQP).
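A sketch of this optimization with SciPy's SLSQP solver is given below; the aggregate quantities (revenue, total inventory, predicted backorders, and so on) are assumed to have been computed from the dataset `df` and the model's predictions `y_pred`, and the initial guess is illustrative:

```python
import numpy as np
from scipy.optimize import minimize

revenue = df[["sales1Month", "sales3Month", "sales6Month", "sales9Month"]].sum().sum()
totals = {
    "inventory": df["nationalInv"].sum(),
    "backorders": y_pred.sum(),
    "leadTime": df["leadTime"].sum(),
    "potentialIssue": df["potentialIssue"].sum(),
    "deckRisk": df["deckRisk"].sum(),
}

def neg_profit(x):
    holding, backorder, lead, issue, deck = x
    profit = (revenue
              - holding * totals["inventory"]
              - backorder * totals["backorders"]
              - lead * totals["leadTime"]
              - issue * totals["potentialIssue"]
              - deck * totals["deckRisk"])
    return -profit                       # SLSQP minimizes, so negate the profit

x0 = np.ones(5)                          # initial guess for the five costs
constraints = [{"type": "ineq", "fun": lambda x: x[0]}]   # holdingCost >= 0
result = minimize(neg_profit, x0, method="SLSQP", constraints=constraints)
print("Optimal profit:", -result.fun)
print("Optimal costs:", result.x)
```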
### Cost Sensitive Learning
The overall cost of misclassification is computed as the sum of the costs of false positives and false negatives. It captures the entire cost of inaccurate predictions, taking into consideration the expense associated with each form of misclassification.
* \(FPcost=10\) (cost of misclassifying a non-backorder item as a backorder)
* \(FNcost=1\) (cost of misclassifying a backorder item as a non-backorder)
\[TotalCost=\{FP*FPcost\}+\{FN*FNcost\} \tag{3}\]
Table 6 displays the cost sensitive factors. Considering both average profit and misclassification cost, the VAE + BBC model appears to perform well in terms of optimal profit, while the BBC-A model performs well in terms of misclassification cost for non-normal data.
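A short sketch of the Eq. (3) computation, reading the FP and FN counts from the confusion matrix of a fitted model (`y_test` and `y_pred` are assumed):

```python
from sklearn.metrics import confusion_matrix

FP_COST, FN_COST = 10, 1
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
total_cost = fp * FP_COST + fn * FN_COST
print(f"FP = {fp}, FN = {fn}, total misclassification cost = {total_cost}")
```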
The VAE model consists of two main components: an encoder and a decoder. The encoder takes the input data and transforms it into a lower-dimensional latent space representation, capturing the underlying patterns and features. The encoder converts the input data \(x\) to a latent space representation \(z\) that is parameterized by a mean \(\mu(x)\) and variance \(\sigma^{2}(x)\):
\[q(z\mid x)=N\left(z\mid\mu(x),\sigma^{2}(x)\right) \tag{4}\]
This latent space representation is then used as input for the decoder. Based on the learned latent representations, the decoder provides a meaningful and accurate reconstruction of the original data. This reconstruction method aids in capturing the key aspects and qualities of the data, allowing the model to generate correct predictions and classify instances. \(p\ (x\mid z)=N\ (x\mid f(z))\), where \(f(z)\) denotes the mapping from the latent space to the original feature space. The VAE aims to maximize the ELBO objective, which is a trade-off between reconstruction accuracy and regularization of the latent space:
\[ELBO=E\left[\log p(x\mid z)\right]-D_{KL}\left(q(z\mid x)\,\|\,p(z)\right) \tag{5}\]
where \(E[\log p(x\mid z)]\) represents the reconstruction term and \(D_{KL}(q(z\mid x)\,\|\,p(z))\) is the KL divergence between the approximate posterior \(q(z\mid x)\) and the prior distribution \(p(z)\).
BBC is further integrated with the decoder to improve the overall performance. Assume a set of base classifiers \(\mathcal{C}=\{C_{1},C_{2},\ldots,C_{M}\}\), where \(M\) is the number of base classifiers. Given an input sample \(x\), each base classifier \(C_{i}\) provides a class prediction \(C_{i}(x)\). The ensemble prediction is determined by majority voting, where the class label with the highest number of votes is selected:
\[\mathit{ensemble\ prediction}=\operatorname*{argmax}_{\mathit{class\_label}}\ \sum_{i}\left[C_{i}(x)=\mathit{class\_label}\right] \tag{6}\]
This approach ensures that the final prediction considers the opinions of multiple classifiers. It is evident that the combination of these techniques resulted in superior performance for our use case.
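For the binary labels used here, the majority-vote rule of Eq. (6) reduces to a simple threshold on the mean vote; a minimal illustration for a hypothetical list of already-fitted base classifiers:

```python
import numpy as np

def ensemble_predict(classifiers, X):
    # Stack the individual class predictions: shape (n_classifiers, n_samples).
    votes = np.stack([clf.predict(X) for clf in classifiers])
    # Majority vote per sample (labels assumed to be 0/1).
    return (votes.mean(axis=0) >= 0.5).astype(int)
```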
### Interpretability
Interpretability is a critical aspect of the adoption of AI and ML in business ([31], [32]). It enables supply chain professionals to choose the best ML algorithms and comprehend the justification for the forecasts and decisions made by the model. This understanding empowers practitioners to optimize supply chain operations, improve decision-making, enhance visibility, mitigate risks, and drive overall efficiency and effectiveness ([33], [34]). By embracing interpretability, practitioners can make informed adjustments, ensure regulatory compliance, and establish trust with stakeholders, thus harnessing the full potential of AI and ML in transforming supply chain management.
We employed permutation importance (PI) to understand the relative importance of each attribute in the model's decisions. PI measures the importance of each feature by randomly shuffling its values and observing the impact on the model's performance. This helps to understand the attributes affecting backorder predictions and sheds light on the contribution of each feature in the model. Fig. 9 displays the relative weights of each attribute
\begin{table}
\begin{tabular}{c|c|c} \hline
**Classifier** & \begin{tabular}{c} **Optimal** \\ **Profit** \\ \end{tabular} &
\begin{tabular}{c} **Misclassification** \\ **cost** \\ \end{tabular} \\ \hline \multicolumn{3}{c}{_Non-normal data_} \\ \hline BBC-A & 98,37,422.94 & 1,75,443.00 \\ \hline FuzzyBBC-A & 74,51,418.43 & 2,42,238.00 \\ \hline VAE + BBC & 98,37,424.40 & 1,78,353.00 \\ \hline \multicolumn{3}{c}{_Log normalized data_} \\ \hline BBC-B & 98,37,510.94 & 1,75,590.00 \\ \hline FuzzyBBC-B & 74,51,422.38 & 2,42,233.00 \\ \hline VAE + BBC & 98,37,423.06 & 1,78,060.00 \\ \hline MLP & 98,36,319.84 & 1,80,041.00 \\ \hline \end{tabular}
\end{table}
Table 6: Cost perspective
in predicting backorders. These weights, along with their standard deviations, provide insights into how each attribute influences the model's output. We can draw several key conclusions from this analysis:
* \(nationalInv\) attribute carries a high weight, indicating its significant impact on the model's predictions. Changes in this attribute (national inventory) values have a substantial effect on anticipating backorders. This underscores the pivotal role that inventory levels play in predicting backorders.
* \(sales1Month\) is the second most influential attribute. Its weight suggests that variations in sales over a one-month period significantly affect the model's predictions. The pace of recent sales activity appears to be a critical factor.
* \(forecast3Month\) and \(Sales9Month\) attributes also exhibit substantial importance, emphasizing the value of both short-term (3-month) forecasts and the history of sales over a 9-month period in predicting backorders.
\(nationalInv\) is a critical attribute because it reflects a company's readiness to fulfil customer demand without encountering backorders. Maintaining optimal inventory levels is essential for operational efficiency and customer satisfaction. The high weight assigned to \(nationalInv\) in the model's predictions underscores its significance in the context of backorder prediction, as it directly impacts a company's ability to meet customer demand and avoid disruptions in its operations. Likewise, \(sales1Month\) attribute captures recent demand patterns and market dynamics, allowing the model to make informed predictions about backorders. Its significance lies in its ability to reflect immediate sales trends and enable timely responses to changes in customer demand, ultimately contributing to customer satisfaction and operational efficiency. \(forecast3Month\) attribute provides forward-looking insights into expected demand, aiding in inventory planning and procurement decisions. Meanwhile, \(Sales9Month\) attribute offers a long-term historical context, enabling the model to assess stability, analyze cyclical patterns, and build resilience in the face of changing market conditions. Both attributes contribute significantly to the accuracy of backorder predictions by considering various aspects of demand and sales behavior.
Some of the features, such as \(stopAutoBuy\), \(ppapRisk\), \(revStop\), and \(oeConstraint\), carry nearly zero weights. This indicates that changes in these attributes have minimal impact on backorder predictions and that they are less significant in this context.
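A sketch of the permutation-importance computation behind Fig. 9, assuming the fitted `bbc` model, the held-out test arrays, and a `feature_names` list; the scoring function and repeat count are illustrative choices:

```python
import numpy as np
from sklearn.inspection import permutation_importance

pi = permutation_importance(bbc, X_test, y_test, scoring="roc_auc",
                            n_repeats=10, random_state=42, n_jobs=-1)
order = np.argsort(pi.importances_mean)[::-1]
for i in order[:5]:
    print(f"{feature_names[i]}: {pi.importances_mean[i]:.4f} "
          f"+/- {pi.importances_std[i]:.4f}")
```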
### Limitations
The study relies heavily on data quality and availability, which could be improved by overcoming data-related challenges. The real-world supply chain dynamics are complex and ever-changing, and future research could explore adaptive models that can adapt to dynamic environments in real-time. The findings are promising, but their applicability across diverse industries and supply chain structures may vary. Future research directions include developing explainable AI models, real-time supply chain optimization, hybrid models, cost modelling, and cross-domain applications.
The study paves the way for a new era of supply chain management by integrating state-of-the-art technologies and financial considerations. We encourage researchers and practitioners to embrace these limitations as opportunities for further innovation. The future of supply chain analytics is bright, with groundbreaking discoveries and transformative applications.
## 5 Conclusions
This paper presents a novel perspective on the impact of advanced predictive analytics techniques on supply chain management performance. By combining generative AI-based unsupervised VAE with the supervised BBC model, the study offers valuable insights into proactive strategies, stockout mitigation, and supply chain optimization. The research demonstrates the trade-off between profit and misclassification costs, providing a comprehensive evaluation of the model's performance through various metrics such as ROCAUC, PRAUC, Macro F1-Score, Precision, Recall, MCC, and Brier. By incorporating a profit function and considering misclassification costs along with these metrics, this work addresses the financial implications and costs associated with inventory management and backorder handling. Additionally, the inclusion of permutation importance enhances the interpretability of the model by identifying the relative importance of input features. This provides valuable insights into the factors influencing backorder forecasting and inventory control, contributing to future investigations in the field. Looking ahead, the potential real-world applications of our research are vast. Businesses operating in dynamic supply chain environments can leverage our insights to fine-tune their inventory management strategies, minimize stockouts, and enhance customer satisfaction. Moreover, the findings have the potential to revolutionize the way businesses make decisions
Figure 9: Permutation Importance (BBC model)
by incorporating cost-sensitive predictive analytics into their daily operations.
|
2309.05933 | Workshop on a future muon program at FNAL | The Snowmass report on rare processes and precision measurements recommended
Mu2e-II and a next generation muon facility at Fermilab (Advanced Muon
Facility) as priorities for the frontier. The Workshop on a future muon program
at FNAL was held in March 2023 to discuss design studies for Mu2e-II,
organizing efforts for the next generation muon facility, and identify
synergies with other efforts (e.g., muon collider). Topics included high-power
targetry, status of R&D for Mu2e-II, development of compressor rings, FFA and
concepts for muon experiments (conversion, decays, muonium and other
opportunities) at AMF. This document summarizes the workshop discussions with a
focus on future R&D tasks needed to realize these concepts. | S. Corrodi, Y. Oksuzian, A. Edmonds, J. Miller, H. N. Tran, R. Bonventre, D. N. Brown, F. Meot, V. Singh, Y. Kolomensky, S. Tripathy, L. Borrel, M. Bub, B. Echenard, D. G. Hitlin, H. Jafree, S. Middleton, R. Plestid, F. C. Porter, R. Y. Zhu, L. Bottura, E. Pinsard, A. M. Teixeira, C. Carelli, D. Ambrose, K. Badgley, G. D. Bautista, R. H. Bernstein, S. Boi, J. Crnkovic, J. Eldred, A. Gaponenko, C. Johnstone, B. Kiburg, R. Kutschke, K. Lynch, A. Mukherjee, D. Neuffer, F. Pellemoine, V. Pronskikh, G. Rakness, J. Tang, R. Tschirhart, M. Yucel, J. Zettlemoyer, B. Simons, D. Redigolo, E. Diociaiuti, S. Giovannella, S. Miscetti, I. Sarra, S. E. Muller, W. Ootani, E. B. Yucel, D. M. Kaplan, T. J. Phillips, J. Pasternak, J. Zettlemoyer, D. Palo, Y. Davydov, D. Brown, S. Banerjee, D. Kawall, Z. Hartwig, S. Davidson, R. Abrams, C. Kampa, M. Mackenzie, M. Schmitt, P. Piot, B. Simons, Y. J. Lee, V. Morozov, A. Sato, S. Di Falco, A. Gioiosa, L. Morescalchi, A. Papa, M. T. Hedges, F. Renga, J. -B. Lagrange, C. Rogers, D. Wilcox, A. Petrov, J. Tang, S. Zhao, E. C. Dukes, R. Erlich, C. Group, J. Heeck, G. Pezzullo, T. Nguyen, J. L. Popp | 2023-09-12T03:08:21Z | http://arxiv.org/abs/2309.05933v1 | # Workshop on a future muon program at FNAL
\({}^{29}\)Muons, Inc., Batavia, Illinois 60510, USA
\({}^{30}\)Northwestern University, Evanston, Illinois 60208, USA
\({}^{31}\)Northern Illinois University, DeKalb, Illinois 60115, USA
\({}^{32}\)Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831, USA
\({}^{33}\)Osaka University, Toyonaka, Osaka 564-0043, Japan
\({}^{34}\)INFN Sezione di Pisa, I-56127 Pisa, Italy
\({}^{35}\)INFN, Sezione di Pisa, I-56127 Pisa, Italy; Paul Scherrer Institut, 5232 Villigen, Switzerland
\({}^{36}\)Purdue University, West Lafayette, Indiana 47907, USA
\({}^{37}\)INFN Sezione di Roma, P.le A. Moro 2, 00185 Roma, Italy
\({}^{38}\)STFC Rutherford Appleton Laboratory, Didcot OX11 0QX, UK
\({}^{39}\)University of South Carolina, Columbia, South Carolina 29208, USA
\({}^{40}\)Sun Yat-sen University, Guangzhou, Guangdong 510275, China
\({}^{41}\)University of Virginia, Charlottesville, Virginia 22904, USA
\({}^{42}\)Yale University, New Haven, Connecticut 06520, USA
\({}^{43}\)York College and Graduate Center, The City University of New York, Nedw York, New York 11451, USA
###### Abstract
The Snowmass report on rare processes and precision measurements recommended Mu2e-II and a next generation muon facility at Fermilab (Advanced Muon Facility) as priorities for the frontier. The _Workshop on a future muon program at FNAL_ was held in March 2023 to discuss design studies for Mu2e-II, organizing efforts for the next generation muon facility, and identify synergies with other efforts (e.g., muon collider). Topics included high-power targetry, status of R&D for Mu2e-II, development of compressor rings, FFA and concepts for muon experiments (conversion, decays, muonium and other opportunities) at AMF. This document summarizes the workshop discussions with a focus on future R&D tasks needed to realize these concepts.
###### Contents
* I Introduction
* II Theory
* III Mu2e-II
* III.1 Accelerator
* III.2 Magnets
* III.3 Targetry
* III.3.1 Possible approach to and challenges for the Mu2e-II target
* III.3.2 Conveyor target design and particle production rates
* III.4 Tracker
* III.4.1 Improvements to the current design
* III.4.2 Alternative tracker designs
* III.5 Calorimeter
* III.6 Cosmic Ray Veto
* III.7 Trigger and DAQ
* III.8 Physics and sensitivity
* III.8.1 Phenomenological Considerations
* III.8.2 Technical Considerations
* IV Advanced Muon Facility
* IV.1 Proton Compressor ring
* IV.1.1 PAR Proposal
* IV.1.2 Towards a Compact PIP-II Accumulator Ring for AMF
* IV.1.3 PIP2-BD Experimental Program
* IV.2 Muon FFA
* IV.2.1 Motivation for an FFA ring
* IV.2.2 FFA ring
* IV.2.3 Beam transport and injection into an FFA ring
* IV.3 Conversion experiments
* IV.3.1 Current approach and a new FFA design
* IV.3.2 Momentum resolution requirements
* IV.3.3 Tracking
* IV.3.4 Simulation Scheme
* IV.3.5 Cosmic Rays
* IV.3.6 R&D Projects
* IV.4 Decay experiments
* IV.4.1 Beam requirements
* IV.4.2 Experimental signatures and general detector requirements
* IV.4.3 \(\mu^{+}\to e^{+}e^{+}e^{-}\)
* IV.4.4 \(\mu^{+}\to e^{+}\gamma\)
* IV.4.5 Exotic decays
* IV.4.6 Other experiments
* IV.4.7 Session Summary
* IV.4.8 Muonium-Antimuonium Conversion Experiment (MACE)
* IV.4.1 Muonium Spectroscopy
* 5 Low-Energy Muons at Fermilab
* 6 Muonium Production in Superfluid Helium
* 7 Muonium Antimatter Gravity Experiment (MAGE)
* 8 R&D Projects / Potential Synergies / Open Questions
* V Conclusion
Acknowledgments
A. Workshop program
## I Introduction
The concepts of flavor and generations have played a central role in the development of the Standard Model, but the fundamental symmetries underlying the observed structures remain to be discovered. While flavor violation in quark and neutral lepton transitions have already shed some light on this question, charged lepton lepton flavor violation (CLFV) has yet to be seen. An observation would be a clear sign of New Physics (NP), and provide unique insights about the mechanism generating flavor. Furthermore, CLFV is closely linked to the physics of neutrino masses, and these processes can strongly constrain neutrino mass models and open a portal into GUT-scale physics.
Thanks to the availability of intense sources and their relatively long lifetime, muons offer a promising avenue to search for charged lepton flavor violation. A global experimental program of muon CLFV searches is underway in the US, Europe and Asia. Impressive sensitivity gains are expected in this decade, with up to four orders of magnitude improvements in the rate of \(\mu^{-}N\to e^{-}N\) conversion and \(\mu^{+}\to e^{+}e^{-}e^{+}\) decay searches. Upgrades to the beam lines at PSI, Fermilab, and J-PARC would further extend the discovery potential by orders of magnitude. With the goal of exploiting the full potential of PIP-II, a staged program of next-generation experiments and facilities has been proposed at FNAL. Mu2e-II is a near-term evolution of the Mu2e experiment, proposing to improve Mu2e sensitivity by an order of magnitude. The construction is planned to start before the end of the decade by leveraging existing infrastructure. The Advanced Muon Facility is a more ambitious proposal for a new high-intensity muon science complex, delivering the world's most intense positive and negative muon beams. This facility would enable broad muon science with unprecedented sensitivity, including a suite of CLFV experiments that could improve the sensitivity of planned experiments by orders of magnitude, and constrain the type of operators contributing to NP in case of an observation.
The Snowmass report on rare processes and precision measurements [1] recommended Mu2e-II and a next generation muon facility at Fermilab (Advanced Muon Facility) as priorities for the frontier. The timeline of a muon program outlined in this report is shown in Fig. 1, with Mu2e-II starting shortly after the completion of the Mu2e experiment, followed by the construction and operations of the Advanced Muon Facility. This program would leverage the full power of PIP-II at FNAL, and potential upgrades of the accelerator complex in the future. A strong R&D program should begin immediately to realize these opportunities.
Figure 1: Timeline for muon-based charged lepton flavor violation experiments outlined in the Snowmass report on rare processes and precision measurements [1]. Approximate expected dates are shown. The number to the right of the timeline is the expected final 90% CL
The _Workshop on a future muon program at FNAL_ ([https://indico.fnal.gov/event/57834/](https://indico.fnal.gov/event/57834/)) was held in March 2023 to pursue design studies for Mu2e-II, to organize efforts for the next generation muon facility, and to identify synergies with other R&D efforts. The workshop comprised plenary and parallel sessions discussing technical aspects and physics capabilities; the workshop program is available in Appendix A. This document provides a summary of the workshop discussion, together with the list of prioritized R&D tasks and avenues for future investigation.
## II Theory
The non-zero neutrino masses and mixing angles induce CLFV via loops. If neutrino masses are generated from Yukawa interactions with the Higgs boson, the CLFV rates are GIM-suppressed by factors of \(\sum_{i,j}(\Delta m^{2}_{i,j}/m^{2}_{W})^{2}\), where \(\Delta m^{2}\) denotes the mass-squared difference between the \(i\)-th and \(j\)-th neutrino mass eigenstates. The resulting branching fraction for \(\mu\to e\gamma\) is at the level of \(10^{-54}\)[2], well below any conceivable experimental sensitivity. However, new sources of CLFV are introduced in many BSM scenarios, leading to rates that are potentially accessible to future experiments (see e.g. Ref. [3; 4]).
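As a rough numerical check of this suppression (not taken from Ref. [2]), the standard one-loop estimate \(\mathcal{B}(\mu\to e\gamma)\simeq\frac{3\alpha}{32\pi}\left|\sum_{i}U^{*}_{\mu i}U_{ei}\,\Delta m^{2}_{i1}/m^{2}_{W}\right|^{2}\) can be evaluated directly; the PMNS factors below are approximate magnitudes and CP phases are ignored:

```python
import math

alpha = 1 / 137.036
m_W = 80.4e9                      # eV
dm2_21, dm2_31 = 7.4e-5, 2.5e-3   # eV^2 (solar, atmospheric)

# Approximate magnitudes of the PMNS products U*_{mu i} U_{e i} for i = 2, 3
# (the i = 1 term drops out once Delta m^2_{11} = 0 is used).
U2 = 0.40 * 0.55
U3 = 0.65 * 0.15

amp = U2 * dm2_21 / m_W**2 + U3 * dm2_31 / m_W**2
br = 3 * alpha / (32 * math.pi) * amp**2
print(f"BR(mu -> e gamma) ~ {br:.1e}")   # of order 1e-55 to 1e-54
```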
Under the assumption that new particles responsible for CLFV are heavy, Effective Field Theory (EFT) offers a powerful framework to assess the reach and complementarity of CLFV searches. Restricting the discussion to the muon sector, the \(\mu^{+}\to e^{+}e^{-}e^{+}\) decay, \(\mu^{+}\to e^{+}\gamma\) decay, and (Spin Independent) \(\mu^{-}N\to e^{-}N\) conversion can be parameterized by the following Lagrangian at the experimental scale (\(\sim m_{\mu}\)):
\[\mathcal{L}_{\mu e}=+\frac{4G_{F}}{\sqrt{2}}\sum_{X=L,R}\Big{[}m_ {\mu}C_{D,X}\,\overline{e}\sigma^{\alpha\beta}P_{X}\mu\,F_{\alpha\beta}+C_{S,XX }\,\overline{e}P_{X}\mu\,\overline{e}P_{X}e \tag{1}\] \[+\sum_{Y\in L,R}C_{V,XY}\,\overline{e}\gamma^{\alpha}P_{X}\mu\, \overline{e}\gamma_{\alpha}P_{Y}e\ +\sum_{N=p,n}\left(C_{S,X}^{N}\,\overline{e}P_{X}\mu\, \overline{N}N+C_{V,X}^{N}\,\overline{e}\gamma^{\alpha}P_{X}\mu\,\overline{N} \gamma_{\alpha}N\right)\Big{]}\]
where \(P_{L,R}\) are chiral projection operators, \(C_{a}\) are dimensionless Wilson coefficients, and Spin Dependent conversion is neglected, occurring at a relatively suppressed rate compared to spin-independent conversion [5; 6; 7]. This Lagrangian provides a model-independent description of CLFV interactions at leading order in \(\chi\)PT with electrons, muons and nucleons as long as the NP scale is much larger than the \(\,\mathrm{GeV}\) scale. If the underlying model is specified, the Wilson coefficients can be calculated in terms of the model parameters. A similar Lagrangian with more operators could be constructed to describe \(\tau\) decays. It is worth noting that low-energy muon CLFV reactions are sensitive to a much larger number of operators than those given in the above Lagrangian, as loop effects ensure that almost every operator with four or less legs contributes to \(\mu^{+}\to e^{+}\gamma\), \(\mu^{+}\to e^{+}e^{-}e^{+}\) and \(\mu^{-}N\to e^{-}N\) amplitudes with a suppression factor at most \(\sim 10^{-3}\)[8].
The reach and complementarity of muon CLFV reactions are shown in Fig. 2, expressing the coefficients appearing in the Lagrangian in spherical coordinates [9]. The variable \(\kappa_{D}\) describes the relative contribution of the dipole and selected four-fermion operators: the dipole dominates for \(|\kappa_{D}|\ll 1\), while the four-fermion operators dominate for \(|\kappa_{D}|\gg 1\). The variable \(\theta_{V}\) describes the angle between specific four-fermion operators on leptons or quarks, representing the relative rate between \(\mu^{+}\to e^{+}e^{-}e^{+}\) and \(\mu^{-}N\to e^{-}N\) at large \(|\kappa_{D}|\), and \(\phi\) distinguishes coefficients probed by \(\mu-e\) conversion on light and heavy nuclei.
Additional observables can be used to further study the underlying NP. For example, the dipole and four-lepton operators in \(\mu^{+}\to e^{+}e^{-}e^{+}\) decays can be distinguished by analyzing the final state angular distributions with polarized muons [10]. In Spin-Independent \(\mu^{-}N\to e^{-}N\) conversion, all operators add coherently at the amplitude level, weighted by nucleus-dependent overlap integrals [11; 12]. A different nuclear target probes a different combinations of coefficients, and multiple measurements can be used to disentangle their contributions. Representing overlap integrals as vectors in the space of operator coefficients, the complementarity can be expressed as a misalignment angle [13; 14], as shown in Fig. 3. As pointed out by previous studies [15], light and heavy targets are good complements. However, the shorter muon lifetime in heavier elements presents an experimental challenge that must be addressed by future concepts.
The EFT approach has the advantage of being model agnostic, but the large number of operators is daunting (although this issue can be circumvented using an observable-motivated operator
basis [8; 9]). The study of simpler models, in particular neutrino-mass scenarios, provides an alternative approach. In the type-I seesaw mechanism [16; 17; 18; 19], heavy right-handed neutrinos generate a Majorana neutrino mass matrix inducing among others \(\ell_{\alpha}\to\ell_{\beta}\gamma\) decays. CLFV processes can be sizable [20; 21; 22] and provide information about the seesaw mechanism [23]. In the type-II seesaw mechanism [24; 25; 26; 27; 28], CLFV processes induced by a \(SU(2)_{L}\)-triplet with a flavor structure are directly linked to the neutrino masses and oscillation angles. The observation of CLFV reactions could provide information about neutrino parameters difficult to access otherwise. Similarly, models generating neutrino masses via loops rather naturally require lower new-physics scales and thus enhanced CLFV rates [29]. Models involving light particles also need dedicated analysis as they cannot be described by the SMEFT [30]. A non-exhaustive list of candidates includes the majoron \(J\)[31; 32], pseudoscalars (axions, familons,...) [33; 34], or \(Z^{\prime}\) gauge bosons [35; 36]. These scenarios predict a large variety of CLFV signatures, including invisible and displaced decays.
In summary, muon CLFV reactions are excellent probes of NP, closely linked to the mechanisms generating flavor and neutrino masses. Current measurements of \(\mu^{+}\to e^{+}\gamma\), \(\mu^{+}\to e^{+}e^{-}e^{+}\) and \(\mu^{-}N\to e^{-}N\) already set constraints on the NP mass scale at the level of \(10^{3}\,\mathrm{TeV}\) for some EFT operators, and future initiatives are poised to improve these bounds by one or more orders of magnitude. Prospects for distinguishing between models can conveniently be explored in EFT. Should a signal be observed, the Z-dependence of the conversion rate will provide critical information about the NP structure. If not, higher intensity muon beams will be required to further improve the sensitivity, probing higher mass scales and constraining models. The physics case for a next generation of experiments is well motivated in both cases.
Figure 2: Reach in NP scale, \(\Lambda\), of past and upcoming muon CLFV searches. The solid region is currently excluded. The parameter \(\kappa_{D}\) describes the relative contribution of the dipole and the four-fermion contact operators. The dipole operator dominates for \(|\kappa_{D}|\ll 1\), while the four-fermion operators dominate for \(|\kappa_{D}|\gg 1\). The remaining parameters are fixed to typical values (see [9] for details).
Figure 3: Left: misalignment angle with Al, taken from [12]. The misalignment angle increases with the number of neutrons in isotopes. Right: mean muonic atom lifetime as a function of the atomic number. Adapted from Ref. [37].
## III Mu2e-II
The Mu2e experiment is designed to improve sensitivity to muon to electron conversion by four orders of magnitude. With the advent of PIP-II, it was recognized that this sensitivity could be improved upon still further using the Mu2e facility as a base. This possibility is referred to as Mu2e-II, with a goal of improving \(\mu\to e\) sensitivity by at least an order of magnitude beyond Mu2e's capabilities. Mu2e-II uses the more powerful beam available from PIP-II, including a higher duty cycle, to enable this sensitivity, while re-using a substantial portion of the Mu2e infrastructure. The higher power and higher rates imply challenges, and some aspects of Mu2e will require modification to address these.
Mu2e-II provides a natural evolutionary step in the muon physics program at FNAL. It can follow Mu2e fairly quickly, keeping the muon physics program active. Mu2e-II can further inform as well as fill the gap towards a more ambitious program such as AMF. The R&D for Mu2e-II, as well as experience from Mu2e-II, is synergistic with R&D needed for other efforts such as AMF and the muon collider. The following sections describe the status of the thinking about Mu2e-II, including existing and needed R&D.
### Accelerator
The primary beam for Mu2e-II will be the 800 MeV proton beam from the new PIP-II linac under construction at Fermilab. The PIP-II linac will be capable of accelerating up to 2 mA of H\({}^{-}\) beam to 800 MeV in CW operation (1.6 MW). Mu2e-II will use a fraction of that potential, with pulses of 100 ns at a 1.7 \(\mu\)s period. The plan is to use an intensity of 100 kW, more than 10 times greater than Mu2e.
The initial implementation of PIP-II only includes pulsed beam for the Fermilab Booster. Mu2e-II requires that the PIP-II Linac be upgraded to include CW operation, which will require some power supply and chopper upgrades. It also requires construction of a new beam line from PIP-II to the Mu2e experimental hall. An initial design of that beam line exists; construction is a moderate but substantial expense.
The PIP-II linac could be extended to 1 GeV with modest upgrades. It could also be extended to 2 GeV, with a substantially more expensive upgrade, which may be needed for future Booster replacement. These extensions could modify the implementation of Mu2e-II.
### Magnets
The Mu2e experiment includes three large solenoids, the production solenoid (PS), the transport solenoid (TS) and the detector solenoid (DS). The Mu2e-II experiment plans to use the same configuration, reusing as much of that infrastructure as possible.
Some modifications will be required. The 800 MeV proton beam trajectories would not fit within the Mu2e HRS (heat and radiation shield), and the HRS is insufficient to protect the PS from the higher radiation and heat load of the Mu2e-II beam. The HRS should be replaced, and the PS magnet modified to handle the larger heat loads. Mu2e operation may irradiate the PS magnet to a level at which it cannot be modified. It is likely that the most cost-effective solution would be to replace the entire PS assembly. A redesign based on Mu2e-II parameters combined with lessons learned from Mu2e experience should be developed.
At this workshop, we heard two talks about high temperature superconductivity (HTS), from Luca Bottura (CERN, working on the muon collider), and from Zachary Hartwig (MIT). The 2022 version of the muon collider capture solenoid is a HTS magnet with a field of 20 T running at 20 K.
The cost of HTS, thanks to interest and investment from the fusion community, has been coming down dramatically, approaching that of Nb\({}_{3}\)Sn. It no longer seems that cost is a prohibitive factor for a possible HTS replacement of the Mu2e-II PS. Besides cost, conductor such as REBCO (Rare-Earth Barium Copper Oxide) is now available in quantity.
A new Python tool, called SolCalc, with a GUI, assists in the design of solenoids to meet magnetic field specifications. Calculations were shown for several choices of conductor towards the design of a Mu2e-II production solenoid, including low temperature superconductor, HTS (VIPER REBCO), and resistive coils. Several choices can potentially meet the field specification. An issue for further investigation is whether it is possible to keep the radius of the solenoid from growing compared with the Mu2e magnet.
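As an illustration of the type of calculation such a tool performs, the on-axis field of an ideal finite solenoid follows from the textbook expression; the dimensions and current density below are placeholders, not Mu2e-II design values:

```python
import math

MU0 = 4 * math.pi * 1e-7     # T*m/A

def b_axis(z, length, radius, ampere_turns_per_m):
    """On-axis field (tesla) of an ideal thin solenoid centered at z = 0."""
    a, b = z + length / 2.0, z - length / 2.0
    return 0.5 * MU0 * ampere_turns_per_m * (
        a / math.hypot(a, radius) - b / math.hypot(b, radius))

# Example: a 4 m long, 0.5 m radius coil with 3.5e6 A-turns/m.
for z in (0.0, 1.0, 2.0):
    print(f"z = {z:.1f} m: B = {b_axis(z, 4.0, 0.5, 3.5e6):.2f} T")
```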
### Targetry
#### III.3.1 Possible approach to and challenges for the Mu2e-II target
The forthcoming research program at Fermilab necessitates the development of innovative, high-power targets that can endure high-energy and high-intensity proton beams throughout their operational lifespan. The Mu2e project has proposed a solution anchored on a radiatively-cooled bicycle wheel structure made of chopped tungsten, while LBNF is strategizing to employ a long graphite target, designed to sustain a 1.2 (2.4) MW, 120-GeV beam. For the Mu2e-II project, however, only conceptual designs and early-stage prototypes exist, as will be discussed in the subsequent subsection. At present, there are no formulated strategies on how to construct the target for the AMF experiment.
The operational regimes anticipated for these future facilities present challenging conditions for materials, exhibiting parallels with those anticipated at the Muon Collider. Consequently, it is imperative to explore synergies and facilitate knowledge transfer with the Muon Collider program, as well as with other target facilities, in order to foster innovation and overcome these challenges.
Among the rapidly advancing initiatives in the field of nuclear physics is the Second Target Station (STS) at Oak Ridge National Laboratory (ORNL), a program predicated on the use of rotating tungsten target technology. The STS is slated to operate a 1.3-GeV 700-kW proton beam, with anticipated radiation damage approximating 1 DPA/yr as noted in reference [38]. The STS target is designed as a 'lasagna'-type tungsten target featuring a copper thermal interface, an Inconel water-cooled vessel, and a protective tantalum cladding to safeguard against in-beam erosion.
STS researchers have underscored [39] that programs utilizing tungsten targets are currently hampered by an insufficiency of data regarding tungsten's embrittlement, hardening, and diffusivity characteristics. In response to this challenge, the STS is actively conducting irradiation experiments at Los Alamos National Laboratory (LANL), which include fatigue and oxidation tests. Establishing open communication lines or active collaboration with these programs could yield significant benefits for future muon programs.
A highly promising concept for muon-production targets involves the use of fluidized tungsten powder as cited in reference [40]. The strengths of these targets include their ability to endure exceptionally high energy densities and the fact that the technology for handling fluidized powder is already well-established in the industry. They have a lower eruption velocity than mercury and do not suffer from cavitation damage. Nevertheless, there are certain challenges associated with this type of target. Long-term operation could lead to erosion, necessitating further research and development to mitigate this issue. The density of tungsten is higher than that of other industrial materials, which demands enhancements in theoretical understanding and plant design.
Additionally, diagnostic tools and process controls for reliable long-term operation are yet to be fully developed. To tackle these challenges, developers are conducting off-line testing. For instance, tests were executed at the HiRadMat facility in 2012 and 2015 to evaluate and improve upon the performance and resilience of these targets.
The development of new targetry technologies necessitates a synergistic approach with nuclear physics programs, which are formulating strategies towards a Muon Collider. Among these strategies, the use of liquid heavy metal targets, as discussed in reference [41], is prominent. Three such targets are currently under consideration: a liquid flow lead target, where liquid lead will circulate within a double-wall container, encased by a superconducting solenoid; a lead curtain design target; and a liquid mercury jet target, a design akin to that of the Spallation Neutron Source (SNS). Several issues are expected to arise with these targets, including cavitation, fatigue, shockwave, magneto-hydrodynamics, and stability, all of which demand a substantial amount of research and development. Additionally, the differences in pion spectra between lead and mercury compared to tungsten, the increased production of secondary neutrons from lead compared to tungsten (which requires shielding), and the generation of mixed wastes need to be taken into consideration.
The dearth of information regarding the radiation stability of materials, including tungsten, underscores the need for radiation tests of targets, including the Mu2e baseline one as suggested in reference [42]. Proposals are being reviewed to conduct tests of the Mu2e "Hayman" tungsten target using an 8-GeV proton beam at AP0. Despite certain limitations, such as the absence of resonant extraction and a smaller beam spot size, such tests can be conducted post completion of the g-2 experiment. The parameters that can be assessed include thermal stresses, oxidation, and creep.
In an effort to fully leverage the experience and successful methodologies developed at other centers, Fermilab is contemplating several strategies, as outlined in reference [43]. Firstly, the establishment of Post-Irradiation Examination (PIE) facilities, incorporating hot cells and specific characterization equipment, is under consideration. The laboratory generates a substantial volume of highly-activated materials, the examination of which necessitates transportation to collaborating centers equipped with suitable facilities. Having in-house PIE facilities would enable more efficient use of time and resources. Secondly, there is a need for dedicated facilities for radiation tests, including those using low-energy beams, which would facilitate radiation damage and thermal shock measurements. Thirdly, Fermilab currently lacks the capabilities to perform ab initio and molecular dynamics calculations and modeling. Development of these skills is vital for predicting the fundamental response of various classes of materials to irradiation, thereby guiding material selection and experimental designs for future irradiation studies. This includes modeling scenarios such as helium gas bubbles in beryllium and the radiation behavior of novel materials such as High-Entropy Alloys.
#### iii.1.2 Conveyor target design and particle production rates
Within the recently completed Laboratory Directed Research and Development (LDRD) project, a leading Mu2e-II pion-production target design has been developed, the "conveyor" target. This utilizes target spheres that are cycled in and out of the target region, distributing the damage among many spheres. Two prototypes of the conveyor Mu2e-II pion-production target have been developed, a carbon and a tungsten sphere design. The carbon target requires more spheres due to the lower density of carbon. The target design drawing and its deployment in Heat and Radiation Shield (HRS) are shown in Fig. 4, where only the upper right straight section would be in the beam. While these prototypes show promise, they have not yet reached the final design stage and require additional research, development, and simulation studies.
A critical factor in the design of the Mu2e-II production target is the rate of low momentum pion production to produce a high intensity, low momentum muon beam. The transport solenoid is designed to accept low momentum muons, which have a higher probability of stopping in the muon stopping target in the detector solenoid, while suppressing backgrounds from high energy electrons produced in the production solenoid. The number of conveyor target balls in the target region of the conveyor target is optimized to maximize the number of stopped muons in the muon stopping target using MARS simulations.
Figure 4: Conveyor target drawing (left) and conveyor target placement in the HRS (right)
The Mu2e-II Offline uses GEANT4 [44; 45; 46] for particle interaction and transport simulations to model the expected detector pileup environment as well as the signal and dominant background contributions. To validate the simulated particle production rates, Mu2e-II utilizes MARS15, GEANT4, and FLUKA to simulate the primary proton interactions with the production target candidates. The particle yields per proton on target are studied at the entrance to the transport solenoid, focusing on these particles that can contribute to the experiment's sensitivity.
Figure 5 shows the negative muon and pion momentum spectrum per primary proton on target for the tungsten conveyor target design using the MARS, GEANT4, and FLUKA MC. The three MC agree well, with the muon (pion) yield per primary proton agreeing within 10% (20%). The transport solenoid acceptance is very low for particles above 100 MeV/c, so the disagreement in the high momentum region is less important for Mu2e-II.
Figure 6 shows the negative muon and pion momentum spectra per primary proton on target for the carbon conveyor design using the MARS, GEANT4, and FLUKA codes. Unlike in the tungsten conveyor target case, the codes disagree substantially, most notably GEANT4, which predicts much higher muon and pion yields. GEANT4 predicts higher muon (pion) yields at the transport solenoid entrance using the carbon conveyor target than using the tungsten conveyor target by 26% (22%). MARS and FLUKA predict a 2% (-2%) and -5% (-2%) change in the rates using the carbon conveyor target, respectively, making the large increase predicted by GEANT4 an outlier. The shapes of the momentum distributions also disagree between the codes for the carbon conveyor target, including between FLUKA and MARS. These differences are not yet understood; studies comparing the differential pion production cross sections on both carbon and tungsten with 800 MeV protons in the three codes are underway.
### Tracker
The Mu2e-II tracker discussion focused on the challenges the Mu2e-II environment will create and possible changes and improvements to the current Mu2e straw tube tracker design. Alternative technologies and geometries were also considered.
Figure 5: Muon (left) and pion (right) momentum spectrum at the transport solenoid entrance per 800 MeV primary proton on the tungsten conveyor target.
#### iv.2.1 Improvements to the current design
By the time Mu2e-II is running, the existing Mu2e tracker will have been exposed to significant radiation over an extended period. This raises concerns about aging effects in addition to performance issues such as leaks, straw sag, and radiation damage. A new tracker needs to be built according to the Mu2e-II beam and sensitivity requirements as indicated in the recent Snowmass paper [14]. A straightforward approach is to improve the current design by conducting R&D and using the latest advancements in technology to update or replace tracker elements. This approach also has the benefit of applying the experience gained throughout the life cycle of the Mu2e tracker construction and operations to the design of the new tracker.
The Mu2e-II tracking environment is even more challenging due to an order-of-magnitude change in occupancy and radiation dose. With an expected improvement of 100 times the POT, more muons are stopped compared to Mu2e, which increases the DIO background, making it the dominant background of the experiment. Improved momentum resolution is critical for separating much of the DIO background tail from the signal conversion electrons. Removing material from the active tracking area allows for limited improvement, as shown in Figure 7, but encounters mechanical limits and increases construction difficulty. Due to this limitation, changes in the geometry, drift gas, and electronics are needed to fulfill Mu2e-II requirements while improving the momentum resolution.
Figure 6: Muon (left) and pion (right) momentum spectrum at the transport solenoid entrance per 800 MeV primary proton on the carbon conveyor target.
Current R&D is taking place on the production of 8 \(\mu\)m thin straws that are built from two layers of 3 \(\mu\)m Mylar wound in a spiral along the straw axis, with 2 \(\mu\)m of adhesive keeping the seams together. These ultra-thin straws pass the mechanical requirements for the Mu2e tracker and increase the sensitivity by 10% by reducing the mass in the active tracking area of the detector. The prototype straws do not have a metallization layer at this point, and a reduction in the thickness of the inner and outer metallization layers is under discussion. The thickness of the outer metallization layer has a profound effect on the leak rate of the straws. Other production methods, such as the ultrasound welding used in the COMET straws [47] or microforming of extremely thin metal tubes [48], should be investigated as alternatives to the Mu2e-style straws. In addition, the latest simulation studies argue that the Mu2e experiment could run at a degraded bore vacuum of up to \(10^{-1}\) Torr, as shown in Figure 8. Running at a poorer vacuum may lead to additional technical issues but could be considered in order to expand the Mu2e-II tracker design space.
From experience with the mechanical construction of the Mu2e tracker panels and track reconstruction simulations, changes could be made to geometry and materials to improve on the original design.
Figure 8: A comparison of true MC conversion electron energy loss for different detector solenoid vacuum configurations.
Figure 7: (a) CE signal vs DIO tail spectrum for 15 \(\mu\)m Mu2e straws. (b) CE signal vs DIO tail spectrum for 8 \(\mu\)m prototype Mu2e-II straws.
One idea is to add a third layer of straws to the panel, which improves left/right ambiguity resolution and pattern recognition; however, this would make the sealing of the middle layer of straws quite difficult. Sealing channels should be incorporated into such a design, and the latest 3D printing technologies and new materials should be investigated to provide a strong inner ring compared to the plastic inner ring of the Mu2e tracker panel. The straw size could be changed by balancing construction complexity against charge load on the wire and momentum resolution; an optimized straw size could allow for simpler tracker panel production. Ideas for improving the construction also exist, but they depend on the changes to the tracker panel geometry and layout.
Another key element in the tracker design is the type of drift gas used in the detector. A different gas changes critical parameters such as signal threshold, gain, drift velocity, spatial resolution, and diffusion. ArCO\({}_{2}\)CF\({}_{4}\)[49] and He:C\({}_{4}\)H\({}_{10}\)[50] are possible candidates that are being used in other drift chambers. Signal simulations should be conducted to make sure there are no red flags associated with other gases with respect to Mu2e-II requirements. Information on the aging effects of these gases or potential new mixtures is scarce in the literature. Aging studies should start during the Mu2e era in order to settle on the gas of choice before production starts for Mu2e-II.
Finally, the tracker electronics should be reconsidered for Mu2e-II. An initial estimate of 750 hadrons/cm\({}^{2}\)/yr (hadrons \(>\) 30 MeV) is 12 times larger than the Mu2e calculations and necessitates an even stricter requirement on the radiation hardness of the electronics. The current safety factor on the electronics is 12; therefore, the new electronics need to be verified for a higher dose. In the meantime, the biggest challenge is the occupancy. The hit rate on an average Mu2e straw is 100 kHz, and the predicted Mu2e-II hit rate is 1.6 MHz. Not only do the Mu2e-II tracker electronics need to process data at this high rate, but triggering on a single hit cluster is also desired to improve the momentum resolution. Using ASIC chips for signal processing appears to be the future direction for handling the increased rates, and a simple cartoon of a tracker detector using these chips can be seen in Figure 9. ASIC chips also require less power and occupy less space. This gives more flexibility to the mechanical design of the tracker panel and how power is delivered. Another modern approach to signal processing is to pass hit classification and filtering to FPGAs using AI/ML [51]. This is a rapidly developing R&D area, and research should be conducted to find the best use scenario for the Mu2e-II tracker detector.
Figure 9: A cartoon of the usage of ASIC chips for the readout of the Mu2e-II straws. Switching to ASIC chips opens up new possibilities for mechanical design, power delivery, and cooling.
#### iv.2.2 Alternative tracker designs
The "I-tracker" design was proposed during the early days of the Mu2e project [52]. In this design, square drift cells are formed by a sense wire centered within field wires that are strung along the beam direction, as shown in Figure 10. The wires are precisely positioned in a metal frame that locates them to within 20 \(\mu\)m. The metal frame is installed within an ultra-light gas vessel that provides the gas seal. This approach alleviates many of the construction and mechanical issues of the straw detector, primarily the wire positioning and gas leaks. To control multiple scattering within the gas environment, He is proposed. He as a drift gas is slower than the ArCO\({}_{2}\) used in the Mu2e tracker; however, it potentially offers a different way to deal with the increased hit rates of the Mu2e-II environment. The latest studies aim to reduce the drift cell size to a 3 mm \(\times\) 3 mm square. The effect of this change on the momentum resolution should be studied with respect to the Mu2e-II rates.
The ultra-light gas vessel idea is also applicable to a Mu2e-like tracking detector. Similar to the I-tracker, the aim is to relax the leak requirement on the tracker straws and to ease the construction of a tracker panel with straws. The workshop did not focus on this idea [53]; however, with the recent developments on the vacuum requirements of the Mu2e tracker and the interest in the I-tracker for Mu2e-II, a fusion of the I-tracker and the straw tracker remains an interesting prospect.
### Calorimeter
The Mu2e calorimeter [54] consists of 1348 pure CsI crystals, arranged in two disks, read out with custom SiPMs developed with Hamamatsu. The calorimeter has robust rate performance at Mu2e rates but may be challenged by Mu2e-II instantaneous rates that are two to three times higher. The \(\times\)10 integrated radiation dose on the calorimeter readout electronics motivates the study of appropriate rad-hard readout electronics at a level informed by the HL-LHC detector upgrades. An alternative calorimeter design has been developed based on BaF2 crystals read out with solar-blind UV-sensitive SiPMs that efficiently collect the very fast UV component (\(\sim\) 220 nm) of the scintillation light while suppressing the slow component near 310 nm.
Figure 10: I-Tracker design for an alternative to the Mu2e like tracker. In this design, drift cells made out of sense wires and field wires are strung between spokes aligned in the beam direction within a gas vessel that makes the seal against the vacuum.
This alternative design would be considerably more robust against Mu2e-II rates but requires the development and commercialization of the required solid-state photosensors, which is ongoing. The Mu2e-II calorimeter should have the same energy (\(<\)10%) and time (\(<\)500 ps) resolutions as in Mu2e, aiming to provide a standalone trigger, track seeding, and PID as before. However, the Mu2e-II environment presents two challenges to the calorimeter system:
1. The pileup with respect to the conversion electron (CE) scales approximately linearly with beam intensity, so keeping the Mu2e pileup level of 15%, obtained with a 150 ns signal length, requires rescaling the signal length accordingly; for Mu2e-II it should be shortened to 75 ns (see the short scaling note after this list).
2. Under the assumption that the total integrated dose (TID) from the beam flash in the calorimeter from 800 MeV protons scales as the number of stopped muons with respect to the Mu2e 8 GeV beam, a factor of 10 increase is expected. This \(\times\)10 increase in integrated dose and neutron fluence, corresponding to 10 kGy and \(1\times 10^{13}\) n/cm\({}^{2}\) for both crystals and sensors, motivates consideration of more radiation-tolerant crystals and sensors, such as barium fluoride (BaF2) crystals and solar-blind SiPMs.
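One way to see the 75 ns figure: treating the pileup fraction as proportional to the product of instantaneous rate and integration window, and taking the factor of two at the lower end of the "two to three times higher" instantaneous rates quoted above,
\[
f_{\mathrm{pileup}}\propto R\,\Delta t\;\;\Rightarrow\;\;\Delta t_{\mathrm{Mu2e\text{-}II}}\approx\Delta t_{\mathrm{Mu2e}}\,\frac{R_{\mathrm{Mu2e}}}{R_{\mathrm{Mu2e\text{-}II}}}\approx\frac{150\ \mathrm{ns}}{2}=75\ \mathrm{ns}\,.
\]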
Supported by the DOE HEP ADR program, the Caltech crystal lab has been developing yttrium doped ultrafast barium fluoride crystals [55] to face the challenge of high event rate and severe radiation environment. In 2017, they found that yttrium doping in BaF\({}_{2}\) is effective in the suppression of its slow scintillation component with 600 ns decay time while maintaining its ultrafast sub-ns scintillation component unchanged. In a collaboration with SICCAS and the Beijing Glass Research Institute, yttrium-doped BaF\({}_{2}\) crystals of up to 19 cm length were successfully grown, and showed a factor of ten suppression in the slow component (see Figure 11). They also found that 25 cm long BaF\({}_{2}\) crystals are radiation hard up to 120 Mrad. This R&D will continue in collaboration with crystal vendors. Support is requested to develop high-quality yttrium doped BaF\({}_{2}\) crystals of large size for the Mu2e-II calorimeter.
Figure 11: Main parameters of crystals used in HEP, including a comparison of pure BaF2 and yttrium-doped BaF2 crystals.
The development of a photosensor capable of efficiently reading out the fast component of barium fluoride is an urgent component of R&D for Mu2e-II. Working with FBK and JPL, the Caltech group is developing a large area SiPM (nominally the same size as that used in Mu2e) that incorporates an integrated ALD filter having high efficiency at the 220 nm fast component and substantial extinction of the 300 nm slow component [56], as shown in Figure 12. A second phase of this development will further incorporate a delta-doped layer that will improve the rise and decay times of the SiPM response, as was demonstrated with the Delta-doped RMD APDs. The initial batch of wafers has been furnished by FBK to JPL. These have had the ALD filter applied and are now being returned to FBK for the next step in processing. Several rounds of development are anticipated, motivating a request for R&D funds from Mu2e-II.
Extremely fast front-end electronics were developed by the INFN-LNF group in collaboration with D. Tagnani (INFN-Roma3) in 2022 for the Crilin calorimeter [57]. Crilin's FE electronics are composed of two subsystems: the SiPM board (Figure 13, right) and the Mezzanine Board (Figure 13, left). The SiPM board houses a layer of 36 photosensors so that each crystal in the matrix is equipped with two separate and independent readout channels, each composed of a series connection of two Hamamatsu S14160-3010PS SMD sensors. The 10 \(\mu\)m SiPM pixel size, along with the series connection of two photosensors, was selected for a high-speed response and short pulse width, and to better cope with the expected total non-ionizing dose (TNID) without an unmanageable increase in bias current during operation.
Figure 12: Green: 3-layer SiPM PDE; red: pure BaF2 emission; grey: 6% Y-doped BaF2.
All bias voltages and SiPM signals for each readout channel are transported between the SiPM board and the Mezzanine Board by means of individual 50 \(\Omega\) micro-coaxial transmission lines. Decoupling capacitors for each channel, along with a PT1000 temperature sensor, are also installed onboard. This readout scheme can be adapted to the Mu2e-II calorimeter together with an alternative proposal of using 8 cm LYSO crystals. The proposal has many advantages:
* 8 cm length LYSO is enough to achieve O(5%) energy resolution;
* the equivalent noise energy is not a problem and there is good longitudinal response uniformity;
* no expected degradation in performance after \(10^{13}\) neutrons/cm\({}^{2}\);
* SiPMs already exist, so no R&D is needed;
* the high LYSO light yield permits the use of the SiPMs at low over-voltage and thus enhances the radiation resistance and reduces the power dissipation;
* so long as a front-end amplifier is not needed, there are no problems with the irradiation level of electronics.
Although this backup solution seems to be practical, simulation studies are needed to verify its rate capability in the Mu2e-II environment.
### Cosmic Ray Veto
The Mu2e experiment expects one signal-like event per day induced by cosmic rays. These are cosmic muons interacting somewhere in the detector material, producing an electron that by coincidence mimics a signal. The cosmic ray veto (CRV) detector suppresses this background. It consists of four layers of scintillating counters [58] with a cross-section of \(5\times 2\) cm\({}^{2}\) that are read out through wavelength-shifting fibers [59] connected to \(2\times 2\) mm\({}^{2}\) silicon photomultipliers (SiPMs) [60]. The CRV covers the full detector region of the experiment, all sides as well as the top. Localized CRV hits coincident in multiple layers (three out of four in most regions) trigger an offline \(\sim 125\) ns (to be optimized) veto in the signal window. The Mu2e CRV needs to suppress the cosmic ray background by a factor of a few thousand, with efficiencies up to 99.99% in the most sensitive areas, while keeping the dead time low in order to reach a single-event sensitivity on the order of \(2.5\times 10^{-17}\) (\(6\times 10^{-17}\) at 90% C.L.) with around three years of running.
Figure 13: Crilin calorimeter electronics: controller board (left) and SiPM board matrix (right).
An upgrade to the CRV system is required for Mu2e-II, as described below [14].
The duty cycle of Mu2e-II is expected to be a factor of three higher, resulting in the need for a three-fold higher suppression of the cosmic ray background. At the same time, the higher beam intensity of Mu2e-II will lead to noise (non-cosmic-ray hit) rates more than a factor of three higher. This makes it challenging to keep the dead time low.
If the Mu2e CRV were used for Mu2e-II, dead times on the order of 50% would be expected. The dead time arises from noise hits that occur in coincidence and trigger veto windows. There are two main sources of such fake hits: a) secondaries from the primary production beam, and b) secondaries from stopped muons. It has been shown in simulations that the secondaries from the primary production beam can be suppressed efficiently by improved barite- and boron-loaded concrete shielding. To address the higher rate of secondaries from stopped muons, the channel rate needs to be reduced, which lowers the false-coincidence rate and thereby the dead time. To reduce the channel rate, a more finely segmented detector concept is proposed (see below).
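To illustrate how the false-coincidence rate drives the dead time, the following minimal sketch uses a simple random-veto model; the veto-window length and the rates scanned below are assumptions for illustration only and are not taken from the Mu2e-II simulation.

```python
import math

# Simple random-veto model: if false veto windows of length T open at a Poisson
# rate R, the live fraction is exp(-R*T) and the dead time is 1 - exp(-R*T).
T_VETO = 125e-9  # assumed veto window length in seconds (~125 ns, to be optimized)

def dead_time(false_veto_rate_hz: float) -> float:
    """Dead-time fraction for a given false-veto rate."""
    return 1.0 - math.exp(-false_veto_rate_hz * T_VETO)

for rate in (1.0e6, 3.0e6, 5.5e6):  # assumed false-coincidence rates in Hz
    print(f"{rate/1e6:3.1f} MHz false vetoes -> dead time {dead_time(rate):.0%}")

# In this regime, halving the per-channel (and thus coincidence) rate roughly
# halves the dead time, which is the motivation for finer segmentation.
```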
There are several sources of inefficiency in the CRV. Although the Mu2e CRV is designed to minimize gaps by offsetting the different layers, there remain some corner cases where cosmic muons manage to sneak in through gaps. Figure 14 (left) shows such an example. The cosmic ray background from such gaps in Mu2e-II with the Mu2e CRV design is estimated to be \(0.22\pm 0.15\) events. It turns out that most of the cosmic muons producing a signal-like electron have azimuth angles (angle with respect to "up") smaller than \(60^{\circ}\), as shown on the right side of Figure 14.
This motivates the proposal for an improved CRV design consisting of triangle-shaped counters with an angle of \(120^{\circ}\) stacked together. Figure 15 shows a sketch of the proposed design. This design not only reduces the rate per channel due to its higher granularity, but also reduces the inefficiencies due to gaps: cosmic rays with an azimuth angle smaller than \(60^{\circ}\) cannot travel along a counter gap. In addition, cosmic rays at these shallow angles deposit more energy in other layers due to the longer path length through the module. With this improved design, the cosmic ray background in Mu2e-II from gaps is estimated to be below 0.1 events.
Figure 14: Left) example of a cosmic muon passing through a CRV gap. Right) azimuthal-angle distribution of cosmic rays resulting in signal-like events.
A second source of inefficiency is uninstrumented areas of the detector. The opening for the transport solenoid is the most prominent example. The cosmic ray background from this opening for Mu2e-II is estimated to be \(0.08\pm 0.02\) events. So far this was considered irreducible. However, it turns out that additional shielding would allow this contribution to be reduced by about a factor of 2.
A third source of inefficiency is an insufficient light yield. An extensive program is ongoing to characterize and understand the aging of the Mu2e CRV detector. Due to these aging effects, at least parts of the CRV will need to be replaced for Mu2e-II. There are multiple options to increase the overall light yield for a Mu2e-II detector:
* Moving from 1.4 mm to 1.8 mm wavelength shifting fibers increases the light yield by 24%. This was already done for some of the most critical top modules for Mu2e [61].
* More modern SiPMs (for example the S14160) have significantly higher photon detection efficiency. Not only is the overall efficiency improved, but they are also better matched to the emission wavelength of the wavelength-shifting fibers.
* It was shown that potting the fiber channels with silicone resin improved the light yield by 40% [62]. R&D on how to seal the channels to avoid damaging the readout due to leaking resin is ongoing.
An additional background source is neutral cosmic particles that produce a signal-like electron by coincidence. This contribution is not negligible for Mu2e-II. Adding 6 feet of concrete shielding above the target reduces this background to \(0.02\pm 0.002\).
The readout bandwidth of the Mu2e CRV is limited by the 10 MB/s link between the front-end boards (FEBs) and readout controllers (ROCs). In Mu2e, the off-spill window will be used to transmit the on-spill data, which is expected to be suppressed by the online trigger by two orders of magnitude. In Mu2e-II, no off-spill time will be available; either the bandwidth needs to be significantly improved or the trigger will need to achieve a significantly higher suppression. An additional challenge in Mu2e arises in the FEB event builder, which must provide a bandwidth overhead of up to a factor of 2 to account for beam instabilities. To achieve this, the most active channels are sparsified. While the beam is expected to be much more uniform in Mu2e-II, the event builder might already be saturated in nominal conditions. Detector-side changes will be needed to use the same readout scheme; the proposed higher granularity also helps in this respect.
In summary, at least parts of the Mu2e CRV cannot be used for Mu2e-II due to aging effects and radiation damage in the SiPMs. We propose an improved triangular-shaped CRV design with finer granularity, reducing the single-channel rate and thereby the dead time. At the same time, the triangular shape improves the detector's efficiency, and the light yield of the counters is also enhanced with this geometry. For the CRV at Mu2e-II, enhanced high-Z shielding is the most crucial part: it reduces the readout noise, which is essential to control the dead time. In addition, improved shielding around the transport solenoid opening reduces muons sneaking in through that gap, and additional shielding on top of the target suppresses the cosmic background from neutrals.
Figure 15: Sketch of the proposed triangular-shaped counter design in red, overlaid on the shape of the Mu2e CRV design in blue.
With this improved detector design and shielding, we predict that the CRV background can be limited to the level summarized in Table 1.
Moving forward, we plan to seek funding to build triangular prototypes that should be installed in the running Mu2e experiment. An R&D program is needed to improve the counter profile, explore the possibility of coating for enhanced reflectivity, and improve or fill the fiber channels. In addition, the aging behavior of such new counters needs to be studied. Mu2e-II prototypes installed in Mu2e will benefit the running experiment by providing an additional handle on determining the CRV efficiency. In parallel, dedicated simulation efforts with triangular-shaped counters are needed. We propose to procure some enhanced shielding to start building up experience. The shielding design needs to be optimized. Obviously, running Mu2e will reduce uncertainties related to the CRV design needed to meet requirements for Mu2e-II. However, detector R&D must be started now, in order to be ready to meet the challenges of Mu2e-II.
### Trigger and DAQ
Obtaining higher sensitivity in Mu2e-II imposes more demanding requirements on the trigger and data acquisition (DAQ) compared with Mu2e. We make the following assumptions:
1. Compared with Mu2e, Mu2e-II will have twice the number of detector channels and five times the number of pulses on target, leading to a ten times higher data rate.
2. The Mu2e-II event size may be three times the expected Mu2e event size of 200 kB, because of the greater channel count and more background hits.
3. We assume that we can support a factor of two in tape capacity over Mu2e, leading to 14 PB/yr.
4. The required trigger reduction factor in Mu2e-II is 3000:1.
Mu2e-II does not have the large 1400 ms gaps between batches of spills that Mu2e does. That is, Mu2e-II will have a steady event stream, so there is no "catch-up" time. Buffering can be used to handle local fluctuations in event rate, but not gaps in beam from the accelerator. Two scenarios may be considered for cost-effectiveness: (_i_) Large CRV buffers and a software trigger; (_ii_) Small CRV buffers and a hardware trigger.
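To make the bandwidth arithmetic behind the assumptions above concrete, the following minimal sketch derives the implied event rates and raw data rate; the live time per year is an assumption for illustration, and the derived rates are not official Mu2e-II parameters.

```python
# Rough TDAQ throughput estimate from the stated assumptions (illustrative only).
EVENT_SIZE_BYTES = 3 * 200e3          # assumed: three times the 200 kB Mu2e event size
TAPE_BUDGET_BYTES_PER_YEAR = 14e15    # assumed: 14 PB/yr
LIVE_SECONDS_PER_YEAR = 2.0e7         # assumed live time per year (illustration)
TRIGGER_REJECTION = 3000.0            # required trigger reduction factor

# Events that can be written to tape per year and the corresponding rate:
stored_events_per_year = TAPE_BUDGET_BYTES_PER_YEAR / EVENT_SIZE_BYTES
stored_rate_hz = stored_events_per_year / LIVE_SECONDS_PER_YEAR

# Input event rate and raw data rate implied by the 3000:1 rejection:
input_rate_hz = stored_rate_hz * TRIGGER_REJECTION
raw_rate_bytes_per_s = input_rate_hz * EVENT_SIZE_BYTES

print(f"storable event rate : {stored_rate_hz/1e3:.1f} kHz")
print(f"implied input rate  : {input_rate_hz/1e6:.1f} MHz")
print(f"raw data rate       : {raw_rate_bytes_per_s/1e12:.1f} TB/s")
```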
An important decision is which detector subsystems are triggered, possible approaches include:
* Stream all tracker and calorimeter data; software trigger for CRV based on the tracker and calorimeter;
* Stream calorimeter; hardware trigger for tracker and CRV based on the calorimeter;
* High-level software trigger for storage decision.
| source | background |
| --- | --- |
| inefficiencies from gaps in counters | \(<0.1\) |
| inefficiencies from uninstrumented gaps | \(0.08\pm 0.02\) |
| neutrals | \(0.02\pm 0.002\) |
| **total** | \(0.20\pm 0.08\) |

Table 1: Expected cosmic ray background budget for Mu2e-II with the proposed changes. Optimizing the momentum cuts reduces the total to 0.17. From [14].
The radiation levels at the detector will be higher than for Mu2e. Mu2e-II will likely not want to design its own rad-hard links. Instead, we will probably want to make use of technology developed for CMS/ATLAS.
The generic data readout topology is anticipated to be a multi-stage TDAQ system consisting of front-ends, data concentrators, event builders, and storage-decision nodes. The data concentrators aggregate small front-end fragments into larger chunks for efficient event building. Data are switched from the concentrator layer to the event builder layer so that full events arrive at the event builder layer and are buffered. At the event builder layer, preprocessing or filtering could occur. In the storage-decision layer, the available decision nodes make high-level storage decisions.
Four TDAQ LOIs were contributed in the Snowmass 2021 process:
* A 2-level TDAQ system based on FPGA pre-processing and trigger primitives;
* A 2-level TDAQ system based on FPGA pre-filtering. Features of these first two approaches include: Mu2e already uses FPGAs in the ROCs and DTCs; FPGAs offer flexibility for algorithm development; and these solutions are more tightly coupled to the sub-detector readout systems.
* TDAQ based on GPU co-processing. Features include challenging data transfer, and importing C-style algorithms is not simple.
* Triggerless TDAQ based on a software trigger (scaling up the Mu2e system). Features include that cooling in the DAQ room as well as data transfer and processing become very challenging.
FPGA algorithm development could make use of C-style coding with High-Level Synthesis (HLS), offering advantages over manual VHDL or Verilog development. CMS is investing in the HLS approach to FPGA algorithm development, and the hls4ml collaboration is developing machine-learning tools using HLS. In principle, helix pattern recognition can be coded on FPGAs, and very powerful FPGAs could be used if located outside the detector solenoid.
R&D will benefit greatly from Mu2e experience. The current Mu2e trigger algorithms can be used on commercial FPGA boards to perform feasibility studies. A successful demonstration would consist of delivering a demonstrator that could be operated parasitically in the Mu2e TDAQ towards the end of Run 2.
### Physics and sensitivity
The aim of Mu2e-II is to achieve a single event sensitivity of \(\mathcal{O}(10^{-18})\). Designing an experiment capable of this level of sensitivity involves a multi-faceted approach. Ongoing R&D focuses on three main aspects:
* stopping target design, designing a target that is optimal for conversion while minimizing energy losses of possible signal electrons;
* production target design, optimizing pion production while designing a technically feasible target;
* detector design, specifically designing a tracking system capable of the required momentum resolution.
This discussion is divided into two sections. Section III.8.1 details phenomenological considerations needed to inform our choice of stopping target, and how we present our final result. Section III.8.2 details technical considerations that inform the design of the experiment while ensuring we maintain the required signal-to-background ratio and momentum resolution.
#### iv.3.1 Phenomenological Considerations
**Improved understanding of nuclear dependence of conversion rate**
Measurements of the atomic number dependence of the rate of muon-to-electron conversion are detailed in Refs. [12; 15; 63]. In the event of a charged lepton flavor violation (CLFV) signal at Mu2e and/or COMET, we can elucidate the Lorentz structure of the New Physics coupling at Mu2e-II by obtaining additional conversion measurements in other nuclei. Since Mu2e and COMET both intend to use an aluminum (Al-27) target, Mu2e-II must choose a material that is complementary to Al-27 but also technically feasible in the Mu2e-II experimental set-up.
Reference [63] is the most widely cited treatment of the atomic number dependence of \(\mu N\to eN\), which has been extended by Refs. [12; 15]. During the workshop, a new approach to the estimation of the atomic number dependence of the conversion rate was presented. Barrett moments [64] are utilized to include muonic \(X\)-ray measurements of nuclear charge distributions in addition to electron scattering data; the latter alone were used in previous studies [12; 15; 63]. By including muonic \(X\)-ray data, we can account for deformation in nuclei, which is particularly apparent for higher-\(Z\) materials. Nuclear deformations are parametrized by a three-parameter Fermi distribution, which takes into account the effect of permanent quadrupole moments. In addition, the new approach uses a deformed relativistic Hartree-Bogoliubov method to account for neutron distributions; this differs from Refs. [12; 15; 63], which simply scaled the proton distributions by a factor of \(N/Z\).
In general, the presented treatment differs from Refs. [12; 15; 63] in several ways:
* **More data through inclusion of muonic \(X\)-ray data, which helps account for deformations:** the use of muonic \(X\)-ray data substantially enlarges the sample size in the regime above \(Z\)=60, where many nuclei have substantial quadrupole deformations. Elastic electron scattering and muonic \(X\)-ray data are combined using Barrett moments, and a procedure is devised to incorporate the effect of permanent \(Y_{20}\) deformations on the effective nuclear skin thickness.
* **Accounting more accurately for neutron distributions:** a model based on a deformed relativistic Hartree-Bogoliubov calculation is used to estimate the neutron distributions. This model allows the inclusion of a wider variety of nuclei and explores isotopic effects on the conversion rate.
* **Accounting for isotope abundance and the feasibility of single isotopes:** recognizing that separated isotopes are hard to obtain in the quantities needed for practical stopping targets of \(\sim 100\) g, only elements that consist of a single stable isotope or in which the dominant stable isotope is more than \(\sim 90\%\) abundant are explored. Vanadium is proposed as the best option for Mu2e-II.
A detailed publication describing the result of the work is imminent, and we leave the explicit details to that document.
**Normalization: Presenting conversion results**
The conventional approach to the normalization of \(\mu\to e\) conversion experiments, quoting the conversion rate (experimental limit or theory prediction) normalized to the measured rate of \(\mu\) capture on a given nucleus, has been in place for more than seventy years. As the current round of experiments approaches a sensitivity that may yield a signal, this convention should be re-examined,
particularly because future experiments will likely focus on the \(Z,A\) dependence of the conversion process. A talk at the Workshop proposed a revised convention for presenting both theoretical and experimental results on \(\mu\to e\) conversion going forward that addresses the shortcomings of the historical approach.
Normalization to the muon capture rate is not precisely analogous to the idea of a branching fraction (the number of decays into a particular mode, divided by all decays), which would be to divide the conversion rate in the field of a particular nucleus by all possible fates of the muon (\(\mu\to e\) conversion, DIO, or nuclear capture). However, normalization to muon capture was effectively codified by Weinberg and Feinberg in 1959 [65]; essentially all results or predictions on muon-to-electron conversion have since been presented in the form:
\[R_{\mu e}\ \mathrm{or}\ B_{\mu e}(\mathrm{Z})\ \mathrm{or}\ \mathrm{CR}(\mu^{-}\mathrm{N}\to e^{-}\mathrm{N})=\frac{\Gamma(\mu^{-}+\mathrm{N}\to e^{-}+\mathrm{N})}{\Gamma(\mu^{-}+\mathrm{N}\to\mathrm{all\ captures})}\,.\]
Compilations of the history of experimental limits on CLFV processes typically place the 90% confidence level limits for decays and conversion on the same plot, ignoring the fact that they are normalized differently. The decays are reported as true branching fractions, while the conversion rate limits are on the fraction of muon captures resulting in production of a monoenergetic electron, which does not account for all fates of a muon in a muonic atom. Indeed, the lifetime of such a muon is determined in varying proportions by the conversion rate, a BSM process, by the nuclear capture rate, an incoherent Standard Model process, and by the lifetime of the decay-in-orbit muon, which is modified from the free decay rate by the atomic binding energy, the so-called Huff factor (0.993 for aluminum, 0.981 for titanium and 0.850 for gold) [66].
There is a simpler alternative to normalization that hews more closely to what both experiments and theoretical calculations actually do, and has benefits in understanding the \(Z,A\) dependence of the conversion process. Conversion experiments are normalized to the number of muon stops in the nuclear target within the sensitive time window. This requires knowledge of the total muon lifetime in a particular muonic atom, as well as of a set of relevant experimental efficiencies. The number of conversion signal events (or to this point, absence of events) is divided by the number of muons stopped in the target (as measured by, _e.g._, counting \(2P-1S\) muonic x-rays or other transitions, such as delayed \(\gamma\)s from muon capture. These experiments do not, in general, measure the muon capture rate). The number of candidate muons for conversion is determined by muon decay as well as by nuclear muon capture, that is, by the total lifetime in the muonic atom. Thus, for example, the experimental measurement of the total muon lifetime, \(864.0\pm 1.0\) ns in aluminum [66], with its associated uncertainties, unavoidably enters the calculation of the experimental efficiency and therefore the calculation of the \(\mu\to e\) conversion rate. Since the overlap of the muon atomic wave function with the nuclear proton and neutron distribution influences the effective lifetime, the BSM physics and the Standard Model nuclear physics are inextricably mixed. Thus, the measured rate (or limit on the rate) _ab initio_ depends in part on the muon capture lifetime. The muon nuclear capture rate _grosso modo_ follows Wheeler's [67]\(Z_{\mathrm{eff}}^{4}\) law, but in detail shows the effect of nuclear shell structure on nuclear size. The Weinberg-Feinberg convention reports results as a "capture fraction", by analogy to a branching fraction, by dividing the measured rate once again by the muon capture rate, thereby exaggerating the effect of nuclear shell model structure.
From a theory perspective, a model calculation of the rate of conversion effectively yields an absolute rate (more specifically a rate characterized by \(G_{F}^{2}\) and mass-scale coupling factors). The convention has then been to divide this BSM rate by the experimentally measured SM muon capture rate. Thus a BSM conversion rate calculation that is effectively on the same footing as a calculation of a BSM decay rate is presented as a hybrid ratio of the calculated rate of the coherent conversion process divided by the experimental measurement of a partially incoherent SM muon
capture process, analogous to a branching fraction. This has not mattered in any practical sense up to now. It is, however, comparing a calculable coherent process to a difficult-to-calculate (and therefore usually measured) process. When comparisons of decay and conversion sensitivity are made there are issues, as there are in comparisons of CLFV rates for different nuclei.
The method of normalization matters in two different ways. Plots of the chronological improvement of limits on CLFV processes typically plot the limits for \(\mu\to e\gamma\), \(\mu\to 3e\) and \(\mu\to e\) conversion on the same scale, even though the decay results are true branching fractions, while the conversion rates are fractions of the \(\mu\) capture rate. It would be preferable to normalize the conversion rate to the fate of all stopped muons, which is what is actually measured. The ratio of muon decay in orbit to \(\mu^{-}\) capture is a strong function of \(Z\), which yields a somewhat different dependence on \(Z\) for the two normalization approaches. The proposal is therefore to present conversion results as they are actually measured and compare these results with what is actually calculated. Thus the normalization would be to all fates of the muon in the atom:
\[\mathrm{CR}(\mu^{-}\mathrm{N}\to e^{-}\mathrm{N})=\frac{\Gamma(\mu^{-}+\mathrm{N}\to e^{-}+\mathrm{N})}{\Gamma_{\mathrm{total}}(\mu^{-}+\mathrm{N}\to\mathrm{all})}\,.\]
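As a concrete illustration of the relation between the two conventions for aluminum, the following minimal sketch uses the total muonic-atom lifetime and Huff factor quoted above; the free-muon lifetime is the standard value, which is not quoted in the text, and the conversion rate used at the end is purely a placeholder.

```python
# Relate the Weinberg-Feinberg normalization (to muon capture) to the proposed
# normalization (to all fates of the muon) for aluminum.
TAU_TOTAL_AL_NS = 864.0    # measured total muon lifetime in Al (quoted above)
TAU_FREE_NS = 2196.98      # free-muon lifetime (standard value, assumption here)
HUFF_AL = 0.993            # bound-muon decay-rate correction (quoted above)

gamma_total = 1.0 / TAU_TOTAL_AL_NS    # total disappearance rate
gamma_decay = HUFF_AL / TAU_FREE_NS    # decay-in-orbit rate
capture_fraction = 1.0 - gamma_decay / gamma_total

# A result quoted as R_mue = Gamma_conv / Gamma_capture translates into the
# proposed CR = Gamma_conv / Gamma_total via CR = R_mue * capture_fraction.
print(f"capture fraction in Al : {capture_fraction:.3f}")      # ~0.61
print(f"R_mue = 1e-17 -> CR = {1e-17 * capture_fraction:.1e}")
```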
We have done a new comprehensive study of the \((Z,A)\) dependence of \(\mu\to e\) conversion [68] that extends previous work in several areas. In addition to employing the normalization to the total muon lifetime instead of the \(\mu\) capture lifetime, it includes nuclear size and shape data from muonic X-rays, accounts for nuclear quadrupole deformations and treats proton and neutron distributions separately. Figure 16 shows the results, relative to the conversion rate in aluminum. While variations due to nuclear shell structure inevitably remain, they are reduced from previous treatments that normalize to \(\mu\) capture.
Figure 16: Sensitivity of CLFV experiments as a function of atomic number, relative to sensitivity in aluminum. The normalization is to total muon lifetime, rather than to muon capture.
**Exotic physics signatures**
Arguably, high-intensity muon facilities have underappreciated potential to search for light weakly coupled new physics. As an illustration of these ideas, we discuss a proposed search for two-body decays to light new particles with both pions and muons at Mu2e. Modern muon facilities offer unprecedented statistical samples of both muons and pions in a controlled, thin-target limit, with high-resolution detectors [69; 70; 71; 72]. Their flagship physics goals are often highly specialized due to the kinematic "smoking gun" signatures of CLFV occurring in small pockets of phase space that the Standard Model struggles to populate. Despite their highly specialized design, muon facilities have unique capabilities to search for physics beyond the Standard Model (BSM).
Having a broad portfolio of physics in addition to a specialized flagship measurement is beneficial for modern HEP experiments. As a relevant example, the neutrino community has increasingly embraced searches for dark sector physics [73]. Similarly, experiments designed for precise measurements of neutrino oscillation parameters such as DUNE, JUNO, and Super Kamiokande also search for supernova neutrinos and proton decay [74; 75; 76; 77; 78]. A broad physics use case (beyond CLFV) is beneficial because it both maximizes the physics impact of the experiments themselves and increases connections across different subfields of HEP.
As a concrete example of the opportunities discussed above, we now present a proposal for Mu2e to search for two-body decays \(\mu^{+}\to e^{+}X\)[79], [80]. The ability to perform a successful search in this channel relies crucially on the sample being \(\mu^{+}\) rather than \(\mu^{-}\). The statistical sample at Mu2e is so large, \(\sim 10^{18}\) stopped muons, that even calibration data (which would be \(\mu^{+}\) rather than \(\mu^{-}\)) with one-millionth the statistics, e.g. \(10^{12}\) or \(10^{13}\) stopped muons, may offer world-leading constraints. This illustrates an important point: even in "sub-optimal" configurations that sacrifice orders of magnitude in statistical power, modern muon facilities may still offer unparalleled reach for certain models of new physics. We project that Mu2e can surpass existing constraints from the TWIST-II experiment at TRIUMF [81] and provide world-leading limits on the branching ratio of \(\mu^{+}\to e^{+}X\)[79].
Charged pions also offer exciting new physics opportunities via \(\pi^{+}\to e^{+}X\). Pions are typically seen as a hindrance; for instance, Mu2e designed its beam structure around a timing cut to extinguish backgrounds from radiative pion capture [70]. Nevertheless, a modified data acquisition strategy that focuses on early periods of the beam structure can enable Mu2e, or other muon facilities, to search for BSM physics in pion decays. The experiment's thin target and high-resolution detectors differentiate it from other high-statistics pion samples. For instance, pion decay-at-rest (\(\pi\)DAR) facilities [82; 83; 84] often have huge statistical samples, but use a thick target and copious shielding, making a search for a mono-energetic electron impossible. By way of contrast, this type of signal is _precisely_ what the Mu2e tracker is designed to look for. A \(B\)-field modified to \(76\%\) of its nominal value removes all Michel electrons and enables good energy reconstruction for \(60-70\) MeV electrons. Using a bump-hunt strategy, we project that Mu2e could place world-leading limits on right-handed neutrinos mixing with electrons (Fig. 17), competitive even with the highly specialized PIONEER proposal [85]. The signal is a monoenergetic positron from \(\pi^{+}\to e^{+}N\) with \(m_{N}\gtrsim 20\) MeV to avoid overlap with the peak from \(\pi^{+}\to e^{+}\nu_{e}\) (which may also be used as a source of calibration).
In summary, muon facilities offer many unexplored sidebands that can be exploited for new physics. The statistical samples and modern detector quality allow even a single day's worth of calibration data to be impactful. More effort should be invested into studying novel signatures of BSM physics that may be unearthed.
#### iv.3.2 Technical Considerations
**Designing a Pion Production Target**
The sensitivity of the Mu2e-II experiment critically depends on the number of muons stopped in the target over the course of the experiment. The muon beam is generated from low-momentum pions produced in the production solenoid, where the rate of stopped muons directly depends on the rate of low-momentum pions produced in primary proton interactions in the production target. This production rate depends both on the production target design and on the pion production in the target material.
The rate of stopped muons per 800 MeV primary proton on the production target, simulated using the Mu2e-II Offline (which uses GEANT4 for particle transport), is shown in Table 2 for both the tungsten and carbon conveyor target designs, as well as for the Mu2e Hayman tungsten target for comparison. The carbon conveyor production target is estimated to have a 25% higher muon stopping rate than the tungsten conveyor target, though, as discussed in Section III.3.2, MARS and FLUKA do not predict the higher particle production rates with the carbon conveyor target that GEANT4 predicts. The Mu2e-era Hayman tungsten production target, which could not survive under the Mu2e-II operating conditions, but also was not optimized for the trajectory of 800 MeV protons in the production solenoid's magnetic field, has a higher muon stopping rate of about \(10^{-4}\) per proton on target. This suggests that targets optimized for Mu2e-II may be able to achieve stopping rates on the order of \(10^{-4}\) per proton on target.
A change in the muon stopping rate leads to a direct change in the single event sensitivity for \(\mu^{-}\to e^{-}\) conversion and in the rate of beam-related backgrounds, while leaving the rate of cosmic-ray-related backgrounds unaffected, assuming little change in the reconstruction efficiency and no change in selection. Figure 18 shows the expected sensitivity of the Mu2e-II experiment for varying muon stopping rates. This does not account for changes in the signal search selection to compensate for the change in the beam-related background rates, which would improve the sensitivity, or any change in the detector pileup due to changes in the particle production rates, which may improve or diminish the sensitivity.
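The scaling behind Figure 18 can be illustrated with a minimal sketch: fixed protons on target and fixed selection efficiency are assumptions, the stopping rates are taken from Table 2, and only relative sensitivities are computed.

```python
# Relative single event sensitivity (SES) versus production-target choice,
# assuming fixed POT and fixed signal efficiency: SES ∝ 1 / (muon stops per POT).
STOPS_PER_POT = {            # central values from Table 2
    "Tungsten conveyor": 7.2e-5,
    "Carbon conveyor": 9.0e-5,
    "Mu2e Hayman": 10.3e-5,
}
REFERENCE = "Tungsten conveyor"

for target, rate in STOPS_PER_POT.items():
    rel_ses = STOPS_PER_POT[REFERENCE] / rate  # < 1 means a better (smaller) SES
    print(f"{target:18s}: SES relative to {REFERENCE} = {rel_ses:.2f}")

# Beam-related backgrounds scale up with the stopping rate while cosmic-ray
# backgrounds do not, so the expected limit does not improve exactly linearly
# (cf. Figure 18).
```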
Figure 17: Projected sensitivity to right-handed neutrinos mixing with electron flavor, using Mu2e calibration data with a momentum degrader that improves the sensitivity by a factor of 7 (a factor of 5 reduction of the muon decay-in-flight background and a factor of 3 increase in stopped pions). The see-saw target corresponds to parameter space that would produce acceptable neutrino masses in a type-I see-saw framework. To appear in Ref. [79].
**Designing a detector system**
Several aspects pertaining to the design of the tracker, calorimeter, and cosmic ray veto system are detailed in their respective sections. We do not reiterate them here, but it is, of course, crucial to have excellent momentum resolution and to minimize multiple scattering, energy losses, and cosmic backgrounds in order to achieve the sensitivity goals. There are already ongoing R&D efforts to understand possible tracker configurations; so far these have focused on the use of thinner straws (8 \(\mu\)m compared to 15 \(\mu\)m in Mu2e). More exotic tracker designs could also help, and there are opportunities to contribute. For the calorimeter, a more radiation-resistant material is necessary; barium fluoride will replace the cesium iodide used in Mu2e, and there is an ongoing R&D effort to identify the best sensor technology to use with barium fluoride. There is also a need to replace the cosmic ray veto system, which is crucial to ensuring the elimination of the cosmic ray background.
| Target | R (muon stops / POT) |
| --- | --- |
| Tungsten conveyor | \((7.2\pm 0.1)\times 10^{-5}\) |
| Carbon conveyor | \((9.0\pm 0.1)\times 10^{-5}\) |
| Mu2e Hayman | \((10.3\pm 0.3)\times 10^{-5}\) |

Table 2: Stopped muon rates in the stopping target per primary proton on the production target for different production target designs using 800 MeV primary protons. The "Mu2e Hayman" target refers to the Mu2e-era tungsten production target design.
Figure 18: The Mu2e-II expected 90% CL limit in the absence of a signal as a function of the stopped muon rate. Selection efficiencies are assumed fixed; only the beam-related backgrounds and the signal rates are varied.
The physical design of the pion production target has already been discussed. There is also a need to redesign the muon stopping target, which could be further optimized for improved sensitivity. The design is a compromise: a heavier target stops more muons, but also means more energy straggling and multiple scattering, which can degrade the momentum resolution.
## Advanced Muon Facility
The Advanced Muon Facility (AMF) is a proposal for a next-generation muon facility at Fermilab that would exploit the full potential of the PIP-II accelerator to deliver the world's most intense \(\mu^{+}\) and \(\mu^{-}\) beams. This facility would enable broad muon science with unprecedented sensitivity, including a suite of CLFV experiments that could improve the sensitivity of planned experiments by orders of magnitude and, in case of an observation, study in detail the type of operators contributing to new physics (e.g., using high-Z targets in conversion experiments).
The AMF complex would use a fixed-field alternating gradient synchrotron (FFA) to create a cold, intense muon beam with low momentum dispersion. Short, intense proton pulses are delivered to a production target surrounded by a capture solenoid, followed by a transport system to inject the muons produced by pion decays into the FFA ring. Phase rotation trades time spread for momentum spread, producing a cold, monochromatic muon beam. During that time (\(\mathcal{O}(1)\) \(\mu\)s), the pion contamination is reduced to negligible levels, and the FFA injection/extraction system effectively cuts off other sources of delayed and out-of-time backgrounds. The phase rotation requires very short proton pulses, and a compressor ring is required to rebunch the PIP-II beam.
The following sections summarize the discussions held on the compressor ring, the FFA synchrotron, the conversion and decay experiments, and other opportunities with high intensity muon beams. Prioritized R&D tasks for each topic and synergies with other efforts are also highlighted.
### Proton Compressor ring
#### iv.1.1 PAR Proposal
A primary objective of the PIP-II linac upgrade is to improve Fermilab's Booster performance and thereby increase beam power to DUNE/LBNF. It will be CW-capable, leaving a large portion of beam power available for new GeV-scale experimental programs. The PIP-II era accumulator ring (PAR) has been proposed to enable the PIP-II linac to better perform both roles: to improve Booster performance and to provide a platform for new HEP experiments. In particular, a beam dump program exploring dark sector (DS) and neutrino physics called PIP2-BD [86] could operate with PAR proton pulses. For the Booster role, the circumference of PAR needs to match that of the Booster, but the location at the end of the Booster Transfer Line constrains the available area. The PAR design uses a novel "folded figure-8" architecture, allowing it to fit within a smaller footprint; see Fig. 19. A key advantage of PAR for the Booster program is a longer injection section with an extraction line for unstripped H\({}^{-}\) particles.
Due to the folded figure-8 design, PAR can accommodate two extraction sections and two RF systems. Consequently, PAR can extract to the Booster at a 20 Hz rate and extract to PIP2-BD at a 100 Hz rate or above. The two RF systems would allow PAR to operate at 44 MHz for bucket-to-bucket transfer in Booster mode or at less than 10 MHz for pulse compression for the PIP2-BD program. A third operating mode is to capture the beam at 44 MHz but extract single-bunch pulses at a high repetition rate (\(\sim 800\) Hz). Table 3 below summarizes the projected beam modes available from PAR.
A self-consistent PAR lattice design has already been developed with several key features. There is a 10 m uninterrupted injection straight section that allows unstripped H\({}^{-}\) particles to be extracted safely while also being suitable for higher energy beams (at least 1 GeV). Also within this injection straight is a 28 m dipole-to-dipole length with \(\sim\pi/2\) phase advance, leaving room for downstream collimation. In the long crossover section, there is a 12-inch parallel shift between the top and bottom rings, which means no shared beampipe is required in the crossover region.
In the other long straight section, there is a cluster of quadrupole magnets that together are referred to as a phase trombone. This phase trombone allows the tune to be changed by \(\pm 75\) degrees without impacting the beta functions at other locations around the ring. Suitable RF cavity, magnet, and kicker designs have also been identified. Tracking simulations show that the lattice is stable and features adequate betatron tune space.
#### vi.2.2 Towards a Compact PIP-II Accumulator Ring for AMF
A detailed design of the PAR has been completed and is in the process of being documented (a summary is available above). However, the same level of detail is not presently available for a more compact and higher-power accumulator ring better suited to the AMF proton compressor. In [86], a set of modest, self-consistent parameters for a Compact PIP-II Accumulator Ring (CPAR) is articulated, whereby CPAR would deliver 90 kW of 1.2 GeV protons in pulses shorter than 20 ns.
In Ref. [88], Prebys presents a framework for optimizing proton compressor performance for the AMF program and lays out the accelerator requirements for achieving 1 MW beam power from a proton compressor delivering 12.2 ns pulses.
Figure 19: PAR ring (blue) located at the end of the Booster Transfer Line (grey) leading from the PIP-II Linac (red) to the Booster (green). Adapted from B. Pellico and S. Dixon in [87].
| PAR Power Modes | Nominal - To Booster | PIP2-BD (h=4) | AMF (h=1) |
| --- | --- | --- | --- |
| Intensity/pulse (ppp) | 8-16e12 | 1-2.5e12 | 0.1-0.2e12 |
| Energy (GeV) | 0.8 | 0.8 | 0.8 |
| Rep. rate/pulse (Hz) | 100 | 200-400 | 400-800 |
| **Power (kW)** | **100-210** | **30-130** | **5-21** |
| Pulse length (ns) | 2000 | 385-170 | 22-18 |
| **Exp. duty factor** | **2e-4** | **7.7-6.8e-5** | **0.9-1.4e-5** |
| Beam capture rate (MHz) | 44 | \(<\)10 | 44 |

Table 3: Three projected beam modes available from the PAR accelerator.
To achieve the maximum 12.2 ns pulse intensity within the space-charge tune-shift limit, the proton compressor ring should (firstly) be as compact as possible and (secondly) have as large a transverse acceptance as possible. Following this framework, a ring more compact than the ORNL SNS ring but with similar transverse acceptance could provide 400 kW of beam power in 12.2 ns pulses.
The Snowmass paper [88] also laid out the proton compressor extraction scheme. The RF frequency of the ring should correspond to 12-25 ns RF buckets, with every RF bucket or every other RF bucket filled. The proton pulses are extracted bunch-by-bunch with 10-40 ns kicker rise/fall times. Since the injection scheme will require filling each bunch simultaneously, Ref. [86] proposes extracting multiple times for each injection. Indeed, a third critical design strategy should be to have the highest possible extraction kicker repetition rate to alleviate space-charge and injection requirements.
Two other design strategies for the proton compressor ring should be considered. The beam energy can be increased above the value discussed above (fourth strategy) to overcome the space-charge limit on the pulse intensity (even accounting for the fact that the ring could then be less compact or have a smaller acceptance); the cost optimization of increasing the beam energy relative to other design strategies (such as transverse acceptance) is still to be determined. Lastly (fifth strategy), the machine acceptance should be much larger horizontally than vertically, to allow the strongest dipole field strength while accepting the greatest number of particles. The proton compressor ring design can use alternating-gradient focusing to accommodate an open mid-plane in the accelerator bending arcs.
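These strategies can be related to the schematic scaling of the incoherent (Laslett) space-charge tune shift; the proportionality below is a standard textbook form quoted for orientation only, with numerical prefactors and distribution-dependent form factors omitted:
\[
|\Delta Q_{\mathrm{sc}}|\;\propto\;\frac{N\,r_{p}}{\varepsilon_{n}\,\beta\gamma^{2}\,B_{f}}\,,
\]
where \(N\) is the number of protons per pulse, \(r_{p}\) the classical proton radius, \(\varepsilon_{n}\) the normalized transverse emittance (set by the machine acceptance), \(\beta\gamma^{2}\) the relativistic factors set by the beam energy, and \(B_{f}\) the bunching factor, which grows for a more compact ring at fixed pulse length. Larger acceptance, higher beam energy, and a more compact ring therefore all relax the space-charge limit on \(N\).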
A more detailed design and optimization of the AMF proton compressor is currently underway. Our most recent design projection shows that a 1 MW proton compressor with 10-20 ns pulses may be achievable in a 1.2 GeV ring with a 95% normalized emittance of 100 \(\pi\) mm mrad in a 150 m circumference (with 800 Hz extraction and 100 Hz injection rates).
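A quick consistency check of the per-pulse intensity implied by these numbers is sketched below; the intensity is derived from the stated power, energy, and extraction rate rather than taken from the design itself.

```python
# Protons per pulse implied by a 1 MW, 1.2 GeV beam extracted at 800 Hz.
E_CHARGE = 1.602e-19   # joules per eV
POWER_W = 1.0e6        # stated target beam power (1 MW)
ENERGY_EV = 1.2e9      # stated kinetic energy per proton (1.2 GeV)
REP_HZ = 800.0         # stated extraction rate

joules_per_pulse = POWER_W / REP_HZ
protons_per_pulse = joules_per_pulse / (ENERGY_EV * E_CHARGE)
print(f"protons per pulse ~ {protons_per_pulse:.1e}")  # ~6.5e12
```

This is well above the per-pulse intensity listed for the PAR AMF mode in Table 3, which illustrates why the larger acceptance, higher energy, and high extraction rate discussed above are needed.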
Design work includes the development of the proton compressor lattice (layout of magnets), the dipole geometry, the location of critical devices, and the assessment of injection and extraction strategies. While preliminary calculations of the H\({}^{-}\) injection parameters suggest that the ring is compatible with H\({}^{-}\) foil injection, this program would clearly benefit from the development of H\({}^{-}\) laser stripping injection technology [89].
#### vi.2.3 PIP2-BD Experimental Program
Theoretical work has highlighted that sub-GeV dark sector models are able to explain the cosmological dark matter abundance, and a broad class of these models can be tested with accelerator-based fixed-target experiments. Additionally, the observation of coherent elastic neutrino-nucleus scattering (CEvNS) by the COHERENT experiment provides a novel and powerful experimental tool for beyond-the-Standard-Model neutrino physics. The PIP2-BD program looks to access this physics with GeV-scale, high-power, low-duty-cycle beams.
The proton compressor for the proposed AMF facility would make an excellent proton source for a beam dump physics program called PIP2-BD [86]. Aside from the proton source and the beam dump, the PIP2-BD experiment requires only a 100t LAr detector.
Three scenarios were developed for the PIP2-BD Snowmass paper, all more modest than that proposed for AMF; they are given in Table 4 below. As one example of the physics reach, exclusion plots for vector portal dark matter (DM) models are shown for the three scenarios in Fig. 20.
The PIP2-BD program provides a natural staging scenario for the AMF program. First, a 0.8 GeV compact proton accumulator ring delivers beam to the PIP2-BD program at 100-300 kW. Next, the proton accumulator ring is upgraded (in energy, extraction rate, and/or pulse-length)
and begins beam delivery to the AMF program at 300-1000 kW. In this case, the design of the initial proton accumulator ring needs to account for the beam requirements of both the initial beam program and subsequent upgrade.
### Muon FFA
#### vi.2.1 Motivation for an FFA ring
Muon-to-electron conversion experiments such as Mu2e or COMET use the Lobashev scheme [90]. In that scheme, a proton beam strikes a target inside a production solenoid. The produced \(\pi\)'s decay into \(\mu\)'s, and a graded magnetic field directs the \(\mu\)'s through a "transport solenoid" and into a third, "detector" solenoid. Fig. 21 shows Mu2e for definiteness.
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline
Facility & Beam energy (GeV) & Repetition rate (Hz) & Pulse length (s) & Beam power (MW) \\ \hline
PAR & 0.8 & 100 & \(2\times 10^{-6}\) & 0.1 \\
C-PAR & 1.2 & 100 & \(2\times 10^{-8}\) & 0.09 \\
RCS-SR & 2 & 120 & \(2\times 10^{-6}\) & 1.3 \\ \hline
\end{tabular}
\end{table}
Table 4: The parameters of three possible accumulator ring scenarios considered to bunch the beam current from the PIP-II linac. See text for more details.
Figure 20: The 90% C.L. sensitivity to the vector portal DM model for the three accumulator ring scenarios. Both RCS-SR and C-PAR are able to search a large parameter space, reaching the expected thermal relic density for both scalar and fermion DM. Figure and caption from Ref. [86].
This scheme has two intrinsic sources of background, both of which force a delayed live gate. First, some of the pions born in the first Production Solenoid survive to reach the final Detector Solenoid. When those pions interact with the conversion material (normally referred to as a stopping target, Al for either Mu2e or COMET), they can undergo radiative pion capture, \(\pi^{-}N\to\gamma N^{\prime}\). That photon can either convert in the stopping target or internally convert, yielding an electron in the signal region. Since pions decay with a 26 ns lifetime, a delayed live gate exponentially reduces the pion contamination (to \({\cal O}(10^{-11})\) in Mu2e). Second, the initial proton beam produces a "flash" of electrons from \(pN\to\pi^{0}\to\gamma\to e^{+}e^{-}\) that would overwhelm any detector. Again, a delayed live gate is the natural solution.
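To illustrate the scale of that exponential suppression (the \(\sim\)700 ns delay used here is roughly the Mu2e value and is an assumption for this estimate):
\[
e^{-t_{\rm delay}/\tau_{\pi}} \;\approx\; e^{-700\,\mathrm{ns}/26\,\mathrm{ns}} \;\approx\; 2\times 10^{-12},
\]
of the same order as the \({\cal O}(10^{-11})\) quoted above once the finite pulse width and the pions' time of flight and Lorentz boost are folded in.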
Unfortunately, this delay itself limits the reach of the experiments. Fig. 22 shows the timeline for Mu2e together with the lifetimes of muonic Au, Ti, and Al. We see that \(\tau_{\rm Au}\) lies well within the beam pulse and the flash period, so Au is not a practical target in this scheme. Ti is possibly workable but still presents difficulties. Going to high-\(Z\) materials such as Au is needed either to probe the nature of any CLFV interaction or to set the most stringent limits, so these limitations are intrinsic to the technique and prevent us from exploring what might be the most interesting physics.
A possible way to go beyond this limitation is to use a fixed-field alternating gradient synchrotron (FFA) [91] as a muon storage ring. Pions would decay in the ring, leaving a nearly pure muon beam, and only the relatively small fraction of "beam flash" near the central momentum of the FFA could circulate. The FFA can also use RF to perform "phase rotation," trading the pulsed time structure and broad momentum spread from the compressor ring for a more continuous beam with a small momentum spread of a few percent. Such a cold, pure muon beam is ideal for exploring high-\(Z\) materials with a short muonic lifetime. A cold beam at low energy will lose energy quickly through \(dE/dx\) and come to rest in a thin target. The CSDA range in Al for a
Figure 21: The three-part solenoid of the Lobashev scheme, as implemented in Mu2e.
Figure 22: A timeline for Lobashev-style experiments, shown for Mu2e. A \(\approx 250\) ns wide proton pulse strikes a target, producing charged pions and a “beam flash” of electrons. The muonic lifetimes of Au, Ti, and Al are shown.
Mu2e-type beam at 40 MeV/c (kinetic energy \(T=7.3\) MeV) is about \(\times 3\) greater than for a 29 MeV/c stopped beam (\(T=4\) MeV), and the Lobashev-style Mu2e beam extends to nearly 100 MeV/c, with a \(\times 25\) greater CSDA range. As the beam momentum decreases, it therefore becomes much easier to localize the conversion or decay point than for a Mu2e-type beam at about 40 MeV/c.
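The kinetic energies quoted here follow directly from the momenta via the relativistic relation (with \(m_{\mu}c^{2}=105.7\) MeV):
\[
T=\sqrt{p^{2}c^{2}+m_{\mu}^{2}c^{4}}-m_{\mu}c^{2},\qquad T(40\,\mathrm{MeV}/c)\approx 7.3\,\mathrm{MeV},\quad T(29\,\mathrm{MeV}/c)\approx 3.9\,\mathrm{MeV}.
\]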
FFAs, although invented in the 1950s, have attracted considerable attention relatively recently due to their unique properties [92]. In particular, they can provide lattices with very large acceptances in both the transverse and longitudinal planes, which is essential for muon beam applications. The scientific interest in FFAs has increased significantly since the construction of the first proton machine with RF acceleration in 2000 [93]. Since then, several FFA machines have been successfully constructed and operated, and several possible applications have been developed. A review of FFA developments was presented and discussed during the workshop.
#### iv.2.2 FFA ring
An FFA ring was proposed for the next generation of muon-to-electron conversion experiments [94] to provide the large acceptance required for the muon beam, which is produced as a tertiary beam with large emittances and momentum spread. This experiment requires a high-intensity compressed proton bunch to be sent to the pion production target, which is immersed in a high-field capture solenoid. The pions produced from proton interactions decay into muons, which are transported and injected into a small FFA ring. In this ring, the beam undergoes longitudinal phase space rotation using an RF system to transform the initial short muon bunch with a large momentum spread of \(\sim\pm 20\%\) into a long bunch with the momentum spread reduced by about an order of magnitude. The use of phase rotation motivated the name of this experimental system -- the Phase Rotated Intense Source of Muons (PRISM). The ring RF system can operate at harmonic number one, requiring low-frequency RF cavities based on Magnetic Alloy technology, which allow different RF frequencies to be mixed, creating the saw-tooth voltage waveform used to maximize the efficiency of the phase rotation [95]. The narrow-momentum-spread beam is then extracted and sent to the muon stopping target. The conceptual layout of the experiment is shown schematically in Fig. 23, and the main parameters of the facility are given in Table 5.
Figure 23: The conceptual layout of the PRISM system.
Several lattice solutions have been proposed for PRISM [96], with an initial baseline based on a scaling DFD triplet [97], which was successfully constructed and verified experimentally. The studied solutions included both scaling and non-scaling FFA lattices, consisting of regular cells or with a racetrack geometry.
The current baseline, proposed to take advantage of recent advances in FFA accelerators, was discussed in detail during the workshop. It is based on a regular FDF scaling lattice with ten identical cells, as shown in Fig. 24. The FDF symmetry can provide the required large dynamical acceptance while simultaneously increasing the drift length available for injection and extraction. The symmetric optics in the ring with identical cells avoids large variations of the \(\beta\) functions, which can drive dangerous resonances. The scaling FFA principle, thanks to its intrinsic zero-chromaticity, keeps the optics quasi-identical for off-momentum particles. In particular, the tune working point is independent of momentum and situated away from the resonance lines, which could otherwise severely diminish the dynamical acceptance. The \(\beta\) functions in one of the symmetric lattice cells and the working point (tune per cell) of the baseline FFA ring are shown in Fig. 25. The magnetic field on the median plane of the FFA ring along the reference radius, described using the Enge model of the fringe fields, is shown in Fig. 26. The performance of the FFA ring was verified in tracking studies. In order to track through the combined-function magnets, taking into account the fringe fields and large-amplitude effects, a code developed previously for full FFA machines (the FixField code) was used [98]. It is a step-wise tracking code based on Runge-Kutta integration, using Enge-type fringe fields. The results of the multiturn tracking show that the horizontal dynamical acceptance of the machine is very large and exceeds an impressive figure of \(77\,\pi\,\mathrm{mm}\,\)rad, as shown in Fig. 26 (right-hand plot). The vertical dynamical acceptance is still being optimized, with the goal of achieving at least \(5\,\pi\,\mathrm{mm}\,\)rad. The vertical dynamical acceptance is typically smaller in horizontal FFAs, and the vertical physical acceptance is in any case limited by the injection needs. The main parameters of the baseline FFA ring solution for PRISM can be found in Table 6.
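For orientation, the zero-chromaticity of a scaling FFA follows from the standard scaling field law (quoted here as a textbook relation rather than from the PRISM design itself):
\[
B_{z}(r,\theta)\big|_{z=0}=B_{0}\left(\frac{r}{r_{0}}\right)^{k}{\cal F}(\theta),\qquad r(p)=r_{0}\left(\frac{p}{p_{0}}\right)^{1/(k+1)},
\]
so that, with the field index \(k=4.3\) of Table 6, closed orbits for different momenta are geometrically similar and the tune per cell is independent of momentum.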
#### iv.2.3 Beam transport and injection into an FFA ring
A significant challenge of the PRISM system resides in the design of the efficient beam transport from the decay solenoid, where the muon beam is formed, and its subsequent injection into the FFA ring. The beam in the solenoid is very strongly focused, has symmetric coupled optics in both transverse planes, and a large natural chromaticity. Dispersion is either zero or very small even in the presence of a bending field in the curved solenoids. This beam needs to be transported with negligible losses into the FFA, which has zero chromaticity, decoupled asymmetric optics with
\begin{table}
\begin{tabular}{l c} \hline \hline
Proton beam power & \(\sim\)1 MW \\
Proton beam energy & \(\sim\)GeV \\
Proton bunch duration & \(\sim\)10 ns \\
Target type & solid \\
Pion capture solenoidal magnetic field & 10-20 T \\
Reference muon momentum & 45 MeV/c (or lower) \\
Momentum acceptance & \(\pm\)20\% \\
Minimal transverse physical acceptance (H/V) & 3.8/0.5 \(\pi\)\,cm\,rad \\
RF voltage & 3-5.5 MV \\
RF frequency & 3-6 MHz \\
Repetition rate & 100-1000 Hz \\ \hline \hline
\end{tabular}
\end{table}
Table 5: Main parameters of the PRISM facility.
intermediate focusing strength, and a relatively large dispersion function.
A conceptual design for the beam transport and injection system has been proposed (Fig. 27). The beam first needs to exit the solenoid while keeping the beam dynamics under control. The proximity of the solenoid may also saturate the downstream iron-dominated Alternating Gradient (AG) magnets, which should be avoided, and the reduction of the solenoid field therefore needs to be controlled. A system of two solenoidal coils is proposed to perform the beam matching from the quasi-uniform
Figure 24: Schematic drawing of the muon storage ring. The lattice is based on the scaling Fixed Field Alternating gradient (FFA) triplets with FDF symmetry.
Figure 25: Left: horizontal \(\beta\) function (red curve) and the vertical one (purple curve) in a symmetric cell of the baseline FFA ring for PRISM. Right: working point (tune per cell) of the baseline FFA ring solution. The second and the third order resonance lines are represented by the blue and purple lines, respectively. The bold lines denote the systematic resonances.
decay solenoid into the downstream accelerator, controlling the beam size and divergence, although a more complex system may be required. Located downstream of this matching section, the dispersion creator consists of two rectangular dipoles with equal but opposite bending angles, which generate the initial dispersion required for the FFA lattice. This is followed by the scaling FFA matching section, which matches the \(\beta\) functions and the final dispersion to the values needed for the FFA ring. Finally, a system of bending magnets and septa is used to introduce the beam into the FFA ring with a vertical offset with respect to the circulating beam. The beam is bent horizontally, and the dispersion flips sign due to the strength of the bending magnets needed. The final horizontal magnet is a Lambertson-type septum [99]. The horizontal septum needs to be followed by a vertical septum, which brings the beam closer to the closed orbit in the ring. The vertical magnets upstream and downstream of the horizontal septum provide the matching of the vertical dispersion function to zero in the ring. The beam is then passed through one cell of the FFA, where the offset in position is transformed into a vertical divergence offset, which is cancelled by kicker magnets, finishing the injection process by placing the beam on the circulating closed orbit. Further studies are needed to demonstrate the full feasibility of the described beam transport and injection system, and significant R&D is required to address its full optimisation and to design the hardware. The extraction system and the beam transport from the
\begin{table}
\begin{tabular}{l c} \hline \hline
Reference radius & 7 m \\
Length of one straight section & 3.15 m \\
Initial momentum spread & \(\sim\pm 20\%\) \\
Final momentum spread & \(\sim\pm 2\%\) \\
Reference muon momentum & 45 MeV/c (or lower) \\
Reference tunes per cell (\(q_{h}\), \(q_{v}\)) & (0.245, 0.185) \\
Number of cells in the ring & 10 \\
Field index k & 4.3 \\
Harmonic number & 1 \\ \hline \hline
\end{tabular}
\end{table}
Table 6: Selected parameters of the FFA storage ring.
Figure 26: Left: vertical magnetic field on the reference radius on the median plane of the baseline FFA ring. The field corresponds to the reference momentum of \(68\,\mathrm{MeV/c}\) for historical reasons and can easily be scaled down to a lower momentum. Right: horizontal dynamical acceptance studies in the FFA ring at the reference momentum. Particles are tracked over 100 turns with different amplitudes in the plane of study, including a small offset from the closed orbit in the other plane. The black ellipse represents the acceptance of \(77\,\pi\,\mathrm{mm}\,\mathrm{rad}\).
FFA ring into the experiment can be realized as a reverse copy of the injection system, but may be significantly simpler, as the momentum spread of the beam is a factor of ten smaller. The main challenge is the rise time of the extraction kicker(s), which needs to be shorter than that of the injection kickers: the bunch at extraction is significantly longer, leaving a shorter gap in which the kicker field must rise. However, the use of conventional magnets such as quadrupoles in the transport line downstream of the extraction point may be feasible due to the reduced momentum spread.
The FFA can operate with both signs of muons, one at a time, if the machine fields can be reversed. However, it might be possible to apply the concept of the singlet FFA lattice [100], in which beams of both signs can circulate in the same direction simultaneously. Further studies are needed to investigate this idea.
### Conversion experiments
The physics case for a muon-to-electron conversion experiment at AMF was strongly highlighted during the workshop, with potential sensitivity gains of up to two orders of magnitude beyond what Mu2e-II aims to realize. Key design elements draw inspiration from the PRISM/PRIME concept [94], which introduced a fundamental shift in the experimental technique for searching for muon conversion. The introduction of a phase-rotated, slow muon beam in an FFA ring alleviates several limitations of the scheme adopted by the current collaborations [101; 102], and opens the possibility of measuring the conversion rates using high-Z target material. Such a measurement would be critical to study the source of New Physics [63], should a signal be observed. The following sections review the challenges of the current experimental approach, the broad requirements for a conversion experiment at AMF, and the main R&D efforts needed to develop a future detector concept.
#### vi.3.1 Current approach and a new FFA design
The COMET and Mu2e experiments are based on similar concepts to produce muons and detect conversion electrons, albeit with some differences. In both cases, muons are produced by a proton beam hitting a primary target in a solenoid and are transported to a stopping target to be captured by Al nuclei. COMET includes a C-shaped magnetic spectrometer between the stopping target and the detector, filtering out neutral particles, low-energy electrons, and positively charged
Figure 27: Conceptual layout of the muon beam transport from the decay solenoid and injection to the FFA ring. The beam is moving from right to left. Injection kickers located in the ring are not shown.
particles. This results in a lower occupancy and radiation dose in the detector, but precludes the possibility of measuring both positively and negatively charged tracks at the same time. By contrast, the Mu2e(-II) detector is (will be) placed just downstream of the stopping target and features an annular design to be insensitive to low-energy particles. While this geometry presents more challenges in terms of track reconstruction and radiation dose, it enables the simultaneous measurement of \(\mu^{-}\to e^{+}\) conversion and muon-to-electron conversion.
The sensitivity of these experiments is limited by several factors, including the background due to the beam flash and pion decays (which also prevents measurements with high-Z targets), the background induced by out-of-time protons (characterized by the beam extinction factor), the tracker momentum resolution needed to distinguish muon decays in orbit (DIO) from the signal, the cosmic-ray-induced background, the radiation dose in the detector, and the trigger latency.
To overcome some of these limitations, a new design based on a fixed field alternating gradient synchrotron (FFA) has been proposed [94]. The pion/muon beam emerging from the primary production target is directed into an FFA ring and undergoes phase rotation, trading time resolution for energy resolution. A momentum spread of \(\sim 2\%\) is reached after six turns. During that time (\(\mathcal{O}(1)\)\(\mu\)s), the pion contamination is reduced to negligible levels. Using a primary proton beam energy below the antiproton production threshold would also eliminate the corresponding background, and the FFA injection/extraction system effectively cuts off other sources of delayed and out-of-time backgrounds.
As a result, an ultra-pure, cold, monochromatic muon beam is available at the exit of the FFA. Such a beam would enable measurements with short muonic-atom lifetimes, i.e., high-Z target materials. In addition, the average beam energy can be significantly lower than that available at COMET/Mu2e, potentially as low as \(20-30\,\mathrm{Me\kern-1.0ptV}\). Almost all muons could be stopped in a thinner stopping target, reducing energy-loss fluctuations and improving the conversion-electron momentum resolution.
Limitations arising from the DIO background, the cosmic-ray-induced background, and secondary particles produced by muon captures still need to be addressed. Adding a magnetic spectrometer after the stopping target would filter out neutral and low-energy particles, significantly reducing the detector occupancy and radiation dose. The PRIME proposal includes a 540-degree spectrometer (the so-called "Guggenheim scheme"), but other designs could be considered. While this approach is not charge-symmetric, it should be noted that there is no need to determine the radiative pion capture (RPC) background in situ with positrons.
#### vi.2.2 Momentum resolution requirements
The DIO background scales with the number of stopped muons and can only be distinguished from signal electrons by their momenta. A method to approximately quantify the relationship between signal resolution and the statistical power of a conversion experiment has recently been developed, assuming all other sources of background are negligible [103]. A resolution function is convolved with both the signal and DIO spectra. This function comprises two components: a Landau distribution with width \(\sigma\) describing the core and the low-side tail (broadly accounting for the effects of charged particles traversing the detector material), and a power law with power \(s\) modeling the high-side tail arising from mis-reconstructed tracks. The results for a few scenarios on Al-27 and Ti-48 are shown in Figure 28. For a moderate gain in muon statistics (\(\mathcal{O}(10^{17})\)), improving the core resolution from \(0.160\,\mathrm{Me\kern-1.0ptV/c}\) to \(0.05\,\mathrm{Me\kern-1.0ptV/c}\) has a larger impact than removing the tail. However, reducing the tail has the largest effect once higher muon statistics (\(\mathcal{O}(10^{21}-10^{22})\)) are reached. In order to achieve sensitivity below the \(10^{-19}\) level, improvements of both the core and the tail are necessary.
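The convolution at the heart of this method is simple to reproduce in a toy form. The sketch below assumes an \((E_{\rm end}-p)^{5}\) shape for the DIO spectrum near the endpoint and uses the Moyal distribution as a stand-in for the Landau core; the tail fraction, tail power, and signal window are illustrative choices of ours, not the values of Ref. [103]:

```python
"""Toy version of the resolution-vs-sensitivity exercise: smear the DIO
endpoint spectrum and a delta-function signal with a Landau-like core plus
a power-law high-side tail, then count what leaks into a signal window."""
import numpy as np
from scipy.stats import moyal   # Moyal distribution as a convenient Landau stand-in

E_END  = 104.97   # MeV/c: conversion-electron momentum / DIO endpoint for Al (approximate)
SIGMA  = 0.160    # MeV/c: core resolution width (one of the scenarios quoted in the text)
S_TAIL = 4.0      # assumed power of the high-side tail from mis-reconstructed tracks
F_TAIL = 1e-3     # assumed fraction of tracks in that tail

dp = 0.005                                    # MeV/c grid spacing
p  = np.arange(E_END - 5.0, E_END + 2.0, dp)  # momentum grid

# DIO spectrum near the endpoint falls roughly as (E_end - p)^5
dio = np.where(p < E_END, (E_END - p) ** 5, 0.0)
dio /= dio.sum() * dp                         # normalize to unit area on this grid

# Response kernel: mirrored Moyal core (long low-side tail from energy loss)
# plus a power-law high-side tail starting a few sigma above the peak
x    = np.arange(-3.0, 3.0, dp)
core = moyal.pdf(-x, scale=SIGMA)
core /= core.sum() * dp
tail = np.where(x > 5 * SIGMA, (x / (5 * SIGMA)) ** (-S_TAIL), 0.0)
tail /= tail.sum() * dp
resp = (1.0 - F_TAIL) * core + F_TAIL * tail

dio_smeared = np.convolve(dio, resp, mode="same") * dp  # smeared DIO spectrum
sig_smeared = np.interp(p, E_END + x, resp)             # delta signal convolved with the response

# Signal efficiency and DIO leakage in an illustrative +-0.5 MeV/c window
window  = (p > E_END - 0.5) & (p < E_END + 0.5)
eff_sig = sig_smeared[window].sum() * dp
f_dio   = dio_smeared[window].sum() * dp
print(f"signal efficiency in window:      {eff_sig:.3f}")
print(f"DIO fraction leaking into window: {f_dio:.2e}")
```

Varying SIGMA, S_TAIL, and F_TAIL in such a toy reproduces the qualitative behavior described above: the core width dominates at moderate statistics, while the tail becomes the limiting factor once the stopped-muon count grows by several orders of magnitude.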
#### vi.3.3 Tracking
The quantitative analysis described above demonstrates the necessity of improving the momentum resolution to surpass the expected sensitivity of Mu2e-II. In both Mu2e(-II) and COMET, tracking detectors in a constant magnetic field are employed to sample the electron trajectory for this purpose. In Mu2e(-II), a low-mass straw-tube tracker is designed with a hole through the center to maintain acceptable occupancy levels. Conversely, the COMET detection section features a C-shaped filtering spectrometer that allows for a straw-tube tracker without a central hole. Since the occupancy issue is exacerbated by increasing muon beam intensity, the PRIME design assumes the inclusion of a spectrometer between the stopping target and the tracking detector for the next-generation experiment. The general consensus reached during the workshop discussion is that AMF should plan for a filtering spectrometer as well.
Assuming the presence of a magnetic spectrometer, the tracking environment of the AMF conversion experiment would differ significantly from Mu2e(-II). Several improvements in the tracking environment at AMF highlight the benefits of the FFA magnet and spectrometer solenoid in the detection region. These advantages include eliminating the need for a hole in the center of the detector, virtually eliminating beam-flash background particles and radiation, facilitating straightforward shielding against muon capture products (such as neutrons, heavy charged particles, and gammas), filtering such that only high-momentum electrons reach the detector, and minimizing the impact on the tracker when changing the stopping target material, thanks to the large physical separation between the stopping target and the tracker. However, a few downsides exist, such as the potential need for additional shielding materials, which could make accessing the tracking detector for repairs more challenging, and the loss of the ability to measure positively charged particles.
To work towards leveraging the improved tracking environment and ultimately enhancing the momentum resolution, several tracking technologies were discussed. Continuing the legacy of Mu2e(-II) and COMET by utilizing a straw tube proportional tracker is attractive for several
Figure 28: The relationship between experimental sensitivity, signal resolution, and muons in acceptance for an Al-27 target (blue) and Ti-48 target (red).
reasons. This technology allows for a highly segmented detection volume, offers good intrinsic momentum resolution, and benefits from the extensive expertise developed by the collaborations involved in current conversion experiments. However, manufacturing a straw tube tracker is challenging, especially if efforts are made to minimize tracker mass by using thinner straws. Additionally, the large number of straw tubes in the detector provides many potential sources of gas leaks. An alternative option is employing a multi-wire proportional chamber (MWPC) for tracking, which inherently has less mass due to its single large gas volume. Furthermore, MWPCs are easier to manufacture than straw trackers. One major drawback of MWPCs is their inferior segmentation compared to straw trackers. Consideration may also be given to Gas Electron Multiplier (GEM) trackers, which establish a high-voltage potential across holes in a polymer sheet. GEM trackers offer advantages such as ease of manufacture, flexibility in geometric design, and a single large gas volume. However, drawbacks of GEM trackers include limited experience in their use and a potentially higher intrinsic mass. Exploring newer technologies represents another viable avenue, as a novel technique may better suit the needs of AMF, although this route very likely entails the most substantial R&D effort.
It is important to note that the development of a tracking detector for the conversion experiment holds potential synergies with positive muon experiments conducted at AMF. R&D efforts in tracking technology could also benefit from collaborations with the Instrumentation Frontier, particularly the Micro-Pattern Gaseous Detector sub-group and the Solid State Detectors and Tracking sub-group.
#### iv.1.4 Simulation Scheme
Targeted simulations are a crucial tool to guide the design process forward. While most of the Mu2e-II simulation efforts so far have been reasonably straightforward extensions of the mature Mu2e effort, AMF will explore a much broader landscape of possibilities and it will be difficult to directly re-use the Mu2e simulation code. However, there are some benefits to starting fresh and using experiences from Mu2e(-II) to organize the simulations in a sensible and scalable way from the beginning.
The simulation scheme should be derived from the current and anticipated future needs. We must consider the wide range of studies that will need to be pursued and the many different software packages that will be employed in the various stages of the simulation. Given these requirements, it was advocated to develop compartmentalized simulations. A sketch of the simulation stages and potential software tools is given in Table 7.
One crucial element of compartmentalization is the connection of the different stages of the simulation. This is something that was not done for Mu2e, and we should make an effort to integrate this aspect from the start. A simple data format, for example HDF5, should be sufficient for the early studies, as the produced samples should have a small footprint. Ultimately, a well-thought-out scheme should be devised.
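As an illustration of what such a stage-to-stage interface might look like, a flat particle list in an HDF5 file is already sufficient for early studies. The file layout, field names, and units below are placeholders of ours, not an agreed-upon format:

```python
# Hypothetical minimal interface between simulation stages: each stage reads a
# particle list produced by the previous one and writes its own output in the
# same layout.  Field names and units are illustrative only.
import numpy as np
import h5py

particle_dtype = np.dtype([
    ("pdg", np.int32),                                           # PDG particle ID
    ("x", np.float64), ("y", np.float64), ("z", np.float64),     # position [mm]
    ("px", np.float64), ("py", np.float64), ("pz", np.float64),  # momentum [MeV/c]
    ("t", np.float64),                                           # time [ns]
    ("weight", np.float64),                                      # statistical weight
])

def write_stage(filename, particles, stage_name):
    """Write one stage's output particle list plus minimal provenance."""
    with h5py.File(filename, "w") as f:
        ds = f.create_dataset("particles", data=particles, compression="gzip")
        ds.attrs["stage"] = stage_name

def read_stage(filename):
    """Read a particle list and its provenance attributes."""
    with h5py.File(filename, "r") as f:
        return f["particles"][...], dict(f["particles"].attrs)

# Toy example: pretend the "pion production" stage produced two muons.
muons = np.array([(13, 0.0, 0.0, 0.0,  5.0, 0.0, 40.0, 0.0, 1.0),
                  (13, 1.0, 0.0, 0.0, -3.0, 2.0, 38.0, 0.1, 1.0)],
                 dtype=particle_dtype)
write_stage("pion_production_out.h5", muons, "pion_production")
particles, meta = read_stage("pion_production_out.h5")
print(meta["stage"], len(particles), "particles")
```

The point of such a sketch is less the specific fields than the discipline of a single, versioned layout that every stage in Table 7 can read and write.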
#### iv.1.5 Cosmic Rays
Cosmic ray muons can decay or interact with material near the stopping target to mimic signal electrons. In fact, the cosmic ray background is expected to be at the level of 1 conversion-like electron / day, the largest source in the Mu2e experiment [104]. A dedicated cosmic ray veto (CRV) detector is used to maintain this background at an acceptable level, but its performance must be further improved to extend the sensitivity of conversion experiments. There are several reasons to believe this can be achieved.
First, the AMF detector region could be farther away from the primary target region. This region is close to the CRV in Mu2e(-II), resulting in large particle fluxes (mostly neutrons and gammas) producing false coincidences in the CRV and causing dead time. Keeping the detector region well separated from the production region will mitigate this issue. Second, high rates in the CRV near the stopping target could be significantly reduced if the concrete shielding blocks are enriched with high-Z materials (see Section III.6). Work is planned for Mu2e-II to fabricate and test the performance of such shielding blocks. Third, increased overburden would reduce the hadronic component of cosmic rays, a contribution that becomes significant for Mu2e-II. Finally, the muon beam frequency at AMF would be much lower than that foreseen for Mu2e-II, significantly reducing the exposure to cosmic-ray background.
#### iii.6.6 R&D Projects
The discussions during the parallel session yielded a number of R&D projects which are enumerated below and categorized by relative priority.
**High Priority**
* What type of beam can the FFA provide? Do we need to plan for an induction linac to lower the central momentum? This would have a large impact on designs.
* Map out the acceptance of different proposed experiment configurations (e.g. C-shaped vs. S-shaped spectrometer solenoid in the detection region)
* Utilize the sensitivity tool to study a few higher-Z target materials (e.g. Au).
* Which tracking technologies are people interested in pursuing? Determine the R&D required for these options.
* Develop a scheme for cohesive, compartmentalized simulations and work out scheme for connecting different stages of the simulation.
\begin{table}
\begin{tabular}{l|l|l}
Simulation Stage & Subtasks & Potential Software \\ \hline
Proton beam \(\rightarrow\) Production Target & Beam transport & G4Beamline \\
Pion production & Radiation dose, power/heat, pion transparency, extinction & Geant4/Mars \\
FFA (muon transport and preparation) & Accelerator dynamics, insertion & G4Beamline \\
Muon stopping & Stopping target, muon intensity, daughter production & Geant4 \\
Muon daughter transport & DIO collimation, signal selection, solenoid design, EM field design & G4Beamline \\
Muon daughter detection & Detector solenoid, detector design, pileup & TrackToy \\
\end{tabular}
\end{table}
Table 7: A sketch of the different stages of simulation that a conversion experiment at the AMF may need. Details of the stages and potential software to use are included.
* Determine consequences of running without \(e^{+}\) (e.g. \(\mu^{-}\to e^{+}\) signal channel, calibrations).
**Medium Priority**
* Update the PRISM/PRIME diagram to include the other experiments of AMF that would utilize the FFA.
* Start exploring CRV requirements and design. Keep cosmic-induced background in mind when exploring detector designs.
* Explore shielding and design improvements.
* Understand what background particles come out of the FFA.
**Low Priority**
* Determine which tools should be used for the different steps in the simulation/analysis chain.
* Explore exotic solutions to e\({}^{+}\) measurements (e.g. assay the stopping target, multiple tracking paths).
* Assess feasibility of track back-extrapolation in different designs.
### Decay experiments
One of the most appealing prospects of AMF is the possibility of gathering in the same laboratory a vast community working on different experiments searching for New Physics in muon interactions, in order to take advantage of possible synergies. In this respect, besides the design of a next generation of muon-to-electron conversion experiments, it is critical to scrutinize, already in this conceptual design phase, both the machine and detector requirements posed by muon decay experiments. A similar effort was promoted in 2021 by the Particle Physics Laboratory of the Paul Scherrer Institut (PSI, Switzerland), in view of the upgrade of the muon beam lines foreseen for 2027-2028 (the High Intensity Muon Beam project, HIMB), and summarized in a written document [105].
#### vi.4.1 Beam requirements
Searches for rare muon decays require intense beams of positive muons (to avoid the energy spectrum deformation produced by the capture of negative muons in the nuclear field) with a continuous time structure (to minimize the accidental time coincidence of multiple muon decays). Moreover, the kinematical resolutions are limited by the interaction of the decay products in the muon stopping target, the thickness of which has to be minimized.
The standard solution is the use of surface muons (\(28.5\,\mathrm{MeV}/c\) muons produced by pion decays at rest on the surface of the proton target), slowed down in a thin degrader and stopped in a thin target, with thicknesses optimized so that the Bragg peak of the muons is fully contained in the latter. In this strategy, starting from a relatively low momentum is critical in order to have a small straggling of the total range and hence the possibility of using a thin stopping target (a few hundred microns of plastic material). As an alternative, in particular if the maximum available beam rate is too high to be tolerated by the experiments, one can consider stopping only a fraction of the muons in a thinner target, letting the rest of the beam pass through it. As a drawback, one
has to prevent muons from decaying right after the target, since the products would contribute to the backgrounds without enhancing the signal. This implies that vacuum (and hence vacuum-compatible detectors) has to be used around the target, whereas both ongoing experiments (MEG II and Mu3e) are performed in a helium volume at atmospheric pressure.
#### vi.2.2 Experimental signatures and general detector requirements
The two golden channels for LFV in muon decays are \(\mu^{+}\to e^{+}\gamma\) and \(\mu^{+}\to e^{+}e^{+}e^{-}\), but other, more exotic decays are also of interest in some specific New Physics models [106]. In the search for the \(\mu^{+}\to e^{+}\gamma\) decay, using muons at rest, a positron and a photon are searched for, emitted back-to-back, each with an energy equal to half the mass of the muon, \(\sim 52.8\,\mathrm{Me\kern-1.0ptV}\). When very high muon beam intensities are used (up to \(5\times 10^{7}\,\mu/\mathrm{s}\) in MEG II), the dominant source of background is the accidental coincidence of positrons and photons from two different muon decays. Hence, the time difference between the two particles is an important variable for discriminating signal against background, along with the particle energies and their relative angle. These kinematic variables also allow suppression of the sub-leading background coming from the radiative muon decay \(\mu^{+}\to e^{+}\nu_{e}\overline{\nu}_{\mu}\gamma\).
The \(\mu^{+}\to e^{+}e^{+}e^{-}\) search is performed by requiring two positrons and one electron from a common vertex, with an invariant mass equal to the muon mass. The three-track vertex strongly suppresses the accidental coincidences, so that the \(\mu^{+}\to e^{+}e^{+}e^{-}\nu_{e}\overline{\nu}_{\mu}\) background also plays an important role.
Both channels, as well as the more exotic searches, require the reconstruction of electrons and/or positrons with a relatively low momentum, around and below \(50\,\mathrm{Me\kern-1.0ptV}\). In this range, magnetic spectrometers with tracking detectors outperform calorimeters for the measurement of the energy, while at the same time providing the necessary resolution on angles and vertex. The reconstruction is strongly affected by the interaction with the materials of and surrounding the target, and with the materials of the detectors. For this reason, extremely light trackers have to be used. Large-volume gaseous detectors (drift chambers and time projection chambers) give the best compromise of single-hit resolution and material budget, but poor granularity and significant aging rates will make their use problematic at the higher rates of the future facilities. State-of-the-art, 50 \(\mu\)m-thick monolithic silicon pixel sensors (MAPS) can provide the necessary rate capabilities with an acceptably low material budget, and the next generation of experiments could take advantage of even thinner devices. Both silicon and large gaseous detectors need to be complemented with faster detectors to achieve the time resolution that is necessary for the rejection of accidental backgrounds.
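The material-budget argument can be made quantitative with the standard Highland parameterization of multiple scattering (the numerical example is an illustrative estimate, not a detector specification):
\[
\theta_{0}=\frac{13.6\,\mathrm{MeV}}{\beta c p}\,z\,\sqrt{\frac{x}{X_{0}}}\left[1+0.038\,\ln\!\left(\frac{x\,z^{2}}{X_{0}\,\beta^{2}}\right)\right];
\]
for a 50 MeV/\(c\) positron crossing a single layer with \(x/X_{0}\approx 10^{-3}\), this gives \(\theta_{0}\approx 6\) mrad per layer, already comparable to the angular resolutions at stake in these experiments.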
Photons can be reconstructed using calorimeters, with an appropriate choice of the scintillating material. The LXe calorimetry developed for the MEG experiment provides excellent performance at low energies, but the cost and handling of large xenon detectors make the construction of large-acceptance detectors problematic. The use of innovative crystals has been investigated in the light of future precision physics experiments [107], although large-scale production and cost can be an issue for these kinds of materials as well. For some specific applications like \(\mu^{+}\to e^{+}\gamma\), where the accidental background in any case limits the acceptable rate of detected photons, an alternative solution is the conversion of photons in thin layers of material and the reconstruction of the outgoing \(e^{+}e^{-}\) pair in a magnetic spectrometer: the higher energy resolution that can be achieved with this technique and a higher muon rate can be exploited to compensate for the very low efficiency of the conversion process.
#### vi.3.3 \(\mu^{+}\to e^{+}e^{+}e^{-}\)
At present, the best limit on the branching ratio (BR) of \(\mu^{+}\to e^{+}e^{+}e^{-}\) comes from the SINDRUM experiment, \(BR(\mu^{+}\to e^{+}e^{+}e^{-})<1.0\times 10^{-12}\), and dates back to the late 1980s [108]. A new experiment called Mu3e is under construction at PSI [109]. The experiment has already been designed to be ultimately operated at HIMB, with more than \(10^{9}\,\mu\)/s, aiming at a final BR sensitivity around \(10^{-16}\).
As a consequence, the detector is already designed to cope with an environment very similar to the one expected at AMF. The detector is composed of cylindrical tracking stations made of HV-MAPS, complemented by layers of plastic scintillating tiles and fibers read out by silicon photomultipliers (SiPM), within a \(\sim 1\) T magnetic field. A sketch of the detector is depicted in Figure 29.
The HV-MAPS have \(80\times 80\,\mu\)m\({}^{2}\) pixels and 50 \(\mu\)m thickness, resulting in a material budget of \(X/X_{0}\approx 0.1\%\) per layer. In the high-occupancy environment produced by muon stopping rates exceeding \(10^{9}\)\(\mu\)/s, the relatively poor time resolution of the HV-MAPS, \(O(50\) ns\()\), would compromise the correct association of hits to different tracks. The scintillation detectors are therefore critical for isolating hits produced by the same particle.
The two innermost tracking layers will provide an accurate vertexing with \(<200\)\(\mu\)m resolution, while the geometry of the outer layers is optimized to get \(<500\,\)keV momentum resolution. The scintillation detectors will also measure the time of the particles with a resolution better than 100 ps, allowing for an effective rejection of accidental backgrounds.
#### vi.3.4 \(\mu^{+}\to e^{+}\gamma\)
The best limit on \(\mu^{+}\to e^{+}\gamma\) is much more recent, \(BR(\mu^{+}\to e^{+}\gamma)<4.2\times 10^{-13}\), published by the MEG collaboration in 2016 [110], and the first result of the upgraded experiment MEG II [111] is expected within 2023. Beyond that, the future of this quest is much more unsettled. It was investigated in Ref. [112], and more recent advances were considered in the HIMB science case document [105].
In contrast to \(\mu^{+}\to e^{+}e^{+}e^{-}\), the photon reconstruction cannot by any means provide the sub-millimeter resolution that is necessary for effective two-particle vertexing, so the \(\mu^{+}\to e^{+}\gamma\) search is much more limited by the accidental background. Since this background originates from accidental time coincidences, it scales with the square of the muon beam rate and poses a severe limitation on the maximum rate that can be profitably exploited. Indeed, for a given beam rate, if the other discriminating variables (the \(e^{+}\) and \(\gamma\) energies, their relative angle, and their relative time) do not provide enough separation to keep the background yield at \(O(1)\) over the full lifetime of the experiment, the sensitivity, scaling as \(S/\sqrt{B}\), becomes independent of the beam rate; increasing the rate would only overcrowd the detectors without any real statistical advantage. In the MEG II experiment, the
Figure 29: Sketch of the Mu3e experiment at PSI.
optimal beam rate is found to be around \(5\times 10^{7}\,\mu\)/s, below the maximum rate already achievable at PSI.
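The rate-independence argument can be made explicit. With muon rate \(R_{\mu}\) and live time \(T\), the signal scales as \(S\propto R_{\mu}T\) while the accidental background scales as \(B_{\rm acc}\propto R_{\mu}^{2}T\), so that once accidentals dominate
\[
\frac{S}{\sqrt{B_{\rm acc}}}\;\propto\;\frac{R_{\mu}T}{\sqrt{R_{\mu}^{2}T}}\;=\;\sqrt{T},
\]
independent of the beam rate.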
Additionally, the detectors of the MEG II experiment are not designed to cope with the very high occupancy produced by a muon beam rate increased by a factor 10 to 100 at future facilities: both in the positron spectrometer (instrumented with a high-granularity drift chamber) and in the LXe calorimeter, pileup events are already impacting the reconstruction of the events at \(5\times 10^{7}\,\mu\)/s; moreover, at \(10^{9}\,\mu\)/s, aging effects in the drift chamber would become unmanageable if specific solutions are not found to strongly suppress them.
New options need to be investigated, aiming at an improvement of the resolutions (to suppress the accidental background) and an increase of the rate tolerance of the detectors. A few technical solutions were already investigated in the HIMB science case document.
On the positron side, a silicon detector _a la_ Mu3e would provide the necessary high rate capabilities, although the momentum resolution could be slightly worse, due to a less favorable compromise between material budget and number of hits. In this respect, a next generation of HV-MAPS, thinned down to 25 \(\mu\)m, together with an optimization of the detector geometry and magnetic field, could close the gap with gaseous detectors. As of now, the silicon detector solution looks like the best option for the next-generation experiments, although some ideas still circulate in the community for improving the rate capabilities of gaseous detectors with new geometries (transverse drift chambers, transverse drift tubes _a la_ Mu2e, radial time projection chambers), new wire materials, and new, hydrocarbon-free gas mixtures.
On the photon side, the conversion technique seems to be the most promising one to fully exploit higher beam rates. The basic idea is that, with a beam rate up to 100 times higher but a total conversion efficiency not much larger than a few percent, the total photon rate (and hence the accidental yield) would not significantly increase, while the better resolutions achievable in the photon reconstruction would allow the backgrounds to be rejected more effectively. Moreover, a larger acceptance could be achieved at lower cost, and the directional information extracted from the \(e^{+}e^{-}\) pair would allow an \(e\gamma\) vertex to be built, which would provide a further handle to reject the accidental coincidences.
In the last couple of years, significant advances have been made in the understanding of the performance of the conversion technique and in the conceptualization of a photon detector for a future experiment. Figure 30 shows a possible arrangement for a photon detector based on the conversion technique.
Instead of a thin and dense passive converter, a scintillating crystal instrumented with SiPMs is considered. It would allow the energy deposit in the converter itself to be measured, which is the main limiting factor for the energy resolution when a passive converter is used.
Tracking the \(e^{+}e^{-}\) pair with excellent resolution and good efficiency will require at least two or three layers of position-sensitive detectors in close proximity to the conversion layer (within about 1.5 cm for a 1 T magnetic field). Although silicon detectors would fit this design, the necessity of stacking multiple conversion layers to increase the efficiency would make this option impractical from the technical and economic point of view, considering that tens of square meters of detectors would be needed to guarantee a good acceptance. It can also be shown that the geometry of drift chambers with stereo wires does not fit this design, while conventional time projection chambers would be too long to provide the necessary resolutions when operated with low-mass gas mixtures. The option that is currently under consideration is a time projection chamber with radial drift. Simulations are ongoing to assess the performance of a detector with such a geometry.
Finally, the conversion technique should be adopted without detriment to the good photon time resolution (\(\sim 70\) ps) characterizing the MEG and MEG II experiments. It could be achieved with the addition of gaseous detectors with fast timing, like multi-gap resistive plate chambers (mRPC), just in front of the active converter. These detectors would be light enough to not deteriorate the
performance of the active converter, while providing the necessary time resolution on the \(e^{+}\) and \(e^{-}\) after they have passed through the spectrometer.
Simulations and test-beam measurements on scintillating crystals for the active converter are ongoing. Preliminary results have been shown, indicating that four layers of 3 mm thick LYSO crystals could give a conversion efficiency of 10% and an energy resolution of about \(140\,\mathrm{k}\mathrm{e}\mathrm{V}\), to be combined with the resolution of the \(e^{+}e^{-}\) tracker, the simulation of which is ongoing.
Other innovations were also briefly discussed at the workshop, namely an active target to get a direct information on the positron vertex and multiple thinner targets in vacuum.
All these studies give a preliminary indication that the performance envisaged in Ref. [112] could be realistic, so that a BR sensitivity approaching \(10^{-15}\) could be reached with beam intensities exceeding \(10^{9}\)\(\mu\)/s, as shown in Figure 31.
#### v.1.5 Exotic decays
More exotic channels can be considered, where the muon decays to new, non-standard particles, which can in turn decay into standard particles or remain invisible (either because they are stable or because their products do not interact with the detector). A review of several channels and searches can be found in Ref. [106].
As a case study, the decays of muons with lepton-flavor-violating axion-like particles (ALPs) in the final state were discussed at the workshop. The two channels of interest are \(\mu^{+}\to e^{+}a\) and \(\mu^{+}\to e^{+}a\gamma\), where the ALP \(a\) escapes detection.
Figure 30: The possible layout of a \(\mu^{+}\to e^{+}\gamma\) experiment based on a silicon positron tracker and a photon conversion detector (top), with a detail of a possible photon detector concept (bottom).
When the mass of the ALP is zero or very low compared to the energy resolution of the detector, the search for \(\mu^{+}\to e^{+}a\), which is based on the reconstruction of a monochromatic positron, is affected by large systematic uncertainties, because a peak at the kinematic end point of the muon decay spectrum can easily be faked by a miscalibration of the momentum scale (e.g., in the calibration of the magnetic field). The most recent limit, from the TWIST collaboration [81], is the result of a careful exploration of possible miscalibrations at the \(\,\mathrm{keV}\) level, and strong improvements are difficult to envisage.
For higher masses, similar searches in experiments tailored for \(\mu^{+}\to e^{+}\gamma\) are typically limited by the momentum acceptance of the positron spectrometer, which is optimized for the highest momenta, but there is an intermediate range, where the TWIST search is statistically limited, that could already be explored in more detail in MEG II. It should be noted that, although the trigger of MEG II requires the coincidence of a positron and a photon, the large majority of the acquired events are of accidental nature, and hence the observed positron can still be used to reconstruct signatures without photons.
In any case, given the possibility of reconstructing the photon with high resolution, the \(\mu^{+}\to e^{+}a\gamma\) channel can be even more interesting in MEG II and future \(\mu^{+}\to e^{+}\gamma\) experiments, considering the background discrimination power provided by the reconstruction of a missing mass. This possibility was examined in detail in Ref. [113]. In this case, the available statistics is limited by the trigger requirements for the \(\mu^{+}\to e^{+}\gamma\) search, which restrict the available phase space to a small region at high energies and large \(e\gamma\) angle. The outcome of the studies done so far is that it would be advantageous to allocate to this specific search a relatively small data-taking period (a few percent of the total beam time), during which the trigger requirements are relaxed and the beam rate reduced to enhance the prompt-to-accidental yield ratio. This new data-taking strategy would strongly enhance the sensitivity of MEG II to \(\mu^{+}\to e^{+}a\gamma\): with one month of data taking, assuming for instance V-A currents, the current limits on the ALP couplings would be surpassed by a factor of up to 5, as shown in Fig. 32. Detailed simulations and trigger rate estimates are ongoing within the collaboration to confirm these results.
Since the sensitivity of these searches is also limited by the background from accidental coincidences, the same considerations made for \(\mu^{+}\to e^{+}\gamma\) at future high-intensity facilities apply in this case as well.
Figure 31: Sensitivity of next-generation \(\mu^{+}\to e^{+}\gamma\) experiments under different assumptions for the detector design. See Ref. [112] for a detailed description of the underlying assumptions.
### Other experiments
#### iv.5.1 Session Summary
In this session of the workshop, we discussed the current status and future plans of various muonium experiments and proposals. Muonium (commonly referred to as M or Mu, by particle or nuclear physicists, respectively) is a \(\mu^{+}e^{-}\) bound state that is produced when \(\mu^{+}\) stop in matter. It is a purely leptonic, hydrogen-like atom where the nucleus consists of a \(\mu^{+}\). This makes it an excellent venue for precision tests of QED since, e.g., nuclear size effects do not need to be considered. Muonium experiments typically employ "surface" muon beams: beams of antimuons produced when positive pions decay at rest inside the production target; this two-body decay yields a forward-going, quasi-monoenergetic (\(\approx\) 4 MeV, or \(\approx\) 28 MeV/\(c\)) \(\mu^{+}\) beam that is 100% backward-polarized and can be stopped to produce muonium in thin conversion targets.
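The quoted beam energy and momentum follow from the two-body kinematics of \(\pi^{+}\to\mu^{+}\nu_{\mu}\) at rest (with \(m_{\pi}c^{2}=139.6\) MeV and \(m_{\mu}c^{2}=105.7\) MeV):
\[
p_{\mu}=\frac{m_{\pi}^{2}-m_{\mu}^{2}}{2m_{\pi}}c\approx 29.8\,\mathrm{MeV}/c,\qquad T_{\mu}=\sqrt{p_{\mu}^{2}c^{2}+m_{\mu}^{2}c^{4}}-m_{\mu}c^{2}\approx 4.1\,\mathrm{MeV};
\]
muons born slightly below the target surface emerge with somewhat lower momenta, which is why surface-muon beam lines are typically tuned near 28 MeV/\(c\).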
Three categories of fundamental-physics measurements were discussed at the Workshop: the search for M-\(\overline{\rm M}\) mixing, precision M spectroscopy measurements, and the measurement of the gravitational acceleration of M in the Earth's gravitational field. Published results from the first two measurement categories date from over 20 years ago [114; 115; 116; 117], so the field is ripe for renewed efforts. New spectroscopy measurements are in progress: the MuSEUM [118; 119] hyperfine-splitting experiment at J-PARC and the Mu-MASS [120; 121] 1s-2s experiment at PSI. The MACE M-\(\overline{\rm M}\) mixing experiment is proposed at the Chinese Spallation Neutron Source, and the LEMING M gravitational experiment is in an R&D stage at PSI [122]. The PIP-II linac under construction at Fermilab should enable the world's highest-intensity muonium beam. All three categories of experiments thus appear worthy of pursuit at Fermilab once PIP-II is operational, assuming a suitable high-power PIP-II target facility is built and the currently planned accelerator is upgraded for continuous operation.
#### iv.5.2 Muonium Theory
Rare muonium decays of interest include M \(\to e^{+}e^{-}\), M \(\to\gamma\gamma\), and M \(\to\nu_{e}\overline{\nu}_{e}\), all three of which would be dominated by new physics if they occur. In effective field theory, compared to e.g. \(\mu\to 3e\), they probe different combinations of Wilson coefficients, so are worth searching for whether or not \(\mu\to 3e\) is observed.
Conversion of muonium to antimuonium -- the simultaneous conversion of an antimuon to a
Figure 32: Sensitivity of MEG II, with a dedicated trigger strategy and a reduced muon beam intensity, to left-handed (V-A) ALP couplings, compared to the current limits. See Ref. [113] for details.
positron and an electron to a muon -- would be an example of double charged-lepton flavor violation. While not forbidden (neutrino oscillation being well established), the rate of such conversion via virtual neutrino oscillation is so low as to be essentially unobservable. Representative new-physics diagrams contributing to muonium-antimuonium oscillations are shown in Fig. 33. Since some new-physics mechanisms favor M-\(\overline{\mathrm{M}}\) mixing over muon-to-electron conversion, the former is not necessarily suppressed with respect to the latter. As with rare decays, since Mu2e and M-\(\overline{\mathrm{M}}\) mixing are differently sensitive to new physics, both should be sought as sensitively as possible.
#### iv.3.3 Muonium-Antimuonium Conversion Experiment (MACE)
The MACE experiment proposes to search for muonium-antimuonium oscillation at a high-intensity muon beamline at, e.g., the High Intensity Heavy-ion Accelerator Facility (HIAF) or the Chinese Spallation Neutron Source (CSNS).
The existing limit (\(P_{\mathrm{M}\overline{\mathrm{M}}}<8.3\times 10^{-11}\) at 90% C.L. in a 0.1 T magnetic field) was set by the MACS experiment at PSI in 1999 [114]. The goal of MACE is to improve on the MACS sensitivity by more than two orders of magnitude, by increasing the rate of muonium formation and improving the apparatus resolutions. The design (Fig. 34) employs a \(\mu^{+}\rightarrow\mathrm{M}\) conversion target made of silica aerogel placed within a cylindrical drift chamber surrounded by a solenoid magnet. The aerogel is perforated ("laser ablated") with an array of small blind holes to increase its surface-to-volume ratio, enhancing the probability that M atoms produced within escape to the surrounding vacuum [123]. M \(\rightarrow\overline{\mathrm{M}}\) conversion leads to an atom consisting of a muon bound to a positron, producing a fast electron and a slow positron once the muon decays -- opposite in electric charge to the decay products of muonium. The fast electron is reconstructed, and its sign and momentum are measured, in a cylindrical drift chamber. The slow atomic positron left behind when the muon decays is accelerated electrostatically and sign- and velocity-selected in a curved solenoid channel, which transports it to a microchannel plate (MCP), in which it annihilates, sending a pair of 511 keV gamma rays into the surrounding calorimeter.
Optimization of the muonium yield and detector performance is under study. The muonium vacuum yield for various target designs is simulated using the random-walk method. The simulation has been validated and is believed to be reasonably accurate. This simulation provides a basis for optimization of the perforated silica aerogel target and indicates that a surface muon beam with a relatively small momentum spread (2.5% or less) can increase the muonium vacuum yield to 1.5% or higher, to be compared with the 0.5% yield in MACS. Together with the higher expected muon intensity, this yields an improvement of about two orders of magnitude compared to MACS. On
Figure 33: Examples of new-physics diagrams contributing to M–\(\overline{\mathrm{M}}\) mixing: exchange of (a) a doubly charged Higgs boson \(\Delta^{++}\), (b) heavy Majorana neutrinos, (c) a neutral scalar \(\Phi_{N}\), or (d) a bileptonic flavor diagonal gauge boson \(X^{++}\) (from [114]).
the other hand, the background rejection capability of MACE depends on the spatial resolutions of the detector -- in particular, those of the spectrometer and MCP. The wire structure of the drift chamber has been modeled, and an initial reconstruction algorithm has been implemented. By examining various readout layer layouts, the geometry of the drift chamber can be further optimized. Current results demonstrate vertex resolution in the range 5-10 mm. By improving the reconstruction algorithm, the resolution of the drift chamber is expected to be further improved, with a goal below 5 mm.
The design has been simulated in Geant4 to characterize its performance [124]. The limiting backgrounds in MACS were accidental coincidences and the rare decay \(\mu^{+}\to e^{+}e^{+}e^{-}\nu_{e}\overline{\nu}_{\mu}\) (which can fake the signal mode if one of the positrons is missed). MACS had one event passing its elliptical time-vertex cut (cut radii of 4.5 ns and 12 mm, respectively). For a background-free sample with two orders of magnitude more M events, the time, vertex distance of closest approach, and energy resolutions must be substantially tightened with respect to those in MACS. Preliminary MACE simulations indicate resolutions of 6.5 mm in distance of closest approach (comparable to MACS) and 0.65 ns in electron-positron time difference (better than in MACS). Further resolution improvement is desired and is the subject of ongoing studies.
#### iv.2.4 Muonium Spectroscopy
Muonium spectroscopy experiments aim to measure one of three transitions: the ground state hyperfine splitting, the 1s-2s interval, or the Lamb shift. The hyperfine and 1s-2s splittings together with \(g-2\) allow an independent, muon-only determination of the fine-structure constant \(\alpha\).
The hyperfine splitting is currently being measured by the MuSEUM experiment [118; 119] at J-PARC with an uncertainty goal of 5 Hz (1.2 ppb). This would exceed the precision of the theoretical prediction and so can be viewed as a measurement of the ratio \(m_{\mu}/m_{e}\), which dominates the systematic uncertainty of the prediction. In MuSEUM, muonium is created by stopping highly polarized muons in krypton gas. Positrons will be preferentially emitted in the spin direction (upstream). A perpendicular microwave magnetic field is applied in order to flip the spins. The hyperfine splitting is determined by measuring the ratio between upstream and downstream positron emission as a function of microwave frequency.

Figure 34: 3D cutaway view of proposed MACE apparatus. Antimuons are incident from the left on an aerogel target in which many of them stop and convert to muonium. If a muonium atom converts to antimuonium, its decay electron is reconstructed by the surrounding cylindrical drift chamber and its slow positron is accelerated and sign- and velocity-selected in the S-shaped solenoid channel, conveying it to a microchannel plate in which it annihilates, sending gammas to be detected in the surrounding calorimeter.
The 1s-2s splitting (and Lamb shift) are currently being measured by Mu-MASS [120, 121] at PSI. The uncertainty goal for the 1s-2s splitting is 10 kHz (4 ppt), which will enable a 1 ppb measurement of \(m_{\mu}\). In Mu-MASS, muonium is formed in silica, then excited to the 2s state and ionized with a pair of lasers. The muon is then transported away from the creation point and its emitted Lyman-alpha photon is detected in an MCP detector.
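As a quick cross-check of the fractional precisions quoted for these two spectroscopy programs, the short calculation below (our own illustration, not part of this document) divides the stated uncertainty goals by the standard values of the muonium transition frequencies, which are not quoted in the text above: the ground-state hyperfine splitting is about 4.463 GHz and the 1s-2s interval is about 2455.5 THz.

```python
# Sanity check of the quoted fractional precisions (frequencies below are assumed standard values)
hfs_Hz  = 4.463e9     # muonium ground-state hyperfine splitting, ~4.463 GHz
s1s2_Hz = 2.4555e15   # muonium 1s-2s interval, ~2455.5 THz

musem_goal_Hz  = 5.0      # MuSEUM uncertainty goal
mumass_goal_Hz = 10.0e3   # Mu-MASS uncertainty goal

print(f"MuSEUM : {musem_goal_Hz/hfs_Hz*1e9:.1f} ppb")     # ~1.1 ppb (quoted as 1.2 ppb,
                                                          #  i.e. a goal of ~5.4 Hz)
print(f"Mu-MASS: {mumass_goal_Hz/s1s2_Hz*1e12:.1f} ppt")  # ~4.1 ppt, matching the quoted 4 ppt
```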
#### iv.1.5 Low-Energy Muons at Fermilab
We briefly describe the MeV Test Area (MTA) beam line at FNAL, which could be used for low-energy muon and muonium R&D (and perhaps for competitive physics experiments as well), and expectations for the PIP-II superconducting linac, whose construction is in progress.
The MTA is situated near the end of Fermilab's 400 MeV H\({}^{-}\) Linac and shielded well enough to accept a large fraction of the protons accelerated by the Linac (although a shielding assessment demonstrating this has yet to be finalized). A new MTA beam line was recently designed and installed to serve an approved muon-catalyzed-fusion R&D program; it naturally produces low-energy pions and muons of both signs. The layout is shown in Fig. 35. Geant4 simulation studies using a 3-cm-long rectangular W target predict surface-muon production at a rate of order \(\sim\) 10\({}^{-9}\mu^{+}\)/proton-on-target (POT) into the acceptance of the beam line, which is centered at a 146.4\({}^{\circ}\) production angle. Measurements made in the 1970s [125] suggest that, compared to the thin graphite targets used at surface-muon beam lines at spallation-neutron facilities such as PSI, RAL, and TRIUMF, a Ta or W target should produce more charged pions and surface muons per POT by factors of 3 (\(\pi^{+}\) and \(\mu^{+}\)) and 8 (\(\pi^{-}\) and \(\mu^{-}\)). (These factors are not borne out in Geant4 studies, but Geant4 is not expected to be reliable for such cases; measurements are therefore needed, and planned.) Studies to optimize the target configuration and beam-line acceptance are in progress and may yield higher muon rates [126]. However, for purposes of surface-muon (and muon-catalyzed-fusion) R&D, the rates so far simulated are more than sufficient.
The PIP-II linac will of course accelerate to 800 MeV, twice the energy of Fermilab's existing Linac. While neither 800 nor 400 MeV is optimal for surface-muon production, the predicted rate decrease per POT with respect to the peak at 590 MeV is only about 15% [126]. Furthermore, especially if it is upgraded for CW operation, most of the protons that the PIP-II linac could accelerate will not be destined for the Main Injector and could enable a low-energy-muon physics program at Fermilab that -- even after PSI's High-Intensity Muon Beam (HIMB) upgrade is completed -- would be the world's most sensitive by more than an order of magnitude.
To prepare for a future, world-leading, PIP-II muon facility, it is highly desirable to conduct R&D now at the MTA to develop and gain experience with the techniques that will be needed at PIP-II, such as those discussed in the next subsection. Such a program at the MTA will also allow (for the first time) a U.S.-based program of \(\mu\)SR measurements, obviating the current need for Fermilab superconducting RF-cavity developers (and other U.S. researchers) to take their samples to Canada for \(\mu\)SR surface studies at TRIUMF.
#### iv.1.6 Muonium Production in Superfluid Helium
Measurements [127] (Fig. 36) have shown that below 1 K, superfluid helium (SFHe) is an efficient converter of stopped \(\mu^{+}\) to muonium. The signature of M formation is rapid precession of the muon spin; the resulting time evolution of the decay asymmetry in a magnetic field has been used to
explore the production of muonium when \(\mu^{+}\) are incident on pure superfluid \({}^{4}\)He or a solution containing 0.2% \({}^{3}\)He [127; 128]. The high conversion efficiency and the immiscibility of hydrogen isotopes in superfluid helium form the basis of a technique proposed by PSI's David Taqqu [129] that can make a monochromatic, parallel beam of muonium atoms propagating in vacuum by stopping slow \(\mu^{+}\) in a thin, horizontal SFHe layer. Such a beam would for the first time enable an interferometric muonium gravity experiment (see next subsection) and could also be a game-changer for other high-sensitivity muonium measurements.
Their immiscibility implies that any muonium atoms formed in the SFHe that reach its upper surface will be expelled perpendicular to that surface at a velocity determined by the chemical potential of the atoms in SFHe, predicted to be 270 K for muonium [129], implying an expelled-atom velocity of 6.3 mm/\(\mu\)s. (The prediction depends on isotopic mass and has been verified experimentally for the case of deuterium [130]; this immiscibility has also been exploited to make a functioning SFHe-coated focusing mirror for hydrogen atoms [131].) To maximize the efficiency of beam creation, it is desirable to suppress muonium formation in the SFHe bulk by applying an electric field to drift the muons away from their ionization electrons; the required field can be created by a surface-electron pool of \(10^{8}\,e/\)cm\({}^{2}\), maintained e.g. by use of a W tip. Such an electron pool is stable below 1 K [132]. Calculations show that for a layer thickness of \(\approx\) 300 \(\mu\)m, up to \(\approx\) 10% of muons entering the SFHe drift to the surface before decaying and yield escaping M atoms.
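The quoted expulsion velocity follows from equating the predicted chemical potential to the kinetic energy of the escaping atom, \(v=\sqrt{2k_{B}T/m_{\mathrm{M}}}\). A minimal Python sketch of this arithmetic (our own illustration; the constants are standard values) reproduces the 6.3 mm/\(\mu\)s figure:

```python
# Expulsion velocity of muonium from superfluid He, assuming the full 270 K chemical
# potential is converted to kinetic energy: v = sqrt(2 k_B T / m_M).
k_B_eV = 8.617333e-5                     # Boltzmann constant [eV/K]
c      = 2.99792458e8                    # speed of light [m/s]
m_M_eV = 105.6583745e6 + 0.51099895e6    # muonium mass ~ m_mu + m_e [eV/c^2]

E_eV = k_B_eV * 270.0                    # chemical potential expressed as an energy
v    = c * (2.0 * E_eV / m_M_eV) ** 0.5  # nonrelativistic, amply justified here
print(f"expelled-M velocity: {v * 1e-3:.2f} mm/us")   # ~6.28 mm/us, i.e. the quoted 6.3 mm/us
```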
An alternative method of achieving high M escape efficiency is under development at PSI by our PSI and ETH-Zurich colleagues (K. Kirch _et al._[128]), in which an extremely thin (\(\sim\) 1 \(\mu\)m) SFHe layer is employed, such that about half of the muonium atoms formed in the bulk diffuse to the upper surface and are expelled. To achieve a substantial \(\mu^{+}\) stopping probability in such a thin layer, the \(\mu^{+}\) beam must be precooled. This "muCool" apparatus (also proposed by Taqqu [133]) has been developed and tested successfully, compressing the \(\mu^{+}\) beam phase space by a factor \(10^{10}\) at the expense of \(10^{-3}\) efficiency due to muon decay [134; 135].

Figure 35: Photo of Fermilab MeV Test Area showing 400 MeV proton beamline at right and pion–muon quadrupole channel at left. The protons enter traveling into the page at the right edge of the photo, and muons (produced both within the target and via pion decay in flight) exit out of the page at the left edge. A target holder is situated at the intersection of the two beam lines, followed by a small solenoid to capture charged particles produced in the target.
#### vi.2.7 Muonium Antimatter Gravity Experiment (MAGE)
The gravitational force between antimatter and matter has yet to be directly measured. So far only a crude experimental limit has been established: \(-65\leq\overline{g}/g\leq 110\)[136], where \(\overline{g}\) is the gravitational acceleration of antimatter at the earth's surface (indirect limits on \(\overline{g}\), such as those derived from torsion-pendulum experiments and lunar radar ranging [137], are based on the assumed gravitational effects of virtual antiparticles in nuclear binding energy and are not applicable to antimuons). Any deviation of \(\overline{g}\) from \(g\) would be evidence for new physics -- possibly a "fifth force," as has been invoked to explain multiple muonic experimental anomalies [138]. From another perspective, antimatter gravity can be viewed as a test of the weak equivalence principle, which some authors suggest may be violated in the case of antimatter [139; 140]. Muonium gravity is thus worth measuring as sensitively as possible.
Measurement of muonium gravity has not heretofore been feasible. The predicted parallel beam produced by superfluid helium is a game changer that will enable it to be measured for the first time. This has led to the approval of the LEMING experiment at PSI, based on the muCool apparatus and use of a micron-thick thin film of SFHe [122]; however, LEMING is still in an R&D stage, and a parallel M beam has yet to be demonstrated. Furthermore, the inefficiency of muCool is likely to limit LEMING in its currently designed form to measuring the sign of \(\overline{g}\) and determining its magnitude only crudely. The thick-film approach described in the previous subsection, combined with the proton-beam power available from PIP-II and the use of heavy targets, should allow MAGE at Fermilab to measure \(\overline{g}\) to 1% of \(g\) or better. Depending on the progress of our R&D, a \(\overline{g}\) measurement at the MTA may also be competitive with LEMING.
MAGE and LEMING are based on the same idea [141]: create a horizontal, parallel, monoenergetic beam of muonium propagating in vacuum and impinging on a three-grating Mach-Zehnder-type interferometer, detecting the products of muonium decay in time-space coincidence downstream of the third grating. The fast positron from \(\mu^{+}\) decay is easily reconstructed using scintillating fibers. The slow atomic electron left behind when the muon decays can be accelerated electrostatically for detection by a 2D scintillating hodoscope array. The interferometer measures the acceleration of the beam in the direction perpendicular to the grating slits, which (to measure gravitational deflection) should therefore be oriented horizontally -- in contrast to the more usual atom-interferometry applications, in which gravity is an unwanted influence and vertical slits are employed. The de Broglie waves corresponding to the atom beam diffract in each grating, and interference of the diffracted waves creates a sinusoidal intensity distribution at the third grating. The details of the pattern (its phase and period) exceed the position-reconstruction capability of particle detectors but can be measured by translating the third grating up and down periodically over time. The phase is proportional to the gravitational deflection, and its sign gives the direction (deflection up or down).

Figure 36: Relative production vs. temperature of muonium in superfluid helium (blue points from [127] and recent measurements in red by a PSI–ETH-Zürich collaboration [128]).
The beam is created by stopping surface or "subsurface" \(\mu^{+}\) in superfluid helium, deflecting the resulting vertically propagating M beam into the horizontal by means of a SFHe-coated surface angled at 45\({}^{\circ}\). (Subsurface muons are surface muons created slightly deeper into the target that lose a fraction of their energy to \(dE/dx\) on the way out of the target.) Given the predicted 6.3 mm/\(\mu\)s M velocity discussed above, the 2.2 \(\mu\)s muon lifetime corresponds to 1.4 cm of travel, and with the statistically optimal 2-lifetime grating separation (for measurement of an acceleration \(\propto t^{2}\)), the apparatus easily fits on a tabletop, or within the sample volume of a dilution refrigerator. Since the measurement resolution is inversely proportional to the grating pitch, the interferometer should have as fine a grating pitch as possible, but the gratings must also be fairly large and as open as possible (i.e., with minimal supporting struts, which potentially could block the passage of a significant fraction of M atoms). The current state of the art is judged to be \(\approx\) 1 cm\({}^{2}\) grating area and 100 nm grating pitch. With \(\approx\) 50%-open (approximately optimal [135]) gratings, and an approximately parallel beam, the geometric acceptance is close to 100%, but half of the beam is absorbed at each grating, in addition to decay losses.
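The geometric scales and loss factors implicit in this description can be made explicit with a few lines of arithmetic. The sketch below (our own illustration; none of the derived numbers are quoted in the text) uses the 6.3 mm/\(\mu\)s velocity, the 2.2 \(\mu\)s muon lifetime, and three 50%-open gratings:

```python
import math

v_mm_per_us = 6.3    # expelled-muonium velocity (set by the SFHe chemical potential)
tau_us      = 2.2    # muon lifetime

travel_per_lifetime_mm = v_mm_per_us * tau_us        # ~13.9 mm: the "1.4 cm" quoted above
for n_lifetimes in (1, 2):
    separation_mm  = n_lifetimes * travel_per_lifetime_mm
    decay_survival = math.exp(-2 * n_lifetimes)      # first to third grating = two separations
    grating_trans  = 0.5 ** 3                        # three ~50%-open gratings
    print(f"{n_lifetimes}-lifetime separation: {separation_mm:.1f} mm between gratings; "
          f"grating transmission {grating_trans:.3f}, decay survival {decay_survival:.3f} "
          f"(losses upstream of the first grating and detection efficiency not included)")
```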
The measurement resolution depends on the pitch \(d\), inter-grating travel time \(t\), and interferometric contrast \(C\) according to [142]
\[\delta g=\frac{1}{C\sqrt{N}}\frac{d}{2\pi}\frac{1}{t^{2}}\,, \tag{2}\]
where \(N\) is the number of detected events. With the (conservative) fringe contrast \(C\) = 0.1, and 100,000 M/s detected, the resolution is then \(\delta g/g\approx(0.35/\sqrt{\#\mbox{ days}})\) for gratings placed 1 muon lifetime apart, or \(\approx(0.1/\sqrt{\#\mbox{ days}})\) with 2-lifetime grating separation. Thus the sign of \(\overline{g}\) is determined to 5 standard deviations in less than a day of running, and the magnitude determined to 1% (if systematic effects can be sufficiently well controlled) in about a year. To these estimates some margin must of course be added for calibration runs and accelerator downtime. Thus precision measurements of \(\overline{g}\) (to 1% or better) will require the highest possible M intensity.
#### vi.1.8 R&D Projects / Potential Synergies / Open Questions
Here is a list of R&D projects with some comments about potential synergies and some open questions.
**High priority**
* Pursue SFHe production of muonium
* this could greatly improve the efficiency of muonium production for all muonium experiments, and is required for muonium antimatter-gravity experiments
* Pursue low-energy muon production at MTA
* could be used for R&D of other experiments (e.g., measure muon production at energies similar to those expected from PIP-II and study pion and muon production-target materials and configurations)
* Make first measurement of \(\bar{g}\) with muonium
* this would help establish what the limiting systematics of such an experiment are
* How does AMF host all of \(\mu^{-}\), \(\mu^{+}\), and muonium experiments?
* low-energy surface \(\mu^{+}\) are needed for both decay and muonium experiments - can both be supported at the same time?
**Low priority**
* Determine what improved muonium production from SFHe means for all muonium experiments
* can we keep the same design for muonium oscillation experiments?
* can muonium spectroscopy be done in a vacuum?
* Determine feasibility of other muonium experiments (e.g., \(\mathrm{M}\rightarrow\nu\bar{\nu}\))
## Conclusion
Charged lepton flavor violating processes open a unique window on the physics of generation and flavor, complementing studies pursued at collider and neutrino experiments. Muons offer a promising avenue to search for CLFV, and a global experimental program is underway to explore these possibilities, promising impressive sensitivity gains in the coming decade.
A staged program of next-generation experiments and facilities exploiting the full potential of the PIP-II accelerator has been outlined in this document. This program comprises Mu2e-II as a near term evolution of the Mu2e experiment, with the goal of improving the sensitivity to muon-to-electron conversion by an order of magnitude, followed by the Advanced Muon Facility. This new facility would provide the world's most intense positive and negative muon beams, enabling a suite of experiments with unprecedented sensitivity (probing mass scales of the order of \(10^{4}-10^{5}\,\mathrm{TeV}\)), and offering _unique capabilities_ to explore in depth the underlying New Physics in case of an observation. This program includes many synergies with the development of a muon collider and a beam dump dark matter experiment at FNAL.
These experiments and facilities would provide the foundation for a comprehensive muon physics program in the next decade and beyond, an essential component of a global program to search for New Physics. The R&D roadmap outlined in this document should begin immediately to ensure a timely realization of these possibilities.
###### Acknowledgements.
This work was supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0019095, DE-SC0007884, DE-SC0011632 and DE-AC02-06CH11357 and by the Walter Burke Institute for Theoretical Physics. This work was partly authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. This work was supported by the EU Horizon 2020 Research and Innovation Program under the Marie Sklodowska-Curie Grant Agreement No. 101006726. J. Zettlemoyer acknowledges support in part by the DOE grant DE-SC0011784 and the NSF grant OAC-2103889. R. Plestid is supported by the Neutrino Theory Network Program Grant Award under Number DEAC02-07CH11359 and the US DOE under Award Number DE-SC0020250. J. Tang acknowledges support in part by National Natural Science Foundation of China under Grant No. 12075326 and Fundamental Research Funds for the Central Universities (23ckjc017) in Sun Yat-sen University.
|
2309.12297 | Satellite knots and immersed Heegaard Floer homology | We describe a new method for computing the $UV = 0$ knot Floer complex of a
satellite knot given the $UV = 0$ knot Floer complex for the companion and a
doubly pointed bordered Heegaard diagram for the pattern, showing that the
complex for the satellite can be computed from an immersed doubly pointed
Heegaard diagram obtained from the Heegaard diagram for the pattern by
overlaying the immersed curve representing the complex for the companion. This
method streamlines the usual bordered Floer method of tensoring with a bimodule
associated to the pattern by giving an immersed curve interpretation of that
pairing, and computing the module from the immersed diagram is often easier
than computing the relevant bordered bimodule. In particular, for (1,1)
patterns the resulting immersed diagram is genus one, and thus the computation
is combinatorial. For (1,1) patterns this generalizes previous work of the
first author which showed that such immersed Heegaard diagram computes the
$V=0$ knot Floer complex of the satellite. As a key technical step, which is of
independent interest, we extend the construction of a bigraded complex from a
doubly pointed Heegaard diagram and of an extended type D structure from a
torus-boundary bordered Heegaard diagram to allow Heegaard diagrams containing
an immersed alpha curve. | Wenzhao Chen, Jonathan Hanselman | 2023-09-21T17:57:12Z | http://arxiv.org/abs/2309.12297v1 |
# Satellite knots and immersed Heegaard Floer homology
###### Abstract.
We describe a new method for computing the \(UV=0\) knot Floer complex of a satellite knot given the \(UV=0\) knot Floer complex for the companion and a doubly pointed bordered Heegaard diagram for the pattern, showing that the complex for the satellite can be computed from an immersed doubly pointed Heegaard diagram obtained from the Heegaard diagram for the pattern by overlaying the immersed curve representing the complex for the companion. This method streamlines the usual bordered Floer method of tensoring with a bimodule associated to the pattern by giving an immersed curve interpretation of that pairing, and computing the module from the immersed diagram is often easier than computing the relevant bordered bimodule. In particular, for (1,1) patterns the resulting immersed diagram is genus one, and thus the computation is combinatorial. For (1,1) patterns this generalizes previous work of the first author which showed that such immersed Heegaard diagram computes the \(V=0\) knot Floer complex of the satellite. As a key technical step, which is of independent interest, we extend the construction of a bigraded complex from a doubly pointed Heegaard diagram and of an extended type D structure from a torus-boundary bordered Heegaard diagram to allow Heegaard diagrams containing an immersed alpha curve.
###### Contents
* 1 Introduction
* 1.1 Satellite knots and immersed curves
* 1.2 Immersed Heegaard Floer theory
* 1.3 Strategy to prove the main theorem
* 1.4 Further discussions
* 1.5 Organization
* 2 Bordered Floer invariants of immersed Heegaard diagrams
* 2.1 Immersed bordered Heegaard diagrams
* 2.2 Moduli spaces of stay-on-track holomorphic curves
* 2.3 Compactification
* 2.4 Gluing results
* 2.5 Degeneration of moduli spaces
* 2.6 Embedded holomorphic curves
* 2.7 Ends of moduli spaces of 0-P curves
* 2.8 Ends of moduli spaces of 1-P curves
* 2.9 Type D structures
* 2.10 Weakly extended Type D structures
* 2.11 Invariance
* 3 Knot Floer homology of immersed Heegaard diagrams
* 3.1 Immersed doubly-pointed Heegaard diagram
* 3.2 The knot Floer chain complex
* 3.3 Bi-grading
* 3.4 Invariance
* 4 Pairing theorems
* 4.1 Immersed curves in the marked torus
* 4.2 Pairing diagrams
* 4.3 \(z\)-adjacency
* 4.4 The collapsing operation
* 4.5 Unobstructedness and admissibility of pairing diagrams
* 4.6 The first pairing theorem
* 4.7 The second pairing theorem
* 5 Knot Floer homology of satellite knots
* 5.1 Proof of the main theorem, ungraded version
* 5.2 \(\mathcal{H}_{w,z}(\alpha_{K})\) is gradable
* 5.3 Gradings in the main theorem
* 6 (1,1) Patterns
* 6.1 \((1,1)\) diagrams
* 6.2 Removing the \(z\)-passable assumption
* 6.3 One-bridge braids
* 6.4 Immersed curves for 1-bridge braid satellites
* 6.5 L-space slopes, \(\tau\), and \(\epsilon\) for one-bridge braid satellites
* 6.6 Mazur satellites
## 1. Introduction
The study of knot Floer chain complexes of satellite knots has many applications. For instance, computation of knot-Floer concordance invariants of satellite knots is instrumental in establishing a host of results in knot concordance, like [11, 12, 13, 15, 16, 17, 18, 19]. To further understand the behavior of knot Floer chain complexes under satellite operations, the current paper introduces an immersed-curve technique to compute knot Floer chain complexes of satellite knots. This method subsumes most of the previous results in this direction, including [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. This technique is derived from an immersed Heegaard Floer theory that is developed in this paper, which is built on the work by the second author, Rasmussen, and Watson [16].
### Satellite knots and immersed curves
Knot Floer homology was introduced by Ozsvath and Szabo and independently by Rasmussen [11, 22]. Recall that any knot can be encoded by a doubly-pointed Heegaard diagram, which is a closed oriented surface with two sets of embedded circles and two base points. Knot Floer theory, using the machinery of Lagrangian Floer theory, associates a bigraded chain complex over \(\mathbb{F}[U,V]\) to such a doubly pointed Heegaard diagram, and the bigraded chain homotopy type of this chain complex is an invariant of the isotopy type of the knot. The literature studies various versions of the knot Floer chain complex obtained by setting the ground ring to be a suitable quotient ring
of \(\mathbb{F}[U,V]\); throughout this paper we will consider the complex defined over the ground ring \(\mathcal{R}=\mathbb{F}[U,V]/UV\). The knot Floer chain complex over \(\mathcal{R}\) of a knot \(K\) in the \(3\)-sphere is equivalent to the bordered Floer invariant of the knot complement \(S^{3}\backslash\nu(K)\), and it was shown in [10] that this is equivalent to an immersed multicurve in the punctured torus decorated with local systems. The punctured torus we refer to here is a torus with a single puncture and a parametrization allowing us to identify it with the boundary of the knot complement with a chosen basepoint.
A satellite knot is obtained by gluing a solid torus that contains a knot \(P\) (called the _pattern knot_) to the complement of a knot \(K\) in the \(3\)-sphere (called the _companion knot_) in a compatible way, after which the glued-up manifold is a \(3\)-sphere and the pattern knot \(P\) gives rise to the satellite knot \(P(K)\) in the \(3\)-sphere. Just as knots in closed \(3\)-manifolds are encoded by doubly-pointed Heegaard diagrams, a pattern knot in the solid torus can be represented by a doubly-pointed bordered Heegaard diagram, which is an oriented surface of some genus \(g\) with one boundary component, together with two base points and a suitable collection of \(g\) \(\beta\)-curves, \(g-1\) \(\alpha\)-curves, and two \(\alpha\) arcs.
Our technique involves constructing an immersed doubly pointed Heegaard diagram by combining a doubly pointed bordered Heegaard diagram for the pattern \(P\) with the immersed curve associated with the companion \(K\). More precisely, we fill in the boundary of the bordered Heegaard diagram for \(P\), remove the two \(\alpha\) arcs, and then add the immersed curve for \(K\) to the diagram by identifying the punctured torus containing \(K\) with a neighborhood of the now filled-in boundary and the \(\alpha\) arcs, in a way dictated by the given parametrizations. The resulting diagram is just like a standard genus \(g\) doubly pointed Heegaard diagram except that one of the \(\alpha\) curves, which would ordinarily be embedded, is replaced with a decorated immersed multicurve. See the top rows of Figures 1 and 2 for examples of immersed doubly pointed diagrams constructed in this way.
Our main theorem asserts that this diagram can be used to compute the knot Floer complex over \(\mathcal{R}\) of \(P(K)\). We state the main theorem below, with technical inputs referenced in the remark afterwards.
**Theorem 1.1**.: _Let \(\mathcal{H}_{w,z}\) be a doubly-pointed bordered Heegaard diagram for a pattern knot \(P\), and let \(\alpha_{K}\) be the immersed multicurve associated to a companion knot \(K\). Let \(\mathcal{H}_{w,z}(\alpha_{K})\) be the immersed doubly-pointed Heegaard diagram obtained by pairing \(\mathcal{H}_{w,z}\) and \(\alpha_{K}\), in which \(\alpha_{K}\) is put in a \(z\)-passable position. Then the knot Floer chain complex \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}),\mathfrak{d})\) defined using \(\mathcal{H}_{w,z}(\alpha_{K})\) and a generic choice of auxiliary data \(\mathfrak{d}\) is bi-graded homotopy equivalent to the knot Floer chain complex of the satellite knot \(P(K)\) over \(\mathcal{R}\), where \(\mathcal{R}=\mathbb{F}[U,V]/UV\)._
_Remark 1.2_.: The pairing operation for constructing \(\mathcal{H}_{w,z}(\alpha_{K})\) is defined in Section 4.2. The knot Floer chain complex of an immersed doubly-pointed Heegaard diagram is defined in Section 3. While the definition of the Heegaard Floer theory with immersed Heegaard diagrams is similar to that in the usual setup, it is complicated by the appearance of boundary degenerations. The \(z\)-passable condition on \(\alpha_{K}\) is a diagrammatic condition used to handle boundary degenerations; it is specified in Definition 5.5 and can be arranged easily via finger moves as in Example 5.7. Moreover, the \(z\)-passable condition is not required when \(\mathcal{H}_{w,z}\) is a genus-one diagram; see Theorem 6.1. The proof of Theorem 1.1 is separated into two stages: we first prove the ungraded version in Section 5.1 and then the gradings are addressed in
Section 5.3. The gradings can be combinatorially computed using an index formula established in Section 2.6; also see Definition 3.8.
Figure 1. Obtaining the immersed curve for the \((2,1)\)-cable of the trefoil \(T_{2,3}\). The top row shows the pairing diagram \(\mathcal{H}_{w,z}(\alpha_{K})\) obtained by merging a doubly-pointed bordered Heegaard diagram \(\mathcal{H}_{w,z}\) for the \((2,1)\)-cable pattern and the immersed curve \(\alpha_{K}\) for the trefoil \(T_{2,3}\). The bottom row shows a lift of this diagram to a suitable covering space and a planar transform that sends the lift of the immersed curve for \(T_{2,3}\) to that of \((T_{2,3})_{2,1}\).

Theorem 1.1 is especially useful when the pattern knot is a (1,1) pattern, meaning that it admits a genus one doubly pointed bordered Heegaard diagram \(\mathcal{H}_{w,z}\). This is because in this setting the immersed doubly pointed Heegaard diagram \(\mathcal{H}_{w,z}(\alpha_{K})\) is genus one, and the complex \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}),\mathfrak{d})\) for such a diagram is straightforward to compute even in the presence of immersed curves; it only requires counting bigons, which can be done combinatorially. An example is shown in Figure 1, where we compute the knot Floer complex \(CFK_{\mathcal{R}}\) of the \((2,1)\) cable of the trefoil \(T_{2,3}\). The top row of the figure gives the pairing diagram, formed from a doubly pointed bordered Heegaard diagram \(\mathcal{H}_{w,z}\) for the cable pattern and the immersed curve \(\alpha_{K}\) associated with \(T_{2,3}\). The bottom left shows the curves lifted to an appropriate covering space, after a homotopy putting them in minimal position. There are seven generators, labeled as in the figure, and it is straightforward to count the bigons that cover only \(w\) or only \(z\) and see that the differential in \(CFK_{\mathcal{R}}((T_{2,3})_{2,1})\) is given by
\[\partial a=0,\quad\partial b=Ua+V^{2}e,\quad\partial c=Vd,\quad\partial e=Ud,\quad\partial d=0,\quad\partial f=U^{2}c+Vg,\quad\partial g=0.\]
The Alexander grading changes by one each time the \(\beta\) curve crosses the short arc connecting the \(z\) and \(w\) basepoints as one travels along the \(\beta\) curve, increasing if \(z\) is on the left and decreasing if \(z\) is on the right, so we have
\[A(a)=2,\quad A(b)=A(c)=1,\quad A(d)=0,\quad A(e)=A(f)=-1,\quad A(g)=-2.\]
Relative Maslov gradings can also be computed from the diagram, with the absolute grading fixed by the normalization \(M(a)=0\).
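As a concrete illustration of this bookkeeping, the short script below (our own check, not part of the paper) verifies that the differential listed above squares to zero over \(\mathcal{R}=\mathbb{F}_{2}[U,V]/(UV)\) and that each of its terms preserves the Alexander grading under the usual conventions \(A(U)=-1\), \(A(V)=+1\). The encoding of generators and monomials is ours.

```python
# Differential of CFK_R for the (2,1)-cable of the trefoil, as read off from the
# pairing diagram above.  A term ('y', (i, j)) in d['x'] means U^i V^j * y appears
# in d(x); coefficients live in F_2 and the relation UV = 0 is imposed.
U, V = (1, 0), (0, 1)
d = {
    'a': [], 'd': [], 'g': [],
    'b': [('a', U), ('e', (0, 2))],     # d(b) = U a + V^2 e
    'c': [('d', V)],                    # d(c) = V d
    'e': [('d', U)],                    # d(e) = U d
    'f': [('c', (2, 0)), ('g', V)],     # d(f) = U^2 c + V g
}
A = {'a': 2, 'b': 1, 'c': 1, 'd': 0, 'e': -1, 'f': -1, 'g': -2}   # Alexander gradings

def mul(m1, m2):
    """Multiply monomials U^i V^j, returning None when UV = 0 kills the product."""
    i, j = m1[0] + m2[0], m1[1] + m2[1]
    return None if (i > 0 and j > 0) else (i, j)

# d^2 = 0: expand d(d(x)) and check that every surviving term cancels mod 2.
for x, terms in d.items():
    square = {}
    for y, m1 in terms:
        for z, m2 in d[y]:
            m = mul(m1, m2)
            if m is not None:
                square[(z, m)] = square.get((z, m), 0) ^ 1
    assert all(c == 0 for c in square.values()), f"d^2 != 0 on {x}"

# Each term U^i V^j * y of d(x) satisfies A(x) = A(y) - i + j.
for x, terms in d.items():
    for y, (i, j) in terms:
        assert A[x] == A[y] - i + j, f"Alexander grading mismatch in d({x})"

print("d^2 = 0 and the differential preserves the Alexander grading")
```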
In the case of (1,1) patterns, the first author proved a weaker version of Theorem 1.1 in [1, Theorem 1.2], where the knot Floer chain complexes are only defined over \(\mathbb{F}[U]\cong\mathbb{F}[U,V]/V\). Recall that the complex over \(\mathbb{F}[U]\) does not count any disks covering the \(z\) basepoint, while the complex over \(\mathcal{R}\) allows disks to cover either basepoint as long as they do not cover both. [1, Theorem 1.2] can be used to recover the \(\tau\)-invariant formula for cable knots, Mazur satellites, and Whitehead doubles [1, 1]. Theorem 1.1 generalizes this earlier result by showing that the same process recovers the complex over \(\mathcal{R}\), which carries strictly more information. In particular, this version of the knot Floer complex allows one to compute the \(\epsilon\)-invariant introduced by Hom [1] and infinitely many concordance homomorphisms \(\phi_{j}\) (\(j\in\mathbb{Z}^{+}\)) defined by Dai-Hom-Stoffregen-Truong [1]. For example, we use Theorem 1.1 to recover and generalize the \(\tau\) and \(\epsilon\) formulas for cables from [1, Theorem 2] in Section 6.5 and to recover the \(\tau\) and \(\epsilon\) formulas for Mazur patterns from [1, Theorem 1.4] in Section 6.6.
The computation described above is even easier for a certain family of (1,1) patterns. In [1, Theorem 1], the second author and Watson showed that the immersed multicurve of a cable knot can be obtained from that of the companion knot via a planar transform after lifting the immersed multicurves to an appropriate covering space of the marked torus. In fact, we show in Theorem 1.3 that the same procedure works for a broader family of (1,1) patterns called 1-bridge braids (see Definition 6.2). In addition to cables this class of patterns contains all Berge-Gabai knots. These patterns are specified by three integers and are denoted \(B(p,q,b)\). We let \(K_{p,q,b}\) denote the satellite of the companion knot \(K\) with pattern \(B(p,q,b)\). In Section 6.4 we define a diffeomorphism \(f_{p,q,b}\) of \(\mathbb{R}^{2}\) taking the integer lattice \(\mathbb{Z}^{2}\) to itself and show that this transformation computes the immersed curve for \(K_{p,q,b}\) from that of \(K\).
**Theorem 1.3**.: _Let \(\gamma_{K}\) and \(\gamma_{K_{p,q,b}}\) be the immersed multicurve associated with \(K\) and \(K_{p,q,b}\) respectively. Let \(\tilde{\gamma}_{K}\) and \(\tilde{\gamma}_{K_{p,q,b}}\) be the lifts of \(\gamma_{K}\) and \(\gamma_{K_{p,q,b}}\) to \(\tilde{T}_{\bullet}\) respectively. Then \(\tilde{\gamma}_{K_{p,q,b}}\) is homotopic to \(f_{p,q,b}(\tilde{\gamma}_{K})\)._
We demonstrate how this result is obtained from Theorem 1.1, in the example of \((2,1)\) cabling the trefoil, in the bottom row of Figure 1. Note that the \((2,1)\) cable pattern is the 1-bridge braid \(B(2,1,0)\). On the left is the diagram \(\mathcal{H}_{w,z}(\alpha_{K})\), which by Theorem 1.1 computes the complex \(CFK_{\mathcal{R}}((T_{2,3})_{2,1})\), lifted to an appropriate covering space (specifically \((\mathbb{R}/p\mathbb{Z})\times\mathbb{R}\), where \(p\) is the winding number). There is a homotopy that pulls the \(\beta\) curve coming from \(\mathcal{H}_{w,z}\) straight to a vertical line, sliding the basepoints and the \(\alpha\) curve along the way, and rescales to obtain a different covering space of the marked torus (namely \((\mathbb{R}/\mathbb{Z})\times\mathbb{R}\)). This homotopy
does not change the Floer complex associated with the diagram, so the new diagram still computes \(CFK_{\mathcal{R}}((T_{2,3})_{2,1})\), and since the \(\beta\) curve is vertical and passes through each pair of basepoints twice, the \(\alpha\) curve in this diagram is precisely the immersed curve representing \(CFK_{\mathcal{R}}((T_{2,3})_{2,1})\). The homotopy that pulls the \(\beta\) curve to the vertical line is precisely the planar transformation \(f_{2,1,0}\). We note that in the special case of cables (for which \(b=0\)), the transformation \(f_{p,q,b}\) agrees with the planar transformation \(f_{p,q}\) described in [10, Theorem 1].
We will use Theorem 1.3 to derive formulas for \(\epsilon\) and \(\tau\) for \(1\)-bridge braid satellites in Theorem 6.6 and Theorem 6.8, generalizing similar formulas for cables. We also determine the precise criteria for a \(1\)-bridge braid satellite to be an L-space knot in Theorem 6.5; this unifies and generalizes similar results known for cables and Berge-Gabai knots [12, 13].
### Immersed Heegaard Floer theory
The underlying machinery for proving Theorem 1.1 is the bordered Heegaard Floer theory introduced by Lipshitz-Ozsvath-Thurston [11, 12]. The new input is an immersed Heegaard Floer theory that we develop in this paper in which we allow Heegaard diagrams with an immersed multicurve in place of one \(\alpha\) curve. We closely follow the construction of bordered invariants in [11], highlighting the points at which more care is needed in this broader setting.
Bordered Heegaard Floer theory is a toolkit to compute Heegaard Floer invariants of manifolds that arise from gluing in terms of a set of relative invariants for manifolds with boundaries. In the simplest setting, assume \(Y_{1}\) and \(Y_{2}\) are two oriented \(3\)-manifolds with parametrized boundary such that \(\partial Y_{1}\) is identified with an oriented parametrized surface \(\mathcal{F}\) and \(\partial Y_{2}\) is identified with \(-\mathcal{F}\), the orientation reversal of \(\mathcal{F}\), and let \(Y=Y_{1}\cup_{\mathcal{F}}Y_{2}\). Up to a suitable notion of homotopy equivalence, the bordered Heegaard Floer theory associates to \(Y_{1}\) a graded \(A^{\infty}\)-module \(\widehat{CFA}(Y_{1})\) (called the type \(A\) module) and associates to \(Y_{2}\) a graded differential module \(\widehat{CFD}(Y_{2})\) (called the type D module). Moreover, there is a box-tensor product operation \(\widehat{CFA}(Y_{1})\boxtimes\widehat{CFD}(Y_{2})\) which produces a chain complex that is graded homotopy equivalent to the hat-version Heegaard Floer chain complex \(\widehat{CF}(Y)\) of the glued-up manifold. The second author, Rasmussen, and Watson introduced an immersed-curve technique for working with these invariants for manifolds with torus boundary [14]. When the surface \(\mathcal{F}\) mentioned above is a parametrized torus \(T^{2}\), \(\widehat{CFA}(Y_{1})\) and \(\widehat{CFD}(Y_{2})\) are equivalent to immersed multicurves \(\gamma_{1}\) and \(\gamma_{2}\) (decorated with local systems) in the parametrized torus \(T^{2}\) away from a marked point \(z\). Moreover, the Lagrangian Floer chain complex \(\widehat{CF}(T^{2}\backslash\{z\},\gamma_{1},\gamma_{2})\) is homotopy equivalent to \(\widehat{CFA}(Y_{1})\boxtimes\widehat{CFD}(Y_{2})\), which is in turn homotopy equivalent to \(\widehat{CF}(Y)\).
The bordered Heegaard Floer theory also contains a package to deal with the situation of gluing a \(3\)-manifold \(M_{1}\) with two parametrized boundary components \(-\mathcal{F}_{1}\) and \(\mathcal{F}_{2}\) to a \(3\)-manifold \(M_{2}\) with \(\partial M_{2}=-\mathcal{F}_{2}\). It associates to \(M_{1}\) a type \(DA\) bimodule \(\widehat{CFDA}(M_{1})\) up to a suitable equivalence, and there is a box-tensor product \(\widehat{CFDA}(M_{1})\boxtimes\widehat{CFD}(M_{2})\) resulting in a type D module that is homotopy equivalent to \(\widehat{CFD}(M_{1}\cup_{\mathcal{F}_{2}}M_{2})\). In this paper, we introduce an immersed-Heegaard-diagram approach to recapture this bimodule pairing when the manifold boundaries are tori.
Recall that we can encode a manifold \(M_{1}\) whose boundary consists of two parametrized tori by some arced bordered Heegaard diagram \(\mathcal{H}_{M_{1}}\) (from which the type \(DA\) bimodule is defined). Let \(\alpha_{M_{2}}\) be the immersed multicurve for an oriented \(3\)-manifold \(M_{2}\) with a single torus boundary component. In Section 4.2, we give a pairing construction that merges such an arced bordered Heegaard diagram \(\mathcal{H}_{M_{1}}\) and an immersed multicurve \(\alpha_{M_{2}}\) to obtain an immersed bordered Heegaard diagram \(\mathcal{H}_{M_{1}}(\alpha_{M_{2}})\); see Figure 2 for a schematic example of pairing an arced bordered diagram and an immersed curve. Extending the original way of defining type D modules from (non-immersed) bordered Heegaard diagrams, we define type D modules for a class of immersed bordered Heegaard diagrams that contains such pairing diagrams, and we prove the following theorem in Section 4.6.
**Theorem 1.4**.: _Let \(\mathcal{H}^{a}\) be a left provincially admissible arced bordered Heegaard diagram, and let \(\alpha_{im}\) be a \(z\)-adjacent immersed multicurve. Then_
\[\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\cong\widehat{CFDA}(\mathcal{H}^{ a})\boxtimes\widehat{CFD}(\alpha_{im}).\]
_Remark 1.5_.: Similar to the \(z\)-passable condition in Theorem 1.1, the \(z\)-adjacency is a diagrammatic condition that is used to handle boundary degeneration in immersed Heegaard Floer theory; it is defined in Section 4.3 and can be easily achieved via finger moves.
Among manifolds with torus boundary, of particular interest to us are knot complements. By the results in [1, Section 11.4-11.5], the knot Floer chain complex \(\mathcal{CFK}_{\mathcal{R}}(J)\) of any knot \(J\subset S^{3}\) is equivalent to the type D module \(\widehat{CFD}(S^{3}\backslash\nu(J))\) of the knot complement. (Consequently, \(\mathcal{CFK}_{\mathcal{R}}(J)\) is equivalent to an immersed multicurve in a marked torus.)
More concretely, the current state of bordered Floer theory recovers certain versions of knot Floer chain complex of a knot from the type D module of its complement as follows. Note that a knot \(J\) may be obtained from the knot complement \(S^{3}\backslash\nu(J)\) by gluing in the solid torus containing the _identity pattern knot_, which is the core of the solid torus. Let \(\mathcal{H}_{id}\) denote the standard doubly-pointed Heegaard diagram for the identity pattern knot; see Figure 2. In [1], an \(A^{\infty}\)-module \(CFA^{-}(\mathcal{H}_{id})\) is associated to \(\mathcal{H}_{id}\), and it is shown that
\[gCFK^{-}(J)\cong CFA^{-}(\mathcal{H}_{id})\boxtimes\widehat{CFD}(S^{3} \backslash\nu(J)),\]
where \(gCFK^{-}(-)\) denotes the version of knot Floer chain complex over \(\mathbb{F}[U]\) [1, Theorem 11.9]. To recover knot Floer complexes over the larger ground ring \(\mathcal{R}=\mathbb{F}[U,V]/UV\), we use a stronger pairing theorem which occurred implicitly in [10] (and even more implicitly in [1]): There are suitable extensions \(\widetilde{CFA}(\mathcal{H}_{id})\) and \(\widetilde{CFD}(S^{3}\backslash\nu(J))\) of \(CFA^{-}(\mathcal{H}_{id})\) and \(\widehat{CFD}(S^{3}\backslash\nu(J))\) respectively such that
\[CFK_{\mathcal{R}}(J)\cong\widetilde{CFA}(\mathcal{H}_{id})\boxtimes\widetilde{CFD}(S^{3}\backslash\nu(J)).\]
We provide an immersed-Heegaard-diagram approach to recapture the above pairing theorem as well. In Section 2, we define the so-called weakly extended type D structures \(\widetilde{CFD}(-)\) of a certain class of immersed bordered Heegaard diagrams that contains pairing diagrams \(\mathcal{H}_{M_{1}}(\alpha_{M_{2}})\) mentioned earlier. In Section 3, we define knot Floer chain complexes of a class of immersed doubly-pointed Heegaard diagrams that includes any diagram \(\mathcal{H}_{id}\cup\mathcal{H}_{im}\) obtained by gluing \(\mathcal{H}_{id}\) and
an immersed bordered Heegaard diagram \(\mathcal{H}_{im}\). Moreover, we prove the following theorem in Section 4.7.
**Theorem 1.6**.: _Let \(\mathcal{H}_{im}\) be an unobstructed, bi-admissible immersed bordered Heegaard diagram, and let \(\mathcal{H}_{id}\) be the standard bordered Heegaard diagram for the identity pattern. Then_
\[CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{im})\cong\widetilde{CFA}( \mathcal{H}_{id})\boxtimes\widetilde{CFD}(\mathcal{H}_{im}).\]
Figure 2. The pairing diagram \(\mathcal{H}_{w,z}(\alpha_{K})\) can alternatively be obtained in the following three steps: First, pair an arced bordered Heegaard diagram \(\mathcal{H}_{X(P)}\) obtained from \(\mathcal{H}_{w,z}\) with the immersed curve \(\alpha_{K}\); secondly, construct a closed doubly-pointed immersed Heegaard diagram \(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}\); third, apply Heegaard moves to \(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}\) to get \(\mathcal{H}_{w,z}(\alpha_{K})\).
### Strategy to prove the main theorem
The above theorems are used to compute the knot Floer chain complex of satellite knots as follows.
First, the knot complement of a satellite knot \(P(K)\) decomposes into the union of two 3-manifolds along a torus: the exterior \(X(P)=(S^{1}\times D^{2})\backslash\nu(P)\) of the pattern knot and the complement \(S^{3}\backslash\nu(K)\) of the companion knot. Therefore,
\[\widehat{CFD}(S^{3}\backslash\nu(P(K)))\cong\widehat{CFDA}(X(P))\boxtimes \widehat{CFD}(S^{3}\backslash\nu(K)),\]
and hence we can apply Theorem 1.4 to compute \(\widehat{CFD}(S^{3}\backslash\nu(P(K)))\). More concretely, given a doubly-pointed bordered diagram \(\mathcal{H}_{w,z}\) for the pattern knot \(P\), one can apply a standard stabilization-and-drilling procedure to obtain an arced bordered Heegaard diagram \(\mathcal{H}_{X(P)}\) for \(X(P)\), which is then paired with the immersed multicurve \(\alpha_{K}\) for \(K\) to obtain an immersed bordered Heegaard diagram \(\mathcal{H}_{X(P)}(\alpha_{K})\). The type D module \(\widehat{CFD}(\mathcal{H}_{X(P)}(\alpha_{K}))\) is then homotopy equivalent to a type D module of \(S^{3}\backslash\nu(P(K))\) by Theorem 1.4.
Second, one can define a weakly extended type D module \(\widetilde{CFD}(\mathcal{H}_{X(P)}(\alpha_{K}))\) of the pairing diagram \(\mathcal{H}_{X(P)}(\alpha_{K})\). As mentioned above, the underlying (hat-version) type D module \(\widehat{CFD}(\mathcal{H}_{X(P)}(\alpha_{K}))\) defined using the same diagram is homotopy equivalent to a type D module of \(S^{3}\backslash\nu(P(K))\). Since extensions of type D modules are unique up to homotopy, \(\widetilde{CFD}(\mathcal{H}_{X(P)}(\alpha_{K}))\) is homotopy equivalent to a weakly extended type D module of \(S^{3}\backslash\nu(P(K))\). Now Theorem 1.6 implies that the knot Floer chain complex \(CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{K}))\) is homotopy equivalent to the knot Floer chain complex of \(P(K)\).
Finally, the immersed doubly-pointed Heegaard diagram \(\mathcal{H}_{w,z}(\alpha_{K})\) can be obtained from \(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{K})\) via Heegaard moves. We show that knot Floer chain complexes defined from immersed doubly-pointed Heegaard diagrams that differ by Heegaard moves are homotopy equivalent, and hence \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}))\) is homotopy equivalent to a knot Floer chain complex of \(P(K)\).
See Figure 2 for an illustration of the operations on Heegaard diagrams involved in the strategy of the proof.
### Further discussions
#### 1.4.1. Immersed Heegaard diagrams
The work presented in this paper opens a new avenue for studying Heegaard Floer homology using immersed Heegaard diagrams. While the results in this paper already demonstrate this strategy can be useful for studying satellite operations, many questions remain that are worthy of further study. For example, a natural question is whether \(CFD\) can be defined for a more general class of immersed Heegaard diagrams in which more than one \(\alpha\) and/or \(\beta\) curve may be immersed, rather than just for a single \(\alpha\) curve as in the present setting. We expect this is possible but the technical difficulties will be greater.
As a special case, one could consider doubly pointed genus one immersed Heegaard diagrams in which both the \(\alpha\) and \(\beta\) curve are allowed to be immersed. In this case there are no technical difficulties in defining a Floer complex from such a diagram, as the construction is combinatorial. We expect that this class of diagrams will be useful for studying satellite knots with arbitrary patterns, so that it will not be necessary to restrict to \((1,1)\) patterns to perform computations in a
genus one surface. More precisely, an arbitrary pattern should give rise to an immersed (1,1) diagram which can be used to recover the action of the satellite operation on knot Floer complexes, and an analog of Theorem 1.1 should hold, so that pairing the immersed (1,1) diagram for the pattern with the immersed curve for the companion computes the knot Floer complex of the satellite. This will be explored in future work.
A related question concerns immersed diagrams for bimodules in bordered Floer theory. Stabilizing a (1,1) diagram gives a genus two arced bordered diagram for the complement of the pattern knot, which gives rise to a bordered Floer bimodule. In analogy to immersed (1,1) diagrams, we could consider arced bordered diagrams with an immersed \(\beta\) curve. We could ask which bimodules can be represented by such diagrams and if these diagrams are useful in determining how bimodules act on immersed curves. Just as modules over the torus algebra correspond to (decorated) immersed curves in the punctured torus \(T_{\bullet}\), it is expected that bimodules are related to immersed surfaces in \(T_{\bullet}\times T_{\bullet}\). It may be that arced bordered diagrams with immersed curves are helpful in understanding this connection.
#### 1.4.2. Pattern detection
In another direction, we can ask if the nice behavior demonstrated for one-bridge braid patterns extends to any other patterns. Recall that given two patterns \(P_{1}\) and \(P_{2}\), we define the composition \(P_{1}\circ P_{2}\) to be the pattern knot such that \((P_{1}\circ P_{2})(K)\) is \(P_{1}(P_{2}(K))\) for any companion knot \(K\). Theorem 1.3 implies that one-bridge-braid patterns and their compositions act as planar transforms on the (lifts of) immersed curves of companion knots. We wonder if this property characterizes these pattern knots.
**Question 1.7**.: Are one-bridge braid patterns and their compositions the only pattern knots that induce satellite operations that act as planar transforms on immersed curves in the marked torus?
More generally, one can ask the following question.
**Question 1.8**.: Which pattern knots are detected by the bordered Floer bimodule?
Pattern knot detection is closely related to the pursuit of understanding which links are detected by Heegaard Floer homology, as a pattern knot \(P\) is uniquely determined by the link \(L_{P}\) in \(S^{3}\) consisting of \(P\) and the meridian of the solid torus containing \(P\). For example, detection of \((n,1)\)-cable patterns would follow from the corresponding link detection result on \(T_{2,2n}\) by Binns-Martin [1, Theorem 3.2]. Note that the bimodule of a pattern knot complement is stronger than the knot/link Floer homology group of \(L_{P}\), so it is also natural to wonder if one can detect patterns using bimodules that are not seen by the link detection results.
### Organization
The rest of the paper can be divided into two parts.
The first part includes Section 2 to Section 4 and establishes the immersed Heegaard Floer theory outlined in the introduction. Section 2 defines bordered Heegaard Floer invariants of immersed bordered Heegaard diagrams. Section 3 defines knot Floer chain complexes of immersed doubly-pointed Heegaard diagrams. Section 4 introduces the pairing constructions and proves the corresponding pairing theorems, i.e., Theorem 1.4 and Theorem 1.6.
The second part concerns satellite knots and includes Section 5 and Section 6. Section 5 proves Theorem 1.1 by applying the machinery established in the previous sections. Section 6 applies Theorem 1.1 to study satellite knots with \((1,1)\) patterns,
in which we remove the \(z\)-passable assumption and analyze satellites with one-bridge-braid patterns and the Mazur pattern in detail.
### Acknowledgment
The authors would like to thank Robert Lipshitz, Adam Levine, and Liam Watson for helpful conversations while this work was in progress. The first author was partially supported by the Max Planck Institute for Mathematics and the Pacific Institute for the Mathematical Sciences during the preparation of this work; the research and findings may not reflect those of the institutes. The second author was partially supported by NSF grant DMS-2105501.
## 2. Bordered Floer invariants of immersed Heegaard diagrams
This section aims to define (weakly extended) type D structures1 of a certain class of immersed bordered Heegaard diagrams. Even though the Lagrangians are possibly immersed, we can still define such structures by counting holomorphic curves with smooth boundaries. The main technical complication compared to the embedded-Heegaard-diagram case is the possible appearance of boundary degenerations, which interfere with the usual proof that the differential squares to the desired element. Our method to deal with this issue is to employ a key advantage of Heegaard Floer homology: the boundary degenerations can be controlled by imposing certain diagrammatic conditions on the Heegaard diagrams (in Section 2.1), and in Section 4 we show these conditions are achievable by perturbing the \(\alpha\) curves on the Heegaard diagrams that we are interested in.
Footnote 1: Here, weakly extended type D structures are the same as type D structures with generalized coefficient maps appearing in [10, Chapter 11.6], and we call them weak since they are a quotient of the extended type D structures defined in [11].
This section can be viewed as adapting [10, Chapter 5 and Chapter 11.6] and [10, Errata] to our setting. Indeed, the local results on holomorphic curves, such as transversality and gluing results, carry over without changes. The main differences are that (1) the embedded index formula is different (see Section 2.6) and that (2) we need a parity count of boundary degenerations with corners at the self-intersection points of the immersed Lagrangian (see Section 2.7-2.8). We also add in more details on counting holomorphic curves with a single interior puncture, as sketched in [10, Errata]. The counting of such curves also appeared in [10], and the more general case where multiple interior punctures are allowed is studied in depth in the recent work by Lipshitz-Ozsvath-Thurston on defining a \(HF^{-}\) bordered invariant [10]; our analysis of the degeneration of one-punctured curves at east infinity is extracted from [11].
### Immersed bordered Heegaard diagrams
**Definition 2.1**.: A _local system_ over a manifold \(M\) consists of a vector bundle over \(M\) and a parallel transport of the vector bundle. A trivial local system is a local system where the vector bundle is of rank \(1\).
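To make this definition concrete in the case relevant below, where local systems live on the circle components of an immersed multicurve, recall that a flat bundle over a circle is determined up to isomorphism by the monodromy of its parallel transport. The sketch below (our own illustration, with \(\mathbb{F}_{2}\) coefficients as is standard in this setting) encodes such a local system as an invertible matrix over \(\mathbb{F}_{2}\), with the trivial local system corresponding to the \(1\times 1\) matrix \((1)\).

```python
# A local system on a circle component, over F_2, encoded by its monodromy matrix.
def is_invertible_mod2(m):
    """Gaussian elimination over F_2: decide whether a square 0/1 matrix is invertible."""
    m, n = [row[:] for row in m], len(m)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col]), None)
        if pivot is None:
            return False                      # no pivot in this column: not invertible
        m[col], m[pivot] = m[pivot], m[col]
        for r in range(n):
            if r != col and m[r][col]:
                m[r] = [a ^ b for a, b in zip(m[r], m[col])]
    return True

trivial_local_system = [[1]]              # rank 1, e.g. on the distinguished component
rank_two_example     = [[1, 1], [0, 1]]   # a nontrivial rank-2 local system
assert is_invertible_mod2(trivial_local_system) and is_invertible_mod2(rank_two_example)
```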
**Definition 2.2**.: An _immersed bordered Heegaard diagram_ is a quadruple \(\mathcal{H}=(\bar{\Sigma},\bar{\boldsymbol{\alpha}},\boldsymbol{\beta},z)\), where
1. \(\bar{\Sigma}\) is a compact, oriented surface of genus \(g\) with one boundary component;
2. \(\bar{\boldsymbol{\alpha}}=\bar{\boldsymbol{\alpha}}^{a}\cup\boldsymbol{\alpha }^{c}\). Here \(\bar{\boldsymbol{\alpha}}^{a}=\{\bar{\alpha_{1}}^{a},\bar{\alpha_{2}}^{a}\}\) is a set of two properly embedded arcs in \(\bar{\Sigma}\), and \(\boldsymbol{\alpha}^{c}=\{\alpha_{1},\dots,\alpha_{g-2},\alpha_{g-1}\}\), where \(\alpha_{1},\dots,\alpha_{g-2}\) are embedded circles in the interior of \(\bar{\Sigma}\) and \(\alpha_{g-1}=\{\alpha_{im}^{0},\dots,\alpha_{im}^{n}\}\) is an immersed
multicurve with a local system in the interior of \(\bar{\Sigma}\). We require that \(\alpha^{0}_{im}\) has a trivial local system and that \(\{\bar{\alpha}^{a}_{1},\bar{\alpha}^{a}_{2},\alpha_{1},\ldots,\alpha_{g-2}, \alpha^{0}_{im}\}\) are pairwise disjoint and homologically independent in \(H_{1}(\bar{\Sigma},\partial\bar{\Sigma};\mathbb{Z})\). We also require the curves \(\alpha^{1}_{im},\ldots,\alpha^{n}_{im}\) are homologically trivial in \(H_{1}(\bar{\Sigma},\mathbb{Z})\)2; Footnote 2: To maintain this property, we will not consider handleslides of homologically-trivial curves over other embedded \(g\)-curves.
3. \(\boldsymbol{\beta}=\{\beta_{1},\ldots,\beta_{g}\}\) consists of \(g\) pairwise disjoint, homologically independent circles embedded in the interior of \(\bar{\Sigma}\);
4. A base point \(z\in\partial\bar{\Sigma}\backslash\partial\bar{\boldsymbol{\alpha}}^{a}\).
_Remark 2.3_.: We also denote \(\alpha_{g-1}\) by \(\alpha_{im}\) and call \(\alpha^{0}_{im}\) the _distinguished component_ of \(\alpha_{im}\). Note if \(\alpha_{im}\) is embedded and consists of only the distinguished component, then the immersed bordered Heegaard diagram is just an ordinary bordered Heegaard diagram representing a bordered \(3\)-manifold with torus boundary.
_Remark 2.4_.: One can define immersed bordered Heegaard diagrams that have more than one immersed multicurve. We content ourselves with the one-immersed-multicurve setting, for such diagrams occur naturally in applications and we would also like to avoid the more tedious notation incurred by allowing more immersed multicurves. One can also work with immersed bordered diagrams that generalize regular bordered diagrams for \(3\)-manifolds with higher-genus or multi-component boundaries. Again, we avoid such cases for the sake of conciseness.
We need to impose additional conditions on immersed bordered diagrams to define (weakly extended) type D structures. We give some terminology before stating the conditions. Let \(\{\mathcal{D}_{i}\}\) be the closures of the regions in \(\bar{\Sigma}\backslash(\bar{\boldsymbol{\alpha}}\cup\boldsymbol{\beta})\). A _domain_ \(B\) is a formal linear combination of the \(\mathcal{D}_{i}\)'s, i.e., an object of the form \(\sum_{i}n_{i}\mathcal{D}_{i}\) for \(n_{i}\in\mathbb{Z}\), and the coefficient \(n_{i}\) is called the multiplicity of \(B\) at \(\mathcal{D}_{i}\). Given a point \(p\in\bar{\Sigma}\backslash(\bar{\boldsymbol{\alpha}}\cup\boldsymbol{\beta})\), \(n_{p}(B)\) denotes the multiplicity of \(B\) at the region containing \(p\). A domain is called positive if \(n_{i}\geq 0\) for all \(i\). Note that a domain \(B\) specifies an element \([B]\) in \(H_{2}(\bar{\Sigma},\partial\bar{\Sigma}\cup\boldsymbol{\beta}\cup\bar{\boldsymbol{\alpha}};\mathbb{Z})\). Let \(l:\coprod S^{1}\to\partial\bar{\Sigma}\cup\boldsymbol{\beta}\cup\bar{\boldsymbol{\alpha}}\subset\bar{\Sigma}\) be an oriented multicurve; note that this multicurve is not necessarily immersed: it can have corners at intersections of \(\bar{\alpha}\) with \(\beta\), intersections of \(\bar{\alpha}\) with \(\partial\bar{\Sigma}\), or self-intersections of \(\alpha_{im}\). A domain \(B\) is said to be bounded by \(l\) if \(\partial[B]=[l]\) in \(H_{1}(\partial\bar{\Sigma}\cup\boldsymbol{\beta}\cup\bar{\boldsymbol{\alpha}};\mathbb{Z})\). A domain \(B\) is called a _periodic domain_ if it is bounded by a (possibly empty) loop in \(\partial\bar{\Sigma}\cup\bar{\boldsymbol{\alpha}}^{a}\) together with some copies of the \(\beta\)-circles and the \(\alpha\)-circles, where we allow at most one component of \(\alpha_{im}\) to appear in \(\partial B\).
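The bookkeeping in these definitions is elementary; the following sketch (our own notation, with hypothetical region labels) records a domain as a map from regions to integer multiplicities and implements the positivity check and the local multiplicity \(n_{p}(B)\).

```python
# A domain B = sum_i n_i D_i, stored as {region label: multiplicity}; regions with
# multiplicity zero are simply omitted.
def add_domains(B1, B2):
    out = dict(B1)
    for region, n in B2.items():
        out[region] = out.get(region, 0) + n
        if out[region] == 0:
            del out[region]
    return out

def is_positive(B):
    return all(n >= 0 for n in B.values())

def local_multiplicity(B, p, region_of):
    """n_p(B): multiplicity of B at the region containing the point p."""
    return B.get(region_of(p), 0)

# Hypothetical regions 'D1'..'D3' of some diagram:
B = add_domains({'D1': 1, 'D2': 2}, {'D2': -2, 'D3': 1})
assert B == {'D1': 1, 'D3': 1} and is_positive(B)
```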
We will need some language to keep track of when the boundaries of domains include corners at self-intersection points of \(\alpha_{im}\). Note that, ignoring the local systems, we can identify \(\alpha_{im}=\{\alpha^{0}_{im},\ldots,\alpha^{n}_{im}\}\) with the image of an immersion \(f_{im}:\coprod^{n+1}S^{1}\to\bar{\Sigma}\).
**Definition 2.5**.: A closed curve \(l:S^{1}\to\alpha_{im}\subset\bar{\Sigma}\) is said to be _stay-on-track_ or _zero-cornered_ if it lifts to a map \(\tilde{l}:S^{1}\to S^{1}\subset\coprod S^{1}\) such that \(f_{im}\circ\tilde{l}=l\). Note that this is nearly the same as saying the curve \(l\) is immersed; the difference is that stay-on-track paths can stop and change direction along \(\alpha_{im}\). A curve \(l:S^{1}\to\alpha_{im}\) is said to be _\(n\)-cornered_ if there exist \(n\) points \(\xi_{1},\ldots,\xi_{n}\) in \(S^{1}\) dividing \(S^{1}\) into arcs \(a_{1},\ldots,a_{n}\) (ordered cyclically, with indices understood mod \(n\)) such that \(l|_{a_{i}}\) lifts through \(f_{im}\) for each \(i\), but \(l|_{a_{i}\cup a_{i+1}}\) does not. Note that \(l\) maps each \(\xi_{i}\) to some self-intersection point \(q_{i}\) of \(\alpha_{im}\) and makes a sharp turn at \(q_{i}\); we refer to this as a corner of the curve \(l\). We define stay-on-track and \(n\)-cornered arcs similarly.
See Figure 3 for an example of a \(3\)-cornered curve. Next, we define a class of domains on immersed bordered Heegaard diagrams.
**Definition 2.6**.: A domain \(B\) on a genus \(g\) immersed bordered Heegaard diagram is called a _stabilized teardrop_ if it satisfies the following conditions:
1. \(B\) is a positive domain bounded by \(\partial\bar{\Sigma}\) (with induced boundary orientation) and a one-cornered subloop of \(\alpha_{im}\). (In particular, \(B\) is a formal linear combination of regions of \(\bar{\Sigma}\backslash\bar{\boldsymbol{\alpha}}\).)
2. There exists a separating curve \(C\) of \(\bar{\Sigma}\) which does not intersect \(\bar{\boldsymbol{\alpha}}\), and the local multiplicity of \(B\) on the region containing \(C\) is \(1\).
3. Surgery on \((\bar{\Sigma},\bar{\boldsymbol{\alpha}})\) along \(C\) produces two oriented surfaces with \(\alpha\)-curves: \((E_{1},\bar{\boldsymbol{\alpha}}^{a},\alpha_{1},\ldots,\alpha_{g-2})\), where \(E_{1}\) is a genus-\((g-1)\) surface with one boundary component, and \((E_{2},\alpha_{im})\), where \(E_{2}\) is a genus-one surface. The domain \(B\) gives rise to two domains \(B_{1}\) and \(B_{2}\) on \(E_{1}\) and \(E_{2}\) respectively, such that \([B_{1}]=[E_{1}]\) and \(B_{2}\) is an immersed teardrop in \(E_{2}\) bounded by a one-cornered subloop of \(\alpha_{im}\). (Here, we also allow teardrops with a concave corner.)
A pictorial example of a stabilized teardrop is shown in Figure 3. For convenience, we introduce the following terminology.
**Definition 2.7**.: A domain \(B\) is said to be an \(n\)_-cornered \(\alpha\)-bounded domain_ if it is bounded by (possibly trivial) loops contained in \(\partial\bar{\Sigma}\cup\bar{\boldsymbol{\alpha}}^{a}\cup\alpha_{1}^{c}\cup \ldots\cup\alpha_{g-2}^{c}\) and an \(n\)-cornered loop contained in some connected component of \(\alpha_{im}\).
The condition on bordered diagrams needed to deal with boundary degenerations is the following.
Figure 3. Examples of a \(3\)-cornered curve and a stabilized teardrop on a genus-\(2\) immersed bordered Heegaard diagram, where the \(\beta\) curves are omitted. The \(3\)-cornered curve is the boundary of the shaded triangular region. The highlighted region is a stabilized teardrop, where the dashed circle is the separating curve.
**Definition 2.8**.: Given an immersed bordered Heegaard diagram \(\mathcal{H}=(\bar{\Sigma},\bar{\boldsymbol{\alpha}},\boldsymbol{\beta},z)\), \(\bar{\boldsymbol{\alpha}}\) is called _unobstructed_ if
1. there are no positive zero- or one-cornered \(\alpha\)-bounded domains \(B\) with \(n_{z}(B)=0\),
2. the only positive zero-cornered \(\alpha\)-bounded domain \(B\) with \(n_{z}(B)=1\) is \([\Sigma]\),
3. any positive one-cornered \(\alpha\)-bounded domain \(B\) with \(n_{z}(B)=1\) is a stabilized teardrop, and
4. any positive two-cornered \(\alpha\)-bounded domain \(B\) with \(n_{z}(B)=0\) is a bigon.
Abusing the terminology, we also say an immersed Heegaard diagram is unobstructed if its \(\bar{\boldsymbol{\alpha}}\) is so.
By a _Reeb chord_ in \((\partial\bar{\Sigma},\partial\bar{\boldsymbol{\alpha}}^{a})\), we mean an oriented chord on \(\partial\bar{\Sigma}\) whose endpoints are on \(\partial\bar{\boldsymbol{\alpha}}^{a}\) and whose orientation is induced from that of \(\partial\bar{\Sigma}\). When defining type D structures, it will be convenient to use Reeb chords in \((-\partial\bar{\Sigma},\partial\bar{\boldsymbol{\alpha}}^{a})\). Let \(\rho_{0},\rho_{1},\rho_{2},\rho_{3}\) denote the Reeb chords corresponding to the four arcs in \(-\partial\bar{\Sigma}\backslash\partial\bar{\boldsymbol{\alpha}}^{a}\), where \(\rho_{0}\) contains the base point \(z\) and the index increases according to the orientation of \(-\partial\bar{\Sigma}\). We call these four Reeb chords the _elementary Reeb chords_. Other Reeb chords in \((-\partial\bar{\Sigma},\partial\bar{\boldsymbol{\alpha}}^{a})\) can be obtained by concatenation of the four elementary Reeb chords. For example, \(\rho_{12}\) denotes the concatenation of \(\rho_{1}\) and \(\rho_{2}\). We use the notation \(-\rho_{I}\) to indicate the orientation reversal of \(\rho_{I}\), where \(I\) is a string of letters in \(\{0,1,2,3\}\) for which the corresponding elementary chords can be concatenated; note that \(-\rho_{I}\) is a Reeb chord in \((\partial\bar{\Sigma},\partial\bar{\boldsymbol{\alpha}}^{a})\).
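For instance, concatenating consecutive elementary Reeb chords produces chords such as \(\rho_{23}\) and \(\rho_{123}\), as well as the four chords \(\rho_{0123}\), \(\rho_{1230}\), \(\rho_{2301}\), and \(\rho_{3012}\) obtained by concatenating all four elementary chords; the orientation reversals \(-\rho_{0123}\), \(-\rho_{1230}\), \(-\rho_{2301}\), \(-\rho_{3012}\) are exactly the Reeb chords that will label orbit components in Section 2.3.1 below.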
We shall need two types of admissibility on the bordered Heegaard diagrams: the first one is needed for defining the (extended) type D structures, and the second one is needed for the pairing operation studied in Section 4.7. Given a domain \(B\), denote by \(n_{-\rho_{i}}(B)\) (\(i=0,1,2,3\)) the local multiplicity of \(B\) in the region containing the Reeb chord \(-\rho_{i}\). (In particular, \(n_{-\rho_{0}}(B)=n_{z}(B)\).) A periodic domain \(B\) is called _provincial_ if \(n_{-\rho_{i}}(B)=0\) for all \(i=0,1,2,3\).
**Definition 2.9**.: An immersed bordered Heegaard diagram is _provincially admissible_ if all non-trivial provincial periodic domains have both positive and negative local multiplicities.
**Definition 2.10**.: An immersed bordered Heegaard diagram is _bi-admissible_ if any non-trivial periodic domain \(B\) satisfying \(n_{-\rho_{0}}(B)=n_{-\rho_{1}}(B)=0\) or \(n_{-\rho_{2}}(B)=n_{-\rho_{3}}(B)=0\) has both positive and negative local multiplicities.
Note that bi-admissibility implies provincial admissibility.
### Moduli spaces of stay-on-track holomorphic curves
In this subsection, we set up the moduli spaces that we use to define the (weakly extended) type D structures. Roughly, the moduli spaces consist of pseudo-holomorphic curves in \(\Sigma\times[0,1]\times\mathbb{R}\). We define them by modifying the corresponding definitions in Section 5.2 of [18] with two main differences: The first one is a new constraint on the boundaries with respect to \(\alpha_{im}\), and the second one is that we include pseudo-holomorphic curves with a single interior puncture for the sake of defining a weakly extended type D structure.
#### 2.2.1. Definition of moduli spaces of holomorphic curves
**Definition 2.11**.: A _decorated source \(S^{\circ}\) of type \(0\)-\(P\)_ is a smooth Riemann surface \(S\) with boundary such that
1. it has boundary punctures and has no interior punctures,
2. there is a labeling of each puncture by \(+\), \(-\), or \(e\), and
3. there is a labeling of each \(e\) puncture by a Reeb chord on the boundary of the immersed bordered Heegaard diagram.
A _decorated source \(S^{\circ}\) of type \(1\)-\(P\)_ is a smooth Riemann surface \(S\) with boundary such that
1. it has boundary punctures and a single interior puncture,
2. there is a labeling of each boundary puncture by \(+\) or \(-\), and
3. there is a labeling of the interior puncture by \(e\).
By a decorated source, we mean either a decorated source of type \(0\)-\(P\) or a decorated source of type \(1\)-\(P\).
Let \(\mathcal{H}=(\bar{\Sigma},\bar{\boldsymbol{\alpha}},\boldsymbol{\beta},z)\) be an immersed bordered Heegaard diagram. Let \(\Sigma\) denote the interior of \(\bar{\Sigma}\). Equip the target manifold \(\Sigma\times[0,1]\times\mathbb{R}\) with an admissible almost complex structure \(J\) as in Definition 5.1 of [1]. Let \(\pi_{\mathbb{D}}\), \(\pi_{\Sigma}\), \(s\), and \(t\) denote the canonical projection maps from \(\Sigma\times[0,1]\times\mathbb{R}\) to \([0,1]\times\mathbb{R}\), \(\Sigma\), \([0,1]\), and \(\mathbb{R}\) respectively. We will count maps
\[u:(S,\partial S)\to(\Sigma\times[0,1]\times\mathbb{R},\boldsymbol{\beta}\times \{0\}\times\mathbb{R},\bar{\boldsymbol{\alpha}}\times\{1\}\times\mathbb{R})\]
from decorated sources to the target manifold satisfying the following conditions:
1. \(u\) is \((j,J)\)-holomorphic, where \(j\) is a complex structure on the surface \(S\).
2. \(u\) is proper.
3. \(u\) extends to a proper map \(u_{\bar{e}}:S_{\bar{e}}\to\Sigma_{\bar{e}}\times[0,1]\times\mathbb{R}\), where \(S_{\bar{e}}\) and \(\Sigma_{\bar{e}}\) are surfaces obtained by filling in the corresponding east puncture(s).
4. the map \(u_{\bar{e}}\) has finite energy in the sense of [1].
5. \(\pi_{\mathbb{D}}\circ u_{\bar{e}}\) is a \(g\)-fold branched cover.
6. \(t\circ u\) approaches \(+\infty\) at \(+\) punctures.
7. \(t\circ u\) approaches \(-\infty\) at \(-\) punctures.
8. \(\pi_{\Sigma}\circ u\) approaches the labeled Reeb chord at each boundary \(e\) puncture.
9. \(\pi_{\Sigma}\circ u\) covers each of the regions next to \(\bar{e}\in\Sigma_{\bar{e}}\) at most once.
10. (Strong boundary monotonicity) For each \(t\in\mathbb{R}\), each of \(u^{-1}(\beta_{i}\times\{0\}\times\{t\})\) (\(i=1,\dots,g\)) and \(u^{-1}(\alpha_{i}^{c}\times\{1\}\times\{t\})\) (\(i=1,\dots,g-1\)) consists of exactly one point, and \(u^{-1}(\alpha_{i}^{a}\times\{1\}\times\{t\})\) (\(i=1,2\)) consists of at most one point3.
Footnote 3: In [1], a weak boundary monotonicity condition was introduced as well as the strong boundary monotonicity. When restricting to torus-boundary bordered manifolds, however, these two conditions are equivalent. So, we only state the strong boundary monotonicity condition here.
11. (Stay-on-track boundary condition) Let \(A\) be the boundary component of \(S\) that is mapped to \(\alpha_{im}\times\{1\}\times\mathbb{R}\). Then \(\pi_{\Sigma}\circ u|_{A}\) is stay-on-track.
_Remark 2.12_.: Only (M-9) and (M-11) are different from the corresponding conditions in [1]. We impose (M-9) since we aim to define an extended type D structure, which will not need holomorphic curves covering the boundary regions multiple times.
Given an immersed (bordered) Heegaard diagram, generators and homology classes of maps connecting generators are defined similarly as in the embedded-\(\alpha\)-curve case.
**Definition 2.13**.: Let \(\mathbf{x}\) and \(\mathbf{y}\) be two generators and let \(B\in\widetilde{\pi}_{2}(\mathbf{x},\mathbf{y})\) be a homology class connecting \(\mathbf{x}\) to \(\mathbf{y}\). \(\widetilde{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright})\) is defined to be the moduli space of holomorphic curves with decorated source \(S^{\triangleright}\), satisfying (M-1)-(M-11), asymptotic to \(\mathbf{x}\) at \(-\infty\) and \(\mathbf{y}\) at \(+\infty\), and inducing the homology class \(B\).
Let \(E(S^{\triangleright})\) be the set of east punctures of \(S^{\triangleright}\) lying on the boundary. Let \(\widetilde{e}v\colon\widetilde{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{ \triangleright})\to\mathbb{R}^{|E(S^{\triangleright})|}\) be the evaluation map given by the values of \(t\circ u_{\bar{e}}\) at the east punctures; the values are called heights of the east punctures.
**Definition 2.14**.: Let \(P=\{P_{i}\}\) be a partition of \(E\). Then
\[\widetilde{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};P)\coloneqq \widetilde{e}v^{-1}(\Delta_{P}),\]
where \(\Delta_{P}:=\{(x_{p})\in\mathbb{R}^{|E|}\mid x_{p}=x_{q}\text{ if }p,q\in P_{i} \text{ for some i}\}\).
**Definition 2.15**.: Let \(\overrightarrow{P}=(P_{1},\dots,P_{k})\) be an ordered partition of \(E\), and let \(P\) denote the corresponding underlying unordered partition. Define \(\widetilde{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};\overrightarrow{P})\) to be \(\{u\in\widetilde{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};P)\mid t\circ u(p)<t\circ u(q)\text{ for }p\in P_{i},\text{ }q\in P_{i^{\prime}},\text{ and }i<i^{\prime}\}\).
There is an \(\mathbb{R}\)-action on the above moduli spaces given by translations along the \(\mathbb{R}\)-coordinate of \(\Sigma\times[0,1]\times\mathbb{R}\). _The reduced moduli spaces_ are the quotient of the relevant moduli spaces by the \(\mathbb{R}\)-action; they are denoted by \(\mathcal{M}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright})\), \(\mathcal{M}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};P)\), and \(\mathcal{M}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};\overrightarrow{P})\), respectively. The evaluation maps \(\widetilde{e}v\) induce maps \(ev\) from the reduced moduli spaces to \(\mathbb{R}^{|E(S^{\triangleright})|}/\mathbb{R}\), which record the relative heights between boundary east punctures.
**Notation.** When we need to distinguish moduli spaces of 0-P holomorphic curves without east punctures and moduli spaces of 1-P holomorphic curves, we will use the notation \(\widetilde{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};U)\) to emphasize that the source \(S^{\triangleright}\) is of type 1-P.
#### 2.2.2. Regularity and the expected dimension
**Proposition 2.16**.: _For a generic admissible almost complex structure on \(\Sigma\times[0,1]\times\mathbb{R}\), the moduli space \(\widetilde{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};P)\) is transversally cut out._
Proof.: See Proposition 5.6 of [11]. We point out that the \(\alpha\) curves being immersed does not affect the usual proof. When analyzing the linearization of the \(\bar{\partial}\)-operator in the standard proof of such a result, one would be working with a pull-back bundle over \(S\), on which one will not see the immersed boundary condition anymore.
**Proposition 2.17**.: _Let \(B\in\widetilde{\pi}_{2}(\mathbf{x},\mathbf{y})\). Let \(g\) denote the genus of the bordered Heegaard diagram. Let \(\chi(\cdot)\) and \(e(\cdot)\) denote the Euler number and Euler measure, respectively._
1. _Let_ \(S^{\triangleright}_{0}\) _be a decorated source of 0-P. Then the expected dimension of the moduli space_ \(\widetilde{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright}_{0};P)\) _is_ \[\text{ind}(B,S_{0},P)=g-\chi(S_{0})+2e(B)+|P|.\]
2. _Let_ \(S^{\triangleright}_{1}\) _be a decorated source of 1-P. Then the expected dimension of the moduli space_ \(\widetilde{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright}_{1};U)\) _is_ \[\text{ind}(B,S_{1})=g-\chi(S_{1})+2e(B)+1.\]
Proof.: For (1) see Proposition 5.8 of [11]. For the same reason mentioned in the proof of the previous proposition, our \(\alpha\)-curves being immersed does not affect the proof.
For (2), note that using the removable singularity theorem we can identify holomorphic curves in \(\widetilde{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S_{1}^{\triangleright};U)\) with holomorphic curves in \(\Sigma_{\bar{e}}\times[0,1]\times\mathbb{R}\) that intersect \(\{e\}\times[0,1]\times\mathbb{R}\) geometrically once. The formula then follows from the index formula in the closed-Heegaard-diagram case given in Section 4.1 of [10]. (Again, \(\alpha_{im}\) being immersed does not affect the formula.)
_Remark 2.18_.: If we regard the interior puncture of a 1-P holomorphic curve as being asymptotic to a single closed Reeb orbit, denoted by \(U\), then (1) and (2) in the above proposition may be unified using the formula in (1).
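To illustrate the formula in (1), consider the model case of a provincial domain \(B\) given by a bigon with two convex corners, so that \(e(B)=\tfrac{1}{2}\), together with a 0-P source \(S_{0}\) consisting of a disk and \(g-1\) trivial strips, so that \(\chi(S_{0})=g\) and \(P=\emptyset\). Then
\[\text{ind}(B,S_{0},\emptyset)=g-g+2\cdot\tfrac{1}{2}+0=1,\]
consistent with such a bigon contributing a one-dimensional family of curves to \(\widetilde{\mathcal{M}}^{B}\), i.e., a single curve in the reduced moduli space.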
### Compactification
The moduli spaces defined in Section 2.2 admit compactifications similar to those in [11]. The overall idea is that a sequence of holomorphic curves in \(\Sigma\times[0,1]\times\mathbb{R}\) may converge to a holomorphic building in \(\Sigma\times[0,1]\times\mathbb{R}\) together with some holomorphic curves at east-\(\infty\) attached to it; such nodal holomorphic objects are called _holomorphic combs_. In our setup, the degenerations at east-\(\infty\) are the same as those when the \(\alpha\)-curves are embedded; we recollect the relevant material (with straightforward modifications to accommodate 1-P holomorphic curves) in Subsection 2.3.1. However, the immersed \(\alpha\) curves do complicate the situation. For example, a limit holomorphic building in \(\Sigma\times[0,1]\times\mathbb{R}\) may have corners at self-intersection points of \(\alpha_{im}\). We will give a precise description of this phenomenon in Subsection 2.3.2.
#### 2.3.1. Holomorphic curves in the end at east-infinity
Let \(Z\) denote the oriented boundary \(\partial\bar{\Sigma}\) of the bordered Heegaard surface. We define the moduli spaces of holomorphic curves in the east end \(\mathbb{R}\times Z\times[0,1]\times\mathbb{R}\). They host possible degenerations of the limits of holomorphic curves at east-\(\infty\). Since the closed \(\alpha\) curves do not approach the cylindrical end at east-\(\infty\), these moduli spaces are not affected by the closed \(\alpha\) curves being immersed and their definition is the same as the usual embedded case. We first specify the sources of the holomorphic curves.
**Definition 2.19**.: A _bi-decorated source_\(T^{\circ}\) is a smooth Riemann surface \(T\) with boundary such that
1. it has boundary punctures and at most one interior puncture,
2. the boundary punctures are labeled by \(e\) or \(w\),
3. the interior puncture, if it exists, is labeled by \(e\), and
4. the boundary punctures are also labeled by Reeb chords.
Equip \(\mathbb{R}\times Z\times[0,1]\times\mathbb{R}\) with a split almost complex structure \(J=j_{\mathbb{R}\times Z}\times j_{\mathbb{D}}\). The four points \(\mathbf{a}=\partial\boldsymbol{\alpha}^{a}\) on \(Z\) give rise to four Lagrangians \(\mathbb{R}\times\mathbf{a}\times\{1\}\times\mathbb{R}\).
**Definition 2.20**.: Given a bi-decorated source \(T^{\circ}\), define \(\widetilde{\mathcal{N}}(T^{\circ})\) to be the moduli spaces of maps \(v:(T,\partial T)\to(\mathbb{R}\times Z\times[0,1]\times\mathbb{R},\mathbb{R} \times\mathbf{a}\times\{1\}\times\mathbb{R})\) satisfying the following conditions:
1. \(v\) is \((j,J)\)-holomorphic with respect to some complex structure \(j\) on \(T\).
2. \(v\) is proper.
3. Let \(T_{\bar{e}}\) and \((\mathbb{R}\times Z)_{\bar{e}}\) denote the spaces obtained from \(T\) and \(\mathbb{R}\times Z\) by filling in the east punctures. Then \(v\) extends to a proper map \(v_{\bar{e}}:T_{\bar{e}}\to(\mathbb{R}\times Z)_{\bar{e}}\times[0,1]\times\mathbb{R}\) such that \(\pi_{\Sigma}\circ v_{\bar{e}}(e)=e\).4 Footnote 4: This condition excludes mapping the interior puncture end to the west infinity end.
4. At each boundary \(w\) puncture, \(\pi_{\Sigma}\circ v\) approaches the corresponding Reeb chord in \(\{-\infty\}\times Z\) that labels \(w\).
5. At each boundary \(e\) puncture, \(\pi_{\Sigma}\circ v\) approaches the corresponding Reeb chord in \(\{+\infty\}\times Z\) that labels \(e\).
We have the following proposition regarding the regularity of the moduli spaces.
**Proposition 2.21** (Proposition 5.16 of [18]).: _If all components of a bi-decorated source \(T^{\diamond}\) are topological disks (possibly with an interior puncture), then \(\widetilde{\mathcal{N}}(T^{\diamond})\) is transversally cut out for any split almost complex structure on \(\mathbb{R}\times Z\times[0,1]\times\mathbb{R}\)._
The heights of \(v\in\widetilde{\mathcal{N}}(T^{\diamond})\) at east or west boundary punctures induce evaluation functions \(\widetilde{ev}_{e}\) and \(\widetilde{ev}_{w}\). Given partitions \(P_{e}\) and \(P_{w}\) of the boundary east and west punctures, one defines \(\widetilde{\mathcal{N}}(T^{\diamond};P_{e};P_{w})\) in an obvious way. One defines the reduced moduli space \(\mathcal{N}\) by taking the quotient by the \(\mathbb{R}\times\mathbb{R}\)-action induced by translations in both \(\mathbb{R}\)-directions of \(\mathbb{R}\times Z\times[0,1]\times\mathbb{R}\). The evaluation maps \(\widetilde{ev}_{e}\) and \(\widetilde{ev}_{w}\) also descend to \(\mathcal{N}\), taking values in \(\mathbb{R}^{|E(T^{\diamond})|}/\mathbb{R}\) and \(\mathbb{R}^{|W(T^{\diamond})|}/\mathbb{R}\), respectively.
Given \(u\in\mathcal{N}(T^{\diamond})\), the open mapping theorem implies that the map \(\pi_{\mathbb{D}}\circ u\) is constant on connected components of \(T^{\diamond}\) (taking values in \(\{1\}\times\mathbb{R}\)). So, the map \(u\) is determined by its projection \(\pi_{\Sigma}\circ u\) and \(t\)-coordinates on connected components of \(T\). Of primary interest to us are the following three types of holomorphic curves.
* A _join component_ of a bi-decorated source is a topological disk with three boundary punctures, and the punctures are labeled by \((e,\sigma)\), \((w,\sigma_{1})\), and \((w,\sigma_{2})\) counter-clockwise, where the Reeb chords satisfy the relation \(\sigma=\sigma_{1}\uplus\sigma_{2}\) (here \(\uplus\) denotes concatenation of Reeb chords). A _trivial component_ of a bi-decorated source is a topological disk with two boundary punctures, one \(w\) puncture and one \(e\) puncture, and both are labeled by the same Reeb chord. Holomorphic maps from a join component or a trivial component to \(\mathbb{R}\times Z\times[0,1]\times\mathbb{R}\) exist and are unique up to translations. A _join curve_ is a holomorphic curve with a bi-decorated source consisting of a single join component and possibly some trivial components.
* A _split component_ of a bi-decorated source is a topological disk with three boundary punctures, where the punctures are labeled counter-clockwise by \((e,\sigma_{1})\), \((e,\sigma_{2})\), and \((w,\sigma)\) with \(\sigma=\sigma_{1}\uplus\sigma_{2}\). Holomorphic maps from a split component to \(\mathbb{R}\times Z\times[0,1]\times\mathbb{R}\) exist and are unique up to translations. A _split curve_ is a holomorphic curve with a bi-decorated source consisting of one or more split components and possibly some trivial components.
* An _orbit component_ of a bi-decorated source is a topological disk with a single boundary \(w\) puncture labeled by \(\sigma\in\{-\rho_{0123},-\rho_{1230},-\rho_{2301},-\rho_{3012}\}\) and a single interior \(e\) puncture. Holomorphic maps from an orbit component to \(\mathbb{R}\times Z\times[0,1]\times\mathbb{R}\) exist and are unique up to translations. An _orbit curve_ is a holomorphic curve with a bi-decorated source consisting of a single orbit component and possibly some trivial components.
#### 2.3.2. Compactification by Holomorphic combs
We describe holomorphic combs in this subsection. We begin with a description of nodal holomorphic curves.
**Definition 2.22**.: A _nodal decorated source_\(S^{\triangleright}\) is a decorated source together with a set of unordered pairs of marked points \(D=\{\{\overline{d}_{1},\underline{d}_{1}\},\{\overline{d}_{2},\underline{d}_{2} \},\ldots,\{\overline{d}_{k},\underline{d}_{k}\}\}\). The points in \(D\) are called nodes.
**Definition 2.23**.: Let \(\boldsymbol{x}\) and \(\boldsymbol{y}\) be generators and let \(B\in\widetilde{\pi}_{2}(\boldsymbol{x},\boldsymbol{y})\). Let \(S^{\triangleright}\) be a nodal decorated source. Let \(S_{i}\) be the components of \(S\backslash\{\text{nodes}\}\). Then a nodal holomorphic curve \(u\) with source \(S^{\triangleright}\) in the homology class of \(B\) is a continuous map
\[u:(S,\partial S)\to(\Sigma\times[0,1]\times\mathbb{R},\beta\times\{0\}\times \mathbb{R},\alpha\times\{1\}\times\mathbb{R})\]
such that
1. the restriction of \(u\) to each \(S_{i}\) is a map satisfying conditions (M-1)-(M-11) except for (M-5),
2. \(\lim_{p\to\overline{d}_{i}}u(p)=\lim_{p\to\underline{d}_{i}}u(p)\) for every pair of nodes,
3. \(u\) is asymptotic to \(\boldsymbol{x}\) at \(-\infty\) and \(\boldsymbol{y}\) at \(\infty\), and
4. \(u\) induces the homology class specified by \(B\).
The nodes in a nodal source \(S^{\triangleright}\) induce punctures on the connected components \(S_{i}\) of \(S\backslash\{\text{nodes}\}\); they can be interior punctures as well as boundary punctures. Note that \(u|_{S_{i}}\) extends across these punctures continuously. We further divide the boundary punctures induced by the nodes into two types.
**Definition 2.24**.: Let \(u\) be a nodal holomorphic curve. Let \(d\) be a boundary puncture induced by a node on a component \(S_{i}\) of the nodal Riemann surface. Let \(l_{1}\) and \(l_{2}\) denote the components of \(\partial S_{i}\) adjacent to \(d\). If the path \(\pi_{\Sigma}\circ u|_{l_{1}\cup\{d\}\cup l_{2}}\) is stay-on-track in the sense of (M-11), then we say \(d\) is a _type I puncture_; otherwise, we say \(d\) is a _type II puncture_.
There are only type I punctures when the attaching curves are embedded. Type II punctures naturally appear in our setup since we have an immersed \(\alpha\) curve.
One can still define the evaluation map from the space of nodal holomorphic maps to \(\mathbb{R}^{|E(S^{\triangleright})|}\), where the value at a nodal holomorphic curve is the heights of the east punctures.
**Definition 2.25**.: A _holomorphic story_ is an ordered \((k+1)\)-tuple \((u,v_{1},\ldots,v_{k})\) for some \(k\geq 0\) such that
1. \(u\) is a (possibly nodal) holomorphic curve in \(\Sigma\times[0,1]\times\mathbb{R}\),
2. each \(v_{i}\)\((i=1,2,\ldots,k)\) is a holomorphic curve in \(\mathbb{R}\times\mathcal{Z}\times[0,1]\times\mathbb{R}\),
3. the boundary east punctures of \(u\) match up with the boundary west punctures of \(v_{1}\) (i.e., the two sets of punctures are identified by a one-to-one map such that both the Reeb-chord labels and the relative heights are the same under this one-to-one correspondence), and
4. the boundary east punctures of \(v_{i}\) match up with the boundary west punctures of \(v_{i+1}\) for \(i=1,2,\ldots,k-1\).
**Definition 2.26**.: Let \(N\geq 1\) be an integer, and let \(\boldsymbol{x}\) and \(\boldsymbol{y}\) be two generators. A _holomorphic comb_ of height \(N\) connecting \(\boldsymbol{x}\) to \(\boldsymbol{y}\) is a sequence of holomorphic stories \((u_{i},v_{i,1},\ldots,v_{i,k_{i}})\), \(i=1,2,\ldots,N\), such that \(u_{i}\) is a (possibly nodal) stable curve in \(\mathcal{M}^{B_{i}}(\boldsymbol{x}_{i},\boldsymbol{x}_{i+1};S^{\triangleright}_ {i})\) for some generators \(\boldsymbol{x}_{1},\ldots,\boldsymbol{x}_{N+1}\) such that \(\boldsymbol{x}_{1}=\boldsymbol{x}\) and \(\boldsymbol{x}_{N+1}=\boldsymbol{y}\).
Given a holomorphic comb, the underlying (nodal) decorated sources and bi-decorated sources can be glued up and deformed in an obvious way to give a smooth decorated source; it is called the _preglued source_ of the holomorphic comb.
**Definition 2.27**.: Given generators \(\mathbf{x}\) and \(\mathbf{y}\) and a homology class \(B\in\tilde{\pi}_{2}(\mathbf{x},\mathbf{y})\), \(\overline{\overline{\mathcal{M}}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright})\) is defined to be the space of all holomorphic combs with preglued source \(S^{\triangleright}\), in the homology class of \(B\), and connecting \(\mathbf{x}\) to \(\mathbf{y}\). \(\overline{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright})\) is defined to be the closure of \(\mathcal{M}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright})\) in \(\overline{\overline{\mathcal{M}}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright})\). \(\overline{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};P)\) and \(\overline{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};\overrightarrow{P})\) are defined to be the closures of \(\mathcal{M}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};P)\) and \(\mathcal{M}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};\overrightarrow{P})\) in \(\overline{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright})\), respectively.
The compactness result is stated below.
**Proposition 2.28**.: _The moduli space \(\overline{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright})\) is compact. The same statement holds for \(\overline{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};P)\) and \(\overline{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright};\overrightarrow{P})\)._
We omit the proof of the above proposition and remark that the proof of [1, Proposition 5.24] adapts to our setup easily.
### Gluing results
As with the regularity results, gluing results in pseudo-holomorphic curve theory are proved by analyzing the \(\bar{\partial}\)-operator over certain spaces of sections of pull-back bundles over the underlying source surfaces. In particular, having immersed \(\alpha\) curves does not affect the proof of such results. We hence recall the following results that we shall need without giving the proofs.
**Proposition 2.29** (Proposition 5.30 of [1]).: _Let \((u_{1},u_{2})\) be a two-story holomorphic building with \(u_{1}\in\mathcal{M}^{B_{1}}(\mathbf{x},\mathbf{y};S^{\triangleright}_{1};P_{1})\) and \(u_{2}\in\mathcal{M}^{B_{2}}(\mathbf{y},\mathbf{z};S^{\triangleright}_{2};P_{2})\). Assume the moduli spaces are transversally cut out. Then for sufficiently small neighborhoods \(U_{i}\) of \(u_{i}\) (\(i=1,2\)), there is a neighborhood of \((u_{1},u_{2})\) in \(\overline{\mathcal{M}}^{B_{1}+B_{2}}(\mathbf{x},\mathbf{z};S^{\triangleright}_{1}\natural S^{\triangleright}_{2};P_{1}\cup P_{2})\) homeomorphic to \(U_{1}\times U_{2}\times[0,1)\)._
**Definition 2.30**.: A holomorphic comb is said to be _simple_ if it only has a single story and it is of the form \((u,v_{1})\), where \(u\) is a non-nodal holomorphic curve.
**Proposition 2.31** (Proposition 5.31 of [1]).: _Let \((u,v)\) be a simple holomorphic comb with \(u\in\mathcal{M}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright})\) and \(v\in\mathcal{N}(T^{\circ};P_{e})\). Let \(m\) denote the number of east punctures of \(S^{\triangleright}\). Assume the moduli spaces are transversally cut out, and the evaluation maps \(ev:\mathcal{M}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright})\to\mathbb{R}^{m}/\mathbb{R}\) and \(ev_{w}:\mathcal{N}(T^{\circ};P_{e})\to\mathbb{R}^{m}/\mathbb{R}\) are transverse at \((u,v)\). Then for sufficiently small neighborhoods \(U_{u}\) of \(u\) and \(U_{v}\) of \(v\), there is a neighborhood of \((u,v)\) in \(\overline{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};S^{\triangleright}\natural T^{\circ};P_{e})\) homeomorphic to \(U_{u}\times_{ev}U_{v}\times[0,1)\)._
### Degeneration of moduli spaces
This subsection provides constraints on the degeneration in 1-dimensional moduli spaces using the index formulas and strong boundary monotonicity. The results here are simpler than the corresponding results in Section 5.6 of [1] since we restrict to the torus-boundary case. However, as mentioned earlier, nodal curves with corners at self-intersection points of \(\alpha_{im}\) may occur in the compactification; we defer the further analysis of these degenerations to later subsections. Readers who wish to skip the details in this subsection are referred to Definition 2.34 and Proposition 2.40 for a quick summary.
The index formulas in Proposition 2.17 lead to the following constraints on the ends of moduli spaces of 0-P curves.
**Proposition 2.32** (cf. Proposition 5.43 of [1]).: _Let \(B\in\tilde{\pi}_{2}(\mathbf{x},\mathbf{y})\), let \(S^{\triangleright}\) be a decorated source of 0-P, and let \(P\) be a discrete partition of the east punctures of \(S^{\triangleright}\)._
_Suppose that \(Ind(B,S^{\triangleright},P)=2\). Then for a generic almost complex structure \(J\), every holomorphic comb in \(\partial\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright} ;P)=\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright} ;P)-\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright};P)\) has one of the following forms:_
1. _a two-story holomorphic building_ \((u_{1},u_{2})\)_;_
2. _a simple holomorphic comb_ \((u;v)\) _where_ \(v\) _is a join curve;_
3. _a simple holomorphic comb_ \((u;v)\) _where_ \(v\) _is a split curve with a single split component;_
4. _a nodal holomorphic comb, obtained by degenerating some arcs with ends on_ \(\partial S\)_._
Proof.: This proposition is proved the same way as Proposition 5.43 of [1]: It is a consequence of compactness, transversality, index formula, and gluing results. Note that our statement is simpler as we restrict to a discrete partition \(P\): the shuffle-curve end and the multi-component split-curve end that appear in [1] do not occur in our setting.
For moduli spaces of 1-P curves, we have the following proposition.
**Proposition 2.33**.: _Let \(B\in\tilde{\pi}_{2}(\boldsymbol{x},\boldsymbol{y})\) and let \(S^{\triangleright}\) be a decorated source of type 1-P. Suppose \(Ind(B,S^{\triangleright})=2\). Then for a generic almost complex structure \(J\), every holomorphic comb in \(\partial\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright};U)\) has one of the following forms:_
1. _a two-story holomorphic building_ \((u_{1},u_{2})\)_;_
2. _a simple holomorphic comb_ \((u;v)\) _where_ \(v\) _is an orbit curve;_
3. _a nodal holomorphic comb, obtained by degenerating some arcs with boundary on_ \(\partial S\)_._
Proof.: Suppose a given holomorphic comb in \(\partial\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright};U)\) is not nodal; then it can only have degenerations at east infinity and level splittings. The form of such holomorphic combs is analyzed in the proof of Proposition 42 in [11]; the results are precisely items (1) and (2) in the above statement.
We will only be interested in those moduli spaces prescribed by an ordered discrete partition \(\overrightarrow{P}\); this is a vacuous requirement for 1-P holomorphic curves. This condition is also automatic for 0-P curves; it follows easily from boundary monotonicity and holomorphicity (see, e.g., Lemma 5.51 of [1]).
The strong boundary monotonicity imposes further constraints on the degeneration of one-dimensional moduli spaces. We first describe the constraints on nodal holomorphic curves. Recall there are three types of nodal holomorphic curves.
**Definition 2.34**.: A nodal holomorphic comb \(u\) is called a _boundary degeneration_ if it has an irreducible component \(S_{0}\) that contains no \(\pm\)-punctures and \(\pi_{\Sigma}\circ u|_{S_{0}}\) is non-constant.
**Definition 2.35**.: A _boundary double point_ is a holomorphic comb with a boundary node \(p\) such that the projection to \([0,1]\times\mathbb{R}\) is not constant near either preimage point \(p_{1}\) or \(p_{2}\) of \(p\) in the normalization of the nodal curve.
**Definition 2.36**.: A holomorphic comb \(u\) is called _haunted_ if there is a component \(S_{0}\) of the source such that \(u|_{S_{0}}\) is constant.
**Proposition 2.37**.: _Let \(J\) be a generic almost complex structure. For one-dimensional moduli spaces \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright}_{0};\overrightarrow{P})\) of 0-P curves and one-dimensional moduli spaces \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright}_{1};U)\) of 1-P curves, boundary double points and haunted curves do not appear in \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright}_{0};\overrightarrow{P})\) and \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright}_{1};U)\)._
Proof.: This is Lemma 5.56 and Lemma 5.57 of [1].
In summary, the only nodal holomorphic combs that could possibly appear are boundary degenerations. We defer the analysis of such degenerations to later subsections. Instead, we conclude this subsection with some further constraints in \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright}_{0} ;\overrightarrow{P})\) obtained by combining the strong boundary monotonicity and the torus-boundary condition.
**Proposition 2.38**.: _For one-dimensional moduli spaces \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright};\overrightarrow {P})\) of 0-P curves, join curve ends do not appear in \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright}; \overrightarrow{P})\)._
Proof.: Suppose this is not true. Write \(\overrightarrow{P}=(\sigma_{1},\ldots,\sigma_{k})\) for some \(k\geq 1\). The appearance of a join curve end means there is a holomorphic comb \((u;v)\) such that \(u\in\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright\prime};(\sigma_{1},\ldots,\{\sigma_{i}^{\prime},\sigma_{i}^{\prime\prime}\},\ldots,\sigma_{k}))\) where \(\sigma_{i}=\sigma_{i}^{\prime}\uplus\sigma_{i}^{\prime\prime}\). The strong boundary monotonicity condition implies that \(\boldsymbol{x}\) has exactly one component that lies on one of the \(\alpha\) arcs. Hence all the east punctures of \(u\) are of different heights due to holomorphicity. This contradicts the fact that the east punctures marked by \(\sigma_{i}^{\prime}\) and \(\sigma_{i}^{\prime\prime}\) are of the same height.
**Proposition 2.39**.: _Let \(\partial\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{ \triangleright};\overrightarrow{P})=\overline{\mathcal{M}}^{B}(\boldsymbol{x}, \boldsymbol{y};S^{\triangleright};\overrightarrow{P})-\mathcal{M}^{B}(\boldsymbol {x},\boldsymbol{y};S^{\triangleright};\overrightarrow{P})\). Then \(\partial\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{ \triangleright};\overrightarrow{P})\cap\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol {y};S^{\triangleright};P)=\emptyset\)._
Proof.: If not, then a collision of levels appears, i.e., there is a sequence of \(u_{i}\in\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright};\overrightarrow{P})\) converging to a holomorphic curve \(u\in\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright};P)\) such that at least two of the east punctures are of the same height. However, we already observed in the proof of Proposition 2.38 that such a \(u\) does not exist. In other words, when we restrict to the torus-boundary case, \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright};P)\) is equal to \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright};\overrightarrow{P})\) for a particular order determined by \(S^{\triangleright}\).
We summarize the results above into the following proposition for convenience.
**Proposition 2.40**.: _Let \(B\in\tilde{\pi}_{2}(\boldsymbol{x},\boldsymbol{y})\) and let \(J\) be a generic almost complex structure. For a one-dimensional moduli space \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright};\overrightarrow {P})\) of 0-P curves, where \(\overrightarrow{P}\) is a discrete ordered partition of the east punctures of \(S^{\triangleright}\), every holomorphic comb in \(\partial\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{ \triangleright};\overrightarrow{P})\) has one of the following forms:_
1. _a two-story holomorphic building_ \((u_{1};u_{2})\)_;_
2. _a simple holomorphic comb_ \((u;v)\) _where_ \(v\) _is a split curve with a single split component;_
3. _a boundary degeneration._
_For a one-dimensional moduli space \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright};U)\) of 1-P curves, every holomorphic comb in \(\partial\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};S^{ \triangleright};U)\) has one of the following forms:_
1. _a two-story holomorphic building_ \((u_{1},u_{2})\)_;_
2. _a simple holomorphic comb_ \((u;v)\) _where_ \(v\) _is an orbit curve;_
3. _a boundary degeneration._
_Remark 2.41_.: The restriction to manifolds with torus boundary greatly simplifies the study of moduli spaces. In higher-genus cases, join curve ends, collisions of levels, and split curves with many split components may appear. In particular, the latter prevents one from proving that the compactified moduli spaces are manifolds with boundary (as the gluing results fail to apply). In [10], notions such as smeared neighborhoods, cropped moduli spaces, and formal ends were employed to circumvent this difficulty. The results in this subsection allow us to avoid introducing any of these notions.
### Embedded holomorphic curves
We only use embedded holomorphic curves when defining (weakly extended) type D structures. When \(\alpha\)-curves are embedded, a holomorphic curve is embedded if and only if the Euler characteristic of its underlying source surface is equal to the one given by an _embedded Euler characteristic formula_. This is still true in the current setup, but the embedded Euler characteristic formula needs to be generalized to take care of new phenomena caused by immersed \(\alpha\)-multicurves.
The embedded Euler characteristic formula involves signs of self-intersections of oriented immersed arcs in our setup. Here we fix the sign convention. Let \(f:(0,1)\to\Sigma\) be an immersed arc with transverse double self-intersection at \(p=f(a)=f(b)\) with \(0<a<b<1\). Then the sign of the intersection point at \(p\) is positive if the orientation at \(T_{p}\Sigma\) coincides with the one specified by the ordered pair \((f^{\prime}(a),f^{\prime}(b))\); otherwise, the sign of the intersection point is negative. Let \(f\cdot f\) denote the signed count of self-intersection points of the arc \(f\). Let \(B\in\pi_{2}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{P})\) be a domain. Then up to homotopy within \(\alpha_{im}\), the boundary of \(B\) at \(\alpha_{im}\) gives rise to a curve \(\partial_{\alpha_{im}}B\). We will restrict to those domains for which \(\partial_{\alpha_{im}}B\) has a single component. We define \(s(\partial_{\alpha_{im}}B)\) to be \((\partial_{\alpha_{im}}B)\cdot(\partial_{\alpha_{im}}B)\), the signed count of transverse double points in \(\partial_{\alpha_{im}}B\).
For convenience, we also define the _length_ of each elementary Reeb chord to be one and the _length_ \(|\sigma|\) of a general Reeb chord \(\sigma\) to be the number of elementary Reeb chords it consists of. Note that if \(\overrightarrow{\rho}\) is a sequence of Reeb chords appearing as the east boundary of a holomorphic curve, then by Condition (M-9) we have \(|\sigma|\leq 4\) for any \(\sigma\in\overrightarrow{\rho}\).
With this terminology in place, our proposition is the following.
**Proposition 2.42**.: _A holomorphic curve \(u\in\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright}, \overrightarrow{P})\) is embedded if and only if \(\chi(S)=g+e(B)-n_{\boldsymbol{x}}(B)-n_{\boldsymbol{y}}(B)-\iota([ \overrightarrow{P}])+s(\partial_{\alpha_{im}}B)\)._
Here, \(S^{\triangleright}\) can either be a 0-P source or a 1-P source. If \(S^{\triangleright}\) is of 0-P, then \([\overrightarrow{P}]\) stands for the sequence of Reeb chords obtained from the labels of the east punctures, and the term \(\iota([\overrightarrow{P}])\) is defined in Formula (5.65) of [10] (in the non-extended case) and Section 4.1 of [10] (for Reeb chords of length 4). In particular, if \(\overrightarrow{P}\) contains a Reeb chord of length 4, then it consists of a single Reeb chord by (M-9), and hence the only new case we need to know is \(\iota((\sigma))=-1\) when \(|\sigma|=4\). If \(S^{\triangleright}\) is of 1-P, then by Condition (M-9) \(\overrightarrow{P}=\emptyset\) and the term \(\iota([\overrightarrow{P}])\) vanishes.
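Before giving the proof, we illustrate the formula in the simplest model case: a provincial domain \(B\) given by an embedded bigon with two convex corners whose \(\alpha\)-boundary avoids the self-intersection points of \(\alpha_{im}\) (so \(s(\partial_{\alpha_{im}}B)=0\)), with \(\overrightarrow{P}=\emptyset\) and with the remaining components of \(\boldsymbol{x}\) and \(\boldsymbol{y}\) lying outside the support of \(B\). Then \(e(B)=\tfrac{1}{2}\) and \(n_{\boldsymbol{x}}(B)=n_{\boldsymbol{y}}(B)=\tfrac{1}{4}\), so the right-hand side equals
\[g+\tfrac{1}{2}-\tfrac{1}{4}-\tfrac{1}{4}-0+0=g,\]
matching the Euler characteristic of a source consisting of a disk together with \(g-1\) trivial strips.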
Proof of Proposition 2.42.: The proof is adapted from the corresponding proof in the embedded-\(\alpha\)-curve case, [10, Proposition 5.69]. To keep it concise, we state the main steps and skip the details that can be found in [10], but we give details on the modifications. The proof is divided into four steps; only the third step is significantly different from the embedded case.
Let \(u\in\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};S^{\triangleright}, \overrightarrow{P})\) be an embedded curve.
**Step 1.** Apply the Riemann-Hurwitz formula to express \(\chi(S)\) in terms of \(e(B)\) and \(br(u)\), where \(br(u)\) stands for the total number of branch points of the projection \(\pi_{\Sigma}\circ u\), counted with multiplicities:
\[\chi(S)=e(S)+\frac{g}{2}+\sum_{i}\frac{|P_{i}|}{2}=e(B)-br(u)+\frac{g}{2}+\sum_ {i}\frac{|P_{i}|}{2}. \tag{2.1}\]
**Step 2.** Let \(\tau_{\epsilon}(u)\) be the translation of \(u\) by \(\epsilon\) in the \(\mathbb{R}\)-direction. Since we are in the torus-boundary case, we may always assume \(\overrightarrow{P}\) is discrete. There is only one case in which a branch point escapes to east infinity: this occurs when \(\overrightarrow{P}=\{\sigma\}\) for some \(\sigma\) with \(|\sigma|=4\). Therefore,
\[br(u)=\tau_{\epsilon}(u)\cdot u-\frac{|\{\sigma|\sigma\in\overrightarrow{P}, \ |\sigma|=4\}|}{2} \tag{2.2}\]
when \(\epsilon\) is sufficiently small. This is because when \(u\) is embedded and \(\epsilon\) is small, excluding those intersection points which escape to east infinity when \(\epsilon\to 0\), the remaining intersection points correspond to points at which \(u\) is tangent to \(\frac{\partial}{\partial t}\), which are exactly the branch points of \(\pi_{\Sigma}\circ u\).
**Step 3.** Compute \(\tau_{\epsilon}(u)\cdot u\). For all sufficiently large \(R\),
\[\tau_{R}(u)\cdot u=n_{x}(B)+n_{y}(B)-\frac{g}{2}. \tag{2.3}\]
To see this, note that the intersection of \(u\) and \(\tau_{R}(u)\) looks like the intersection of \(u\) with the trivial disk from the generator \(\mathbf{x}\) to itself when \(t\) is small and it looks like the intersection of \(u\) with the trivial disk corresponding to the generator \(\mathbf{y}\) when \(t\) is large (see Proposition 4.2 of [13] for the computation).
To recover \(\tau_{\epsilon}(u)\cdot u\), we need to understand how \(\tau_{t}(u)\cdot u\) changes as \(t\) varies. In the closed-manifold and embedded-alpha-curve case, \(\tau_{t}(u)\cdot u\) does not change. In the bordered-manifold case, there is a change in \(\tau_{\epsilon}(u)\cdot u-\tau_{R}(u)\cdot u\) caused by the relative position change of \(\partial u\) and \(\partial\tau_{t}(u)\) at east infinity. This change is captured by the term \(\iota([\overrightarrow{P}])\). More precisely, when the \(\alpha\)-curves are embedded,
\[\tau_{\epsilon}(u)\cdot u-\tau_{R}(u)\cdot u=\iota([\overrightarrow{P}])+ \frac{|\overrightarrow{P}|}{2}+\frac{|\{\sigma|\sigma\in\overrightarrow{P}, \ |\sigma|=4\}|}{2}.\]
When the \(\alpha\)-curves are immersed, there is another source of change in \(\tau_{t}(u)\cdot u\) corresponding to self-intersection points of \(\partial_{\alpha_{im}}u\). We explain such change in the examples below. As one shall see, such phenomenon is local (i.e., depends only on the behavior of \(u\) near the pre-images of self-intersection points of \(\partial_{\alpha_{im}}u\)). Therefore, the examples also speak for the general situation. We spell out the example. Consider an embedded disk \(u\) in \(\Sigma\times[0,1]\times\mathbb{R}\) as shown in Figure 4 (a). In the figure, \(\pi_{\Sigma}\circ u\) is shown by its domain, and we also introduced immersed \(s\)-grid lines to help visualize the \([0,1]\times\mathbb{R}\)-coordinates: the points on a single immersed \(s\)-grid line are all mapped to the same value in \([0,1]\), and as we move from \(x\) to \(y\), the \(t\)-coordinate on an \(s\)-grid line increases from \(-\infty\) to \(\infty\). In the figure, we highlighted a few \(s\)-grid lines and self-intersection points on them. Despite having these self-intersection points on the \(s\)-grid lines, \(u\) is still embedded as each such self-intersection point corresponds to two points in \(\Sigma\times[0,1]\times\mathbb{R}\) with different \(t\)-coordinate. However, each self-intersection point gives rise to an intersection point in \(\tau_{t}u\cdot u\) for an appropriate \(t\). In our example, the projection to \(\Sigma\) of \(\partial_{\alpha}u\) has two self-intersection points \(p\) and \(q\). There are \(t_{i}\in\mathbb{R}\) (\(i=1,2,3,4\)), \(t_{1}<t_{2}<t_{3}<t_{4}\) so
that \(p\) has \(t\)-coordinates \(t_{2}\) and \(t_{3}\), and \(q\) has \(t\)-coordinates \(t_{1}\) and \(t_{4}\). We first examine what is incurred by the negative intersection point \(p\). Note that \(\tau_{t}u\cdot u\) does not change for \(t<t_{3}-t_{2}\), and at \(t=t_{3}-t_{2}\), \(\tau_{t}u\cap u\) picks up a boundary intersection point \((p,1,t_{3})\). Inspecting the example, we can see that for \(t\in[t_{3}-t_{2}-\epsilon,t_{3}-t_{2}+\epsilon]\) with \(\epsilon\) small, a boundary intersection point appears and then enters the interior to become an interior intersection point. So \(\tau_{t_{3}-t_{2}+\epsilon}u\cdot u-\tau_{t_{3}-t_{2}-\epsilon}u\cdot u=1\). Similarly, for the positive self-intersection point \(q\) and \(t\in[t_{4}-t_{1}-\epsilon,t_{4}-t_{1}]\), we see an interior intersection point of \(\tau_{t}u\cap u\) hit the boundary and then disappear.
In general, an intersection point \(p\) of \(\partial_{\alpha_{im}}u\) contributes a boundary intersection of \(u\) with \(\tau_{t}u\) for some value of \(t=t_{p}\), and intersection points of arcs for \(s\)-values just less than one contribute intersections of \(u\) with \(\tau_{t}u\) for nearby values of \(t\). These intersections occur at shifts \(t>t_{p}\) if \(p\) is a positive intersection point and at \(t<t_{p}\) if \(p\) is a negative intersection point, so positive intersection points always give rise to times \(t\) at which an intersection point appears on the boundary and then moves into the interior, while negative intersection points give rise to times \(t\) at which an interior intersection moves to the boundary and disappears. Thus the net change in \(\tau_{t}u\cdot u\) caused by negative and positive boundary intersection points is given by \(s(\partial_{\alpha_{im}}u)\). In the example above from Figure 4(a), \(s(\partial_{\alpha_{im}}u)=0\). An example with \(s(\partial_{\alpha_{im}}u)=1\) is shown in Figure 4 (b).
Overall, taking into account the changes at the east infinity and at \(\alpha_{im}\times\{1\}\times\mathbb{R}\), we can derive the following equation from Equation (2.3):
\[\tau_{\epsilon}(u)\cdot u=n_{x}(B)+n_{y}(B)-\frac{g}{2}+\iota([\overrightarrow {P}])+\frac{|\overrightarrow{P}|}{2}+\frac{|\{\sigma|\sigma\in\overrightarrow {P},\ |\sigma|=4\}|}{2}+s(\partial_{\alpha_{im}}B) \tag{2.4}\]
**Step 4.** Synthesize the steps. Synthesizing Equation (2.1), Equation (2.2), and Equation (2.4) gives the formula, proving the "only if" direction. To see the "if"
Figure 4. Examples of embedded disks in \(\Sigma\times[0,1]\times\mathbb{R}\). The \(\alpha\)-curve is drawn in red, and the \(\beta\)-curves are in green. We also depicted a few \(s\)-grid lines; points on a single \(s\)-grid line have constant \(s\)-coordinate.
direction, note that if the holomorphic curve is not embedded, then in Step 2 we instead have \(br(u)=\tau_{\epsilon}u\cdot u-2\,\mathrm{sing}(u)\), where \(\mathrm{sing}(u)>0\) is the order of singularity of \(u\). So, in this case \(\chi(S)\) is strictly greater than \(g+e(B)-n_{x}(B)-n_{y}(B)-\iota([\overrightarrow{P}])+s(\partial_{\alpha_{im}}B)\).
From now on, we use \(\overrightarrow{\rho}\) to denote either a sequence of Reeb chords of total length less than or equal to \(4\) or the length-one sequence containing a single closed Reeb orbit \(U\), which is used to mark the interior puncture of a \(1\)-P curve. Given such a \(\overrightarrow{\rho}\), denote by \(\overrightarrow{\rho_{\star}}\) the sub-sequence of non-closed Reeb chords, i.e., if \(\overrightarrow{\rho}=(U)\) then \(\overrightarrow{\rho_{\star}}=\emptyset\), and otherwise \(\overrightarrow{\rho_{\star}}=\overrightarrow{\rho}\). A domain \(B\in\tilde{\pi}_{2}(\boldsymbol{x},\boldsymbol{y})\) is said to be _compatible_ with a sequence of Reeb chords \(\overrightarrow{\rho}\) if the _homology class_ induced by the east boundary \(\partial^{\partial}B\) of \(B\) agrees with that induced by \(\overrightarrow{\rho}\), and \((\boldsymbol{x},\overrightarrow{\rho_{\star}})\) is strongly boundary monotone (in the sense of Definition 5.52 of [1]).
**Definition 2.43**.: Let \(B\in\tilde{\pi}_{2}(\boldsymbol{x},\boldsymbol{y})\) and let \(\overrightarrow{\rho}\) be a sequence of Reeb chords such that \((B,\overrightarrow{\rho})\) is compatible. The _embedded Euler characteristic_ is defined to be
\[\chi_{emb}(B,\overrightarrow{\rho}):=g+e(B)-n_{x}(B)-n_{y}(B)-\iota( \overrightarrow{\rho_{\star}})+s(\partial_{\alpha_{im}}B).\]
The _embedded index_ is defined to be
\[\mathrm{ind}(B,\overrightarrow{\rho}):=e(B)+n_{x}(B)+n_{y}(B)+|\overrightarrow {\rho}|+\iota(\overrightarrow{\rho_{\star}})-s(\partial_{\alpha_{im}}B).\]
The _embedded moduli space_ is defined to be
\[\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho}):= \bigcup_{\chi(S)=\chi_{emb}(B,\overrightarrow{\rho}),\ [\overrightarrow{P}]= \overrightarrow{\rho_{\star}}}\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y}; S^{\triangleright};\overrightarrow{P}).\]
Clearly, the embedded moduli space \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) has expected dimension \(\mathrm{ind}(B,\overrightarrow{\rho})-1\) by Proposition 2.42 and Proposition 2.17.
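For the model bigon discussed before the proof of Proposition 2.42 (so \(e(B)=\tfrac{1}{2}\), \(n_{\boldsymbol{x}}(B)=n_{\boldsymbol{y}}(B)=\tfrac{1}{4}\), \(\overrightarrow{\rho}=\emptyset\), and \(s(\partial_{\alpha_{im}}B)=0\)), the embedded index is
\[\mathrm{ind}(B,\emptyset)=\tfrac{1}{2}+\tfrac{1}{4}+\tfrac{1}{4}+0+0-0=1,\]
so the corresponding embedded moduli space is expected to be zero-dimensional, as one expects for a rigid bigon.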
**Proposition 2.44**.: _The embedded index formula is additive, i.e., for compatible pairs \((B_{i},\overrightarrow{\rho_{i}})\) where \(B_{1}\in\pi_{2}(\boldsymbol{x},\boldsymbol{w})\) and \(B_{2}\in\pi_{2}(\boldsymbol{w},\boldsymbol{y})\) we have \(\text{ind}(B_{1}+B_{2},(\overrightarrow{\rho_{1}},\overrightarrow{\rho_{2}}) )=\text{ind}(B_{1},\overrightarrow{\rho_{1}})+\text{ind}(B_{2},\overrightarrow {\rho_{2}})\)._
Proof.: The proof is a modification of the proof of Proposition 5.75 in [1], which is adapted from the proof in the closed-manifold case in [1]. We will only mention the modifications instead of going into the details. Given oriented arcs \(a\) and \(b\), the _jittered intersection number_\(a\cdot b\) is defined to be \(\frac{1}{4}(a_{NE}+a_{NW}+a_{SE}+a_{SW})\cdot b\), where \(a_{NE},\ldots,a_{SW}\) are slight translations of the arc \(a\) in directions suggested by the subscript (and we assume these translations intersect \(b\) transversely). The proof of Proposition 5.75 of [1] uses that if \(a\) and \(a^{\prime}\) are two arcs contained in the \(\alpha\)-curve, then \(a\cdot a^{\prime}=0\) (Lemma 5.73 of [1] (2)). This is no longer true in our setting. Overall5, running the proof in [1] in our setting now gives
Footnote 5: For careful readers: Specifically, Lemma 5.73 (5) of [1] needs to be changed to \(a\cdot a^{\prime}+a\cdot b^{\prime}+b\cdot a^{\prime}=0\) and Lemma 5.74 of [1] needs to be changed to \(a\cdot a^{\prime}+a\cdot b^{\prime}+b\cdot a^{\prime}=L(\partial^{\partial}B, \partial^{\partial}B^{\prime})\) when \(a\cdot a^{\prime}\) can no longer be assumed to be zero.
\[e(B_{1}+B_{2})+n_{\boldsymbol{x}}(B_{1}+B_{2})+n_{\boldsymbol{y} }(B_{1}+B_{2})+|(\overrightarrow{\rho_{1}},\overrightarrow{\rho_{2}})|+\iota( (\overrightarrow{\rho_{1}},\overrightarrow{\rho_{2}})_{b})\] \[= e(B_{1})+n_{\boldsymbol{x}}(B_{1})+n_{\boldsymbol{y}}(B_{1})+| \overrightarrow{\rho_{1}}|+\iota((\overrightarrow{\rho_{1}})_{b})+e(B_{2})+n_ {\boldsymbol{x}}(B_{2})\] \[+n_{\boldsymbol{y}}(B_{2})+|\overrightarrow{\rho_{2}}|+\iota(( \overrightarrow{\rho_{2}})_{b})+\partial_{\alpha_{im}}B_{1}\cdot\partial_{ \alpha_{im}}B_{2}. \tag{2.5}\]
(If the \(\alpha\)-curves are embedded, then the term \(\partial_{\alpha_{im}}(B_{1})\cdot\partial_{\alpha_{im}}B_{2}\) vanishes, recovering the additivity of the index in that case.) And it is easy to see
\[s(\partial_{\alpha_{im}}(B_{1}+B_{2}))=s(\partial_{\alpha_{im}}B_{1})+s( \partial_{\alpha_{im}}B_{2})+\partial_{\alpha_{im}}B_{1}\cdot\partial_{\alpha_{ im}}B_{2}. \tag{2.6}\]
Now the additivity of the index follows readily from the definition of the embedded index, Equation (2.5), and Equation (2.6).
When proving the structure equations (e.g., \(\partial^{2}=0\) for type D structures), one needs to relate the coefficients of the structure equations to the ends of 1-dimensional moduli spaces. A key result that allows us to do this is the following proposition, which is the counterpart of Lemma 5.76 of [10].
**Proposition 2.45**.: _Let \((B,\overrightarrow{\rho})\) be such that \(\text{ind}(B,\overrightarrow{\rho})=2\). Then the two-story buildings that occur in the degeneration of \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) are embedded._
Proof.: Given a two-story building \((u_{1},u_{2})\), by transversality we know \(\text{ind}(u_{i})=1\), \(i=1,2\). Let \((B_{i},\overrightarrow{\rho_{i}})\) be the corresponding pair of domain and Reeb chords of \(u_{i}\). In view of Proposition 2.42, \(\text{ind}(u_{i})\leq\text{ind}(B_{i},\overrightarrow{\rho_{i}})\), and the equality is achieved if and only if \(u_{i}\) is embedded. Now by Proposition 2.44, if some \(u_{i}\) is not embedded, we will have \(\text{ind}(B,\overrightarrow{\rho})=\text{ind}(B_{1},\overrightarrow{\rho_{ 1}})+\text{ind}(B_{2},\overrightarrow{\rho_{2}})>2\), which contradicts our assumption.
### Ends of moduli spaces of 0-P curves
We analyze boundary degenerations of moduli spaces of 0-P curves in this subsection, a topic left untouched in Section 2.5. We will separate the discussion into two cases. The first case is when \(n_{z}(B)=0\), needed for defining the hat-version type D structure. The second case is when \(n_{z}(B)=1\), needed for defining extended type D structures. Readers who wish to skip the details are referred to Proposition 2.47, Proposition 2.48, and Proposition 2.49 for the statements of the results.
#### 2.7.1. Ends of \(\mathcal{M}^{B}\) when \(n_{z}(B)=0\)
**Proposition 2.46**.: _If \(n_{z}(B)=0\), then \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{ \rho})\) does not have boundary degeneration._
Proof.: Suppose boundary degeneration appears. Let \(v\) denote the (union of the) component(s) of the nodal holomorphic curve with no \(\pm\)-punctures. The union of components that have \(\pm\)-punctures will be called the _main component_. We say that the boundary degeneration has corners if at least one of the nodes connecting \(v\) and the main component is of type II. Otherwise, a boundary degeneration is said to have no corners. (See Definition 2.24 for type II nodes.)
Let \(B_{v}\) denote the domain corresponding to \(v\). If the boundary degeneration has no corners, then \(B_{v}\) is a positive zero-cornered \(\alpha\)-bounded domain. Such \(B_{v}\) does not exist as \(\mathcal{H}\) is unobstructed. This observation leaves us with the possibility of boundary degeneration with corners. If such degeneration appears, then each type II node connecting the main component and the degenerated components is obtained from pinching a properly embedded arc on the original source surface \(S\), whose endpoints are on the component of \(\partial S\) that is mapped to \(\alpha_{im}\times[0,1]\times\mathbb{R}\). Therefore, \(B_{v}\) is a positive one-cornered \(\alpha\)-bounded domain (or a union of such domains) with \(n_{z}=0\). Such domains do not exist either since \(\mathcal{H}\) is unobstructed. Therefore, there is no boundary degeneration.
**Proposition 2.47**.: _Fix a generic almost complex structure \(J\), and let \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) be a one-dimensional moduli space with \(n_{z}(B)=0\) and \(a(-\overrightarrow{\rho})\neq 0\). Then \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) is a compact 1-manifold with boundary such that all the boundary points correspond to two-story embedded holomorphic buildings._
Proof.: It follows from Proposition 2.40 and Proposition 2.46 that the only elements that can appear in \(\partial\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) are split curves or two-story buildings. However, if split curves appear in the torus-boundary case and \(n_{z}(B)=0\), a simple examination of the torus algebra implies that we will always have \(a(-\overrightarrow{\rho})=0\). Therefore, the only degenerations that can appear are two-story buildings. The statement that \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) is a 1-manifold with boundary then follows from the gluing results.
#### 2.7.2. Ends of \(\mathcal{M}^{B}\) when \(n_{z}(B)=1\)
We separate the discussion into two subcases. First, we consider the subcase where \(n_{-\rho_{i}}(B)=0\) for at least one \(\rho_{i}\in\{\rho_{1},\rho_{2},\rho_{3}\}\), in which case we have the following proposition.
**Proposition 2.48**.: _If \(n_{z}(B)=1\) and \(n_{-\rho_{i}}(B)=0\) for at least one \(\rho_{i}\in\{\rho_{1},\rho_{2},\rho_{3}\}\), then \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{ \rho})\) does not have boundary degeneration. In particular, if \(a(-\overrightarrow{\rho})\neq 0\), then \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{ \rho})\) is a compact 1-manifold with boundary such that all the boundary points correspond to two-story embedded holomorphic buildings._
Proof.: The proof of the first part is similar to that of Proposition 2.46, which reduces to excluding the existence of positive zero- and one-cornered \(\alpha\)-bounded domains \(B_{v}\). The existence of such a \(B_{v}\) with \(n_{z}(B_{v})=0\) is excluded by unobstructedness; see Condition (1) of Definition 2.8. The existence of such a \(B_{v}\) with \(n_{z}(B_{v})=1\) is excluded by Conditions (2) and (3) of Definition 2.8: If it exists, then the multiplicities of \(B_{v}\) at all the Reeb chords are equal to one, which contradicts our assumption that \(n_{-\rho_{i}}(B)=0\) for some \(\rho_{i}\). The proof of the second part is identical to that of Proposition 2.47.
Next, we consider the case where the multiplicity of \(B\) at all of the four regions next to the east puncture of \(\Sigma\) is equal to one. Let \(q\) be a self-intersection point of \(\alpha_{im}\). We use \(T(q)\) to denote the set of stabilized teardrops with an acute corner at \(q\). We will use \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};q)\) to denote the moduli space of embedded holomorphic curves whose \(\alpha\)-boundary projection to \(\Sigma\) is allowed to take a single sharp turn at \(q\); see also Definition 2.53 below.
**Proposition 2.49**.: _Given a compatible pair \((B,\overrightarrow{\rho})\) with \(B\in\pi_{2}(\boldsymbol{x},\boldsymbol{y})\), \(\overrightarrow{\rho}\in\{(-\rho_{0},-\rho_{1},-\rho_{2},-\rho_{3}),(-\rho_{0},-\rho_{123}),(-\rho_{012},-\rho_{3})\}\), and \(\text{ind}(B,\overrightarrow{\rho})=2\), the compactified moduli space \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) is a compact 1-manifold with boundary. The ends are of the following four types:_
1. _(E-1) Two-story ends;_
2. _(E-2) Split curve ends;_
3. _(E-3) Ends corresponding to boundary degeneration with corners;_
4. _(E-4) Ends corresponding to boundary degeneration without corners._
_Moreover,_
1. _If split curve ends occur, then_ \(\overrightarrow{\rho}\) _is either_ \((-\rho_{0},-\rho_{123})\) _or_ \((-\rho_{012},-\rho_{3})\)_, and the number of such ends is_ \(\#\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};-\rho_{1230})\) _if_ \(\overrightarrow{\rho}=(-\rho_{0},-\rho_{123})\)_; otherwise the number of ends is equal to_ \(\#\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};-\rho_{3012})\)
2. _If ends corresponding to boundary degeneration with corners occur and_ \(\overrightarrow{\rho}=(-\rho_{0},-\rho_{1},-\rho_{2},-\rho_{3})\)_, then the number of such ends is mod 2 congruent to_ \[\sum_{\{(B_{1},\ q)|\exists B_{2}\in T(q),\ B_{1}+B_{2}=B\}}\#\mathcal{M}^{B_{1}} (\boldsymbol{x},\boldsymbol{y};q).\] _For the other two choices of_ \(\overrightarrow{\rho}\)_, the numbers of ends corresponding to boundary degeneration with corners are both even;_
3. _If (E-4) occurs, then_ \([B]=[\Sigma]\)_,_ \(\boldsymbol{x}=\boldsymbol{y}\)_, and the number of such ends is odd._
_Remark 2.50_.: Proposition 2.49 considers \(\overrightarrow{\rho}\) with \(a(-\overrightarrow{\rho})=\rho_{0123}\). Corresponding propositions hold for \(a(-\overrightarrow{\rho})\in\{\rho_{1230},\rho_{2301},\rho_{3012}\}\) by cyclically permuting the subscripts in the above statement.
The remainder of the subsection is devoted to proving Proposition 2.49.
#### 2.7.3. Reformulation of the moduli spaces
To apply gluing results needed for studying the ends corresponding to boundary degeneration, we need to know that the moduli spaces of the degenerate components are transversely cut out. However, this transversality result is not clear, as a key lemma needed for the proof, Lemma 3.3 of [10], is not available for degenerate curves. In [11], this difficulty is overcome by using the formulation of moduli spaces in terms of holomorphic disks in the symmetric product \(Sym^{g}(\Sigma)\). Such moduli spaces are identified with the moduli spaces of holomorphic curves in \(\Sigma\times[0,1]\times\mathbb{R}\) through a tautological correspondence (provided one uses appropriate almost complex structures). One can prove the desired transversality results for degenerate disks in the symmetric product in a standard way. We shall employ the same strategy here. The difference is that the Lagrangians containing the boundary of the holomorphic disks are no longer embedded. To accommodate this change, the definition of moduli spaces we give below corresponds to the one used in Floer theory for immersed Lagrangians with clean self-intersections [1, 2].
Specifically, equip \(\Sigma\) with a Kähler structure \((j,\eta)\), where \(j\) denotes a complex structure and \(\eta\) is a compatible symplectic form. Let \(\Sigma_{\bar{e}}\) denote the closed Riemann surface obtained from \(\Sigma\) by filling in the east puncture \(e\). Then \(Sym^{g}(\Sigma)\) can be viewed as the complement of \(\{e\}\times Sym^{g-1}(\Sigma_{\bar{e}})\) in \(Sym^{g}(\Sigma_{\bar{e}})\). It is a symplectic manifold with a cylindrical end modeled on the unit normal bundle of \(\{e\}\times Sym^{g-1}(\Sigma_{\bar{e}})\). In particular, there is a Reeb-like vector field \(\overrightarrow{R}\) tangent to the \(S^{1}\)-fibers of the unit normal bundle.
The products \(\mathbb{T}_{\beta}=\beta_{1}\times\cdots\times\beta_{g}\) and \(\mathbb{T}_{\alpha,i}=\alpha_{i}^{a}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-1}^{c}\), \(i=1,2\), are Lagrangian submanifolds of \(Sym^{g}(\Sigma)\). Note that \(\mathbb{T}_{\alpha,i}\) is immersed with self-intersections \(\alpha_{i}^{a}\times\alpha_{1}^{c}\times\ldots\times\alpha_{g-2}^{c}\times q\), where \(q\) is some self-intersection point of \(\alpha_{im}=\alpha_{g-1}^{c}\). We identify \(\mathbb{T}_{\alpha,i}\) with the image of a map \(\iota_{i}:(0,1)\times\mathbb{T}^{g-2}\times(\amalg S^{1})\to Sym^{g}(\Sigma)\).
The immersed Lagrangian \(\mathbb{T}_{\alpha,i}\) (\(i=1,2\)) intersects the ideal boundary of \(Sym^{g}(\Sigma)\) at \(\partial\overline{\alpha_{i}^{a}}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-1}^{c}\). Each Reeb chord \(\rho\), which connects two (possibly the same) alpha arcs, now corresponds to a family of \(\overrightarrow{R}\)-chords \(\rho\times\boldsymbol{x}\) that connects two (possibly the same) \(\mathbb{T}_{\alpha,i}\), parametrized by \(\boldsymbol{x}\in\alpha_{1}^{c}\times\cdots\times\alpha_{g-1}^{c}\).
To define pseudo-holomorphic maps, we shall work with an appropriate class of almost complex structures called _nearly-symmetric almost complex structures_ that restrict to \(Sym^{g}(j)\) on the cylindrical end. (The concrete definitions do not matter for our purpose, and we refer the interested readers to Definition 3.1 of [13] and Definition 13.1 of [10]).
In this subsection, we only define the moduli spaces relevant to the case of 0-P curves; the 1-P counterparts are postponed to the next subsection.
**Definition 2.51**.: Let \(J_{s}\), \(s\in[0,1]\), be a path of nearly-symmetric almost complex structures. Let \(\boldsymbol{x},\boldsymbol{y}\in(\mathbb{T}_{\alpha,1}\cup\mathbb{T}_{\alpha,2})\cap\mathbb{T}_{\beta}\), let \(\overrightarrow{\rho}=(\sigma_{1},\ldots,\sigma_{n})\) be a sequence of Reeb chords, and let \(B\in\tilde{\pi}_{2}(\boldsymbol{x},\boldsymbol{y})\). We define \(\widetilde{\mathcal{M}}^{B}_{Sym,J_{s}}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) as the set of maps
\[u:([0,1]\times\mathbb{R}\backslash\{(1,t_{1}),\ldots,(1,t_{n})\})\to Sym^{g}(\Sigma)\]
such that
1. \(t_{1}<\ldots<t_{n}\) and are allowed to vary;
2. \(u(\{0\}\times\mathbb{R})\subset\mathbb{T}_{\beta}\);
3. \(u(\{1\}\times(\mathbb{R}\backslash\{t_{1},\ldots,t_{n}\}))\subset\mathbb{T}_{\alpha,1}\cup\mathbb{T}_{\alpha,2}\). Moreover, the restriction of \(u\) to any connected component of \(\{1\}\times(\mathbb{R}\backslash\{t_{1},\ldots,t_{n}\})\) lifts through \(\iota_{i}:(0,1)\times\mathbb{T}^{g-2}\times(\amalg S^{1})\to Sym^{g}(\Sigma)\) for an appropriate \(i\in\{1,2\}\);
4. \(\lim_{t\to\infty}u(s+it)=\boldsymbol{y}\), and \(\lim_{t\to-\infty}u(s+it)=\boldsymbol{x}\);
5. \(\lim_{(s,t)\to(1,t_{i})}u(s+it)\) is an \(\overrightarrow{R}\)-chord \(\sigma_{i}\times\boldsymbol{a}\) for some \(\boldsymbol{a}\in\alpha_{1}^{c}\times\cdots\times\alpha_{g-1}^{c}\);
6. \(\frac{du}{ds}+J_{s}\frac{du}{dt}=0\);
7. \(u\) is in the homology class specified by \(B\).
_Remark 2.52_.: The only difference between our setting and the setting of embedded Lagrangians is the lifting property stated in (3). This condition ensures that \(\partial u\) does not have corners at self-intersection points of \(\alpha_{im}\).
The tautological correspondence between the two moduli spaces defined using two different ambient symplectic manifolds holds in our setting as well. Roughly, holomorphic disks \(u\) in the symmetric product are in one-to-one correspondence with pairs \((v,\pi)\), where \(v\) is a stay-on-track 0-P holomorphic curve \(S\to\Sigma\), and \(\pi:S_{\bar{e}}\to[0,1]\times\mathbb{R}\) is a \(g\)-fold branched cover where the filled-in punctures are mapped to \((1,t_{i})\), \(i=1,\ldots,n\). The tautological correspondence was first proved in Section 13 of [10]. The proof for our case follows the same lines and is omitted. From now on, we will simply denote \(\widetilde{\mathcal{M}}^{B}_{Sym,J_{s}}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) by \(\widetilde{\mathcal{M}}^{B}_{J_{s}}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\). The reduced moduli space \(\mathcal{M}^{B}_{J_{s}}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) is the quotient of \(\widetilde{\mathcal{M}}^{B}_{J_{s}}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) by the \(\mathbb{R}\)-action given by vertical translation.
The moduli spaces \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};q)\) where \(q\) is some self-intersection point of \(\alpha_{im}\) can be similarly defined in the symmetric-product setup (and the tautological correspondence to curves in \(\Sigma\times[0,1]\times\mathbb{R}\) holds).
**Definition 2.53**.: \(\mathcal{M}^{B}_{J_{s}}(\boldsymbol{x},\boldsymbol{y};q)\) is the space of \(J_{s}\)-holomorphic maps
\[u:([0,1]\times\mathbb{R}\backslash\{(1,0)\})\to Sym^{g}(\Sigma)\]
satisfying conditions (2), (3), (4), (6), and (7) of Definition 2.51 and \(\lim_{(s,t)\to(1,0)}u(s+it)=(q,\boldsymbol{a})\) for some \(\boldsymbol{a}\in\alpha_{i}^{a}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-2}^{c}\) for an appropriate \(i\in\{1,2\}\).
Note there is a natural evaluation map \(ev_{J_{s}}:\mathcal{M}^{B}_{J_{s}}(\boldsymbol{x},\boldsymbol{y};q)\to\alpha _{i}^{a}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-2}^{c}\) for an appropriate \(i\in\{1,2\}\), given by \(u\mapsto\boldsymbol{a}\) if \(\lim_{(s,t)\to(1,0)}u(s+it)=(q,\boldsymbol{a})\).
We call a holomorphic disc _degenerate_ if its boundaries are in \(\mathbb{T}_{\alpha,i}\) (\(i=1,2\)). A degenerate holomorphic disc may be viewed as a map from the upper-half plane \(\mathbb{H}\) with boundary punctures to the symmetric product. We further divide such discs into degenerate discs with or without corners based on the behavior of asymptotics at the point at infinity, corresponding to Type I and Type II nodes in Definition 2.24. We spell out the definitions for completeness.
**Definition 2.54** (Degenerate disks without corners).: Let \(J\) be a nearly-symmetric almost complex structure. Let \(\mathbf{x}\in\mathbb{T}_{\alpha}\) and \(\overrightarrow{\rho}=(\sigma_{1},\ldots,\sigma_{n})\). \(\mathcal{N}_{J}(\mathbf{x};\overrightarrow{\rho})\) is the set of maps \(v:\mathbb{H}\backslash\{t_{1},\ldots,t_{n}\}\to Sym^{g}(\Sigma)\) such that
1. \(0=t_{1}<\ldots<t_{n}\) and are allowed to vary;
2. \(v(\mathbb{R}\backslash\{t_{1},\ldots,t_{n}\})\subset\mathbb{T}_{\alpha,1}\cup\mathbb{T}_{\alpha,2}\). Moreover, the restriction of \(v\) to any connected component of \(\mathbb{R}\backslash\{t_{1},\ldots,t_{n}\}\) lifts through \(\iota_{i}:(0,1)\times\mathbb{T}^{g-2}\times(\amalg S^{1})\to Sym^{g}(\Sigma)\) for an appropriate \(i\in\{1,2\}\);
3. \(\lim_{z\to\infty}v(z)=\mathbf{x}\), and the path obtained from \(v|_{(-\infty,t_{1})\cup(t_{n},\infty)}\) by continuous extension at \(\infty\) lifts through \(\iota_{i}\) for an appropriate \(i\in\{1,2\}\);
4. \(\lim_{z\to t_{i}}v(z)\) is an \(\overrightarrow{R}\)-chord \(\sigma_{i}\times a\) for some \(a\in\alpha_{1}^{c}\times\cdots\times\alpha_{g-1}^{c}\);
5. \(\frac{dv}{ds}+J\frac{dv}{dt}=0\).
**Definition 2.55** (Degenerate disks with corners).: Let \(J\) be a nearly-symmetric almost complex structure. Let \(q\) be a self-intersection point of \(\alpha_{im}\). Let \(\overrightarrow{\rho}=(\sigma_{1},\ldots,\sigma_{n})\). \(\mathcal{N}_{J}(q;\overrightarrow{\rho})\) is the set of maps \(v:\mathbb{H}\backslash\{t_{1},\ldots,t_{n}\}\to Sym^{g}(\Sigma)\) such that
1. \(0=t_{1}<\ldots<t_{n}\) and are allowed to vary;
2. \(v(\mathbb{R}\backslash\{t_{1},\ldots,t_{n}\})\subset\mathbb{T}_{\alpha,1}\cup\mathbb{T}_{\alpha,2}\). Moreover, the restriction of \(v\) to any connected component of \(\mathbb{R}\backslash\{t_{1},\ldots,t_{n}\}\) lifts through \(\iota_{i}:(0,1)\times\mathbb{T}^{g-2}\times(\amalg S^{1})\to Sym^{g}(\Sigma)\) for an appropriate \(i\in\{1,2\}\);
3. \(\lim_{z\to\infty}v(z)=(q,\mathbf{p})\) for some \(\mathbf{p}\in\alpha_{i}^{a}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-2}^{c}\) for an appropriate \(i\in\{1,2\}\), and the path obtained from \(v|_{(-\infty,t_{1})\cup(t_{n},\infty)}\) by continuous extension at \(\infty\) does not lift through \(\iota_{i}\);
4. \(\lim_{z\to t_{i}}v(z)\) is an \(\overrightarrow{R}\)-chord \(\sigma_{i}\times a\) for some \(a\in\alpha_{1}^{c}\times\cdots\times\alpha_{g-1}^{c}\);
5. \(\frac{dv}{ds}+J\frac{dv}{dt}=0\).
We call \(q\) the corner of such a degenerate disk. We also have an evaluation map \(ev_{J}:\mathcal{N}_{J}(q;\overrightarrow{\rho})\to\alpha_{i}^{a}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-2}^{c}\) defined by \(v\mapsto\mathbf{p}\) if \(\lim_{z\to\infty}v(z)=(q,\mathbf{p})\).
#### 2.7.4. Boundary degeneration with corners
**Definition 2.56**.: A _simple boundary degeneration_ is a boundary degeneration of the form \(u\lor v\), where \(u\) is a (non-nodal) holomorphic curve, and \(v\) is a degenerate disk.
**Proposition 2.57**.: _If a boundary degeneration with corners appears in a one-dimensional moduli space \(\overline{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};\overrightarrow{\rho})\) where \((B,\overrightarrow{\rho})\) is as in Proposition 2.49, the boundary degeneration must be a simple boundary degeneration. Moreover, the domain for the degenerate disk must be a stabilized teardrop with an acute corner._
Proof.: First, we consider the case in which degeneration at east infinity, multi-story splitting, and boundary degeneration without corners do not occur simultaneously with the boundary degeneration with corners. Also, note that sphere bubbles do not occur as \(\Sigma\) is punctured at the east infinity. Hence we may assume the boundary degeneration with corners is a holomorphic map \(u_{\infty}:\mathbb{B}\to Sym^{g}(\Sigma)\), where \(\mathbb{B}\) is a disc bubble tree: \(\mathbb{B}\) has one main component containing the \(\pm\)-puncture and some other components attached to the main component or each other, such that the graph obtained by turning the components of \(\mathbb{B}\backslash\{\text{nodes}\}\) into vertices and nodes into edges is a tree. (See Figure 5 (a).) The vertex corresponding to the main component will be called the _root_.
Our first claim is that \(\mathbb{B}\) must only have one leaf. (See Figure 5 (b).) To see this, note that a leaf corresponds to a degenerate disk whose domain, by homological considerations, is bounded by a one-cornered subloop of \(\alpha_{im}\) and possibly \(\partial\Sigma\). Note that at most one leaf would have a domain with boundary containing \(\partial\Sigma\), as we assumed \(a(-\overrightarrow{\rho})=\rho_{0123}\); call this the distinguished leaf. Therefore, all the other leaves, if they exist, would have positive one-cornered \(\alpha\)-bounded domains with \(n_{z}=0\) (as \(n_{z}(B)=1\) and the distinguished leaf already has multiplicity one at \(z\)); such domains do not exist since \(\mathcal{H}\) is unobstructed. Therefore, only the distinguished leaf exists, and its domain is a stabilized teardrop since \(\mathcal{H}\) is unobstructed.
Now let \(u\) denote the restriction of the map to the main component. Let \(n\) denote the number of components of \(\mathbb{B}\) other than the root. Denote the degenerate disc corresponding to the leaf by \(v_{n}\), and denote those connecting the root and the leaf by \(v_{i}\), \(i=1,\ldots,n-1\). We want to prove \(n=1\) and that the stabilized teardrop for the leaf has an acute corner. These follow from an index consideration, as follows.
Note that the domains \(D_{i}\) corresponding to \(v_{i}\), \(i=1,\ldots,n-1\), are bigons: Such domains are two-cornered \(\alpha\)-bounded domains with \(n_{z}=0\) and we know these domains are bigons by the assumption that \(\mathcal{H}\) is unobstructed. Let \(\mathcal{N}^{D_{i}}\) denote the reduced moduli space of holomorphic curves with domain \(D_{i}\). Direct computation shows the virtual dimension \(\operatorname{vdim}(\mathcal{N}^{D_{i}})\) of the reduced moduli space satisfies \(\operatorname{vdim}(\mathcal{N}^{D_{i}})\geq g-1\), with equality attained when both corners of \(D_{i}\) are acute. Here the term \(g-1\) comes from varying the constant value of the holomorphic map in \(\alpha_{i}^{a}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-2}^{c}\) for some \(i\in\{1,2\}\).
Now we move to consider \(D_{n}\). We already know it is a stabilized teardrop. Depending on whether the corner of \(D_{n}\) is acute or obtuse, the virtual dimension of the corresponding moduli space \(\mathcal{N}^{D_{n}}\) is \(g-1\) or \(g\).
Figure 5. The bubble tree \(\mathbb{B}\); the \(\sigma_{i}\)'s are labels of the east punctures. Hypothetically, a bubble tree with many branches might appear, as shown in (a). In our case, we first prove the bubble tree must be of the form with a single branch as shown in (b), and then further show it must be of the simple form shown in (c).
Let \(D\) be the domain of \(u\), and let \(q\) denote the corner corresponding to the node. Then
\[\operatorname{vdim}(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y}; \overrightarrow{\rho}))= \operatorname{vdim}(\mathcal{M}^{D}(\boldsymbol{x},\boldsymbol{y} ;q))+\sum_{i=1}^{n-1}\operatorname{vdim}(\mathcal{N}^{D_{i}})+\operatorname{ vdim}(\mathcal{N}^{D_{n}})\] \[-(g-1)n+n.\]
Here, the term \(-(g-1)n\) comes from the evaluation map, and the term \(+n\) appears since we glued \(n\) times. Note \(\operatorname{vdim}(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho}))=1\), and hence we have \(\operatorname{vdim}(\mathcal{M}^{D}(\boldsymbol{x},\boldsymbol{y};q))\leq 1-n\) since \(\operatorname{vdim}(\mathcal{N}^{D_{i}})\geq g-1\) for \(1\leq i\leq n\). Therefore, as long as we fix a generic path of nearly-symmetric almost complex structures \(J_{s}\) so that \(\mathcal{M}^{D}(\boldsymbol{x},\boldsymbol{y};q)\) is transversally cut out, it being non-empty implies \(n=1\) (see Figure 5 (c)). This also forces \(\operatorname{vdim}(\mathcal{N}^{D_{n}})=g-1\), which implies the corner of \(D_{n}\) is acute.
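For the reader's convenience, the arithmetic behind the last two assertions is as follows (it uses only the displayed dimension formula, with the two \(\mathcal{N}\)-terms combined into a single sum, and the bound \(\operatorname{vdim}(\mathcal{N}^{D_{i}})\geq g-1\)):

\[1=\operatorname{vdim}(\mathcal{M}^{D}(\boldsymbol{x},\boldsymbol{y};q))+\sum_{i=1}^{n}\operatorname{vdim}(\mathcal{N}^{D_{i}})-(g-1)n+n\geq\operatorname{vdim}(\mathcal{M}^{D}(\boldsymbol{x},\boldsymbol{y};q))+n.\]

Hence \(\operatorname{vdim}(\mathcal{M}^{D}(\boldsymbol{x},\boldsymbol{y};q))\leq 1-n\); if this moduli space is non-empty and transversally cut out, its dimension is non-negative, so \(n\leq 1\), and since the distinguished leaf exists we get \(n=1\). Plugging \(n=1\) back into the formula forces \(\operatorname{vdim}(\mathcal{N}^{D_{1}})=g-1\), which by the discussion above means the corner of the teardrop is acute.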
The above analysis shows the index of a degenerate disk with a corner is greater than or equal to \(g-1\), and hence an index consideration rules out the possibility of several types of degeneration appearing simultaneously.
**Proposition 2.58**.: _Let \(B\in T(q)\) be a stabilized teardrop with an acute corner at \(q\), and let \(\overrightarrow{\rho}=(-\rho_{0},-\rho_{1},-\rho_{2},-\rho_{3})\). For a generic nearly symmetric almost complex structure \(J\), the moduli space of degenerate disks \(\mathcal{N}^{B}_{J}(q;\overrightarrow{\rho})\) is a \((g-1)\)-manifold, and a generic fiber of the evaluation map \(ev_{J}:\mathcal{N}^{B}_{J}(q;\overrightarrow{\rho})\to\alpha_{1}^{a}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-2}^{c}\) consists of an odd number of points._
Proof.: The argument showing that the moduli space is smoothly cut out for a generic almost complex structure is standard; in this case, it closely follows that of Proposition 3.14 of [10].
We now study the parity of a generic fiber of the evaluation map. We first prove the relevant statements for the case where \(g(\Sigma)=2\). Note that if we fix a point \(p\in\alpha_{1}^{a}\), by standard arguments we may choose a generic almost complex structure \(J\) so that the fiber \(ev_{J}^{-1}(p)\) over \(p\) is smoothly cut out as a \(0\)-manifold. Standard consideration of degeneration shows \(ev_{J}^{-1}(p)\) is compact: limits of such maps cannot have further degenerate disks for index reasons, nor can a sequence of maps develop a sphere bubble, as the domain is not \(\Sigma\). We now claim that \(|ev_{J}^{-1}(p)|\) is odd. A lemma is needed for this.
**Lemma 2.59**.: _Assume \(g(\Sigma)=2\). For a generic perturbation of the \(\alpha\)-curves, \(ev_{Sym^{2}(j)}^{-1}(p)\) is smoothly cut out, and \(|ev_{Sym^{2}(j)}^{-1}(p)|\) is odd._
Proof of Lemma 2.59.: First note \(ev_{Sym^{2}(j)}^{-1}(p)\) is smoothly cut out provided we perturb the \(\alpha\)-curves if necessary, as \(B\) being a stabilized teardrop guarantees that any holomorphic disk representing \([B]\) is somewhere boundary injective; see Proposition 3.9 of [10], or see Theorem I and Theorem 3.2 of [11] for more details on the relation between boundary injectivity and regularity.
For the second part of the statement, note that by the tautological correspondence it suffices to find all the pairs \((\hat{u},\pi)\) of holomorphic maps \(\hat{u}:F\to\Sigma\) and two-fold branched covers \(\pi:F_{\bar{e}}\to D_{\bar{e}}\), where \(F_{\bar{e}}\) stands for the surface obtained from \(F\) by filling in the east punctures and \(D_{\bar{e}}\) is the holomorphic disk with one boundary puncture. Examining the region \(B\) shows that \(\hat{u}\) is the obvious map from a unique holomorphic annulus \(F\) with boundary punctures. (One may think of \(F\) as obtained by cutting \(B\) open along the alpha arcs when \(B\) is embedded; see Figure 6.)
Without loss of generality, we may regard \(F\) as obtained from the annulus \(\{z\in\mathbb{C}|\frac{1}{r}\leq|z|\leq r\}\) for some positive number \(r\) by adding boundary punctures: we assume the outer boundary has only one boundary puncture and is asymptotic to \(q\) under \(\hat{u}\), and the inner boundary has five punctures, corresponding to \(p\), \(-\rho_{0}\), \(-\rho_{1}\), \(-\rho_{2}\), and \(-\rho_{3}\) (see Figure 6 (b)), whose relative positions depend on the complex structure induced from \(j\) on \(\Sigma\). There is only one involution \(\iota\) on \(F_{\bar{e}}\) interchanging the inner and outer boundary and swapping the boundary punctures labeled by \(p\) and \(q\); see [1, Lemma 9.3]. This involution induces \(\pi:F_{\bar{e}}\to F_{\bar{e}}/\iota\), and \(D\) is obtained from \(F_{\bar{e}}/\iota\) by removing the boundary points corresponding to the filled-in east punctures. In summary, \(ev_{Sym^{2}(j)}^{-1}(p)\) consists of a unique map, and hence it has odd cardinality.
Let \(J\) be a generic almost complex structure such that both \(\mathcal{N}_{J}^{B}(q;\overrightarrow{\rho})\) and \(ev_{J}^{-1}(p)\) are smoothly cut out. Then a generic path \(J_{s}\) of almost complex structures such that \(J_{0}=J\) and \(J_{1}=Sym^{2}(j)\) induces a cobordism between \(ev_{J}^{-1}(p)\) and \(ev_{Sym^{2}(j)}^{-1}(p)\). This shows \(|ev_{J}^{-1}(p)|\) is odd. For a generic point \(p^{\prime}\in\alpha_{1}^{a}\), let \(l\) be the sub-arc in \(\alpha_{1}^{a}\) connecting \(p\) and \(p^{\prime}\); then \(ev_{J}^{-1}(l)\) is a cobordism from \(ev_{J}^{-1}(p)\) to \(ev_{J}^{-1}(p^{\prime})\), implying \(|ev_{J}^{-1}(p^{\prime})|\) is odd as well. This finishes the proof of the proposition for \(g(\Sigma)=2\).
The relevant statements for \(g(\Sigma)>2\) can be proved inductively from the base case in which \(g(\Sigma)=2\) using the neck-stretching argument in Section 10 of [1], which relates moduli spaces built from Heegaard diagrams that differ by a stabilization.
**Proposition 2.60**.: _Let \(B\in T(q)\) be a stabilized teardrop with an acute corner at \(q\) and let \(\overrightarrow{\rho}=(-\rho_{0},-\rho_{123})\) or \((-\rho_{012},-\rho_{3})\). For a generic nearly symmetric almost complex structure \(J\), the moduli space of degenerate disks \(\mathcal{N}_{J}^{B}(q;\overrightarrow{\rho})\) is a \((g-1)\)-manifold, and a generic fiber of the evaluation map \(ev_{J}:\mathcal{N}^{B}(q;\overrightarrow{\rho})\to\alpha_{1}^{a}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-2}^{c}\) consists of an even number of points._
Proof.: The regularity and dimension statements for the moduli spaces are proved in the same way as in Proposition 2.58.
We study the parity of a generic fiber \(ev_{J}^{-1}(p)\) of \(\mathcal{N}_{J}^{B}(q;\overrightarrow{\rho})\). Again, as in the previous proposition, we only need to study the case in which \(g(\Sigma)=2\). We first prove the following lemma.
Figure 6. (a) The domain \(B\). (b) The annulus \(F\) with punctures on the boundaries.
**Lemma 2.61**.: _Assume \(g(\Sigma)=2\). View \((\Sigma,\alpha_{1}^{a},\alpha_{2}^{a},\alpha_{im})=(E_{1},\alpha_{1}^{a},\alpha_ {2}^{a})\#(E_{2},\alpha_{im})\), where \(E_{1}\) is a punctured Riemann surface of genus one and \(E_{2}\) is a Riemann surface of genus one. If \(j\) is a sufficiently stretched complex structure on \(\Sigma\), then \(ev_{Sym^{2}(j)}^{-1}(p)\) is empty._
Proof of Lemma 2.61.: The proof is similar to that of Proposition 3.16 of [10]. We provide a sketch. Let \(j_{t}\) denote the complex structure on \(\Sigma\) corresponding to the connected-sum tube being isometric to \(S^{1}\times[-t,t]\). If the statement is not true, then there exists a sequence of \(u_{t}\in ev_{Sym^{2}(j_{t})}^{-1}(p)\) which converges to a holomorphic disk \(u_{\infty}\) in \(Sym^{2}(E_{1}\lor E_{2})\) by Gromov compactness. In particular, the main component of \(u_{\infty}\) is a holomorphic disk in \(E_{1}\times E_{2}\). Projecting this disk to \(E_{1}\), we would have a holomorphic disk in \(E_{1}\) with the east punctures prescribed by \(\overrightarrow{\rho}=(-\rho_{0},-\rho_{123})\) or \((-\rho_{012},-\rho_{3})\), which cannot exist by direct examination.
With the above lemma at hand, a cobordism argument as in Proposition 2.58 can be applied to conclude that when the stretching parameter \(t\) is sufficiently large, for a generic nearly \(j_{t}\)-symmetric almost complex structure \(J_{t}\), the parity of a generic fiber of \(ev_{J_{t}}\) is even. Then one can show the statement is independent of the stretching parameter \(t\) as in Theorem 3.15 of [10]: When \(t_{1}\) and \(t_{2}\) are sufficiently close, and \(J_{t_{i}}\) is a \(j_{t_{i}}\)-nearly symmetric almost complex structure sufficiently close to \(Sym^{2}(j_{t_{i}})\), \(i=1,2\), then the moduli spaces can be identified.
#### 2.7.5. Boundary degeneration without corners
**Proposition 2.62**.: _Under the assumption of Proposition 2.49, if a boundary degeneration without corners occurs, then:_
1. _The degenerate disk has domain_ \([B]=[\Sigma]\)_._
2. \(\boldsymbol{x}=\boldsymbol{y}\)_._
3. _Such degenerate disks do not occur simultaneously with other types of degeneration._
4. _The number of ends corresponding to such boundary degeneration is odd._
Proof.: We first prove (1), (2), and (3). Suppose the limit curve \(u_{\infty}\) has a nodal source given by a disc bubble tree \(\mathbb{B}\). A schematic picture of \(\mathbb{B}\) to keep in mind would be an analogue of Figure 5 (a). The same analysis as in the proof of Proposition 2.57 shows there must be only one leaf in the bubble tree \(\mathbb{B}\), which is the one that contains the boundary puncture. (All the others are excluded as there are no corresponding positive domains.) The degenerate disk corresponding to the leaf cannot have a corner, for otherwise, we are back to the case considered in the previous subsection by index consideration. Denote the degenerate disk corresponding to the leaf by \(v\). Then, the domain \(B_{v}\) of \(v\) is a zero-cornered positive \(\alpha\)-bounded domain with \(n_{z}=1\). Hence \(B_{v}=\Sigma\) as \(\mathcal{H}\) is unobstructed. Note that a degenerate disk with domain \([\Sigma]\) has Maslov index \(2\), which implies the nodal curve must be of the form \(u\lor v\) with \(u\) being a constant curve and \(v\) being the degenerate disk. This finishes the proof of (1), (2), and (3).
(4) follows from a standard gluing argument and Proposition 11.35 of [11], which states that for a generic almost complex structure, the moduli space of degenerate discs at \(\boldsymbol{x}\) is transversally cut out, and it consists of an odd number of points if \(\overrightarrow{\rho}=(-\rho_{0},\ldots,-\rho_{3})\) and an even number of points for the other choices of \(\overrightarrow{\rho}\).
#### 2.7.6. Proof of Proposition 2.49
In this subsection, we synthesize the previous results to prove Proposition 2.49.
Proof of Proposition 2.49.: By Proposition 2.40, we know degenerations appearing in \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{ \rho})\) are two-story splittings, simple holomorphic combs with a single split component, or boundary degenerations.
For a simple holomorphic comb \((u,v)\) with a single split component to appear, there must be two consecutive Reeb chords in \(\overrightarrow{\rho}\) such that the endpoint of the first one is equal to the start point of the second one; this excludes the possibility of \(\overrightarrow{\rho}=(-\rho_{0},\ldots,-\rho_{3})\). Moreover, when such a degeneration appears, as the split curve's domain is a single disk, the moduli space of the split curve \(\mathcal{N}(v)\) is a transversally cut-out \(0\)-dimensional manifold (by Proposition 2.21), and it consists of a single point by the Riemann mapping theorem. Therefore, the gluing result, Proposition 2.31, shows there is a neighborhood of such a holomorphic comb in \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) diffeomorphic to \((0,1]\); the count of such ends is equal to \(\#\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};-\rho_{1230})\) when \(\overrightarrow{\rho}=(-\rho_{0},-\rho_{123})\), and is equal to \(\#\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};-\rho_{3012})\) when \(\overrightarrow{\rho}=(-\rho_{012},-\rho_{3})\).
By Propositions 2.62 and 2.57, boundary degenerations are further divided into boundary degenerations with or without corners. In particular, different types of degeneration do not appear simultaneously. When boundary degeneration without corners appears, the situation is covered in Proposition 2.62. When boundary degenerations with corners appear, Proposition 2.58 and Proposition 2.60 show the moduli spaces of degenerate disks are smoothly cut out. In particular, the standard gluing results can be applied to show each boundary degeneration in \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\rho})\) has a neighborhood diffeomorphic to \((0,1]\). The number of such ends is equal to
\[\sum_{\{(q,B_{1})|\exists B_{2}\in T(q),B_{1}+B_{2}=B\}}\#(\mathcal{M}^{B_{1} }(\boldsymbol{x},\boldsymbol{y};q)\times_{ev}\mathcal{N}^{B_{2}}(q; \overrightarrow{\rho}))\]
(We have suppressed the almost complex structure \(J_{s}\), which can be chosen generically so that the evaluation maps are transversal to each other.) This quantity is even when \(\overrightarrow{\rho}\neq(-\rho_{0},\ldots,-\rho_{3})\) in view of Proposition 2.60. Otherwise, it has the same parity as
\[\sum_{\{(q,B_{1})|\exists B_{2}\in T(q),B_{1}+B_{2}=B\}}\#\mathcal{M}^{B_{1} }(\boldsymbol{x},\boldsymbol{y};q)\]
in view of Proposition 2.58.
### Ends of moduli spaces of 1-P curves
This subsection characterizes the ends of one-dimensional moduli spaces of 1-P holomorphic curves. Given a generator \(\boldsymbol{x}\), we say \(\iota(\boldsymbol{x})=\iota_{1}\) if and only if \(\boldsymbol{x}\) is in \(\mathbb{T}_{\alpha,1}\); otherwise \(\iota(\boldsymbol{x})=\iota_{0}\). The main result is the following.
**Proposition 2.63**.: _Let \(B\in\tilde{\pi}_{2}(\boldsymbol{x},\boldsymbol{y})\) be such that \(\iota(\boldsymbol{x})=\iota_{1}\) and \(\text{ind}(B;U)=2\). Then, for a generic almost complex structure, the compactified moduli space \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};U)\) is a compact 1-manifold with boundary. The boundaries are of the following types:_
1. _Two-story buildings;_
2. _Simple holomorphic combs_ \((u,v)\) _with_ \(v\) _being an orbit curve;_
3. _Boundary degeneration with corners;_
4. _Boundary degeneration without corners._
_Moreover,_
1. _(a) The number of type (2) ends is_ \(\#\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};-\rho_{1230})+\#\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};-\rho_{3012})\)_._
2. _(b) The number of type (3) ends is mod 2 congruent to_ \[\sum_{\{(B_{1},\ q)|\exists B_{2}\in T(q),\ B_{1}+B_{2}=B\}}\#\mathcal{M}^{B_{1}}(\mathbf{x},\mathbf{y};q)\]
3. _(c) The number of type (4) ends is even._
_Remark 2.64_.: A similar proposition holds in the case when \(\iota(\mathbf{x})=\iota_{0}\). One simply needs to change the Reeb chords in (a) by a cyclic permutation of the digits in the subscript.
#### 2.8.1. Reformulation of the moduli spaces
We reformulate \(\mathcal{M}^{B}(\mathbf{x},\mathbf{y};U)\) in terms of holomorphic disks in \(Sym^{g}(\Sigma)\). Assume \(\iota(\mathbf{x})=\iota_{1}\) throughout the rest of the section.
**Definition 2.65**.: \(\mathcal{M}^{B}_{Sym}(\mathbf{x},\mathbf{y};U)\) is defined to be the space of holomorphic maps \(u:[0,1]\times\mathbb{R}\backslash\{(s_{0},0)\}\to Sym^{g}(\Sigma)\) such that:
1. \((s_{0},0)\) is in the interior of \([0,1]\times\mathbb{R}\) and is allowed to vary;
2. \(u(\{0\}\times\mathbb{R})\subset\mathbb{T}_{\beta}\);
3. \(u(\{1\}\times\mathbb{R})\subset\mathbb{T}_{\alpha,1}\). Moreover, \(u|_{\{1\}\times\mathbb{R}}\) lifts through \(\iota_{1}:(0,1)\times\mathbb{T}^{g-2}\times(\amalg S^{1})\to Sym^{g}(\Sigma)\);
4. \(\lim_{t\to\infty}u(s+it)=\mathbf{y}\), and \(\lim_{t\to-\infty}u(s+it)=\mathbf{x}\);
5. \(\lim_{(s,t)\to(s_{0},0)}u(s+it)\) is a closed \(\overrightarrow{R}\)-orbit \(\sigma\times\mathbf{w}\), where \(\mathbf{w}\in Sym^{g-1}(\Sigma)\) and \(\sigma\) stands for a closed Reeb orbit that traverses \(\partial\overline{\Sigma}\) once;
6. \(\frac{du}{ds}+J_{s}\frac{du}{dt}=0\);
7. \(u\) is in the homology class specified by \(B\).
Again, we have the tautological correspondence that identifies the moduli spaces defined here and the ones in Section 2.2. Therefore, we shall no longer keep the subscript "Sym" in the notation.
We also define the moduli spaces of one-punctured degenerate disks (with or without corners).
**Definition 2.66** (One-punctured degenerate disks without corners).: Let \(J\) be a nearly-symmetric almost complex structure. Let \(\mathbf{x}\in\mathbb{T}_{\alpha}\). \(\mathcal{N}_{J}(\mathbf{x};U)\) is the space of maps \(v:\mathbb{H}\backslash\{i\}\to Sym^{g}(\Sigma)\) such that
1. \(v(\mathbb{R})\subset\mathbb{T}_{\alpha,1}\). Moreover, \(v|_{\mathbb{R}}\) lifts through \(\iota_{1}:(0,1)\times\mathbb{T}^{g-2}\times(\amalg S^{1})\to Sym^{g}(\Sigma)\);
2. \(\lim_{z\to\infty}v(z)=\mathbf{x}\), and the path obtained from \(v|_{\partial\mathbb{H}}\) by continuous extension at \(\infty\) lifts through \(\iota_{1}\);
3. \(\lim_{z\to i}v(z)\) is some closed \(\overrightarrow{R}\)-orbit \(\sigma\times\mathbf{w}\), where \(\mathbf{w}\in Sym^{g-1}(\Sigma)\) and \(\sigma\) stands for a closed Reeb orbit that traverses \(\partial\overline{\Sigma}\) once;
4. \(\frac{dv}{ds}+J\frac{dv}{dt}=0\).
**Definition 2.67** (One-cornered one-punctured degenerate disks).: Let \(J\) be a nearly-symmetric almost complex structure. Let \(q\) be a self-intersection point of \(\alpha_{im}\). \(\mathcal{N}_{J}(q;U)\) is the space of maps \(v:\mathbb{H}\backslash\{i\}\to Sym^{g}(\Sigma)\) such that
1. \(v(\mathbb{R})\subset\mathbb{T}_{\alpha,1}\). Moreover, \(v|_{\mathbb{R}}\) lifts through \(\iota_{1}:(0,1)\times\mathbb{T}^{g-2}\times(\amalg S^{1})\to Sym^{g}(\Sigma)\);
2. \(\lim_{z\to\infty}v(z)=(q,\mathbf{p})\) for some \(\mathbf{p}\in\alpha_{1}^{a}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-2}^{c}\), and the path obtained from \(v|_{\partial\mathbb{H}}\) by continuous extension at \(\infty\) does not lift through \(\iota_{1}\);
3. \(\lim_{z\to i}v(z)\) is some closed \(\overrightarrow{R}\)-orbit \(\sigma\times\boldsymbol{w}\), where \(\boldsymbol{w}\in Sym^{g-1}(\Sigma)\) and \(\sigma\) stands for a closed Reeb orbit that traverses \(\partial\overline{\Sigma}\) once;
4. \(\frac{dv}{ds}+J\frac{dv}{dt}=0\).
We call \(q\) the corner of such a degenerate disk. We also have an evaluation map \(ev_{J}:\mathcal{N}_{J}(q;U)\to\alpha_{1}^{a}\times\alpha_{1}^{c}\times\cdots\times\alpha_{g-2}^{c}\) defined by \(v\mapsto\boldsymbol{p}\) if \(\lim_{z\to\infty}v(z)=(q,\boldsymbol{p})\).
#### 2.8.2. One-punctured boundary degeneration with corners
**Proposition 2.68**.: _If a boundary degeneration with corners appears in the compactification of a one-dimensional moduli space \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};U)\), then the nodal comb is of simple form, and the domain for the degenerate disk is a stabilized teardrop with an acute corner._
Proof.: The proof is similar to the proof of Proposition 2.57. There is only one modification needed: We no longer have east boundary punctures when considering the bubble tree \(\mathbb{B}\) of the nodal curve; instead, there is one and only one interior puncture. With this, the rest of the proof follows exactly as in Proposition 2.57.
**Proposition 2.69**.: _Let \(q\) be a self-intersection point of \(\alpha_{im}\) and let \(B\in T(q)\) be a stabilized teardrop with acute corner. For a generic nearly symmetric almost complex structure \(J\), the moduli space of degenerate disks \(\mathcal{N}_{J}^{B}(q;U)\) is a \((g-1)\)-manifold, and a generic fiber of the evaluation map \(ev_{J}:\mathcal{N}_{J}^{B}(q;U)\to\alpha_{1}^{a}\times\alpha_{1}^{c}\times \cdots\times\alpha_{g-2}^{c}\) is a compact 0-dimensional manifold consisting of an odd number of points._
Proof.: The regularity of \(\mathcal{N}_{J}^{B}(q;U)\) and compactness of a generic fiber are proved in the same way as in Proposition 2.58.
The parity of the cardinality of a generic fiber follows from a neck-stretching and cobordism argument similar to that in Proposition 2.58, using Lemma 2.70 below instead of Lemma 2.59.
**Lemma 2.70**.: _Assume \(g(\Sigma)=2\). Fix some point \(p\in\alpha_{1}^{a}\). For a sufficiently stretched almost complex structure \(j\) on \(\Sigma\), the fiber \(ev_{Sym^{2}(j)}^{-1}(p)\) is transversally cut out and consists of one point._
Proof.: View \(\Sigma\) as the connected sum of \((E_{1},\alpha_{1}^{a},\alpha_{2}^{a})\) and \((E_{2},\alpha_{im})\), where \(E_{1}\) is a punctured Riemann surface of genus one and \(E_{2}\) is a closed Riemann surface of genus one. Let \(z^{\prime}\) denote the points on \(E_{1}\) and \(E_{2}\) where the connected sum is performed. The domain \(B\) gives rise to a teardrop domain \(B^{\prime}\) in \(E_{2}\) with \(n_{z^{\prime}}(B^{\prime})=1\). The Riemann mapping theorem implies that the moduli space \(\mathcal{N}^{B^{\prime}}(q)\) of holomorphic disks in \(E_{2}\) with corner \(q\) and domain \(B^{\prime}\) is smoothly cut out and has only one element. The gluing argument in Section 10 of [10] shows that for a sufficiently stretched almost complex structure, maps in \(ev_{Sym^{2}(j)}^{-1}(p)\) are obtained by splicing the one-punctured holomorphic sphere in \(Sym^{2}(E_{1})\) passing through \((z^{\prime},p)\) and the holomorphic disk in \(\mathcal{N}^{B^{\prime}}(q)\).6 In particular, \(ev_{Sym^{2}(j)}^{-1}(p)\) is identified with \(\mathcal{N}^{B^{\prime}}(q)\) and hence consists of only one element.
#### 2.8.3. One-punctured boundary degeneration without corners
**Proposition 2.71**.: _Under the assumptions of Proposition 2.63, if a boundary degeneration without corners occurs, then:_
1. _There is only one degenerate disk, and its domain_ \([B]\) _is_ \([\Sigma]\)_._
2. \(\mathbf{x}=\mathbf{y}\)_._
3. _Such degenerate disks do not occur simultaneously with other types of degeneration._
4. _The number of ends corresponding to such boundary degeneration is even._
Proof.: The proofs of (1), (2), and (3) are straightforward modifications of those of Proposition 2.62 and are omitted. (4) follows from the standard gluing result and Proposition 2.72 below, which differs from its counterpart in the 0-P case.
**Proposition 2.72**.: _For a generic almost complex structure \(J\), \(\mathcal{N}_{J}^{[\Sigma]}(\mathbf{x};U)\) is a compact, 0-dimensional manifold that consists of an even number of points._
Proof.: The argument for compactness and transversality is the same as in [14, Proposition 3.14], which is the counterpart of Proposition 2.72 when the Heegaard surface is closed; we will omit this part. By a cobordism argument similar to the one used in Proposition 2.58, we can reduce understanding the parity of the moduli space to the base case \(g(\Sigma)=2\), which is addressed in Lemma 2.73 below.
**Lemma 2.73**.: _Assume \(g(\Sigma)=2\). View \((\Sigma,\alpha_{1}^{a},\alpha_{2}^{a},\alpha_{im})=(E_{1},\alpha_{1}^{a}, \alpha_{2}^{a})\#(E_{2},\alpha_{im})\), where \(E_{1}\) is a punctured Riemann surface of genus one and \(E_{2}\) is a Riemann surface of genus one. If \(j\) is a sufficiently stretched complex structure on \(\Sigma\), then \(\mathcal{N}_{Sym^{2}(j)}^{[\Sigma]}(\mathbf{x};U)\) is empty._
Proof.: Otherwise, the same neck-stretching procedure as in Lemma 2.61 produces a limit nodal holomorphic curve \(u_{\infty}:\mathbb{B}\to Sym^{2}(E_{1}\lor E_{2})\). It consists of a (possibly punctured) holomorphic disk \(v\) that maps to \(E_{1}\times E_{2}\) with boundary in \(\mathbb{T}_{\alpha,1}\) and possibly some (possibly punctured) sphere bubbles in \(Sym^{2}(E_{i})\), \(i=1,2\). We claim that \(v\) must be a constant map. It is clear that \(Pr_{E_{1}}\circ v\) is constant, for \(\pi_{2}(E_{1,\bar{e}},\alpha_{1}^{a}\cup\{e\})=0\), where \(E_{1,\bar{e}}\) denotes the Riemann surface obtained by filling in the east puncture. We next show that \(Pr_{E_{2}}\circ v\) is constant. Suppose \(Pr_{E_{2}}\circ v\) is not a constant map. Note that the domain of \((Pr_{E_{2}}\circ v)\) is a zero-cornered \(\alpha\)-bounded domain \(D\) in \(E_{2}\). Stabilizing by \(E_{1}\), this domain induces a zero-cornered \(\alpha\)-bounded domain \(D^{\prime}\) in \(\Sigma\) with \(n_{z}(D^{\prime})\leq 1\). If \(n_{z}(D^{\prime})=0\), then \(D^{\prime}\) does not exist as \(\mathcal{H}\) is unobstructed, and hence \(D\) does not exist. So \(n_{z}(D^{\prime})=1\), and hence \(D^{\prime}=\Sigma\) since \(\mathcal{H}\) is unobstructed. This implies \(D=E_{2}\). Therefore, \(\partial(Pr_{E_{2}}\circ v)\) is null-homotopic in \(\alpha_{im}\). So \(Pr_{E_{2}}\circ v\) induces a nontrivial element in \(\pi_{2}(E_{2})\). This, however, contradicts that \(\pi_{2}(E_{2})=0\). Therefore, \(Pr_{E_{2}}\circ v\) is also constant, and hence \(v\) is the constant map with image \(\mathbf{x}\). Now \(\mathbf{x}\) lies in neither \(Sym^{2}(E_{1})\) nor \(Sym^{2}(E_{2})\), and hence there are no sphere bubbles in \(u_{\infty}\). So the Gromov limit \(u_{\infty}\) is a constant map. In particular, \(n_{z}(u_{\infty})=0\). However, \(n_{z}(u_{\infty})=1\) as it is the limit of a sequence of holomorphic maps whose multiplicity at \(z\) is one. This is a contradiction. Therefore, \(\mathcal{N}_{Sym^{2}(j)}^{[\Sigma]}(\mathbf{x};U)\) is empty provided \(j\) is sufficiently stretched.
Proof of Proposition 2.63.: In view of Proposition 2.40, Proposition 2.68, and Proposition 2.71 we know the degenerations that can appear in the boundary of the compactified moduli spaces are two-story curves, simple combs with orbit curve ends, or simple boundary degenerations with or without corners. In all cases, gluing arguments can be applied to see the compactified moduli space \(\overline{\mathcal{M}}^{B}(\boldsymbol{x},\boldsymbol{y};U)\) is a one-manifold with boundary.
For conclusion (a), note that ends of type (2) correspond to pairs of curves \((u,v)\) where \(u\) is in \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};-\rho_{1230})\) or \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};-\rho_{3012})\) and \(v\) is an orbit curve; the moduli space of orbit curves consists of a single element by the Riemann mapping theorem, so the count of type (2) boundaries agrees with \(\#\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};-\rho_{1230})+\#\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};-\rho_{3012})\). For conclusion (b), standard gluing results imply that the number of such ends is equal to
\[\sum_{\{(q,B_{1})|\exists B_{2}\in T(q),B_{1}+B_{2}=B\}}\#(\mathcal{M}^{B_{1} }(\boldsymbol{x},\boldsymbol{y};q)\times_{ev}\mathcal{N}^{B_{2}}(q;U))\]
This is mod 2 equal to
\[\sum_{\{(q,B_{1})|\exists B_{2}\in T(q),B_{1}+B_{2}=B\}}\#\mathcal{M}^{B_{1} }(\boldsymbol{x},\boldsymbol{y};q)\]
as a generic fiber of \(ev\) in \(\mathcal{N}^{B_{2}}(q;U)\) has odd cardinality by Proposition 2.69. For (c), note that by gluing results the number of such ends is equal to \(\#\mathcal{N}^{[\Sigma]}(\boldsymbol{x};U)\). This is even by Proposition 2.72.
### Type D structures
We define type D structures from an immersed bordered Heegaard diagram \(\mathcal{H}=(\Sigma,\boldsymbol{\beta},\boldsymbol{\bar{\alpha}},z)\) in this subsection.
Let \(\mathcal{A}\) denote the _torus algebra_, which is isomorphic to the quiver algebra of the quiver in Figure 7 (left). For \(I\in\{1,2,3,12,23,123\}\), \(\rho_{I}\in\mathcal{A}\) is understood as the product of the \(\rho_{i}\)'s for those \(i\) appearing in \(I\). This algebra arises naturally in the context of bordered Heegaard diagrams, where \(\mathcal{A}\) is associated to the pointed matched circle determined by \(\mathcal{H}\) with the reversed boundary orientation (Figure 7 (right)); we refer the readers to [1, Chapter 11.1] for a detailed definition of the torus algebra in terms of pointed matched circles, and we only point out that the element \(\rho_{I}\in\mathcal{A}\) for \(I\in\{1,2,3,12,23,123\}\) corresponds to the Reeb chord with the same label on the pointed matched circle. Let \(\mathcal{I}=\langle\iota_{0}\rangle\oplus\langle\iota_{1}\rangle\) denote the ring of idempotents of \(\mathcal{A}\). We recall the definition of a type D structure.
Figure 7. The quiver presentation of the torus algebra (left) and the pointed matched circle of \(\mathcal{H}\) with reversed boundary orientation (right).
**Definition 2.74**.: A type D structure over the torus algebra \(\mathcal{A}\) is a left \(\mathcal{I}\)-module \(N\) together with a linear map \(\delta:N\to\mathcal{A}\otimes N\) such that the map
\[\partial\coloneqq(\mu_{\mathcal{A}}\otimes\mathbb{I}_{N})\circ(\mathbb{I}_{ \mathcal{A}}\otimes\delta):\mathcal{A}\otimes N\to\mathcal{A}\otimes N\]
is a differential, i.e., \(\partial^{2}=0\). The left differential \(\mathcal{A}\)-module \(\mathcal{A}\otimes N\) is called the type D module of the type D structure \((N,\delta)\).
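Unwinding this definition: since the torus algebra \(\mathcal{A}\) carries no differential, the condition \(\partial^{2}=0\) can be rewritten purely in terms of \(\delta\), namely

\[(\mu_{\mathcal{A}}\otimes\mathbb{I}_{N})\circ(\mathbb{I}_{\mathcal{A}}\otimes\delta)\circ\delta=0.\]

Concretely, if \(\delta(x)=\sum_{i}a_{i}\otimes x_{i}\) and \(\delta(x_{i})=\sum_{j}a_{ij}\otimes x_{ij}\), the requirement is that \(\sum_{i,j}a_{i}a_{ij}\otimes x_{ij}=0\) for every \(x\in N\).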
Next, we spell out the construction of a type D structure from an immersed bordered Heegaard diagram. Recall \(\mathbb{T}_{\beta}=\beta_{1}\times\cdots\times\beta_{g}\) and \(\mathbb{T}_{\alpha,i}=\alpha_{i}^{a}\times\alpha_{1}^{c}\times\cdots\times \alpha_{g-1}^{c}\), \(i=1,2\). Let \(\mathbb{T}_{\alpha}=\mathbb{T}_{\alpha,1}\cup\mathbb{T}_{\alpha,2}\). Let \(\mathcal{G}(\mathcal{H})=\{\boldsymbol{x}|\boldsymbol{x}\in\mathbb{T}_{\alpha }\cap\mathbb{T}_{\beta}\}\). Denote the local system on \(\alpha_{im}\) as a vector bundle \(\mathcal{E}\to\alpha_{im}\) together with a parallel transport \(\Phi\). Note that this induces a local system on \(\mathbb{T}_{\alpha}\), the tensor product of \(\mathcal{E}\) and the trivial local system on the other alpha curves (or arcs). Abusing notation, we still denote the local system on \(\mathbb{T}_{\alpha}\) by \((\mathcal{E},\Phi)\). Now define an \(\mathcal{I}\)-module \(X^{\mathcal{E}}(\mathcal{H})=\oplus_{\boldsymbol{x}\in\mathcal{G}(\mathcal{H })}\mathcal{E}|_{\boldsymbol{x}}\), where the \(\mathcal{I}\)-action on an element \(\eta\in\mathcal{E}|_{\boldsymbol{x}}\) is specified by
\[\iota_{i}\cdot\eta=\begin{cases}\eta,\ o(\boldsymbol{x})\equiv i\pmod{2}\\ 0,\ \text{otherwise}\end{cases}\]
Here \(o(\boldsymbol{x})=i\) if and only if \(\boldsymbol{x}\in\mathbb{T}_{\alpha,i}\), \(i=1,2\).
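For example, a generator \(\boldsymbol{x}\in\mathbb{T}_{\alpha,2}\) has \(o(\boldsymbol{x})=2\equiv 0\pmod{2}\), so \(\iota_{0}\cdot\eta=\eta\) and \(\iota_{1}\cdot\eta=0\) for every \(\eta\in\mathcal{E}|_{\boldsymbol{x}}\); this is consistent with the convention \(\iota(\boldsymbol{x})=\iota_{0}\) for \(\boldsymbol{x}\notin\mathbb{T}_{\alpha,1}\) used in Section 2.8.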
Given a sequence of Reeb chords \(\overrightarrow{\sigma}=(\sigma_{1},\ldots,\sigma_{k})\) of a pointed matched circle \(\mathcal{Z}\), \(a(-\overrightarrow{\sigma})\) is defined to be \((-\sigma_{1})\cdot(-\sigma_{2})\cdots(-\sigma_{k})\in\mathcal{A}(-\mathcal{Z})\). Note that given \(B\in\pi_{2}(\boldsymbol{x},\boldsymbol{y})\), the parallel transport restricted to the arc \(\partial_{\alpha_{im}}B\subset\alpha_{im}\) induces an isomorphism from \(\mathcal{E}|_{\boldsymbol{x}}\) to \(\mathcal{E}|_{\boldsymbol{y}}\), which we denote by \(\Phi^{B}_{\boldsymbol{x},\boldsymbol{y}}\).
**Definition 2.75**.: Let \(\mathcal{H}\) be an unobstructed, provincially admissible, immersed bordered Heegaard diagram. Fix a generic almost complex structure on \(\Sigma\times[0,1]\times\mathbb{R}\). The type D module \(\widehat{CFD}(\mathcal{H})\) is defined to be the \(\mathcal{A}\)-module
\[\mathcal{A}\otimes_{\mathcal{I}}X^{\mathcal{E}}(\mathcal{H})\]
together with a differential given by
\[\partial(a\otimes\eta)=a\cdot(\sum_{\boldsymbol{y}}\ \sum_{\{(B, \overrightarrow{\sigma})\mid\ n_{z}(B)=0,\ \operatorname{ind}(B,\overrightarrow{\sigma})=1\}}\#\mathcal{M}^{B}( \boldsymbol{x},\boldsymbol{y};\overrightarrow{\sigma})a(-\overrightarrow{ \sigma})\otimes\Phi^{B}_{\boldsymbol{x},\boldsymbol{y}}\eta),\]
where \(a\in\mathcal{A}\), \(\eta\in\mathcal{E}|_{\boldsymbol{x}}\), and the pairs \((B,\overrightarrow{\sigma})\) are compatible. The underlying type D structure is the pair \((X^{\mathcal{E}}(\mathcal{H}),\delta)\) where \(\delta(\eta)\coloneqq\partial(1\otimes\eta)\) for any \(\eta\in X^{\mathcal{E}}(\mathcal{H})\).
Abusing notation, we will also use \(\widehat{CFD}(\mathcal{H})\) to denote its underlying type D structure.
_Remark 2.76_.: Note when the local system is trivial, we can identify \(\boldsymbol{x}\) with \(\mathcal{E}|_{\boldsymbol{x}}\), and the differential defined above can be more conveniently written as
\[\partial(a\otimes\boldsymbol{x})=a\cdot(\sum_{\boldsymbol{y}}\ \sum_{\{(B, \overrightarrow{\sigma})\mid\ n_{z}(B)=0,\ \operatorname{ind}(B,\overrightarrow{\sigma})=1\}}\#\mathcal{M}^{B}( \boldsymbol{x},\boldsymbol{y};\overrightarrow{\sigma})a(-\overrightarrow{ \sigma})\otimes\boldsymbol{y}).\]
**Proposition 2.77**.: _The operator \(\partial\) in Definition 2.75 is well-defined and \(\partial^{2}=0\)._
Proof.: We first point out \(\partial\) is well-defined, i.e., the sum defining \(\partial\) is finite. This reduces to the provincial admissibility of \(\mathcal{H}\), which implies there are only finitely many positive domains with prescribed Reeb chords connecting any given pair of generators. The proof is standard, and we do not repeat it here.
We next show that \(\partial^{2}(\mathbf{x})=0\). For ease of explanation, we begin with the case of trivial local systems. Let \(a\) be a non-zero element of \(\mathcal{A}\), and let \(\langle\partial^{2}\mathbf{x},a\mathbf{y}\rangle\in\mathbb{F}\) denote the coefficient of the term \(a\mathbf{y}\) in \(\partial^{2}\mathbf{x}\). Then
\[\langle\partial^{2}\mathbf{x},a\mathbf{y}\rangle=\sum_{\mathbf{w}\in\mathcal{G}}\sum\# \mathcal{M}^{B_{1}}(\mathbf{x},\mathbf{w};\overrightarrow{\sigma_{1}})\#\mathcal{M}^{ B_{2}}(\mathbf{w},\mathbf{y};\overrightarrow{\sigma_{2}}), \tag{2.7}\]
where the second sum is over all the index-one compatible pairs \((B_{i},\overrightarrow{\sigma_{i}})\) (\(i=1,2\)) with \(a(-\overrightarrow{\sigma_{1}})\cdot a(-\overrightarrow{\sigma_{2}})=a\). In view of Proposition 2.47 and the gluing result, the right-hand side of Equation (2.7) is
\[\sum_{\{(B,\overrightarrow{\sigma})|\text{ind}(B,\overrightarrow{\sigma})=2, \ \ a(-\overrightarrow{\sigma})=a\}}\#\partial\overline{\mathcal{M}}^{B}(\mathbf{x}, \mathbf{y};\overrightarrow{\sigma})\equiv 0\pmod{2}\]
This finishes the proof in the case of trivial local systems. For the case of non-trivial local systems, the proof is a slight modification of the above argument. One needs to note that given \(B_{1}\in\pi_{2}(\boldsymbol{x},\boldsymbol{w})\) and \(B_{2}\in\pi_{2}(\boldsymbol{w},\boldsymbol{y})\), we have \(\Phi^{B_{2}}_{\boldsymbol{w},\boldsymbol{y}}\circ\Phi^{B_{1}}_{\boldsymbol{x},\boldsymbol{w}}=\Phi^{B_{1}+B_{2}}_{\boldsymbol{x},\boldsymbol{y}}\). Therefore, given an \(\eta\in\mathcal{E}|_{\boldsymbol{x}}\), the terms in \(\partial^{2}(\eta)\) corresponding to two-story ends of a one-dimensional moduli space \(\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y};\overrightarrow{\sigma})\) are multiples of the same element in \(\mathcal{E}|_{\boldsymbol{y}}\), namely \(\Phi^{B}_{\boldsymbol{x},\boldsymbol{y}}(\eta)\), and hence the coefficient is zero mod 2.
### Weakly extended type D structures
We define the weakly extended type D structure \(\widetilde{CFD}(\mathcal{H})\) in this subsection. The weakly extended torus algebra \(\tilde{\mathcal{A}}\) can be represented by the quiver with relations shown in Figure 8.
Note that, as in the torus algebra, we have the idempotent ring \(\mathcal{I}=\langle\iota_{0}\rangle\oplus\langle\iota_{1}\rangle\). Let \(\mathbf{U}\) be \(\rho_{0123}+\rho_{1230}+\rho_{2301}+\rho_{3012}\), which is a central element of \(\tilde{\mathcal{A}}\).
**Definition 2.78**.: A weakly extended type D structure over \(\tilde{\mathcal{A}}\) is a left \(\mathcal{I}\)-module \(N\) together with a linear map \(\tilde{\delta}:N\to\tilde{\mathcal{A}}\otimes N\) such that the map
\[\tilde{\partial}\coloneqq(\mu_{\tilde{\mathcal{A}}}\otimes\mathbb{I}_{N}) \circ(\mathbb{I}_{\tilde{\mathcal{A}}}\otimes\tilde{\delta}):\tilde{\mathcal{ A}}\otimes N\to\tilde{\mathcal{A}}\otimes N\]
squares to \(\mathbf{U}\), i.e. \(\tilde{\partial}^{2}=\mathbf{U}\). The curved left \(\tilde{\mathcal{A}}\)-module \(\tilde{\mathcal{A}}\otimes N\) is called the weakly extended type D module of the weakly extended type D structure \((N,\tilde{\delta})\).
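For concreteness, the curvature condition can be unpacked at the level of \(\tilde{\delta}\); the following is a routine rewriting of Definition 2.78 rather than an additional requirement. Writing \(\tilde{\delta}^{2}\coloneqq(\mu_{\tilde{\mathcal{A}}}\otimes\mathbb{I}_{N})\circ(\mathbb{I}_{\tilde{\mathcal{A}}}\otimes\tilde{\delta})\circ\tilde{\delta}\colon N\to\tilde{\mathcal{A}}\otimes N\), and using that \(\tilde{\partial}\) is left \(\tilde{\mathcal{A}}\)-linear and that \(\mathbf{U}\) is central, the condition \(\tilde{\partial}^{2}=\mathbf{U}\) is equivalent to
\[\tilde{\delta}^{2}(n)=\mathbf{U}\otimes n\quad\text{for all }n\in N.\]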
Let \(X^{\mathcal{E}}(\mathcal{H})\) be the \(\mathcal{I}\)-module defined the same way as in Section 2.9.
**Definition 2.79**.: Let \(\mathcal{H}\) be an unobstructed, provincially admissible, immersed bordered Heegaard diagram. Fix a generic admissible almost complex structure on \(\Sigma\times[0,1]\times\mathbb{R}\). The weakly extended type D module \(\widetilde{CFD}(\mathcal{H})\) is defined to be the \(\tilde{\mathcal{A}}\)-module
\[\tilde{\mathcal{A}}\otimes_{\mathcal{I}}X^{\mathcal{E}}(\mathcal{H})\]
Figure 8. The weakly extended torus algebra. The subscripts in the relation are understood mod 4.
together with a differential given by
\[\tilde{\partial}(a\otimes\eta)=a\cdot(\sum_{\boldsymbol{y}}\sum_{\{(B,\overrightarrow {\sigma})|\text{ ind}(B,\overrightarrow{\sigma})=1\}}\#\mathcal{M}^{B}( \boldsymbol{x},\boldsymbol{y};\overrightarrow{\sigma})a(-\overrightarrow{ \sigma})\otimes\Phi^{B}_{\boldsymbol{x},\boldsymbol{y}}\eta),\]
where \(a\in\tilde{\mathcal{A}}\), \(\eta\in\mathcal{E}|_{\boldsymbol{x}}\), \(\overrightarrow{\sigma}\) is a sequence of Reeb chords that include the case of a single closed Reeb orbit \(\{U\}\) (in which case the corresponding moduli space consists of 1-P holomorphic curves), and the pairs \((B,\overrightarrow{\sigma})\) are compatible. When \(\overrightarrow{\sigma}=\{U\}\), we define \(a(-U)=\boldsymbol{U}\). The underlying weakly extended type D structure is \((X^{\mathcal{E}}(\mathcal{H}),\tilde{\delta})\) where \(\tilde{\delta}(\eta)\coloneqq\tilde{\partial}(1\otimes\eta)\).
_Remark 2.80_.: Again, by abusing notation, we also use \(\widetilde{CFD}(\mathcal{H})\) to denote the underlying weakly extended type D structure. When the local system is trivial, we have the following more familiar formula for the differential:
\[\tilde{\partial}(a\otimes\boldsymbol{x})=a\cdot(\sum_{\boldsymbol{y}}\sum_{ \{(B,\overrightarrow{\sigma})|\text{ ind}(B,\overrightarrow{\sigma})=1\}}\#\mathcal{M}^{B}(\boldsymbol{x}, \boldsymbol{y};\overrightarrow{\sigma})a(-\overrightarrow{\sigma})\otimes \boldsymbol{y}).\]
**Proposition 2.81**.: _The operator \(\tilde{\partial}\) in Definition 2.79 is well-defined and \(\tilde{\partial}^{2}=\boldsymbol{U}\)._
Proof.: A standard argument shows that the provincial admissibility of \(\mathcal{H}\) implies the sum defining \(\tilde{\partial}\) in Definition 2.79 is finite, and hence \(\tilde{\partial}\) is well-defined.
Next, we show \(\tilde{\partial}^{2}=\boldsymbol{U}\). Once again, we first give the proof when the local system is trivial for conciseness. Recall the length of an element \(a\in\tilde{\mathcal{A}}\) is the number of factors \(\rho_{i}\in\{\rho_{0},\rho_{1},\rho_{2},\rho_{3}\}\) when we write \(a\) as a product of the generators \(\{\iota_{0},\iota_{1},\rho_{0},\rho_{1},\rho_{2},\rho_{3}\}\). (For example, \(\rho_{123}\) has length 3 and \(\iota_{0}\) has length 0.)
For an element \(a\in\tilde{\mathcal{A}}\) whose length is less than or equal to 3, the proof of Proposition 2.77 carries over to show \(\langle\tilde{\partial}^{2}\boldsymbol{x},a\boldsymbol{y}\rangle=0\) for any \(\boldsymbol{x}\) and \(\boldsymbol{y}\) (by permuting the region where we put the base point \(z\)).
We are left to consider the case where the algebra element is of length 4. We claim that for a generator \(\boldsymbol{x}\) such that \(\iota_{1}\cdot\boldsymbol{x}=\boldsymbol{x}\), we have
\[\langle\tilde{\partial}^{2}\boldsymbol{x},\rho_{0123}\boldsymbol{y}\rangle=\begin{cases}0,&\text{if }\boldsymbol{x}\neq\boldsymbol{y},\\ 1,&\text{if }\boldsymbol{x}=\boldsymbol{y}.\end{cases}\]
Assuming this claim, by permuting the subscripts we also have that \(\langle\tilde{\partial}^{2}\boldsymbol{x},\rho_{2301}\boldsymbol{y}\rangle\) is 1 if \(\boldsymbol{x}=\boldsymbol{y}\) and 0 otherwise, and an idempotent consideration shows \(\langle\tilde{\partial}^{2}\boldsymbol{x},a\boldsymbol{y}\rangle=0\) when \(a\in\{\rho_{1230},\rho_{3012}\}\). These together imply \(\tilde{\partial}^{2}\boldsymbol{x}=\boldsymbol{U}\cdot\boldsymbol{x}\) when \(\iota(\boldsymbol{x})=\iota_{1}\). A similar consideration shows this is true for \(\boldsymbol{x}\) with \(\iota(\boldsymbol{x})=\iota_{0}\) as well. This finishes the proof of the proposition modulo the claim.
Next, we prove the claim. Note
\[\langle\tilde{\partial}^{2}\boldsymbol{x},\rho_{0123}\boldsymbol{y}\rangle=\sum_{\boldsymbol{w}}\sum_{\begin{subarray}{c}\operatorname{ind}(B_{i},\overrightarrow{\sigma_{i}})=1,\\ i=1,2\end{subarray}}\#\mathcal{M}^{B_{1}}(\boldsymbol{x},\boldsymbol{w};\overrightarrow{\sigma_{1}})\#\mathcal{M}^{B_{2}}(\boldsymbol{w},\boldsymbol{y};\overrightarrow{\sigma_{2}}), \tag{2.8}\]
where the pairs \((B_{i},\overrightarrow{\sigma_{i}})\) (\(i=1,2\)) are compatible and \(a(-\overrightarrow{\sigma_{1}})a(-\overrightarrow{\sigma_{2}})=\rho_{0123}\) or \(\mathbf{U}\); the possible pairs of \((\overrightarrow{\sigma_{1}},\overrightarrow{\sigma_{2}})\) are listed below:
\[\bigg{\{}(\emptyset,\{-\rho_{0},-\rho_{1},-\rho_{2},-\rho_{3}\}),(\{-\rho_{0} \},\{-\rho_{1},-\rho_{2},-\rho_{3}\}),(\{-\rho_{0},-\rho_{1},-\rho_{2}\},\{- \rho_{3}\}),\\ (\{-\rho_{0},-\rho_{1}\},\{-\rho_{2},-\rho_{3}\}),(\{-\rho_{0},- \rho_{1},-\rho_{2},-\rho_{3}\},\emptyset),(\emptyset,\{-\rho_{0},-\rho_{123}\}),\\ (\{-\rho_{0}\},\{-\rho_{123}\}),(\{-\rho_{0},-\rho_{123}\}, \emptyset),(\emptyset,\{-\rho_{012},-\rho_{3}\}),(\{-\rho_{012}\},\{-\rho_{3} \}),\\ (\{-\rho_{012},-\rho_{3}\},\emptyset),(\emptyset,\{U\}),(\{U\}, \emptyset)\bigg{\}}.\]
Let
\[\overline{\mathcal{M}}_{0}\coloneqq\cup_{\operatorname{ind}(B,\overrightarrow {\sigma})=2}\overline{\mathcal{M}}^{B}(\mathbf{x},\mathbf{y};\overrightarrow{\sigma}),\]
where \(\overrightarrow{\sigma}\in\{(-\rho_{0},-\rho_{1},-\rho_{2},-\rho_{3}),(-\rho_ {0},-\rho_{123}),(-\rho_{012},-\rho_{3})\}\). Let
\[\overline{\mathcal{M}}_{1}:=\cup_{\operatorname{ind}(B,U)=2}\overline{ \mathcal{M}}^{B}(\mathbf{x},\mathbf{y};U).\]
Equation (2.8) and the gluing result imply that \(\langle\tilde{\partial}^{2}\mathbf{x},\rho_{0123}\mathbf{y}\rangle\) is equal to the number of two-story ends of the moduli space \(\overline{\mathcal{M}}_{0}\cup\overline{\mathcal{M}}_{1}\).
According to Proposition 2.49, the elements of \(\partial\overline{\mathcal{M}}_{0}\) other than two-story ends are:
(A-1) simple holomorphic combs with a single split component;
(A-2) simple boundary degenerations with one corner;
(A-3) simple boundary degenerations without corners.
Proposition 2.63 shows that the boundary points in \(\overline{\mathcal{M}}_{1}\) other than two-story ends are:
(B-1) simple holomorphic combs with an orbit curve;
(B-2) simple boundary degenerations with one corner;
(B-3) simple boundary degenerations without corners.
Note by Proposition 2.49 and Proposition 2.63, the number of boundary points of type (A-1) is equal to that of type (B-1), both of which are equal to
\[\sum_{\operatorname{ind}(B,-\rho_{3012})=1}\#\mathcal{M}^{B}(\mathbf{x},\mathbf{y};- \rho_{3012})+\sum_{\operatorname{ind}(B,-\rho_{1230})=1}\#\mathcal{M}^{B}( \mathbf{x},\mathbf{y};-\rho_{1230}).\]
The parity of the number of boundary points of type (A-2) is equal to that of type (B-2); both are equal mod 2 to
\[\sum_{q}\sum_{\{(B_{1},B_{2})|B_{2}\in T(q),\ \operatorname{ind}(B_{1}+B_{2};U)=2\}} \#\mathcal{M}^{B_{1}}(\mathbf{x},\mathbf{y};q),\]
where \(q\) ranges over self-intersection points of \(\alpha_{im}\), and \(T(q)\) denotes the set of stabilized teardrops at \(q\).
The number of boundary points of type (B-3) is even according to Proposition 2.63.
In summary, the parity of the number of boundary points of \(\overline{\mathcal{M}}_{0}\cup\overline{\mathcal{M}}_{1}\) corresponding to two-story ends is equal to that of type (A-3), which is odd if and only if \(\mathbf{x}=\mathbf{y}\) by Proposition 2.49. Therefore, \(\langle\tilde{\partial}^{2}\mathbf{x},\rho_{0123}\mathbf{y}\rangle\) is odd if and only if \(\mathbf{x}=\mathbf{y}\), finishing the proof of the claim.
In the presence of non-trivial local systems, we simply need to consider the above argument for each domain. For a domain \(B\), let \(\overline{\mathcal{M}}_{0}^{B}\) be the subset of \(\overline{\mathcal{M}}_{0}\) consisting of holomorphic curves with domain \(B\), and similarly define \(\overline{\mathcal{M}}_{1}^{B}\). The two-story ends
in \(\overline{\mathcal{M}}_{0}^{B}\cup\overline{\mathcal{M}}_{1}^{B}\) all correspond to the same parallel transport. When ends of type (A-3) do not occur, the two-story ends cancel in pairs by the same argument as above. When ends of type (A-3) appear, we have \(B=[\Sigma]\) and \(\sigma=(-\rho_{0},-\rho_{1},-\rho_{2},-\rho_{3})\). In particular, \(\partial_{\alpha_{im}}B=\emptyset\), which induces the identity endomorphism of \(\mathcal{E}|_{\boldsymbol{x}}\). Also, the number of two-story ends is odd as the number of (A-3) ends is odd. The claim follows from these.
There is a canonical quotient map \(\pi:\tilde{\mathcal{A}}\to\mathcal{A}\). We say a weakly extended type D structure \((N,\tilde{\delta})\) extends a type D structure \((N^{\prime},\delta)\) if \((N^{\prime},\delta)\) is isomorphic to \((N,(\pi\otimes\mathbb{I}_{N})\circ\tilde{\delta})\). Clearly, \(\widetilde{CFD}(\mathcal{H})\) extends \(\widehat{CFD}(\mathcal{H})\) when both are defined.
### Invariance
In this subsection, we address the invariance of the (weakly extended) type D structures.
**Proposition 2.82**.: _The homotopy type of the type D structure defined in Definition 2.75 is independent of the choice of the almost complex structure and is invariant under isotopy of the \(\alpha\)- or \(\beta\)-curves._
_Remark 2.83_.: We do not need the invariance under handleslides and stabilizations for our applications. We only need to prove invariance when perturbing diagrams to obtain nice diagrams, and this only requires isotopies.
Proof of Proposition 2.82.: The standard proof in Section 6.3 of [1] carries over. For instance, to prove independence of almost complex structures, one first constructs a continuation map by counting holomorphic curves in \(\Sigma\times[0,1]\times\mathbb{R}\) for a generic almost complex structure \(J\) that interpolates two admissible almost complex structures \(J_{0}\) and \(J_{1}\). Then, one proves the continuation map is a chain map by analyzing the ends of one-dimensional moduli spaces. The only possible complication comes from boundary degenerations since \(\alpha_{im}\) is immersed. However, this does not happen as \(\mathcal{H}\) is unobstructed and the holomorphic curves have \(n_{z}=0\). Therefore, no new phenomenon appears in the degeneration of moduli spaces, and hence the proof stays the same.
**Proposition 2.84**.: _The homotopy type of the weakly extended type D structure defined in Definition 2.79 is independent of the choice of the almost complex structure and is invariant under isotopy of the \(\alpha\)- and \(\beta\)-curves._
Proof.: One could prove this proposition similarly to the previous one. However, such an approach would require generalizing the analysis of the ends of moduli spaces in Proposition 6.20 of [1] and hence is slightly tedious to write down. Here we give a different approach. Let \(\mathcal{H}\) denote the immersed bordered Heegaard diagram. By Proposition 2.82, we know the homotopy type of \(\widehat{CFD}(\mathcal{H})\) is independent of the choice of almost complex structures and isotopy of the \(\alpha\)- or \(\beta\)-curves. Since \(\widetilde{CFD}(\mathcal{H})\) extends \(\widehat{CFD}(\mathcal{H})\) and such an extension is unique up to homotopy by Proposition 38 of [13], we know the homotopy type of \(\widetilde{CFD}(\mathcal{H})\) is also independent of the choice of almost complex structures and isotopy of the \(\alpha\)- and \(\beta\)-curves.
## 3. Knot Floer homology of immersed Heegaard diagrams
This section defines knot Floer chain complexes of immersed Heegaard diagrams and proves the homotopy invariance under Heegaard moves.
### Immersed doubly-pointed Heegaard diagram
**Definition 3.1**.: An _immersed doubly-pointed Heegaard diagram_ is a \(5\)-tuple \(\mathcal{H}_{w,z}=(\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta},w,z)\) where
1. \(\Sigma\) is a closed oriented surface of genus \(g\).
2. \(\boldsymbol{\alpha}=\{\alpha_{1},\ldots,\alpha_{g-1},\alpha_{g}\}\), where \(\alpha_{1},\ldots,\alpha_{g-1}\) are embedded disjoint curves in \(\Sigma\) and \(\alpha_{g}=\{\alpha_{g}^{1},\ldots,\alpha_{g}^{n}\}\) is a collection of immersed curves decorated with local systems. Moreover, \(\alpha_{i}\) (\(i=1,\ldots,g-1\)) are disjoint from \(\alpha_{g}\), \(\alpha_{g}^{1}\) has the trivial local system, and \(\{\alpha_{1},\ldots,\alpha_{g-1},\alpha_{g}^{1}\}\) induce linearly independent elements in \(H_{1}(\Sigma,\mathbb{Z})\). We also assume that \(\alpha_{g}^{i}\) is trivial in \(H_{1}(\Sigma,\mathbb{Z})/\langle\alpha_{1},\ldots,\alpha_{g-1}\rangle\) for \(i>1\). For convenience, we also denote \(\alpha_{g}\) by \(\alpha_{im}\).
3. \(\boldsymbol{\beta}=\{\beta_{1},\ldots,\beta_{g}\}\) are embedded disjoint curves in \(\Sigma\) which induce linearly independent elements in \(H_{1}(\Sigma,\mathbb{Z})\).
4. \(w\) and \(z\) are base points such that they both lie in a single connected region in the complement of \(\alpha\)-curves as well as a single region in the complement of \(\beta\)-curves.
Domains, periodic domains, and \(\alpha\)-bounded domains are defined similarly in this setting as for bordered Heegaard diagrams (by ignoring \(\alpha\)-arcs and surface boundary). We make a similar but slightly different definition of unobstructedness and admissibility below.
**Definition 3.2**.: Given an immersed doubly-pointed Heegaard diagram, \(\boldsymbol{\alpha}\) is called _unobstructed_ if there are no nontrivial zero- or one-cornered \(\alpha\)-bounded domains \(B\) with \(n_{z}(B)=0\) (or equivalently \(n_{w}(B)=0\)). An immersed doubly-pointed Heegaard diagram is called unobstructed if \(\boldsymbol{\alpha}\) is unobstructed.
**Definition 3.3**.: An immersed doubly-pointed Heegaard diagram is _bi-admissible_ if any nontrivial periodic domain \(B\) with \(n_{z}(B)=0\) or \(n_{w}(B)=0\) has both positive and negative coefficients.
We remark that the restriction to having only one immersed multicurve in the definition of immersed doubly-pointed Heegaard diagrams is not essential.
### The knot Floer chain complex
We define the knot Floer chain complex of an immersed Heegaard diagram similarly to the ordinary setup. The only modification is that we only count stay-on-track holomorphic curves. The definition and analysis of moduli spaces in this setup are a straightforward modification of those in the previous section; it is even simpler as we do not need to care about east punctures. We hence do not repeat the moduli space theory but only mention the key properties when we need them. We will let \(\mathcal{G}(\mathcal{H}_{w,z})\) denote the set of generators, which are \(g\)-tuples \((x_{1},\ldots,x_{g})\) such that \(x_{i}\in\alpha_{i}\cap\beta_{\sigma(i)}\) (\(i=1,\ldots,g\)) where \(\sigma\) is a permutation of \(\{1,\ldots,g\}\). Let \(\mathcal{R}=\mathbb{F}[U,V]/(UV)\). Implicit in the definition below is that we choose a generic admissible almost complex structure \(J\) on \(\Sigma\times[0,1]\times\mathbb{R}\).
**Definition 3.4**.: Let \(\mathcal{H}_{w,z}\) be an unobstructed and bi-admissible immersed doubly-pointed Heegaard diagram. \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z})\) is the free \(\mathcal{R}\)-module generated by \(\mathcal{G}(\mathcal{H}_{w,z})\), with differential \(\partial\) defined as
\[\partial\boldsymbol{x}=\sum_{y}\sum_{B\in\pi_{2}(\boldsymbol{x},\boldsymbol{y }),\ \operatorname{ind}(B)=1}\#\mathcal{M}^{B}(\boldsymbol{x},\boldsymbol{y})U^{n_{w }(B)}V^{n_{z}(B)}\boldsymbol{y},\]
where \(\mathbf{x},\mathbf{y}\in\mathcal{G}\).
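Note that, since \(UV=0\) in \(\mathcal{R}\), only domains avoiding at least one of the two base points can contribute to \(\partial\); this is merely an unpacking of the coefficient ring, not an extra hypothesis:
\[U^{n_{w}(B)}V^{n_{z}(B)}=0\ \text{in}\ \mathcal{R}=\mathbb{F}[U,V]/(UV)\quad\text{whenever}\ n_{w}(B)>0\ \text{and}\ n_{z}(B)>0.\]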
_Remark 3.5_.: Here we only give the definition assuming the local system on \(\alpha_{im}\) is trivial. The case in which the local system is non-trivial is only notationally more complicated, and we leave it for the interested readers to work out. See Definition 2.75 for an example.
**Proposition 3.6**.: \((CFK_{\mathcal{R}}(\mathcal{H}_{w,z}),\partial)\) _is a chain complex, i.e., \(\partial^{2}=0\)._
Proof.: The same proof as for Proposition 2.77 works here. Note we will only use moduli spaces with domains \(B\) such that \(n_{w}(B)=0\) or \(n_{z}(B)=0\), and the unobstructedness of \(\mathcal{H}_{w,z}\) excludes the possibility of boundary degeneration in the compactified 1-dimensional moduli space supported in such domains. Hence, an analogous version of Proposition 2.47 holds. With this observation, the proof of Proposition 2.77 carries over.
### Bi-grading
We would like to consider gradings on knot Floer chain complexes.
**Definition 3.7**.: A (possibly immersed) doubly-pointed Heegaard diagram is gradable if every non-trivial periodic domain \(P\) satisfies \(\operatorname{ind}(P)-2n_{z}(P)=0\) and \(\operatorname{ind}(P)-2n_{w}(P)=0\), where \(\operatorname{ind}(-)\) is defined in Definition 2.43.
If \(\mathcal{H}_{w,z}\) is gradable then the knot Floer chain complex \((CFK_{\mathcal{R}}(\mathcal{H}_{w,z}),\partial)\) admits a relative \(\mathbb{Z}\oplus\mathbb{Z}\)-grading, as described below. We will be interested in diagrams \(\mathcal{H}_{w,z}\) for which \(\widehat{HF}(\mathcal{H}_{w})\cong\widehat{HF}(\mathcal{H}_{z})\cong\mathbb{F}\), where \(\widehat{HF}(\mathcal{H}_{w})\) and \(\widehat{HF}(\mathcal{H}_{z})\) are homology groups of the chain complexes obtained from \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z})\) by setting \(V=0\) and \(U=1\) or \(U=0\) and \(V=1\), respectively. In this case we say that the horizontal and vertical homology has rank one. Gradable diagrams with this property can be given an absolute grading, as follows.
**Definition 3.8**.: Let \(\mathbf{x},\mathbf{y}\in\mathcal{G}(\mathcal{H}_{w,z})\) be two generators. Let \(B\in\tilde{\pi}_{2}(\mathbf{x},\mathbf{y})\) be a domain. Then the \(w\)-grading difference between \(\mathbf{x}\) and \(\mathbf{y}\) is given by
\[gr_{w}(\mathbf{x})-gr_{w}(\mathbf{y})=\operatorname{ind}(B)-2n_{w}(B),\]
and the \(z\)-grading difference between \(\mathbf{x}\) and \(\mathbf{y}\) is given by
\[gr_{z}(\mathbf{x})-gr_{z}(\mathbf{y})=\operatorname{ind}(B)-2n_{z}(B).\]
If the horizontal and vertical homology of \(\mathcal{H}_{w,z}\) is rank one, then the absolute \(w\)-grading is normalized so that \(\widehat{HF}(\mathcal{H}_{w})\) is supported in \(w\)-grading 0, and absolute \(z\)-grading is normalized so that \(\widehat{HF}(\mathcal{H}_{z})\) is supported in \(z\)-grading 0.
Equivalently, one can equip \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z})\) with the _Maslov grading_ and the _Alexander grading_. These two gradings can be expressed in terms of the \(w\)-grading and \(z\)-grading: the Maslov grading is equal to the \(z\)-grading, and the Alexander grading is given by \(\frac{1}{2}(gr_{w}-gr_{z})\).
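In particular, combining this with Definition 3.8, for any domain \(B\in\tilde{\pi}_{2}(\mathbf{x},\mathbf{y})\) the Alexander grading difference is given by (this is a direct consequence of the formulas above)
\[A(\mathbf{x})-A(\mathbf{y})=\frac{1}{2}\big((gr_{w}(\mathbf{x})-gr_{w}(\mathbf{y}))-(gr_{z}(\mathbf{x})-gr_{z}(\mathbf{y}))\big)=n_{z}(B)-n_{w}(B).\]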
_Remark 3.9_.: The normalization conditions for the absolute gradings are chosen so that the bi-graded chain complexes model those associated to knots in the 3-sphere.
### Invariance
We will show that knot Floer chain complexes defined over immersed Heegaard diagrams satisfy similar invariance properties when varying the almost complex structure or modifying the Heegaard diagram by isotopy, handleslides, and stabilizations. While the meanings of isotopy and stabilization are obvious for immersed Heegaard diagrams, we give a remark on handleslides.
_Remark 3.10_.: When speaking of handleslides of an immersed Heegaard diagram \(\mathcal{H}_{w,z}\), we only allow an \(\alpha\)-curve to slide over another _embedded_\(\alpha\)-curve, not over an immersed \(\alpha\)-curve. Furthermore, we point out that handleslides do not change the unobstructedness, bi-admissibility, and gradability of the diagram. To see this, note that the periodic domains of the two Heegaard diagrams before and after a handleslide are related. A periodic domain in the old Heegaard diagram with boundary on the curve that moves in the handleslide gives rise to a periodic domain in the new Heegaard diagram by boundary summing a thin annulus (whose multiplicity can be one or negative one). In particular, if we started from a somewhere negative domain \(B\), then the new domain \(B^{\prime}\) after this procedure is still somewhere negative; it is also easy to see \(\operatorname{ind}(B)=\operatorname{ind}(B^{\prime})\), \(n_{z}(B)=n_{z}(B^{\prime})\), and \(n_{w}(B)=n_{w}(B^{\prime})\), which implies the gradability of the two diagrams is the same as well.
**Proposition 3.11**.: _Let \(\mathcal{H}_{w,z}\) be an unobstructed, bi-admissible, and gradable immersed doubly-pointed Heegaard diagram. The bigraded chain homotopy type of \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z})\) is invariant under varying the almost complex structure, isotopy of the \(\alpha\)- and \(\beta\)-curves, handleslides, and stabilization/destabilization._
Proof.: The proof of the bigraded homotopy invariance under the variation of the almost complex structure, isotopy, and stabilization is the same as the corresponding results in the embedded-\(\alpha\)-curve set-up in [10]. In fact, changing the \(\alpha\)-curves from embedded to immersed can only complicate the arguments in that boundary degeneration might appear as ends of the moduli spaces involved, yet the unobstructedness dispels such worries.
The handleslide invariance can also be proved using the same strategy as in the embedded-\(\alpha\)-curve case with slightly more caution. The main difference is that in the embedded-\(\alpha\)-curve case, there is a unique maximal graded generator in the Heegaard Floer homology of a Heegaard diagram where the set of \(\alpha\)-curves is a small Hamiltonian perturbation of the \(\beta\)-curves. In contrast, such a generator needs to be specified more carefully in our case. We spell this out in more detail.
Denote \(\mathcal{H}=(\Sigma,\boldsymbol{\alpha},\boldsymbol{\beta},w,z)\). For clarity of exposition, assume \(\alpha_{im}\) consists of a single component with a trivial local system and \(n\) self-intersection points. We also restrict to the interesting case, in which the handleslide is sliding \(\alpha_{im}\) over an embedded \(\alpha\)-curve. Let \(\boldsymbol{\alpha^{\prime}}\) denote a small Hamiltonian perturbation of \(\boldsymbol{\alpha}\) so that \(\alpha_{i}^{\prime}\cap\alpha_{j}=\emptyset\) for \(i\neq j\); for \(i=1,\dots,g-1\), the embedded curves \(\alpha_{i}\) and \(\alpha_{i}^{\prime}\) intersect at exactly two points \(\{\theta_{i}^{+},\theta_{i}^{-}\}\); \(\alpha_{im}\) intersects \(\alpha_{im}^{\prime}\) at \(2+2n\) points \(\{\theta_{g}^{+},\theta_{g}^{-},\xi_{1}^{+},\xi_{1}^{-},\dots,\xi_{n}^{+},\xi_{n}^{-}\}\), where \(\xi_{i}^{\pm}\) are intersection points corresponding to the self-intersection points of \(\alpha_{im}\). We label the \(\theta\)-intersection points using the convention that \((\theta_{i}^{+},*)\) is of higher grading than \((\theta_{i}^{-},*)\) in \(CFK_{\mathcal{R}}(\Sigma,\boldsymbol{\alpha^{\prime}},\boldsymbol{\alpha},w,z)\), \((i=1,\dots,g)\) (see Figure 9 (a)). Let \(\alpha_{im}^{H}\) denote the curve obtained by sliding \(\alpha_{im}\) over, say, \(\alpha_{g-1}\), so that \(\alpha_{im}^{H}\) intersects each of \(\alpha_{im}\) and \(\alpha_{im}^{\prime}\) in \(2+2n\) points; denote the \(\theta\)-intersection points by \(\{\theta_{g}^{H,+},\theta_{g}^{H,-}\}\) and \(\{\theta_{g}^{\prime+},\theta_{g}^{\prime-}\}\), respectively. Let \(\alpha_{i}^{H}\) (\(i=1,\dots,g-1\)) be small Hamiltonian perturbations of \(\alpha_{i}^{\prime}\) so that \(\alpha_{i}^{H}\) intersects each of \(\alpha_{i}\) and \(\alpha_{i}^{\prime}\) at exactly two points, denoted by \(\{\theta_{i}^{H,+},\theta_{i}^{H,-}\}\) and
\(\{\theta_{i}^{{}^{\prime}+},\theta_{i}^{{}^{\prime}-}\}\), respectively. Let \(\Theta_{\alpha^{\prime},\alpha}=(\theta_{1}^{+},\ldots,\theta_{g}^{+})\), \(\Theta_{\alpha^{H},\alpha}=(\theta_{1}^{H,+},\ldots,\theta_{g}^{H,+})\), and \(\Theta_{\alpha^{\prime},\alpha^{H}}=(\theta_{1}^{{}^{\prime}+},\ldots,\theta_{ g}^{{}^{\prime}+})\). These correspond to the maximal graded intersection points used in the embedded case.7
Footnote 7: A straightforward computation shows \(\Theta_{\alpha^{\prime},\alpha}\) is indeed a cycle in the Floer chain complex associated to the immersed Heegaard diagram \((\Sigma,\boldsymbol{\alpha^{\prime}},\,\boldsymbol{\alpha},w,z)\); similar statements hold for \(\Theta_{\alpha^{H},\alpha}\) and \(\Theta_{\alpha^{\prime},\alpha^{H}}\).
The rest of the proof is similar to the embedded case. We provide a sketch. Let \(\mathcal{H}^{H}=(\Sigma,\boldsymbol{\alpha}^{H},\boldsymbol{\beta},w,z)\) and \(\mathcal{H}^{\prime}=(\Sigma,\boldsymbol{\alpha^{\prime}},\boldsymbol{\beta}, w,z)\). By counting holomorphic triangles (with stay-on-track boundaries), one can define chain maps
\[F(\Theta_{\alpha^{H},\alpha}\otimes-):CFK_{\mathcal{R}}(\mathcal{H})\to CFK_{ \mathcal{R}}(\mathcal{H}^{H})\]
and
\[F(\Theta_{\alpha^{\prime},\alpha^{H}}\otimes-):CFK_{\mathcal{R}}(\mathcal{H}^ {H})\to CFK_{\mathcal{R}}(\mathcal{H}^{\prime})\]
Again, the usual proof which shows the above maps are chain maps carries through, as the unobstructedness excludes boundary degeneration when analyzing the ends of one-dimensional moduli spaces of holomorphic triangles. Similarly, by analyzing ends of one-dimensional moduli spaces of holomorphic quadrilaterals, one can show the composition of these two maps is chain homotopy equivalent to \(F(F(\Theta_{\alpha^{\prime},\alpha^{H}}\otimes\Theta_{\alpha^{H},\alpha})\otimes-)\). One can show this map is chain homotopy equivalent to
\[F(\Theta_{\alpha^{\prime},\alpha}\otimes-):CFK_{\mathcal{R}}(\mathcal{H})\to CFK _{\mathcal{R}}(\mathcal{H}^{\prime})\]
by a standard computation which shows \(F(\Theta_{\alpha^{\prime},\alpha^{H}}\otimes\Theta_{\alpha^{H},\alpha})=\Theta_{ \alpha^{\prime},\alpha}\) (see Figure 9 (b)). One can show that the map \(F(\Theta_{\alpha^{\prime},\alpha}\otimes-)\) is a chain isomorphism (using the area-filtration technique in [10], Proposition 9.8).
## 4. Pairing theorems
In Sections 4.1–4.2, we introduce a pairing construction which merges a (non-immersed) bordered Heegaard diagram and an immersed multicurve to produce an immersed Heegaard diagram. After that, we establish the unobstructedness and admissibility of these pairing diagrams in Sections 4.3–4.5, and then we prove that the bordered invariant of such pairing diagrams admits a box-tensor product interpretation in Section 4.6. Finally, in Section 4.7 we prove a pairing theorem for gluing a particular type of doubly-pointed bordered Heegaard diagram and an immersed bordered Heegaard diagram; this theorem will be useful in Section 5.
### Immersed curves in the marked torus
**Definition 4.1**.: The _marked torus_\(T^{2}\) is the oriented surface \(\mathbb{R}^{2}/\mathbb{Z}^{2}\) together with a base point \(z\) located at \((1-\epsilon,1-\epsilon)\) for some sufficiently small \(\epsilon>0\). The images of the positively oriented \(x\)-axis and \(y\)-axis are called the _preferred longitude_ and _preferred meridian_ respectively.
We will consider immersed multicurves with local systems in the marked torus. Two immersed multicurves are _equivalent_ if they are regularly homotopic in \(T^{2}\backslash z\) and the local systems are isomorphic. Throughout this paper, we restrict to immersed multicurves \(\alpha_{im}\) satisfying the following assumptions:
(C-1) No component of \(\alpha_{im}\) is a circle enclosing the base point \(z\) once.
(C-2) No component of the immersed multicurve is null-homotopic in \(T^{2}\backslash\{z\}\), and the immersed multicurve is _unobstructed_ in the sense that it does not bound any teardrops in \(T^{2}\backslash\{z\}\).
(C-3) The immersed multicurve is _reduced_, i.e., if we let \([0,1]\times[0,1]\) be the square obtained by cutting the marked torus open along the preferred meridian and longitude, then no sub-arcs of \(\alpha_{im}\) contained in \([0,1]\times[0,1]\) have both ends on the same edge of the square.
(C-4) Let \(\pi\) denote the projection map from \(\mathbb{R}^{2}\) to \(T^{2}\). Using regular homotopy, we assume all immersed curves in the marked torus are contained in the complement of \(\pi([-\frac{1}{4},\frac{1}{4}]\times[-\frac{1}{4},\frac{1}{4}])\) in \(T^{2}\), the strands contained in \(\pi([-\frac{1}{4},\frac{1}{4}]\times[\frac{1}{4},\frac{3}{4}])\) are horizontal, and the strands contained in the image of \(\pi([\frac{1}{4},\frac{3}{4}]\times[-\frac{1}{4},\frac{1}{4}])\) are vertical.
An immersed multicurve in the marked torus determines a type D structure over the torus algebra as follows. First, we introduce some terminology.
**Definition 4.2**.: An _elementary arc_ is an embedded arc in the marked torus \(T^{2}\) such that it only intersects the preferred meridian or longitude at the endpoints. There are six types of elementary arc based on the position of the endpoints, each of which is labeled by a Reeb chord in \(\{\rho_{1},\rho_{2},\rho_{3},\rho_{12},\rho_{23},\rho_{123}\}\) as shown in Figure 10.
If we ignore the local systems, then any immersed multicurve is composed of a collection of elementary arcs; one can see this by cutting \(T^{2}\) open along the preferred longitude and meridian. Sometimes we also need to consider oriented elementary arcs.
**Definition 4.3**.: An orientation of an elementary arc is called the correct orientation if it is the one shown in Figure 10.
Next, we describe how to obtain a type D structure from an immersed multicurve in terms of elementary arcs. Denote the local system on \(\alpha_{im}\) by \((\mathcal{E},\Phi)\), where \(\mathcal{E}\) is a vector bundle over \(\alpha_{im}\) and \(\Phi\) is a parallel transport. Let \(\mathcal{G}(\alpha_{im})=\mathcal{G}_{m}\cup\mathcal{G}_{l}\), where \(\mathcal{G}_{m}\) (respectively, \(\mathcal{G}_{l}\)) is the set of intersection points of \(\alpha_{im}\) and the preferred meridian (respectively, longitude). Let \(\mathcal{X}\) be the vector space \(\oplus_{x\in\mathcal{G}(\alpha_{im})}\mathcal{E}|_{x}\). Next, we define an \(\mathcal{I}\)-action on \(\mathcal{X}\), where \(\mathcal{I}\) is the ring of idempotents of the torus algebra. If \(x\in\mathcal{G}_{m}\), then for any \(\tilde{x}\in\mathcal{E}|_{x}\), \(\iota_{0}\cdot\tilde{x}=\tilde{x}\) and \(\iota_{1}\cdot\tilde{x}=0\); if \(x\in\mathcal{G}_{l}\), then for any \(\tilde{x}\in\mathcal{E}|_{x}\), \(\iota_{0}\cdot\tilde{x}=0\) and \(\iota_{1}\cdot\tilde{x}=\tilde{x}\). The underlying \(\mathcal{A}\)-module for \(\widehat{CFD}(\alpha_{im})\) is \(\mathcal{A}\otimes_{\mathcal{I}}\mathcal{X}\). Finally, the differential on \(\widehat{CFD}(\alpha_{im})\) decomposes linearly as maps between \(\mathcal{E}|_{x}\) for \(x\in\mathcal{G}(\alpha_{im})\). Given \(x,y\in\mathcal{G}(\alpha_{im})\) and \(\rho_{I}\) a Reeb element, there is a differential map \(\mathcal{E}|_{x}\to\rho_{I}\otimes\mathcal{E}|_{y}\) if and only if \(x\) and \(y\) are connected by a \(\rho_{I}\)-elementary arc whose correct orientation goes from \(x\) to \(y\), in which case the differential is given by \(\partial(\tilde{x})=\rho_{I}\otimes\Phi(\tilde{x})\) for \(\tilde{x}\in\mathcal{E}|_{x}\). In particular, when the local system of \(\alpha_{im}\) is trivial, the generators of \(\widehat{CFD}(\alpha_{im})\) are in one-to-one correspondence with the intersection points of \(\alpha_{im}\) with the preferred longitude/meridian, and the differentials are in one-to-one correspondence with the elementary sub-arcs of \(\alpha_{im}\).
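As an illustration (assuming the convention, not restated here, that \(\rho_{1}\) has left idempotent \(\iota_{0}\) and right idempotent \(\iota_{1}\) in the torus algebra, i.e. \(\iota_{0}\rho_{1}\iota_{1}=\rho_{1}\)): suppose the local system is trivial and \(x\in\mathcal{G}_{m}\), \(y\in\mathcal{G}_{l}\) are connected by a \(\rho_{1}\)-elementary arc whose correct orientation runs from \(x\) to \(y\). Then the corresponding term in the differential is
\[\partial(x)=\rho_{1}\otimes y,\]
which is compatible with the idempotent actions \(\iota_{0}\cdot x=x\) and \(\iota_{1}\cdot y=y\).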
The immersed-curve presentation of type D structures is underpinned by the following result.
**Theorem 4.4** ([14]).: _Each Type D structure of a bordered 3-manifold with torus boundary is homotopic to a type D structure determined by some immersed multicurve (with local systems) in the marked torus._
_Remark 4.5_.: All immersed multicurves arising from 3-manifolds with torus boundary satisfy the assumptions (C-1)-(C-4): (C-4) is straightforward, (C-2) and (C-3) follow from the algorithm of converting type D structures to immersed multicurves in [14], and for (C-1) see the discussion around Figures 31 and 32 in [14].
We will mainly be interested in the immersed multicurves corresponding to type D structures of knot complements for knots in the 3-sphere; these immersed multicurves satisfy some further properties that we specify in Definition 4.6 below, and the proofs of these properties can be found in [14, Section 4].
**Definition 4.6**.: An immersed multicurve \(\alpha_{im}=\{\alpha_{im}^{0},\ldots,\alpha_{im}^{n-1}\}\) of \(n\) components (for some \(n\geq 1\)) with a local system is called knot-like if the local system restricted to \(\alpha_{im}^{0}\) is trivial, \(\alpha_{im}^{0}\) (with some orientation) is homologous to the preferred longitude in \(T^{2}\), and \([\alpha_{im}^{i}]\) for \(i\geq 1\) is trivial in \(H_{1}(T^{2},\mathbb{Z})\).
From now on, we assume all immersed multicurves are knot-like.
Figure 10. Six types of elementary arcs. The orientations are the so-called correct orientations.
### Pairing diagrams
We introduce a class of immersed bordered Heegaard diagrams and doubly pointed Heegaard diagrams. They are respectively obtained from two types of pairing constructions that we will define:
1. Pairing an immersed multicurve in the marked torus and an _arced bordered Heegaard diagram with two boundary components_ to construct an immersed bordered Heegaard diagram.
2. Pairing an immersed multicurve in the marked torus with a doubly pointed bordered Heegaard diagram to construct a closed immersed doubly pointed Heegaard diagram.
We begin with the first type. For convenience, we first recall the definition of arced bordered Heegaard diagrams below (in the special case where both boundaries of the corresponding bordered manifold are tori).
**Definition 4.7**.: An arced bordered Heegaard diagram with two boundary components is a quadruple \(\mathcal{H}^{a}=(\bar{\Sigma},\bar{\boldsymbol{\alpha}},\boldsymbol{\beta}, \boldsymbol{z})\) where
1. \(\bar{\Sigma}\) is a compact, oriented surface of genus \(g\) with two boundary components \(\partial\bar{\Sigma}=\partial_{L}\bar{\Sigma}\cup\partial_{R}\bar{\Sigma}\);
2. \(\bar{\boldsymbol{\alpha}}\) is a collection of pairwise disjoint properly embedded arcs and curves \(\{\alpha_{1}^{a,L},\alpha_{2}^{a,L},\alpha_{1}^{a,R},\alpha_{2}^{a,R},\alpha_ {1}^{c},\ldots,\alpha_{g-2}^{c}\}\). Here, \(\alpha_{1}^{a,L}\) and \(\alpha_{2}^{a,L}\) are two arcs with endpoints on \(\partial_{L}\bar{\Sigma}\), \(\alpha_{1}^{a,R}\) and \(\alpha_{2}^{a,R}\) are two arcs with endpoints on \(\partial_{R}\bar{\Sigma}\), and the \(\alpha_{i}^{c}\)'s (\(i=1,\ldots,g-2\)) are embedded circles. Moreover, elements in \(\bar{\boldsymbol{\alpha}}\) induce linearly independent elements in \(H_{1}(\bar{\Sigma},\partial\bar{\Sigma};\mathbb{Z})\);
3. \(\boldsymbol{\beta}\) is a set of \(g\) pairwise disjoint embedded circles \(\{\beta_{1},\ldots,\beta_{g}\}\) in the interior of \(\bar{\Sigma}\) that are linearly independent as elements in \(H_{1}(\bar{\Sigma},\partial\bar{\Sigma};\mathbb{Z})\);
4. \(\boldsymbol{z}\) is a properly embedded arc in \(\bar{\Sigma}\backslash(\bar{\boldsymbol{\alpha}}\cup\boldsymbol{\beta})\) with one endpoint \(z_{L}\) on \(\partial_{L}\bar{\Sigma}\) and the other endpoint \(z_{R}\) on \(\partial_{R}\bar{\Sigma}\).
Periodic and provincial periodic domains for arced bordered Heegaard diagrams with two boundary components are defined similarly to the case of a single boundary component. In the two-boundary case we will also consider periodic domains that are adjacent to only one of the boundaries.
**Definition 4.8**.: A domain is _left provincial_ if its multiplicities in the regions adjacent to \(\partial_{L}\bar{\Sigma}\) are zero. We say an arced bordered Heegaard diagram with two boundary components is _left provincially admissible_ if all left provincial periodic domains have both positive and negative multiplicities.
The pairing construction is illustrated in Figure 11, and is spelled out in Definition 4.9.
**Definition 4.9**.: Let \(\mathcal{H}^{a}=(\bar{\Sigma},\bar{\boldsymbol{\alpha}},\boldsymbol{\beta}, \boldsymbol{z})\) be an arced bordered Heegaard diagram with two boundary components and let \(\alpha_{im}\) be an immersed multicurve in the marked torus \(T^{2}\). The _pairing diagram of \(\mathcal{H}^{a}\) and \(\alpha_{im}\)_, denoted by \(\mathcal{H}^{a}(\alpha_{im})\), is a bordered Heegaard diagram obtained through the following steps.
1. Form \(\bar{\Sigma}^{\prime}\) from \(\bar{\Sigma}\) by collapsing \(\partial_{R}\bar{\Sigma}\). Let \(\alpha_{i}^{\prime a}\) be the image of \(\alpha_{i}^{a,L}\) (\(i=1,2\)), \(\alpha_{i}^{\prime c}\) be the image of \(\alpha_{i}^{c}\) (\(i=1,\ldots,g-2\)), \(\boldsymbol{\beta}^{\prime}\) be the image of \(\boldsymbol{\beta}\), and \(z_{L}^{\prime}\) be the image of \(z_{L}\). The images of \(\alpha_{i}^{a,R}\) (\(i=1,2\)), denoted by \(\tilde{\alpha}_{i}\), are two circles intersecting at a single point \(z_{R}^{\prime}\), the image of \(z_{R}\).
2. Take a neighborhood \(U\) of \(\tilde{\alpha}_{1}\cup\tilde{\alpha}_{2}\) which admits a homeomorphism \(h:U\to T^{2}\backslash\pi([-\frac{1}{4},\frac{1}{4}]\times[-\frac{1}{4},\frac{1}{4}])\) such that \(h(\tilde{\alpha}_{1})=\pi(\{\frac{1}{2}\}\times[0,1])\), \(h(\tilde{\alpha}_{2})=\pi([0,1]\times\{\frac{1}{2}\})\), and each connected component of \(h(\boldsymbol{\beta}^{\prime}\cap U)\) is an arc of the form \(\pi(\{x\}\times[\frac{1}{4},\frac{3}{4}])\) or \(\pi([\frac{1}{4},\frac{3}{4}]\times\{y\})\) for some \(x\) or \(y\) in \((2\epsilon,\frac{1}{4})\).
3. Let \(\alpha^{\prime}_{im}=h^{-1}(\alpha_{im})\). Let \(\bar{\boldsymbol{\alpha}}^{\prime}=\{\alpha^{\prime a}_{1},\alpha^{\prime a}_{2},\alpha^{\prime c}_{1},\ldots,\alpha^{\prime c}_{g-2},\alpha^{\prime}_{im}\}\).
4. Let \(\mathcal{H}^{a}(\alpha_{im})=(\bar{\Sigma}^{\prime},\bar{\boldsymbol{\alpha}} ^{\prime},\boldsymbol{\beta}^{\prime},z^{\prime}_{L})\).
Recall a _doubly pointed bordered Heegaard diagram_ is a bordered Heegaard diagram with an extra basepoint in the complement of the \(\alpha\)- and \(\beta\)-curves. It encodes a knot in a bordered \(3\)-manifold. There is an entirely similar pairing construction for a doubly-pointed bordered Heegaard diagram and an immersed multicurve in the marked torus. We do not spell out the wordy definition and simply refer the readers to Figure 12 for an example.
We want to establish the unobstructedness and admissibility of the immersed Heegaard diagrams obtained from pairing constructions. For that we need two tools, namely _z-adjacency_ and _the collapsing map_, introduced in the next two subsections.
Figure 11. Left: an arced bordered Heegaard diagram. Middle: an immersed multicurve in the marked torus. The dashed lines are the boundary of \(\pi([-\frac{1}{4},\frac{1}{4}]\times[-\frac{1}{4},\frac{1}{4}])\). Right: a bordered Heegaard diagram obtained by the pairing construction.
Figure 12. Pairing construction that gives rise to a doubly pointed Heegaard diagram.
### z-adjacency
We will consider a diagrammatic condition for immersed multicurves that guarantees the unobstructedness of the pairing diagram; this condition can be achieved easily by finger moves. We begin by introducing some terminology for convenience.
In the definition below, we orient the curves in \(\alpha_{im}\) arbitrarily and orient the four edges of the cut-open torus using the boundary orientation. For each edge of the cut-open torus, let \(k_{+}\) and \(k_{-}\) denote the number of elementary arcs intersecting the given edge positively and negatively, respectively.
**Definition 4.10**.: Let \(\alpha_{im}\) be an immersed multicurve in the marked torus. Then \(\alpha_{im}\) is _\(z\)-adjacent_ if, for each of the four edges of the cut-open torus, there exist four open disks \(U_{\pm}^{R}\) and \(U_{\pm}^{L}\) in \(T^{2}\) such that
1. \((U^{L}_{-},U^{L}_{-}\cap(\alpha_{im}\cup\{z\}))\), \((U^{R}_{-},U^{R}_{-}\cap(\alpha_{im}\cup\{z\}))\), \((U^{L}_{+},U^{L}_{+}\cap(\alpha_{im}\cup\{z\}))\), and \((U^{R}_{+},U^{R}_{+}\cap(\alpha_{im}\cup\{z\}))\) are homeomorphic to the corresponding disks in Figure 13, where the arcs in the disks are sub-arcs of the \(k_{-}\) distinct elementary arcs intersecting the given edge negatively (for disks with subscript \(-\)) or sub-arcs of the \(k_{+}\) distinct elementary arcs intersecting the given edge positively (for disks with subscript \(+\));
2. if the given edge is the top edge, then \(U^{L}_{-}\) and \(U^{R}_{+}\) are contained in \([0,1]\times[0,1]\);
3. if the given edge is the right edge, then \(U^{R}_{-}\) and \(U^{L}_{+}\) are contained in \([0,1]\times[0,1]\).
**Proposition 4.11**.: _Every immersed multicurve in the marked torus is regularly homotopic to a \(z\)-adjacent multicurve._
Proof.: Orient \(\alpha_{im}\) arbitrarily. We first define an operation on a collection of oriented parallel arcs. Assume there are \(k_{+}+k_{-}\) arcs, where \(k_{+}\)-many of the arcs are oriented in one direction, and the rest are oriented in the opposite direction. The operation is shown in Figure 14: First, by performing the finger moves in Figure 14
Figure 13. The disks \(U^{R}_{-}\) and \(U^{L}_{-}\). The superscript is chosen to suggest whether \(z\) is on the left or on the right of the strands when we traverse an arc in the indicated direction.
(a) repeatedly, we can arrive at a collection of arcs as shown on the left of Figure 14 (b): the \(P\)- and \(P^{-1}\)-boxes indicate a pair of mutually inverse permutations, and between the \(P\)- and \(P^{-1}\)-boxes the arcs are arranged so that all \(k_{-}\) arcs with parallel orientations are grouped on the left and all the other \(k_{+}\) arcs with the opposite orientation are grouped on the right. Next, do a sequence of finger moves to the diagram on the left of Figure 14 (b) to arrive at the right-hand-side diagram of Figure 14 (b). Now perform this operation on the arcs of \(\alpha_{im}\) near all four edges in the cut-open marked torus; then we have a \(z\)-adjacent immersed multicurve; see Figure 15 for the desired open disks. Note that conditions (2) and (3) are obviously satisfied because \(z\) is in the top right corner of the cut-open torus.
We shall need a technical lemma. Let \(l\) be a one-cornered sub-loop of \(\alpha_{im}\) with a corner \(q\). If we traverse \(l\) in either direction, we see it begins with an arc from \(q\) to the meridian or longitude, then a sequence of elementary arcs, and finally, an arc starting from the meridian or longitude and ending at \(q\). We call the starting and ending arcs the _non-elementary sub-arcs of \(l\)_, and the other sub-arcs _the elementary sub-arcs of \(l\)_.
Figure 14. Finger moves on parallel strands.
Figure 15. A \(z\)-adjacent immersed curve.
**Lemma 4.12**.: _Let \(\alpha_{im}\) be a \(z\)-adjacent immersed curve. Let \(D\) be a positive domain in \(T^{2}\) bounded by a \(k\)-cornered (sub)loop of \(\alpha_{im}\)._
1. _If_ \(n_{z}(D)=n\) _for some_ \(n\geq 0\) _and_ \(k=0\) _or_ \(1\)_, then for any side of the cut-open marked torus_ \([0,1]\times[0,1]\) _and any sign, the number of elementary sub-arcs in_ \(\partial D\) _intersecting the given side with the given sign is less than or equal to_ \(n\)_._
2. _If_ \(n_{z}(D)=0\)_, then for arbitrary_ \(k\geq 0\)_, there are no elementary subarcs contained in_ \(\partial D\)_._
Proof.: We prove (1) first. We will only consider the case in which the elementary sub-arcs intersect the given edge negatively and remark that the other case is similar.
We argue by contradiction. Suppose there are \(k_{-}>n\) elementary sub-arcs contained in \(\partial D\) intersecting the given edge negatively. Since \(\partial D\) is \(0\)- or \(1\)-cornered it has an orientation induced by the orientation on \(\alpha_{im}\). Examining the local diagram \((U^{L}_{-},U^{L}_{-}\cap(\partial D\cup\{\boldsymbol{z}\}))\) in Figure 13, one sees \(D\) has multiplicity \(n-k_{-}<0\) in the left-most region, which contradicts our assumption that \(D\) is a positive domain. Therefore, \(k_{-}\leq n\).
Next, we prove (2). Assume there is an elementary sub-arc in \(\partial D\). Then no matter how this sub-arc is oriented, \(z\) is on both the left and right of it. As \(n_{z}(D)=0\), there is a region with \(-1\) multiplicity, which contradicts that \(D\) is positive.
### The collapsing operation
To relate the domains of the pairing diagram \(\mathcal{H}^{a}(\alpha_{im})\) and the arced bordered diagram \(\mathcal{H}^{a}\), we define the so-called _collapsing operation_. This operation was previously defined in the case of pairing genus-one bordered Heegaard diagrams with immersed curves [10], and we give the general case here. The operation is pictorially shown in Figure 16, and the definition is given below.
**Definition 4.13**.: The collapsing operation on \(\mathcal{H}^{a}(\alpha_{im})\) is defined to be the composition of the following modifications of the diagram:
1. Extend the map \(h\) in Definition 4.9 to identify \(T^{2}-\pi([-\frac{1}{4}+\epsilon,\frac{1}{4}-\epsilon]\times[-\frac{1}{4}+ \epsilon,\frac{1}{4}-\epsilon])\) with a slightly larger neighborhood of \(U=h^{-1}(T^{2}-\pi([-\frac{1}{4},\frac{1}{4}]\times[-\frac{1}{4},\frac{1}{4}]))\). Here \(\epsilon\) is a sufficiently small positive number.
2. Puncture \(h^{-1}((\frac{3}{4},\frac{3}{4}))\), and enlarge it to a hole so that under the identification map \(h\), the boundary of the hole is a square of side length \(\frac{1}{2}+2\epsilon\) with rounded corners modeled on a quarter of a circle of radius \(\epsilon\). While enlarging the hole, we push the immersed curves encountered along the way so that part of the immersed curves are squeezed to the boundary of the hole.
3. Collapse \(h^{-1}(\pi([-\frac{1}{4}+\epsilon,\frac{1}{4}-\epsilon]\times[\frac{1}{4},\frac{3}{4}]))\) to the core \(h^{-1}(\pi([-\frac{1}{4}+\epsilon,\frac{1}{4}-\epsilon]\times\{\frac{1}{2}\}))\), which is denoted \(\alpha_{1}^{a,R}\). Collapse \(h^{-1}(\pi([\frac{1}{4},\frac{3}{4}]\times[-\frac{1}{4}+\epsilon,\frac{1}{4}-\epsilon]))\) to the core \(h^{-1}(\pi(\{\frac{1}{2}\}\times[-\frac{1}{4}+\epsilon,\frac{1}{4}-\epsilon]))\), which is denoted \(\alpha_{2}^{a,R}\).
_Remark 4.14_.:
1. Clearly, the outcome of the collapsing operation on \(\mathcal{H}^{a}(\alpha_{im})\) can be identified with \(\mathcal{H}^{a}\).
2. Each elementary arc in \(\alpha_{im}\) standing for \(\rho_{I}\in\{\rho_{1},\rho_{2},\rho_{3},\rho_{12},\rho_{23},\rho_{123}\}\) is mapped under the collapsing map to an arc that passes the Reeb chord \(\rho_{I}\) in \(\mathcal{Z}^{R}\) of \(\mathcal{H}^{a}\). Note that an oriented elementary sub-arc is _correctly oriented_ if
it induces a Reeb chord in \(\mathcal{Z}^{R}\) under the collapsing map, i.e., the orientations coincide.
3. The intersection points in \(\mathcal{G}(\mathcal{H}^{a}(\alpha_{im}))\), which are of the form \(\boldsymbol{x}\otimes a\), are in one-to-one correspondence with \(\mathcal{G}(\mathcal{H}^{a})\otimes_{\mathcal{I}_{R}}\mathcal{G}(\alpha_{im})\), where the tensor product is taken over \(\mathcal{I}_{R}\subset\mathcal{A}(\mathcal{Z}_{R})\). Indeed, given an intersection point \(\xi\in\mathcal{G}(\mathcal{H}^{a}(\alpha_{im}))\), its image under the collapsing map yields an intersection point \(\boldsymbol{x}\) in \(\mathcal{H}^{a}\). Also, the component of \(\xi\) on \(\alpha_{im}\) uniquely gives rise to an intersection point \(a\) of \(\alpha_{im}\) as follows. By the definition of the pairing operation, when we pull back the intersection point on \(\alpha_{im}\) to the marked torus, it lies in a horizontal or vertical arc as described in assumption (C-4) on immersed multicurves, which uniquely corresponds to an intersection point of \(\alpha_{im}\) with the longitude or meridian. Therefore, every intersection point \(\xi\) in \(\mathcal{H}^{a}(\alpha_{im})\) can be written as \(\boldsymbol{x}\otimes a\). It is easy to see this induces a one-to-one correspondence between \(\mathcal{G}(\mathcal{H}^{a}(\alpha_{im}))\) and \(\mathcal{G}(\mathcal{H}^{a})\otimes_{\mathcal{I}_{R}}\mathcal{G}(\alpha_{im})\).
We will give a proposition relating the domains of \(\mathcal{H}^{a}(\alpha_{im})\) and \(\mathcal{H}^{a}\). Let \(l\) be an oriented arc on \(\alpha_{im}\) such that all the elementary sub-arcs are oriented correctly. We use \(\overrightarrow{\rho}(l)\) to denote the sequence of Reeb chords determined by \(l\).
**Proposition 4.15**.: _Assume the immersed multicurve \(\alpha_{im}\) is \(z\)-adjacent. Let \(B\) be a positive domain in \(\mathcal{H}^{a}(\alpha_{im})\) corresponding to a homology class in \(\pi_{2}(\boldsymbol{x}\otimes a,\boldsymbol{y}\otimes b,\overrightarrow{ \sigma})\) with \(n_{z}(B)=0\). Then the image of \(B\) under the collapsing map is a positive domain \(B^{\prime}\) in \(\mathcal{H}^{a}\) corresponding to a homology class \(\pi_{2}(\boldsymbol{x},\boldsymbol{y},\overrightarrow{\rho}(\partial_{ \alpha_{im}}B),\overrightarrow{\sigma})\) with \(n_{z}(B^{\prime})=0\). Here, \(\partial_{\alpha_{im}}B\) refers to the arc on \(\alpha_{im}\) connecting the corresponding components of \(\boldsymbol{x}\otimes a\) and \(\boldsymbol{y}\otimes b\). Moreover,_
\[e(B^{\prime})=e(B)-\frac{|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|}{ 2}.\]
Proof.: It is clear that \(B^{\prime}\) is positive and \(n_{z}(B^{\prime})=0\). It is also clear that \(B^{\prime}\) gives rise to a domain connecting \(\boldsymbol{x}\) and \(\boldsymbol{y}\). We need to show that \(B^{\prime}\) has the Reeb chords \(\overrightarrow{\rho}(\partial_{\alpha_{im}}B)\) at east infinity. We claim all the elementary arcs appearing in \(\partial_{\alpha_{im}}B\) are correctly oriented, and hence \(\partial_{\alpha_{im}}B\) gives rise to a monotonic arc (in the sense that all the Reeb chords appearing on the arc respect the boundary orientation) connecting (the components on the \(\alpha\) arc of) \(\boldsymbol{x}\) to \(\boldsymbol{y}\) under the collapsing map. The sequence of Reeb chords appearing in this arc is exactly \(\overrightarrow{\rho}(\partial_{\alpha_{im}}B)\) in view of Remark 4.14 (2). To see the claim, note \(\alpha_{im}\) is \(z\)-adjacent and \(B\) is positive with \(n_{z}(B)=0\). Therefore, if an elementary arc on \(\partial_{\alpha_{im}}B\) intersects the top edge or the right edge, then its orientation is forced by the positivity of domains and conditions (2) and (3) in Definition 4.10, and the orientation is the correct orientation. The only type of elementary arc that intersects neither the top edge nor the right edge corresponds to \(\rho_{2}\). If an elementary arc corresponding to \(\rho_{2}\) on \(\partial_{\alpha_{im}}B\) has a successor or predecessor, then the correct orientation on the successor or the predecessor induces the correct orientation on it. Otherwise, \(\partial_{\alpha_{im}}B\) has only one elementary arc corresponding to \(\rho_{2}\), in which case it is clear that the elementary arc is correctly oriented.
Next, we compare the Euler measures. Divide the domain \(B\) into two parts \(B_{1}\) and \(B_{2}\), along the square with rounded corners, which is the boundary of the hole in Step 2 of the collapsing operation. (See Figure 17.) This time, we do not puncture the interior of the square. Let \(B_{1}\) denote the part of \(B\) outside of the square, and let \(B_{2}\) denote the part inside the square (which is pushed onto the boundary
circle under the collapsing map). Then \(e(B_{1})=e(B^{\prime})\) since these two domains differ by a collection of rectangles whose Euler measures are zero; these rectangles are collapsed in Step 3 of the collapsing operation. As \(\alpha_{im}\) is \(z\)-adjacent, \(B_{2}\) is positive, and \(n_{z}(B)=0\), we see \(B_{2}\) can be further expressed as a sum of simple domains determined by the elementary arcs appearing in \(\partial_{\alpha_{im}}B\) (counted with multiplicity). (See Figure 18.) Each simple domain of multiplicity one has Euler measure \(\frac{1}{2}\), and there are \(|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|\) many of them being collapsed (in Step 2 of the collapsing operation) in order to obtain \(B^{\prime}\). Therefore, \(e(B^{\prime})=e(B)-\frac{|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|}{2}\).
### Unobstructedness and admissibility of pairing diagrams
**Proposition 4.16**.: _Let \(\alpha_{im}\subset T^{2}\) be a \(z\)-adjacent immersed multicurve. Then the pairing diagram \(\mathcal{H}^{a}(\alpha_{im})\) of an arced bordered Heegaard diagram \(\mathcal{H}^{a}\) and \(\alpha_{im}\) is
Figure 16. The collapsing operation.
unobstructed. Furthermore, \(\mathcal{H}^{a}(\alpha_{im})\) is provincially admissible provided \(\mathcal{H}^{a}\) is left provincially admissible. (See Definition 2.8 and Definition 2.9.)_
Proof of Proposition 4.16.: Consider the bordered Heegaard diagram \(\mathcal{H}^{a}(\alpha_{im})=(\bar{\Sigma}^{\prime},\bar{\boldsymbol{\alpha}}^ {\prime},\boldsymbol{\beta}^{\prime},z^{\prime})\) obtained from pairing an arced bordered Heegaard diagram \(\mathcal{H}^{a}=(\bar{\Sigma},\bar{\boldsymbol{\alpha}},\boldsymbol{\beta}, \boldsymbol{z})\) and a \(z\)-adjacent immersed multicurve \(\alpha_{im}\).
We begin by showing \(\bar{\boldsymbol{\alpha}}^{\prime}\) is unobstructed in the sense of Definition 2.8. Let \(B\) be a zero- or one-cornered \(\alpha\)-bounded domain. Since the curves \(\{\alpha_{1}^{a},\alpha_{2}^{a},\alpha_{1},\ldots,\alpha_{g-2},\alpha_{im}^{0}\}\) are pairwise disjoint and homologically independent in \(H_{1}(\bar{\Sigma},\partial)\), \([\partial B]\) (as a homology class) is equal to a linear combination of at most one copy of \(\partial\bar{\Sigma}\) and some homologically trivial zero- or one-cornered loop contained in a single connected component of \(\alpha_{im}\).
We first show there are no positive zero- or one-cornered \(\alpha\)-bounded domains \(B\) with \(n_{z^{\prime}}(B)=0\). In this case, \(\partial B\) is a homologically trivial zero- or one-cornered loop contained in a single connected component of \(\alpha_{im}\), i.e., \(\partial\bar{\Sigma}\) does not appear in \(\partial B\). As the \(\boldsymbol{\beta}\)-curves are irrelevant to our consideration, we may assume the \(\alpha\)-curves of \(\mathcal{H}^{a}\) are in standard position. Therefore, there is an obvious circle \(C\subset\bar{\Sigma}\) that splits \(\bar{\Sigma}\) into a genus-\((g-1)\) surface \(E_{1}\) containing \(\{\alpha_{1}^{a,L},\alpha_{2}^{a,L},\alpha_{1}^{c},\ldots,\alpha_{g-2}^{c}\}\) and a genus-one surface \(E_{2}\) containing \(\alpha_{1}^{a,R}\) and \(\alpha_{2}^{a,R}\). Let \(C^{\prime}\) be the corresponding curve on \(\bar{\Sigma}^{\prime}\). Then after surgery along \(C^{\prime}\), \(B\) induces a positive domain \(D\) in the marked torus \(T^{2}\) (obtained from \(E_{2}\) in an obvious way), and \(D\) is bounded by a zero- or one-cornered (sub)loop of \(\alpha_{im}\). According to Lemma 4.12, \(\partial D\) contains no elementary sub-arcs, so \(D\) cannot exist.
Next we show that if \(n_{z^{\prime}}(B)=1\), \(B\) is a stabilized teardrop or \([\Sigma^{\prime}]\) depending on whether \(\partial B\) is one-cornered or zero-cornered. In this case, after performing surgery along the same \(C^{\prime}\) as in the previous paragraph, \(B\) gives rise to two domains: one is \([E_{1}^{\prime}]\), where \(E_{1}^{\prime}\) is the genus-\((g-1)\) surface, and the other is a positive domain \(D\) contained in the marked torus \(T^{2}\) with \(n_{z}(D)=1\). We first consider the case in which \(\partial D\) is zero-cornered. If \(\partial D=\emptyset\), then \(D=E_{2}\) and hence \(B=[\Sigma^{\prime}]\). If \(\partial D\neq\emptyset\),
then according to Lemma 4.12, it consists of exactly four elementary sub-arcs, and hence is a circle enclosing the \(z\)-basepoint once. However, such circles are assumed not to exist. When \(\partial D\) is one-cornered, we claim \(D\) is a teardrop. To see this, note that Lemma 4.12 implies that \(\partial D\) crosses the meridian at most three times and the longitude at most three times, since each time the meridian or the longitude is crossed (except possibly the last time) the intersection is the beginning of an elementary sub-arc, and there are at most two elementary sub-arcs starting on each. Because \(\partial D\) is homologically trivial in \(H_{1}(T^{2})\), it crosses each of the meridian and the longitude an even number of times, so it crosses each at most twice. It follows that \(\partial D\) must circle once around \(z\), and \(D\) is a teardrop with \(n_{z}(D)=1\).
Figure 17. Left: \(B=B_{1}+B_{2}\). Right: \(B^{\prime}\).
Figure 18. Simple domains corresponding to Reeb elements in \(\mathcal{A}\).
Now we show any two-cornered positive \(\alpha\)-bounded domain \(B\) with \(n_{z^{\prime}}(B)=0\) is a bigon. To see this, we may split \(\Sigma^{\prime}\) as \(E_{1}\#E_{2}\) as before and regard \(B\) as a domain in \(E_{2}\) with \(n_{z^{\prime}}=0\). Note that by Lemma 4.12 (2), \(\partial B\) contains no elementary subarcs, and hence \(B\) must be of the form shown in Figure 19 (up to rotation), which is a bigon. (Note we do not require the corners of the bigon \(B\) to be convex.)
So far, we have proved \(\bar{\alpha}^{\prime}\) is unobstructed. We now show there are no non-trivial positive provincial periodic domains. If not, let \(B\) be a non-trivial positive provincial periodic domain for \(\mathcal{H}^{a}(\alpha_{im})\). Then by Proposition 4.15, \(\Psi(B)\) is a positive periodic domain for \(\mathcal{H}^{a}\), where \(\Psi\) denotes the collapsing map. Note \(\Psi(B)\) is left provincial. As \(\mathcal{H}^{a}\) is left provincially admissible, we have \(\Psi(B)=0\), and hence \(\partial B\) contains no \(\beta\)-curves. So, \(B\) is a positive zero-cornered \(\alpha\)-bounded domain with \(n_{z^{\prime}}(B)=0\), but such domains are already excluded by unobstructedness.
### The first pairing theorem
Recall a bordered Heegaard diagram is _nice_ if every connected region in the complement of the \(\alpha\)- and \(\beta\)-curves is a disk with at most four corners, except for the region containing \(z\). Any bordered Heegaard diagram can be turned into a nice diagram via isotopy and handleslides of the \(\beta\)-curves (Proposition 8.2 of [11]). The key property of nice Heegaard diagrams is that the Euler measure of any region away from the base point is non-negative. Via the index formula, this property imposes strong constraints on the domains that support holomorphic representatives, and it opens the way to a combinatorial proof of the pairing theorem.
**Theorem 1.4**.: _Let \(\mathcal{H}^{a}\) be a left provincially admissible arced bordered Heegaard diagram, and let \(\alpha_{im}\) be a \(z\)-adjacent immersed multicurve. Then_
\[\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\cong\widehat{CFDA}(\mathcal{H}^{ a})\boxtimes\widehat{CFD}(\alpha_{im}).\]
Proof.: In view of the homotopy equivalence of the relevant invariants under isotopy of \(\beta\) curves (Proposition 2.82), we may assume \(\mathcal{H}^{a}\) is a nice arced bordered Heegaard diagram. Note nice arced bordered Heegaard diagrams are automatically left
provincially admissible. Therefore, \(\widehat{CFDA}(\mathcal{H}^{a})\) and \(\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\) are defined. In fact, a stronger admissibility condition holds for \(\mathcal{H}^{a}\): any nontrivial periodic domain with \(n_{z}=0\) has both positive and negative local multiplicities. This implies \(\widehat{CFDA}(\mathcal{H}^{a})\) is bounded, and hence the box-tensor product is expressed as a finite sum.
Figure 19. Two-cornered positive \(\alpha\)-bounded domains.
Implicit in the proof is that we will be using split almost complex structures for defining \(\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\) and \(\widehat{CFDA}(\mathcal{H}^{a})\). A split almost complex structure is sufficient for defining \(\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\), since all the domains involved will be bigons and rectangles. In this setting, up to a generic perturbation of the \(\alpha\) and \(\beta\) curves, moduli spaces defined using a split almost complex structure are transverse (cf. [15, Proposition 3.9]).
We will call the two punctures in \(\mathcal{H}^{a}\) the \(\sigma\)-puncture and the \(\rho\)-puncture, where the \(\rho\)-puncture is the one that gets capped off in the pairing diagram. For now, we assume the local systems on \(\alpha_{im}\) are trivial, and we will indicate the modifications needed for dealing with nontrivial local systems later on. First, the generators \(\mathcal{G}(\mathcal{H}^{a}(\alpha_{im}))\) and \(\mathcal{G}(\mathcal{H}^{a})\otimes_{\mathcal{I}_{R}}\mathcal{G}(\alpha_{im})\) are identified as pointed out in Remark 4.14 (3). Next, we prove that the differentials are in one-to-one correspondence.
We first show any differential incurred by the box-tensor product has a corresponding differential in \(\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\). A differential arising from the box-tensor product comes in two types, depending on whether it involves nontrivial differentials in \(\widehat{CFD}(\alpha_{im})\). If it does not involve a non-trivial differential in \(\widehat{CFD}(\alpha_{im})\), then the input from \(\widehat{CFDA}(\mathcal{H}^{a})\) counts curves with the domain being a provincial bigon, a provincial rectangle, or a bigon with a single Reeb chord on the \(\sigma\)-puncture; see [11, Proposition 8.4]. Such bigons or rectangles clearly have their counterparts in \(\mathcal{H}^{a}(\alpha_{im})\), giving the corresponding differentials in \(\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\). If the box-tensor differential involves differentials in \(\widehat{CFD}(\alpha_{im})\), then the corresponding input from \(\widehat{CFDA}(\mathcal{H}^{a})\) counts curves with the domain being a bigon with a single Reeb chord on the \(\rho\)-puncture [11, Proposition 8.4]. As it pairs with a differential in \(\widehat{CFD}(\alpha_{im})\), this bigon gives rise to a bigon in \(\mathcal{H}^{a}(\alpha_{im})\) (which is a pre-image under the collapsing map), giving the corresponding differential in \(\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\).
Next, we show that every differential in \(\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\) corresponds to a differential incurred by the box-tensor product. Suppose \(u\in\pi_{2}(\boldsymbol{x}\otimes a,\boldsymbol{y}\otimes b)\) admits a holomorphic representative contributing to a differential for \(\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\). Let \(B\) be the domain of \(u\), and let \(B^{\prime}\) denote the image of \(B\) under the collapsing operation. By Proposition 4.15, \(e(B)=e(B^{\prime})+\frac{|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|}{2}\). As \(B^{\prime}\) is a positive domain with \(n_{z}(B^{\prime})=0\) and \(\mathcal{H}^{a}\) is a nice Heegaard diagram, we have \(e(B)\geq e(B^{\prime})\geq 0\). By the index formula, denoting the source surface of \(u\) by \(S\), we have
\[\operatorname{Ind}(u)=g-\chi(S)+2e(B)+|\overrightarrow{\sigma}|.\]
As \(\operatorname{Ind}(u)=1\) and \(2e(B)+|\overrightarrow{\sigma}|\geq 0\), we have \(\chi(S)=g\) or \(g-1\).
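To spell out the elementary bookkeeping behind this step (our rephrasing of the constraint, not an extra hypothesis): rearranging the index formula with \(\operatorname{Ind}(u)=1\) gives
\[\chi(S)=g-1+2e(B)+|\overrightarrow{\sigma}|\geq g-1,\]
since \(e(B)\geq 0\) and \(|\overrightarrow{\sigma}|\geq 0\). On the other hand, \(S\) is a disjoint union of surfaces with boundary, each of Euler characteristic at most \(1\), and there are at most \(g\) components (each component of the source carries at least one \(+\) puncture, and there are \(g\) of these), so \(\chi(S)\leq g\). Hence \(\chi(S)\in\{g-1,g\}\), and correspondingly \(2e(B)+|\overrightarrow{\sigma}|\) equals \(1\) or \(0\).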
When \(\chi(S)=g\), \(S\) consists of \(g\) topological disks; each disk has a \(+\) and a \(-\) puncture, and there is at most one \(\sigma\)-puncture overall since \(2e(B)+|\overrightarrow{\sigma}|=1\). We separate the discussion according to the number of \(\sigma\)-punctures. First, if there is a \(\sigma\)-puncture, then the corresponding domain \(B\) in \(\mathcal{H}^{a}(\alpha_{im})\) is a bigon with a single Reeb chord on the \(\sigma\)-puncture and does not involve \(\alpha_{im}\). This domain clearly has its counterpart in \(\mathcal{H}^{a}\) under the collapsing map, giving rise to an operation in \(\widehat{CFDA}(\mathcal{H}^{a})\); the corresponding differential in the box-tensor product is obtained
by pairing this \(DA\)-operation with an element in \(\widehat{CFD}(\alpha_{im})\). Secondly, if there is no \(\sigma\)-puncture, then the domain \(B\) is a provincial bigon in \(\mathcal{H}^{a}(\alpha_{im})\). There are two sub-cases to consider depending on whether the \(\alpha\)-boundary of \(B\) overlaps with \(\alpha_{im}\). If the \(\alpha\)-boundary of \(B\) is not on \(\alpha_{im}\), then we argue as in the first case to see that \(B\) gives a corresponding differential in the box-tensor product. If, on the other hand, the boundary of \(B\) is on \(\alpha_{im}\), then since \(1/2=e(B)=e(B^{\prime})+|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|/2\), we see that \(|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|\) is either \(0\) or \(1\). If \(|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|=0\), then \(B^{\prime}\) is a provincial domain, giving the type-DA operation for the corresponding differential obtained by the box-tensor product. If \(|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|=1\), then \(B^{\prime}\) is obtained from \(B\) by subtracting a simple domain (as in the proof of Proposition 4.15) and then applying the collapsing map. We can see \(B^{\prime}\) is a bigon with a single Reeb chord corresponding to the Reeb chord specified by \(\partial_{\alpha_{im}}B\) on the \(\rho\)-puncture. The DA-operation given by \(B^{\prime}\) and the type D operation given by \(\partial_{\alpha_{im}}B\) pair up to give the corresponding differential in the box-tensor product.
When \(\chi(S)=g-1\), \(S\) consists of \(g-1\) topological disks; \(g-2\) of the disks are bigons, while the remaining one is a rectangle. In this case \(2e(B)+|\overrightarrow{\sigma}|=0\), so \(e(B)=0\) and \(|\overrightarrow{\sigma}|=0\). As \(e(B)=0\), the bigons are mapped trivially to \(\Sigma\). Therefore, the domain \(B\) is a rectangle. Again, since \(e(B)=e(B^{\prime})+|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|/2\) and \(e(B^{\prime})\geq 0\), we have \(|\overrightarrow{\rho}(\partial_{\alpha_{im}}B)|=0\). Then \(B^{\prime}\) is a provincial rectangular domain in \(\mathcal{H}^{a}\), giving rise to a DA-operation that pairs with a trivial type-D operation to give the corresponding differential in the box-tensor product. This finishes the proof in the case of trivial local systems.
Next, we consider the case where \(\alpha_{im}\) admits a non-trivial local system \((\mathcal{E},\Phi)\). The local system induces a local system on the \(\alpha\) curves in the pairing diagram \(\mathcal{H}^{a}(\alpha_{im})\). First, the discussion above identifies the generators at the vector space level: let \(\boldsymbol{x}\otimes y\) be an intersection point in \(\mathcal{G}(\mathcal{H}^{a}(\alpha_{im}))\), where \(\boldsymbol{x}\in\mathcal{G}(\mathcal{H}^{a})\) and \(y\in\mathcal{G}(\alpha_{im})\); then \(\boldsymbol{x}\otimes y\) corresponds to a direct summand \(\mathcal{E}|_{\boldsymbol{x}\otimes y}\) of \(\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\) as a vector space, and \(\mathcal{E}|_{\boldsymbol{x}\otimes y}\) can be naturally identified with \(\boldsymbol{x}\otimes\mathcal{E}|_{y}\), a summand of \(\widehat{CFDA}(\mathcal{H}^{a})\boxtimes\widehat{CFD}(\alpha_{im})\). Secondly, the discussion in the trivial-local-system case shows that \(\widehat{CFD}(\mathcal{H}^{a}(\alpha_{im}))\) has a differential map \(\mathcal{E}|_{\boldsymbol{x}\otimes y}\to\sigma_{I}\otimes\mathcal{E}|_{\boldsymbol{x}^{\prime}\otimes y^{\prime}}\) between the summands if and only if the box-tensor product has a differential map between the corresponding summands \(\boldsymbol{x}\otimes\mathcal{E}|_{y}\to\sigma_{I}\otimes(\boldsymbol{x}^{\prime}\otimes\mathcal{E}|_{y^{\prime}})\), and under the natural identification between these summands both differential maps are induced by the same parallel transport from \(\mathcal{E}|_{y}\) to \(\mathcal{E}|_{y^{\prime}}\).
### The second pairing theorem
We are interested in computing knot Floer chain complexes over \(\mathcal{R}=\mathbb{F}[U,V]/(UV)\) using bordered Floer homology. We have already defined an extended type-D structure, and we want to pair it with an extended type-A structure to get a bi-graded chain complex over \(\mathcal{R}\). Here we restrict attention to a specific extended type-A structure associated to the doubly-pointed bordered Heegaard diagram \(\mathcal{H}_{id}\) given in Figure 20. The diagram \(\mathcal{H}_{id}\) corresponds to the pattern knot given by the core of a solid torus, which is the _identity pattern_.
Recall \(\tilde{\mathcal{A}}\) denotes the weakly extended torus algebra, and \(\mathcal{I}\subset\tilde{\mathcal{A}}\) is the ring of idempotents (see Figure 8).
**Definition 4.17**.: The extended type-A structure \(\widehat{CFA}(\mathcal{H}_{id})\) is a free \(\mathcal{R}\)-module generated by the single intersection point \(x\) in \(\mathcal{H}_{id}\). It is equipped with an \(\mathcal{I}\)-action given by \(x\cdot\iota_{0}=x\) and \(x\cdot\iota_{1}=0\), together with a family of \(\mathcal{R}\)-linear maps
\(m_{i+1}:\widehat{CFA}(\mathcal{H}_{id})\otimes_{\mathcal{I}}\tilde{\mathcal{A}}^{\otimes i}\rightarrow\widehat{CFA}(\mathcal{H}_{id})\) \((i\in\mathbb{N})\), where up to \(\mathcal{R}\)-linearity the only non-zero relations are:
\[m_{2}(x,1)=x,\] \[m_{3+i}(x,\rho_{3},\overbrace{\rho_{23},\ldots,\rho_{23}}^{i}, \rho_{2})=U^{i}x,\quad i\in\mathbb{N},\] \[m_{3+i}(x,\rho_{1},\overbrace{\rho_{01},\ldots,\rho_{01}}^{i}, \rho_{0})=V^{i}x,\quad i\in\mathbb{N}.\]
_Remark 4.18_.: This extends the hat-version type A-structure \(\widehat{CFA}(\mathcal{H}_{id})\) by allowing Reeb chords crossing the base point \(z\).
The hat-version box-tensor product generalizes straightforwardly to an operation between the extended type A structure \(\widetilde{\mathcal{M}}:=\widehat{CFA}(\mathcal{H}_{id})\) and a weakly extended type D structure \((\widetilde{\mathcal{N}},\delta^{i})\): it is the \(\mathcal{R}\)-module \(\widetilde{\mathcal{M}}\otimes_{\mathcal{I}}\widetilde{\mathcal{N}}\) together with a differential \(\partial_{\boxtimes}:=\sum_{i\geq 0}(m_{i+1}\otimes\mathbb{I}_{\widetilde{\mathcal{N}}})\circ(\mathbb{I}_{\widetilde{\mathcal{M}}}\otimes\delta^{i})\); the finiteness of the sum is guaranteed for type D structures defined using bi-admissible diagrams (see the proof of Theorem 1.6 below). One may verify \(\partial_{\boxtimes}^{2}=0\) algebraically using the structure equations defining the (weakly) extended type D and type A structures. We omit this computation and instead content ourselves with Theorem 1.6 below, which implies that the \(\partial_{\boxtimes}\) induced by gluing bordered Heegaard diagrams is indeed a differential. We further remark that \(\widetilde{\mathcal{M}}\boxtimes_{\mathcal{I}}\widetilde{\mathcal{N}}_{1}\) is chain homotopy equivalent to \(\widetilde{\mathcal{M}}\boxtimes_{\mathcal{I}}\widetilde{\mathcal{N}}_{2}\) provided \(\widetilde{\mathcal{N}}_{1}\) is homotopy equivalent to \(\widetilde{\mathcal{N}}_{2}\). The proof of this is similar to that in the hat version and is omitted.
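As a purely schematic illustration of \(\partial_{\boxtimes}\) (the type D structure and its generators below are hypothetical, chosen only to exhibit a single term of the sum, and are not computed from any particular diagram): suppose \(\widetilde{\mathcal{N}}\) has generators \(y_{0},y_{1},y_{2},y_{3}\) with \(\delta^{1}(y_{0})=\rho_{3}\otimes y_{1}\), \(\delta^{1}(y_{1})=\rho_{23}\otimes y_{2}\), \(\delta^{1}(y_{2})=\rho_{2}\otimes y_{3}\), and no other terms, so that \(\delta^{3}(y_{0})=\rho_{3}\otimes\rho_{23}\otimes\rho_{2}\otimes y_{3}\). Then the contribution of this sequence to \(\partial_{\boxtimes}(x\otimes y_{0})\) is
\[(m_{4}\otimes\mathbb{I}_{\widetilde{\mathcal{N}}})(x\otimes\delta^{3}(y_{0}))=m_{4}(x,\rho_{3},\rho_{23},\rho_{2})\otimes y_{3}=Ux\otimes y_{3},\]
by the relation \(m_{4}(x,\rho_{3},\rho_{23},\rho_{2})=Ux\) (the \(i=1\) case of Definition 4.17).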
**Theorem 1.6**.: _Let \(\mathcal{H}_{im}\) be an unobstructed, bi-admissible immersed bordered Heegaard diagram, and let \(\mathcal{H}_{id}\) be the standard bordered Heegaard diagram for the identity pattern. Then_
\[CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{im})\cong\widehat{CFA}( \mathcal{H}_{id})\boxtimes\widehat{CFD}(\mathcal{H}_{im}).\]
(See Definitions 2.8 and 2.10 for the unobstructedness and bi-admissibility of \(\mathcal{H}_{im}\).)
Proof of Theorem 1.6.: Note periodic domains of \(\mathcal{H}_{id}\cup\mathcal{H}_{im}\) with \(n_{z}=0\) (respectively \(n_{w}=0\)) correspond to periodic domains of \(\mathcal{H}_{im}\) with \(n_{\rho_{0}}=n_{\rho_{1}}=0\) (respectively \(n_{\rho_{2}}=n_{\rho_{3}}=0\)). Therefore, since \(\mathcal{H}_{im}\) is bi-admissible, \(\mathcal{H}_{id}\cup\mathcal{H}_{im}\) is bi-admissible in the sense of Definition 3.3. Also, zero- or one-cornered \(\alpha\)-bounded domains in \(\mathcal{H}_{id}\cup\mathcal{H}_{im}\) with \(n_{z}=n_{w}=0\) must lie in \(\mathcal{H}_{im}\). So, unobstructedness of \(\mathcal{H}_{im}\) implies the unobstructedness of \(\mathcal{H}_{id}\cup\mathcal{H}_{im}\). In summary, \(\mathcal{H}_{id}\cup\mathcal{H}_{im}\) is bi-admissible and unobstructed, and hence \(CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{im})\) is defined. The bi-admissibility of \(\mathcal{H}_{im}\) also implies \(\partial_{\boxtimes}\) is expressed as a finite sum and hence is
well-defined. To see this, note that for any \(\boldsymbol{x},\boldsymbol{y}\in\mathcal{G}(\mathcal{H}_{im})\), bi-admissibility implies there are only finitely many positive domains connecting \(\boldsymbol{x}\) and \(\boldsymbol{y}\) with a prescribed Reeb-chord sequence of the form \(\rho_{1},\rho_{01},\ldots,\rho_{01},\rho_{0}\) or \(\rho_{3},\rho_{23},\ldots,\rho_{23},\rho_{2}\).
Figure 20. The bordered diagram \(\mathcal{H}_{id}\).
Recall that the differential in \(CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{im})\) counts holomorphic curves that cross at most one of the \(w\)- and \(z\)-base points. Note also that both of the base points in \(\mathcal{H}_{id}\) are adjacent to the east boundary. Therefore, by the symmetry of the base points \(w\) and \(z\), it suffices to prove the theorem for the hat-version knot Floer homology, i.e.,
\[\widehat{CFK}(\mathcal{H}_{id}\cup\mathcal{H}_{im})\cong\widehat{ CFA}(\mathcal{H}_{id})\boxtimes\widehat{CFD}(\mathcal{H}_{im}).\]
Though our Heegaard diagrams have immersed \(\alpha\)-multicurves, given that no boundary degeneration can occur, the proof for the embedded-\(\alpha\)-curves case, which uses neck stretching and time dilation, carries over without changes; see Chapter 9 of [1] for details or Section 3 of [1] for an exposition.
## 5. Knot Floer homology of satellite knots
We apply the machinery developed in the previous sections to study the knot Floer homology of satellite knots. First, we introduce a condition on the immersed curves that is weaker than the \(z\)-adjacency condition.
**Definition 5.1**.: An immersed multicurve \(\alpha_{im}\) in the marked torus \((T^{2},z)\) is _admissible_ if there are no nontrivial zero- or one-cornered \(\alpha\)-bounded positive domains \(B\) with \(n_{z}(B)=0\).
Note any \(z\)-adjacent immersed multicurve is admissible in view of Lemma 4.12.
Let \(\mathcal{H}_{w,z}\) be a doubly pointed bordered Heegaard diagram for a pattern knot \((S^{1}\times D^{2},P)\). Recall that we can construct a doubly pointed immersed diagram \(\mathcal{H}_{w,z}(\alpha_{im})\). The admissibility condition guarantees that \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))\) is defined in view of the following proposition.
**Proposition 5.2**.: _If \(\alpha_{im}\) is an admissible immersed multicurve, then \(\mathcal{H}_{w,z}(\alpha_{im})\) is bi-admissible and unobstructed._
Proof.: First, we show the diagram \(\mathcal{H}_{w,z}(\alpha_{im})\) is unobstructed (in the sense of Definition 3.2). Let \(B\) be a zero- or one-cornered \(\alpha\)-bounded domain for \(\mathcal{H}_{w,z}(\alpha_{im})\). Note \(n_{w}(B)=n_{z}(B)\) since the base points \(w\) and \(z\) are in the same region in the complement of the \(\alpha\)-curves. We restrict to the domains with \(n_{z}(B)=n_{w}(B)=0\). Recall we want to prove \(B\) must be somewhere negative. Splitting the Heegaard surface as in the proof of Proposition 4.16, we see \(B\) corresponds to a zero- or one-cornered \(\alpha\)-bounded domain \(B^{\prime}\) in the marked torus with \(n_{z}(B^{\prime})=0\). Since \(\alpha_{im}\) is admissible, \(B^{\prime}\) is somewhere negative. Therefore, \(B\) is somewhere negative.
Next we show the pairing diagram \(\mathcal{H}_{w,z}(\alpha_{im})\) is bi-admissible. Recall that bi-admissibility means any nontrivial periodic domain \(B\) with \(n_{w}(B)=0\) or \(n_{z}(B)=0\) must have both positive and negative coefficients. To see this, we first claim any given periodic domain \(B\) is bounded by some multiple of a homologically trivial component of \(\alpha_{im}\). (We warn the reader that the claim is no longer true if one further performs a handleslide of such a component over an embedded \(\alpha\) curve.) To see the claim, note that as homology classes, the curves \([\alpha_{i}]\) (\(i=1,\ldots,g-1\)), \([\alpha_{im}^{1}]\), and \([\beta_{i}]\) (\(i=1,\ldots,g\)) are linearly independent, just as the attaching curves in a Heegaard diagram for \(S^{3}\). Now, the claim implies \(B\) is
a zero-cornered \(\alpha\)-bounded domain. In view of the unobstructedness established above, \(B\) is somewhere negative.
**Definition 5.3**.: Let \(\alpha_{im}\) and \(\alpha^{\prime}_{im}\) be two admissible immersed multicurves. They are said to be admissibly equivalent if there exists a finite sequence of admissible immersed curves \(\alpha^{i}_{im}\), \(i=1,\ldots,n\) such that
1. \(\alpha^{1}_{im}=\alpha_{im}\) and \(\alpha^{n}_{im}=\alpha^{\prime}_{im}\),
2. For \(i=1,\ldots,n-1\), \(\alpha^{i}_{im}\) and \(\alpha^{i+1}_{im}\) are related by a finger move that creates/cancels a pair of self-intersection points of the immersed curves.
**Proposition 5.4**.: _Let \(\alpha_{im}\) and \(\alpha^{\prime}_{im}\) be two admissibly equivalent immersed multicurves, and let \(\mathcal{H}_{w,z}\) be a doubly pointed bordered Heegaard diagram. Then_
\[CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))\cong CFK_{\mathcal{R}}( \mathcal{H}_{w,z}(\alpha^{\prime}_{im})).\]
Proof.: The proof follows the same strategy as the usual proof of isotopy invariance. Let \(\boldsymbol{\alpha_{0}}\) and \(\boldsymbol{\alpha_{1}}\) be the two sets of \(\alpha\)-curves. For simplicity, assume they are related by a single finger move. We model the finger move using a locally supported exact Hamiltonian isotopy on \(\Sigma\). The isotopy induces a family of \(\alpha\)-curves, \(\boldsymbol{\alpha}_{t}\), on \(\Sigma\) (\(t\in\mathbb{R}\)); for \(t\ll 0\) (resp. \(t\gg 0\)), \(\boldsymbol{\alpha}_{t}\) is constant with respect to \(t\) and is identified with \(\boldsymbol{\alpha}_{0}\) (resp. \(\boldsymbol{\alpha}_{1}\)). \(\boldsymbol{\alpha}_{t}\) induces an immersed totally real submanifold \(C_{\alpha}=\boldsymbol{\alpha}_{t}\times\{1\}\times\{t\}\) in \(\Sigma\times[0,1]\times\mathbb{R}\). \(C_{\alpha}\) can be realized as an immersion \(\Psi_{t}:(\Pi_{i=1}^{g}S^{1})\times\mathbb{R}\to\Sigma\times\{1\}\times\mathbb{R}.\) Let \(C_{\beta}\) be the Lagrangian induced by the \(\beta\) curves. For \(\boldsymbol{x}\in\mathbb{T}_{\boldsymbol{\alpha}_{0}}\cap\mathbb{T}_{\boldsymbol{\beta}}\) and \(\boldsymbol{y}\in\mathbb{T}_{\boldsymbol{\alpha}_{1}}\cap\mathbb{T}_{\boldsymbol{\beta}}\), one then defines \(\mathcal{M}_{\Psi_{t}}(\boldsymbol{x},\boldsymbol{y})\) to be the moduli space of holomorphic curves in \(\Sigma\times[0,1]\times\mathbb{R}\) with boundary on \(C_{\alpha}\cup C_{\beta}\) such that the \(\alpha\)-boundary can be lifted through \(\Psi_{t}\). With this, one can define a map \(\Phi_{0}:CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))\to CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha^{\prime}_{im}))\) by
\[\boldsymbol{x}\mapsto\sum_{\boldsymbol{y}}\sum_{\phi\in\pi_{2}(\boldsymbol{x },\boldsymbol{y})}\#\mathcal{M}_{\Psi_{t}}(\boldsymbol{x},\boldsymbol{y})U^{ n_{w}(\phi)}V^{n_{z}(\phi)}\boldsymbol{y},\]
where \(\mathcal{M}_{\Psi_{t}}(\boldsymbol{x},\boldsymbol{y})\) has dimension zero. Define \(\Phi_{1}:CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha^{\prime}_{im}))\to CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))\) similarly. We remark that the compactness and gluing results still apply to this setup. The bi-admissibility of the diagrams obstructs the appearance of boundary degeneration in the compactification of one-dimensional moduli spaces, and hence we can still apply the usual argument to show (1) \(\Phi_{0}\) and \(\Phi_{1}\) are chain maps, and (2) \(\Phi_{0}\circ\Phi_{1}\) and \(\Phi_{1}\circ\Phi_{0}\) are chain homotopic to the identity map. Therefore, \(\Phi_{0}\) and \(\Phi_{1}\) are chain homotopy equivalences.
**Definition 5.5**.: An immersed multicurve is called \(z\)-passable if it is admissibly equivalent to a \(z\)-adjacent multicurve.
_Remark 5.6_.: We can easily arrange \(\alpha_{K}\) to be a \(z\)-passable multicurve; see Example 5.7 below. Moreover, when the pattern knot admits a genus-one doubly pointed Heegaard diagram, we can even drop the admissibility condition; see Section 6.2.
**Example 5.7**.: We give a simple way to arrange an immersed multicurve \(\alpha_{K}\) to be \(z\)-passable. Without loss of generality, we consider a single component \(\gamma\) of \(\alpha_{K}\) each time, and we orient \(\gamma\) arbitrarily. We view the torus \(T^{2}\) as a square as usual and position \(\gamma\) such that the elementary arcs hitting the top edge are separated into two groups of arcs where the arcs in a single group intersect the top edge in the same direction; see Figure 21 (1). Next, we perform a Reidemeister-II-like move to the two groups as in Figure 21 (2). Perform the above modification for every
component of \(\alpha_{K}\). We claim the resulting multicurve, which we denote \(\alpha^{\prime}_{K}\), is a \(z\)-passable multicurve.
We justify the claim when \(\gamma\) is homologically trivial; the case where \(\gamma\) is homologically essential is similar. We first check that \(\alpha^{\prime}_{K}\) is admissible by checking that there are no zero- or one-cornered \(\alpha\) bounded domains \(B\) with \(n_{z}(B)=0\). First note that for any zero- or one-cornered \(\alpha\) bounded domains \(B\), \(\partial B\) must include an elementary arc meeting the top edge of the square. To see this, note that \(\partial B\) is a nullhomologous curve in the torus and thus lifts to a closed path in the universal cover. Cutting along (lifts of) the meridian (i.e., \(\mathbb{Z}\times\mathbb{R}\)) breaks \(\partial B\) into pieces, with at least two of these (the leftmost and rightmost piece) forming bigons with the meridian. At least one of those two pieces has no corners (since \(B\) is zero- or one-cornered). The cornerless piece must intersect the longitude because \(\alpha^{\prime}_{K}\) is reduced, and the subarc of \(\partial B\) directly below this intersection with the longitude gives an elementary arc meeting the top edge of the square. Next we observe that the elementary arcs near the top edge of the square are arranged such that each arc has the base point \(z\) both on its left and on its right, in each case without oppositely oriented arcs in between the arc and \(z\), and this implies that no domain whose boundary includes one of these elementary arcs can have \(n_{z}(B)=0\). Having shown the immersed curve \(\alpha^{\prime}_{K}\) is admissible, it remains to check that it is \(z\)-passable. Recall from Proposition 4.11 that we can perform a sequence of finger moves to achieve a \(z\)-adjacent position. Note that all the intermediate diagrams are admissible by exactly the same argument above.
### Proof of the main theorem, ungraded version
This subsection is devoted to proving the ungraded version of Theorem 1.1.
A satellite knot is constructed via the so-called satellite operation that requires a pattern knot and a companion knot as input. A pattern knot is an oriented knot \(P\) in an oriented solid torus \(S^{1}\times D^{2}\), where an oriented meridian \(\mu\) and an oriented longitude \(\lambda\) are chosen for \(\partial(S^{1}\times D^{2})\) so that the orientation determined by \((\mu,\lambda)\) coincides with the induced boundary orientation. A companion knot is an oriented knot \(K\) in the 3-sphere. We orient any Seifert longitude of \(K\) using the parallel orientation, and orient any meridian \(m\) of \(K\) so that \(lk(m,K)=1\). The satellite knot \(P(K)\) is obtained by gluing \((S^{1}\times D^{2},P)\) to the companion knot complement \(S^{3}\backslash\nu(K)\) so that the chosen meridian \(\mu\) is identified with a meridian of \(K\) and that the chosen longitude \(\lambda\) is identified with the Seifert longitude of \(K\); \(P(K)\) is given by viewing \(P\) as a knot in the glued-up 3-sphere \((S^{1}\times D^{2})\cup(S^{3}\backslash\nu(K))\).
Figure 21. (2) is a z-passable immersed curve obtained from (1).
We state the main theorem again below for the readers' convenience. Recall that any pattern knot can be represented by a doubly-pointed bordered Heegaard diagram [11, Section 11.4].
**Theorem 1.1**.: _Let \(\mathcal{H}_{w,z}\) be a doubly-pointed bordered Heegaard diagram for a pattern knot \(P\), and let \(\alpha_{K}\) be the immersed multicurve associated to a companion knot \(K\). Let \(\mathcal{H}_{w,z}(\alpha_{K})\) be the immersed doubly-pointed Heegaard diagram obtained by pairing \(\mathcal{H}_{w,z}\) and \(\alpha_{K}\), in which \(\alpha_{K}\) is put in a z-passable position. Then the knot Floer chain complex \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}),\mathfrak{d})\) defined using \(\mathcal{H}_{w,z}(\alpha_{K})\) and a generic choice of auxiliary data \(\mathfrak{d}\) is bi-graded homotopy equivalent to the knot Floer chain complex of the satellite knot \(P(K)\) over \(\mathcal{R}\), where \(\mathcal{R}=\mathbb{F}[U,V]/UV\)._
Given a doubly pointed bordered Heegaard diagram \(\mathcal{H}_{w,z}\) for the pattern knot, we will construct an arced bordered Heegaard diagram \(\mathcal{H}_{X(P)}\); the Heegaard diagram \(\mathcal{H}_{X(P)}\) specifies a bordered 3-manifold \(X(P)\) with two boundary components8, where (1) the underlying 3-manifold is \(S^{1}\times D^{2}\backslash\nu(P)\), (2) the parametrization of \(\partial(S^{1}\times D^{2})\) is the standard meridian-longitude parametrization, and (3) the parametrization of interior boundary \(\partial(\nu(P))\) is given by a meridian of \(P\) and some longitude of \(P\). (The choice of the longitude of \(P\) does not matter).
Footnote 8: Strictly speaking, an arced bordered Heegaard diagram specifies a strongly bordered 3-manifold in the sense of Definition 5.1 in [11], where there is also a framed arc in addition to the underlying bordered 3-manifold. This extra structure will not be relevant to us, so we will not specify it.
We describe how to obtain \(\mathcal{H}_{X(P)}\) from the doubly pointed bordered Heegaard diagram \(\mathcal{H}_{w,z}\). This is a standard construction, similar to the one appearing in [11] Section 11.7; the reader familiar with it may skip this paragraph and consult Figure 22 for an overview. Assume \(\mathcal{H}_{w,z}\) is of genus \(g\). First, we stabilize \(\mathcal{H}_{w,z}=(\bar{\Sigma},\bar{\boldsymbol{\alpha}},\boldsymbol{\beta},w,z)\) to get a new doubly pointed bordered Heegaard diagram \(\mathcal{H}^{\prime}_{w,z}=(\bar{\Sigma}^{\prime},\bar{\boldsymbol{\alpha}} \cup\{\alpha_{g}^{c}\},\boldsymbol{\beta}\cup\{\beta_{g+1}\},w,z)\). More concretely, \(\bar{\Sigma}^{\prime}\) is obtained from \(\bar{\Sigma}\) by attaching a two-dimensional one-handle, with feet near the base points \(w\) and \(z\). Parametrize the new one-handle by \(S^{1}\times[0,1]\), where \(S^{1}\times\{0\}\) is the feet circle near \(z\), and \(S^{1}\times\{1\}\) is the feet circle near \(w\). We also parametrize \(S^{1}\) by \([0,2\pi]/(0\sim 2\pi)\). The new \(\alpha\)-circle \(\alpha_{g}^{c}\) is the belt circle \(S^{1}\times\{1/2\}\) of the new one-handle. Let \(p_{1}=(0,0)\) and \(p_{2}=(0,1)\) be two points on the two feet circles of the one-handle. The new \(\beta\) circle \(\beta_{g+1}\) is the union of two arcs \(l_{1}\) and \(l_{2}\) connecting \(p_{1}\) and \(p_{2}\), where \(l_{1}\) is an arc in \(\bar{\Sigma}\backslash\boldsymbol{\beta}\) and \(l_{2}\) is the arc \(\{(0,t)|t\in[0,1]\}\) in new one-handle. Next, introduce a new curve \(\bar{\alpha}_{1}^{a,L}\) as follows. Let \(l_{z}\) be an arc from \(z\) to the point \((-1,0)\in S^{1}\times\{0\}\) does not intersect any of the \(\alpha\)- and \(\beta\)-curves. Let \(l_{2}^{\prime}\) be the arc \(\{(1,t)|t\in[0,1]\}\) in the one-handle; denote the endpoints of \(l_{2}^{\prime}\) by \(p_{1}^{\prime}\) and \(p_{2}^{\prime}\). Let \(l_{1}^{\prime}\) be an arc connecting \(p_{1}^{\prime}\) and \(p_{2}^{\prime}\) in \(\bar{\Sigma}\backslash\{\bar{\boldsymbol{\alpha}}\cup l_{z}\}\). Let \(\bar{\alpha}_{1}^{a,L}=l_{1}^{\prime}\cup l_{2}^{\prime}\). Then \(\bar{\alpha}_{1}^{a,L}\) intersects \(\alpha_{g}^{c}\) geometrically once at a point \(p\). Note \(\alpha_{g}^{c}\) is the meridian of \(P\), and \(\bar{\alpha}_{1}^{a,L}\) is a longitude of \(P\). Let \(\bar{\Sigma}^{\prime\prime}\) be the circle compactification of \(\bar{\Sigma}^{\prime}\backslash\{p\}\). Denote the new boundary circle by \(\partial_{L}\bar{\Sigma}^{\prime\prime}\), and denote the boundary circle inherited from \(\partial\bar{\Sigma}\) by \(\partial_{R}\bar{\Sigma}^{\prime\prime}\). Let \(\alpha_{1}^{a,L}=\bar{\alpha}_{1}^{a,L}\backslash\{p\}\), and let \(\alpha_{2}^{a,L}=\alpha_{g}^{c}\backslash\{p\}\). Let \(\alpha_{1}^{a,R}=\alpha_{1}^{a}\), and let \(\alpha_{2}^{a,R}=\alpha_{2}^{a}\). Let \(\bar{\boldsymbol{\alpha}}^{\prime\prime}=\{\alpha_{1}^{a,L},\alpha_{2}^{a,L}, \alpha_{1}^{a,R},\alpha_{2}^{a,R},\alpha_{1}^{c},\ldots,\alpha_{g-1}^{c}\}\). Let \(\boldsymbol{\beta}^{\prime\prime}=\boldsymbol{\beta}\cup\{\beta_{g+1}\}\). Label the Reeb chords corresponding to the new boundary circle \(\partial_{L}\bar{\Sigma}^{\prime\prime}\) by \(\sigma_{i}\) (\(i=0,1,2,3\)) so that \(\sigma_{2}\) and \(\sigma_{3}\) lie on the side attached to the feet near \(w\), and \(\sigma_{0}\) and \(\sigma_{1}\) lie on the side attached to the feet near \(z\). Let \(z_{R}=z\), and let \(z_{L}\) be a point on \(\sigma_{0}\). 
Let \(\boldsymbol{z}\) be an arc connecting \(z_{R}\) and \(z_{L}\) in the complement of \(\bar{\boldsymbol{\alpha}}^{\prime\prime}\cup\boldsymbol{\beta}^{\prime\prime}\); \(\boldsymbol{z}\) exists since we can
obtain such an arc by extending \(l_{z}\). Finally, we let \(\mathcal{H}_{X(P)}=(\bar{\Sigma}^{\prime\prime},\bar{\boldsymbol{\alpha}}^{ \prime\prime},\boldsymbol{\beta}^{\prime\prime},\boldsymbol{z})\). See Figure 22.
**Lemma 5.8**.: _Let \(\mathcal{H}_{X(P)}\) be the arced bordered Heegaard diagram obtained from \(\mathcal{H}_{w,z}\) via the above procedure. Let \(\alpha_{im}\) be a \(z\)-adjacent multicurve. Then \(\mathcal{H}_{X(P)}(\alpha_{im})\) is unobstructed and bi-admissible._
Proof.: The unobstructedness follows from Proposition 4.16. We now verify bi-admissibility in the sense of Definition 2.10. Note that periodic domains \(B\) for \(\mathcal{H}_{X(P)}(\alpha_{im})\) with \(n_{\sigma_{0}}(B)=n_{\sigma_{1}}(B)=0\) (respectively \(n_{\sigma_{2}}(B)=n_{\sigma_{3}}(B)=0\)) correspond to periodic domains \(B^{\prime}\) for \(\mathcal{H}_{w,z}(\alpha_{im})\) with \(n_{z}(B^{\prime})=0\) (respectively \(n_{w}(B^{\prime})=0\)). Therefore, the bi-admissibility of \(\mathcal{H}_{w,z}(\alpha_{im})\), which was shown in Proposition 5.2, implies the bi-admissibility of \(\mathcal{H}_{X(P)}(\alpha_{im})\).
Recall \(\mathcal{H}_{id}\) is the standard doubly pointed bordered Heegaard diagram for the identity pattern knot.
**Lemma 5.9**.: \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))\) _is chain homotopy equivalent to \(CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im}))\)._
Proof.: Note the doubly pointed Heegaard diagram \(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im})\) is obtained from \(\mathcal{H}_{w,z}(\alpha_{im})\) by two stabilizations; see Figure 23. In particular, it is also bi
admissible, and hence one can define \(CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im}))\). We claim there is a sequence of Heegaard moves relating \(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im})\) and \(\mathcal{H}_{w,z}(\alpha_{im})\) which do not involve sliding \(\alpha\) curves over \(\alpha_{im}\). To see this, note that on \(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im})\) there is a \(\beta\)-circle between the \(w\) and \(z\) base points that intersects an \(\alpha\)-circle geometrically once; denote these curves by \(\beta_{g+2}\) and \(\alpha_{g+2}\) respectively. After sliding other \(\beta\)-curves over \(\beta_{g+2}\) if necessary, we may assume \(\alpha_{g+2}\) does not intersect other \(\beta\)-curves, and hence we can destabilize \(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im})\) along \(\alpha_{g+2}\) and \(\beta_{g+2}\). Now we arrive at an intermediate Heegaard diagram; see Figure 23 (upper right). It is a stabilization of \(\mathcal{H}_{w,z}(\alpha_{im})\). On this intermediate Heegaard diagram, there is an \(\alpha\)-circle \(\alpha_{g+1}\) that intersects only one \(\beta\)-circle \(\beta_{g+1}\), and the geometric intersection number is one. So, we may slide other \(\alpha\)-curves over \(\alpha_{g+1}\) if necessary so that \(\beta_{g+1}\) does not intersect other \(\alpha\)-curves. After this, we destabilize the Heegaard diagram along \(\alpha_{g+1}\) and \(\beta_{g+1}\), and the resulting Heegaard diagram is \(\mathcal{H}_{w,z}(\alpha_{im})\). The homotopy equivalence between \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{im}))\) and \(CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im}))\) follows from the homotopy invariance of knot Floer chain complexes established in Proposition 3.11.
Figure 22. An example of obtaining \(\mathcal{H}_{X(P)}\) from \(\mathcal{H}_{w,z}\). Here, \(\mathcal{H}_{w,z}\) is shown on the top row; it is a genus-one Heegaard diagram for the \((3,1)\)-cable pattern. \(\mathcal{H}_{X(P)}\) is the rightmost diagram on the second row.
With these lemmas at hand, we now prove the ungraded version of Theorem 1.1.
Figure 23. An example of \(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{im})\) (left) and \(\mathcal{H}_{w,z}(\alpha_{im})\) (lower right). Here, \(P\) is the \((3,1)\)-cable. These two diagrams are related via handleslides and destabilizations, where the handleslides do not involve sliding over the immersed \(\alpha\)-curve.
Proof of Theorem 1.1, ungraded version.: In view of Proposition 5.4, we may assume the immersed multicurve \(\alpha_{K}\) for the knot complement of \(K\) is z-adjacent. Let \(\mathcal{H}_{X(P)}\) be the arced bordered Heegaard diagram obtained from \(\mathcal{H}_{w,z}\) via the "punctured-stabilization procedure". Throughout, when referring to the type D structure of a knot complement, we use the meridian and Seifert longitude to parametrize the boundary. By standard arguments, we can arrange that \(\mathcal{H}_{X(P)}\) is left provincially admissible at the cost of isotopy of the \(\beta\) curves. By Theorem 1.4, we have
\[\widehat{CFD}(\mathcal{H}_{X(P)}(\alpha_{K})) \cong\widehat{CFDA}(\mathcal{H}_{X(P)})\boxtimes\widehat{CFD}( \alpha_{K})\] \[\cong\widehat{CFDA}(\mathcal{H}_{X(P)})\boxtimes\widehat{CFD}(S^{ 3}\backslash\nu(K))\] \[\cong\widehat{CFD}(S^{3}\backslash\nu(P(K)))\]
Therefore, up to homotopy equivalence, the extended type D structure \(\widehat{CFD}(\mathcal{H}_{X(P)}(\alpha_{K}))\) extends \(\widehat{CFD}(S^{3}\backslash\nu(P(K)))\). Consequently, we have the following:
\[CFK_{\mathcal{R}}(P(K)) \cong\widehat{CFA}(\mathcal{H}_{id})\boxtimes\widehat{CFD}(S^{3}\backslash\nu(P(K)))\] \[\cong\widehat{CFA}(\mathcal{H}_{id})\boxtimes\widehat{CFD}(\mathcal{H}_{X(P)}(\alpha_{K}))\] \[\cong CFK_{\mathcal{R}}(\mathcal{H}_{id}\cup\mathcal{H}_{X(P)}(\alpha_{K}))\]
Here, the last equality follows from applying Theorem 1.6. Note \(\boxtimes\) in the above equation is well-defined since \(\mathcal{H}_{X(P)}(\alpha_{K})\) is bi-admissible by Lemma 5.8. Now, by Lemma 5.9, \(CFK_{\mathcal{R}}(P(K))\) is chain homotopy equivalent to \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}))\).
### \(\mathcal{H}_{w,z}(\alpha_{K})\) is gradable
We want to show that the chain homotopy equivalence established in the previous subsection preserves the \(w\)-grading and \(z\)-grading of knot Floer chain complexes. As the first step, we need to show that \(\mathcal{H}_{w,z}(\alpha_{K})\) is gradable (in the sense of Definition 3.7).
**Proposition 5.10**.: _The diagram \(\mathcal{H}_{w,z}(\alpha_{K})\) is gradable._
In addition to being gradable, note that the results in the previous subsection also imply that \(\widehat{HF}(\mathcal{H}_{w}(\alpha_{K}))\cong\widehat{HF}(\mathcal{H}_{z}( \alpha_{K}))\cong\mathbb{F}\). Therefore we can define an absolute bigrading on \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}))\).
We will reduce the proof of Proposition 5.10 to the case where \(\mathcal{H}_{w,z}\) is of genus one. If \(\mathcal{H}_{w,z}\) is a genus-one bordered Heegaard diagram, then one can define a Maslov grading \(m(-)\) on \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}))\) as follows. Given any two generators \(x\) and \(y\), let \(p_{0}\) and \(p_{1}\) be two paths from \(x\) to \(y\) in \(\alpha_{K}\) and \(\beta\) respectively such that \(p_{0}-p_{1}\) lifts to a closed path \(\gamma\) in the universal cover \(\mathbb{R}^{2}\) of the genus-one Heegaard surface. Up to perturbing the curves, we may assume that \(p_{0}\) and \(p_{1}\) intersect in right angles at \(x\) and \(y\). Then \(m(x)-m(y)\) is equal to \(\frac{1}{\pi}\) times the total counterclockwise rotation along the smooth segments of \(\gamma\) minus twice the number of the (lifts of) base point \(z\) enclosed by \(\gamma\); see [10, Definition 35]. This Maslov grading is also defined (by the same definition) when the \(\beta\) curve is only immersed. In [10], it is shown that the Maslov grading thus defined on a pairing diagram of two immersed curves agrees with the Maslov grading computed using the grading package of bordered Heegaard Floer homology. Next, we show this Maslov grading can be equivalently defined in terms of the index of domains.
**Proposition 5.11**.: _Let \(\mathcal{H}_{w,z}\) be a genus-one bordered Heegaard diagram and let \(m(-)\) be the Maslov grading on \(\mathcal{G}(\mathcal{H}(\alpha_{K}))\) mentioned above. Let \(B\in\pi_{2}(x,y)\) be
a domain connecting \(x\) and \(y\) with \(\partial B=p_{0}-p_{1}\). Then \(m(x)-m(y)=\text{ind}(B)-2n_{z}(B)\). Moreover, this result extends to the case where the \(\beta\) curve is immersed, in which case we define the index of \(B\) by_
\[\text{ind}(B)=e(B)+n_{x}(B)+n_{y}(B)-s(\partial_{\alpha_{K}}B)-s(\partial_{ \beta}B).\]
_(Here \(s(-)\) denotes the self-intersection number of an oriented immersed arc as defined in Section 2.6.)_
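As a quick sanity check of the formula (ours, for orientation only and not needed in the proof below): if \(B\) is an embedded bigon from \(x\) to \(y\) with two acute corners, with embedded boundary arcs, and with \(n_{z}(B)=0\), then the self-intersection terms vanish and
\[\text{ind}(B)=e(B)+n_{x}(B)+n_{y}(B)=\left(1-\tfrac{1}{4}-\tfrac{1}{4}\right)+\tfrac{1}{4}+\tfrac{1}{4}=1,\]
so the formula gives \(m(x)-m(y)=1\), the expected Maslov grading drop across a bigon that misses the base point.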
Before proving Proposition 5.11 we introduce some terminology. It will be clear later that we can assume \(p_{0}-p_{1}\) is immersed and only has discrete double points.
**Definition 5.12**.: A _cornered immersed loop_ in \(T^{2}\) is the union of two oriented immersed arcs \(p_{0}\) and \(p_{1}\) with at most discrete double points such that
1. \(p_{0}\) and \(p_{1}\) share common endpoints,
2. the interior of \(p_{0}\) and \(p_{1}\) intersect transversally,
3. \(p_{0}-p_{1}\) is an oriented loop which is null-homologous,
4. \(p_{0}\) and \(p_{1}\) intersect transversally at the endpoints if \(p_{0}\) and \(p_{1}\) are non-degenerate (i.e., not a point), and
5. if one of \(p_{0}\) and \(p_{1}\) is degenerate, the remaining arc forms a smooth loop after identifying the endpoints.
The endpoints of \(p_{0}\) (or equivalently, \(p_{1}\)) are called _corners_ of the cornered immersed loop.
**Definition 5.13**.: Two cornered immersed loops \(p_{0}-p_{1}\) and \(p_{0}^{\prime}-p_{1}^{\prime}\) in \(T^{2}\) are called _cornered identical_ if they share the same set of corners \(\{x,y\}\) (or \(\{x\}\) if the loops have degenerate arcs) and there are arbitrarily small neighborhoods \(N_{x}\) and \(N_{y}\) of \(x\) and \(y\) respectively such that \((p_{0}-p_{1})|_{N_{x}}=(p_{0}^{\prime}-p_{1}^{\prime})|_{N_{x}}\) and \((p_{0}-p_{1})|_{N_{y}}=(p_{0}^{\prime}-p_{1}^{\prime})|_{N_{y}}\).
**Lemma 5.14**.: _If two cornered immersed loops \(p_{0}-p_{1}\) and \(p_{0}^{\prime}-p_{1}^{\prime}\) are cornered identical, then they are related by a finite sequence of moves of the following types:_
1. _Reidemeister moves that do not involve the corners and_
2. _isotopy that possibly cross the corners._
_(See Figure 24.) Here, we require that \((p_{0}-p_{1})|_{N_{x}}\) and \((p_{0}-p_{1})|_{N_{y}}\) are fixed throughout the modification for some sufficiently small neighborhoods \(N_{x}\) and \(N_{y}\) of the corners._
Figure 24. Upper row from left to right: Reidemeister I, II, and III move. Lower row from left to right: an isotopy that crosses a non-degenerate corner and a degenerate corner.
Proof.: One can prove this by applying the usual Reidemeister-move equivalence of knot diagrams (treating both immersed loops as diagrams for the unknot by imposing appropriate crossing information); note that any Reidemeister move involving a corner can be traded for an isotopy crossing the corner together with a Reidemeister move that does not involve the corner.
**Definition 5.15**.: Given a cornered immersed loop \(p_{0}-p_{1}\) in \(T^{2}\), let \(\tilde{p}_{0}-\tilde{p}_{1}\) be a lift of \(p_{0}-p_{1}\) in \(\mathbb{R}^{2}\) and let \(\tilde{B}\) be the bounded domain in \(\mathbb{R}^{2}\) such that \(\partial\tilde{B}=\tilde{p}_{0}-\tilde{p}_{1}\). Let \(B\) be the domain in \(T^{2}\) obtained from \(\tilde{B}\) by applying the covering projection. Define the _index of the cornered immersed loop_ as
\[\operatorname{ind}(p_{0}-p_{1})=e(B)+n_{x}(B)+n_{y}(B)-s(\partial_{p_{0}}B)-s( \partial_{p_{1}}B),\]
where \(x\) and \(y\) are the corners.
Define the _net rotation number_\(nr(p_{0}-p_{1})\) to be \(\frac{1}{\pi}\) times the counterclockwise net rotation along the smooth segments \(p_{0}\) and \(p_{1}\).
**Lemma 5.16**.: _Suppose \(p_{0}-p_{1}\) and \(p_{0}^{\prime}-p_{1}^{\prime}\) are cornered immersed loops that differ by an isotopy or a Reidemeister move. Then_
\[\text{ind}(p_{0}-p_{1})-\text{ind}(p_{0}^{\prime}-p_{1}^{\prime})=nr(p_{0}-p_ {1})-nr(p_{0}^{\prime}-p_{1}^{\prime}).\]
Proof.: First, we examine the effect of an isotopy on both quantities. Clearly, the net rotation number is unchanged. We separate the discussion of the index into two cases according to whether the isotopy crosses corners or not. If the isotopy does not cross the corners, it clearly does not change the index either, and we are done. If the isotopy crosses a corner, then we claim the local multiplicity and the self-intersection numbers change in a way that cancels, leaving the index unchanged. This claim can be seen by examining local diagrams, which are further divided into two cases according to whether the corner is degenerate or not. When the corner is non-degenerate, the local diagram of one case is shown in Figure 25 (i); all the other cases can be obtained from this case by swapping the labels and orientations of the arcs, and the analysis of all cases is similar. In the case shown in Figure 25 (i), only \(n_{x}(B)\) and \(s(\partial_{p_{0}}B)\) change: the diagram on the left has \(n_{x}(B)=\frac{a+(a-1)+(a-1)+(a-1)}{4}\) and the local self-intersection of \(p_{0}\) contributes \(s_{p_{0}}=-1\); the diagram on the right has \(n_{x}(B)=\frac{(a+1)+a+a+a}{4}\) and there are no self-intersections of the arcs in the local diagram, so the local contribution is \(s_{p_{0}}=0\). In both diagrams we have \(n_{x}(B)-s_{p_{0}}=\frac{4a+1}{4}\), and hence the index is unchanged. When the corner is degenerate, one of the cases is shown in Figure 25 (ii). In this case, only \(n_{x}\) and the self-intersection of \(p_{0}\) change: the diagram on the
left has \(n_{x}=\frac{a+a+(a-1)+(a-1)}{4}\) and a local contribution of the self-intersection of \(p_{0}\) given by \(s_{p_{0}}=-1\); the diagram on the right has \(n_{x}=\frac{(a+1)+(a+1)+a+a}{4}\) and a local contribution of the self-intersection of \(p_{0}\) given by \(s_{p_{0}}=1\). In both local diagrams we have \(n_{x}(B)+n_{y}(B)-s_{p_{0}}=2a\) (recall that \(x=y\) at a degenerate corner), and hence the index is unchanged. All other cases can be obtained from this case by swapping the labels and orientations of the arcs, and the analysis of all cases is similar.
Figure 25. Local diagrams for isotopies that cross a corner. The numbers \(a\), \(a-1\), and \(a+1\) indicate the multiplicities of the regions.
Next, we examine the effect of a Reidemeister I move. Up to swapping orientations and the labels, we may assume the local diagram is as shown in Figure 26. The net rotation number of the diagram on the right is \(2\) less than that of the diagram on the left. For the index comparison, the Euler measure of the local domain on the right is \(1\) less than that of the left diagram and the self-intersection number \(s(\partial_{p_{0}}B)\) of the right diagram is \(1\) more than that of the left diagram; in total, the index of the diagram on the right is \(2\) less than that of the diagram on the left. Therefore, the changes in the net rotation and in the index are the same after a Reidemeister I move.
Next, we examine the effect of Reidemeister II moves. Such a move does not change the net rotation number. Also, it does not affect the Euler measure or the local multiplicities at the corners. A Reidemeister II move creates/annihilates a pair of self-intersection points whose signs cancel each other if both arcs involved are on \(p_{0}\) or on \(p_{1}\), and otherwise does not involve self-intersections; in both cases the self-intersection numbers are unchanged. So, the index does not change either.
Finally, it is easy to see that a Reidemeister III move does not change the net rotation number. It is also easy to see a Reidemeister III move does not change the Euler measure, local multiplicities at the corners, or self-intersections, and hence it does not change the index either.
**Proposition 5.17**.: _Let \(p_{0}-p_{1}\) be a cornered immersed loop. Then \(\text{ind}(p_{0}-p_{1})=nr(p_{0}-p_{1})\)._
Proof.: By Lemma 5.16 and Lemma 5.14, it suffices to show that \(p_{0}-p_{1}\) is cornered identical with some cornered immersed loop whose index coincides with the net rotation number.
If at least one of \(p_{0}\) and \(p_{1}\) is degenerate, \(p_{0}-p_{1}\) is cornered identical with an embedded circle that passes the corner, and it is easy to see the index and the net rotation number coincide on an embedded circle with a degenerate corner.
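To make this case concrete (a small verification we include here, taking the counterclockwise orientation for definiteness): if the embedded circle is traversed counterclockwise and bounds a disk \(B\) of multiplicity one whose boundary carries the single degenerate corner \(x=y\), then the self-intersection terms vanish and
\[\operatorname{ind}=e(B)+n_{x}(B)+n_{y}(B)=1+\tfrac{1}{2}+\tfrac{1}{2}=2,\]
while the net rotation along the smooth loop is \(2\pi\), so \(nr=2\) as well.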
Figure 26. The local diagram for the Reidemeister I move. The numbers \(a\), \(a-1\), and \(a-2\) indicate the multiplicities of the regions.
Next, we discuss the case where \(p_{0}-p_{1}\) is non-degenerate. We first construct a cornered immersed loop \(p^{\prime}_{0}-p^{\prime}_{1}\) that is cornered identical to \(p_{0}-p_{1}\) as follows. Let \(p^{\prime}_{0}=p_{0}\). We shall construct \(p^{\prime}_{1}\) to be a path which is almost a parallel push-off of \(p_{0}\). (See Figure 27 for examples.) To spell out the construction, let \(f_{0}:[0,1]\to T^{2}\)
be an immersion such that \(f_{0}([0,1])=p_{0}\). Let \(\hat{N}\) be a sufficiently small tubular neighborhood of \(p_{0}\) such that it can be realized as the image of an extension of \(f_{0}\), i.e., there exists an immersion \(\tilde{f}_{0}:[0,1]\times[-\epsilon,\epsilon]\to T^{2}\) such that \(\tilde{f}_{0}|_{[0,1]\times\{0\}}=f_{0}\) and \(\tilde{f}_{0}([0,1]\times\{pt\})\) is a parallel push-off of \(p_{0}\) for any \(pt\in[-\epsilon,0)\cup(0,\epsilon]\). We can further assume that near the two corners \(x=f_{0}(0)\) and \(y=f_{0}(1)\), the other arc \(p_{1}\) is contained in \(\tilde{f}_{0}(\{0,1\}\times[-\epsilon,\epsilon])\); denote these two arcs on \(p_{1}\) near \(x\) and \(y\) by \(p_{x}\) and \(p_{y}\) respectively. We construct \(p_{1}^{\prime}\) in two different cases. In the first case, both \(p_{x}\) and \(p_{y}\) are on the same side of \(p_{0}\), say, \(p_{x}=\tilde{f}_{0}(\{0\}\times[0,\epsilon])\) and \(p_{y}=\tilde{f}_{0}(\{1\}\times[0,\epsilon])\). Then we let \(p_{1}^{\prime}\) be the path obtained from \(p_{x}\cup p_{y}\cup\tilde{f}_{0}([0,1]\times\{\epsilon\})\) by smoothing the corners; see Figure 27 (left) for an example. In the second case, \(p_{x}\) and \(p_{y}\) are on different sides of \(p_{0}\), say, \(p_{x}=\tilde{f}_{0}(\{0\}\times[0,\epsilon])\) and \(p_{y}=\tilde{f}_{0}(\{1\}\times[-\epsilon,0])\). In this case, we extend \(\tilde{f}_{0}\) near \(x\) slightly to an immersion \(F_{0}:[-\delta,1]\times[-\epsilon,\epsilon]\to T^{2}\) for some \(\delta>0\) such that \(F_{0}|_{([-\delta,0]\times[-\epsilon,\epsilon])}\) is an embedding and its image intersects \(\hat{N}\) at \(\tilde{f}_{0}(\{0\}\times[-\epsilon,\epsilon])\). We will let \(p_{1}^{\prime}\) be the path obtained from \(p_{x}\cup F_{0}([-\delta,0]\times\{\epsilon\})\cup F_{0}(\{-\delta\}\times[-\epsilon,\epsilon])\cup F_{0}([-\delta,1]\times\{-\epsilon\})\cup p_{y}\) by smoothing the corners; see Figure 27 (right) for an example. Note that in both cases, \(p_{0}^{\prime}-p_{1}^{\prime}\) bounds an immersed disk \(B\) in \(T^{2}\), and \(s(\partial_{p_{0}^{\prime}}B)+s(\partial_{p_{1}^{\prime}}B)=0\) since self-intersections of \(p_{0}^{\prime}\) and \(p_{1}^{\prime}\) are in one-to-one correspondence and have opposite signs. In the case where \(p_{x}\) and \(p_{y}\) are on the same side of \(p_{0}\), both corners of \(B\) are acute, and hence we have \(e(B)=1-\frac{1}{4}-\frac{1}{4}=\frac{1}{2}\) and \(n_{x}=n_{y}=\frac{1}{4}\). Therefore, \(\operatorname{ind}(p_{0}^{\prime}-p_{1}^{\prime})=\frac{1}{2}+\frac{1}{4}+\frac{1}{4}=1\), which is equal to the net rotation number that can be easily computed. In the case where \(p_{x}\) and \(p_{y}\) are on different sides of \(p_{0}\), one of the corners of \(B\) is obtuse and the other one is acute, and we have \(e(B)=1-\frac{1}{4}+\frac{1}{4}=1\), \(n_{x}=\frac{3}{4}\), and \(n_{y}=\frac{1}{4}\). Therefore, \(\operatorname{ind}(p_{0}^{\prime}-p_{1}^{\prime})=1+\frac{3}{4}+\frac{1}{4}=2\), which is again equal to the net rotation number that can be computed easily. So, \(\operatorname{ind}(p_{0}^{\prime}-p_{1}^{\prime})=nr(p_{0}^{\prime}-p_{1}^{\prime})\).
We are ready to prove Proposition 5.11.
Proof of Proposition 5.11.: Throughout the discussion, we may assume \(p_{0}-p_{1}\) is an immersed loop with only discrete double points; if not, we can perturb \(p_{0}-p_{1}\) slightly to achieve such a loop and keep both \(m(x)-m(y)\) and \(\operatorname{ind}(B)-2n_{z}(B)\) unchanged.
Note that it does not matter which domain \(B\) we use to compute \(\operatorname{ind}(B)-2n_{z}(B)\) since any two such domains differ by multiples of \([T^{2}]\), which does not change the quantity. For convenience, we take \(B\) to be the domain as specified in Definition 5.15. It is clear that \(n_{z}(B)\) is equal to the number of lifts of the base point \(z\) enclosed by the lift of \(p_{0}-p_{1}\) in \(\mathbb{R}^{2}\). By Proposition 5.17, we also have \(\operatorname{ind}(B)=nr(p_{0}-p_{1})\). So, \(m(x)-m(y)=\operatorname{ind}(B)-2n_{z}(B)\).
Figure 27. Deforming \(p_{0}-p_{1}\).
Next, we prove Proposition 5.10.
Proof of Proposition 5.10.: Let \(\mathbf{x}\) be a generator in \(\mathcal{H}_{w,z}(\alpha_{K})\) and let \(P\in\tilde{\pi}_{2}(\mathbf{x},\mathbf{x})\) be a non-trivial periodic domain; we need to show that \(\operatorname{ind}(P)-2n_{z}(P)=0\) and \(\operatorname{ind}(P)-2n_{w}(P)=0\), where \(\operatorname{ind}(-)\) is defined in Definition 2.43. Note that \(\partial_{\alpha_{K}}P\) sits on a single connected component of \(\alpha_{K}\). If it sits on the distinguished component, then since the \(\alpha\)-curves and \(\beta\)-curves are homologically independent in \(H_{1}(\Sigma,\mathbb{Q})\) we know \(\partial P=\emptyset\), which means \(P\) is a multiple of \(\Sigma\). In this case it is clear that \(\operatorname{ind}(P)-2n_{z}(P)=0\). Otherwise, \(\partial_{\alpha_{K}}P\) must sit in some null-homologous component of \(\alpha_{K}\), and by homological considerations \(\partial P\) must be some non-zero multiple of this component.
Note that when \(\Sigma\) has genus greater than or equal to \(2\), the domain \(P\) can be viewed as a stabilization of a domain \(P^{\prime}\) in the marked torus \((T^{2},z)\) bounded by the null-homologous component. In particular, \(n_{z}(P)=n_{w}(P)\), and hence \(\operatorname{ind}(P)-2n_{z}(P)=\operatorname{ind}(P)-2n_{w}(P)\). Moreover, a straightforward computation using the definition of the index shows \(\operatorname{ind}(P)-2n_{z}(P)=0\) if and only if \(\operatorname{ind}(P^{\prime})-2n_{z}(P^{\prime})=0\). The latter follows from Proposition 5.11 since \(\operatorname{ind}(P^{\prime})-2n_{z}(P^{\prime})=m(x)-m(x)=0\), where \(x\) is the component of \(\mathbf{x}\) in the genus-one diagram.
### Gradings in the main theorem
We now show the chain homotopy equivalence established in the main theorem preserves the \(w\)-grading and \(z\)-grading of knot Floer chain complexes. To do so, it suffices to consider a simpler version of knot Floer chain complexes.
According to [1, Theorem 11.19], \(CFA^{-}(\mathcal{H}_{w,z})\boxtimes\widehat{CFD}(\alpha_{K})\) is a bi-graded chain complex over \(\mathbb{F}[U]\) representing \(gCFK^{-}(P(K))\); here \(gCFK^{-}(-)\) refers to the version of the knot Floer chain complex whose differential only counts holomorphic disks that do not cross the \(z\) base point. We shall prove the following theorem.
**Theorem 5.18**.: \(gCFK^{-}(\mathcal{H}_{w,z}(\alpha_{K}))\) _is isomorphic to \(CFA^{-}(\mathcal{H}_{w,z})\boxtimes\widehat{CFD}(\alpha_{K})\) as a bi-graded chain complex over \(\mathbb{F}[U]\)._
When \(\mathcal{H}_{w,z}\) is a genus-one Heegaard diagram, Theorem 5.18 is true by Proposition 5.11 and [1, Theorem 1.2]: Generalizing the corresponding argument in [11], [1, Theorem 1.2] shows the Maslov grading on \(CFA^{-}(\mathcal{H}_{w,z})\boxtimes\widehat{CFD}(\alpha_{K})\) identifies with the Maslov grading on \(gCFK^{-}(\mathcal{H}_{w,z}(\alpha_{K}))\) defined via the net rotation numbers. (Strictly speaking, [1] works with \(\widehat{CFK}(-)\) instead of \(gCFK^{-}\), but these two versions of knot Floer complexes are equivalent to each other.) Our strategy for proving Theorem 5.18 is to reduce the higher-genus case to the genus-one case.
Proof of Theorem 5.18.: First, we claim that \(gCFK^{-}(\mathcal{H}_{w,z}(\alpha_{K}))\) is isomorphic to \(CFA^{-}(\mathcal{H}_{w,z})\boxtimes\widehat{CFD}(\alpha_{K})\) as ungraded chain complexes; this follows from the ungraded version of Theorem 1.1. Next, we will show that the \(z\)-gradings are identified. Since the \(z\)-gradings on both chain complexes are normalized by the same rule, it suffices to prove the relative \(z\)-gradings are the same.
We set up some notation. Let \(\Sigma\) and \(\Sigma^{\prime}\) denote the Heegaard surfaces for \(\mathcal{H}_{w,z}(\alpha_{K})\) and \(\mathcal{H}_{w,z}\) respectively. Let \(\mathbf{x}_{i}\) (for \(i\in\{1,2\}\)) denote a generator of \(CFA^{-}(\mathcal{H}_{w,z})\) and let \(y_{i}\) (for \(i\in\{1,2\}\)) denote a generator of \(\widehat{CFD}(\alpha_{K})\); correspondingly, we use \(\mathbf{x}_{i}\otimes y_{i}\) to denote a generator of \(gCFK^{-}(\mathcal{H}_{w,z}(\alpha_{K}))\) (where we assume the relevant idempotents match up). Let \(B\in\tilde{\pi}_{2}(\mathbf{x}_{1}\otimes y_{1},\mathbf{x}_{2}\otimes y_{2})\) be a
domain. For a technical reason, we will assume \(B\) is a positive domain; this can always be achieved by adding sufficiently many copies of the Heegaard surface \([\Sigma]\). We will use \(B\) to compute the relative \(z\)-gradings of \(\boldsymbol{x}_{1}\otimes y_{1}\) and \(\boldsymbol{x}_{2}\otimes y_{2}\) in two ways and compare them: one in Definition 3.8 and one via the grading package in bordered Floer homology; we refer the readers to [3, Section 2] for a brief summary of this grading package.
Let \(k=n_{z}(B)\) and write \(B=B_{0}+k[\Sigma]\). Let \(B^{\prime}_{0}=\Phi(B_{0})\) where \(\Phi\) denotes the collapsing map, and let \(B^{\prime}=B^{\prime}_{0}+k[\Sigma^{\prime}]\). Let \(\partial_{\alpha_{K}}B\) denote the portion of the boundary of \(B\) lying on \(\alpha_{K}\). Note \(\partial_{\alpha_{K}}(B)=\partial_{\alpha_{K}}(B_{0})\).
We compute the \(z\)-grading difference using Definition 3.8. By Definition 2.43, \(\operatorname{ind}(B)=e(B)+n_{\boldsymbol{x}_{1}\otimes y_{1}}(B)+n_{ \boldsymbol{x}_{2}\otimes y_{2}}(B)-s(\partial_{\alpha_{K}}B)\). Therefore, we have the following equation.
\[gr_{z}(\boldsymbol{x}_{2}\otimes y_{2})-gr_{z}(\boldsymbol{x}_{1}\otimes y_{ 1})=-e(B)-n_{\boldsymbol{x}_{1}\otimes y_{1}}(B)-n_{\boldsymbol{x}_{2}\otimes y _{2}}(B)+s(\partial_{\alpha_{K}}B)+2n_{z}(B). \tag{5.1}\]
We now compute the \(z\)-grading obtained from the box-tensor product, and we will use \(gr_{z}^{\boxtimes}\) to distinguish it from the \(z\)-grading computed above. Note \(B^{\prime}_{0}\) is a domain in \(\Sigma^{\prime}\) with \(n_{z}(B^{\prime}_{0})=0\) and it connects \(\boldsymbol{x}_{1}\) to \(\boldsymbol{x}_{2}\). Let \(gr_{A}\) denote the grading function for \(CFA^{-}\) that consists of the Maslov component and the \(Spin^{c}\)-component. Then
\[gr_{A}(\boldsymbol{x}_{2})=gr_{A}(\boldsymbol{x}_{1})\cdot(-e(B^{\prime}_{0} )-n_{\boldsymbol{x}_{1}}(B^{\prime}_{0})-n_{\boldsymbol{x}_{2}}(B^{\prime}_{ 0}),[\partial^{\partial}B^{\prime}_{0}]).\]
Here, \(\partial^{\partial}B^{\prime}_{0}\) denotes the portion of the oriented boundary of \(B^{\prime}_{0}\) on \(\partial\overline{\Sigma^{\prime}}\), and \([\partial^{\partial}B^{\prime}_{0}]\) denotes the \(Spin^{c}\)-component of \(gr_{A}\) determined by the homology class of \(\partial^{\partial}B^{\prime}_{0}\).
Note \(\partial_{\alpha_{K}}(B)\) determines a sequence of type D operations connecting \(y_{1}\) to \(y_{2}\), giving rise to
\[gr_{D}(y_{2})=(m(\partial_{\alpha_{K}}B),-[\partial_{\alpha_{K}}B])\cdot gr_{ D}(y_{1})\]
Here \(gr_{D}\) denotes the grading function on \(\widehat{CFD}(\alpha_{K})\) and \(m(\partial_{\alpha_{K}}B)\) denotes the Maslov component of the grading; we will not need a specific formula for \(m(\partial_{\alpha_{K}}B)\).
Note \([\partial^{\partial}B^{\prime}_{0}]=[\partial_{\alpha_{K}}B]\) in view of the definition of the collapsing map. Therefore,
\[gr_{A}(\boldsymbol{x}_{2})gr_{D}(y_{2})=(-e(B^{\prime}_{0})-n_{\boldsymbol{x} _{1}}(B^{\prime}_{0})-n_{\boldsymbol{x}_{2}}(B^{\prime}_{0})+m(\partial_{ \alpha_{K}}B),0)\cdot gr_{A}(\boldsymbol{x}_{1})gr_{D}(y_{1}).\]
Hence,
\[gr_{z}^{\boxtimes}(\boldsymbol{x}_{2}\otimes y_{2})-gr_{z}^{\boxtimes}( \boldsymbol{x}_{1}\otimes y_{1})=-e(B^{\prime}_{0})-n_{\boldsymbol{x}_{1}}(B^ {\prime}_{0})-n_{\boldsymbol{x}_{2}}(B^{\prime}_{0})+m(\partial_{\alpha_{K}}B).\]
Since \(B^{\prime}=B^{\prime}_{0}+k[\Sigma^{\prime}]\), the above equation is equivalent to
\[gr_{z}^{\boxtimes}(\boldsymbol{x}_{2}\otimes y_{2})-gr_{z}^{\boxtimes}( \boldsymbol{x}_{1}\otimes y_{1})=-e(B^{\prime})-n_{\boldsymbol{x}_{1}}(B^{ \prime})-n_{\boldsymbol{x}_{2}}(B^{\prime})+m(\partial_{\alpha_{K}}B)+n_{z}(B) \tag{5.2}\]
Comparing Equation 5.2 and 5.1, identifying both \(z\)-gradings is equivalent to proving the following equation:
\[-e(B^{\prime})-n_{\boldsymbol{x}_{1}}(B^{\prime})- n_{\boldsymbol{x}_{2}}(B^{\prime})+m(\partial_{\alpha_{K}}B)\] \[=-e(B)-n_{\boldsymbol{x}_{1}\otimes y_{1}}(B)-n_{\boldsymbol{x}_ {2}\otimes y_{2}}(B)+n_{z}(B)+s(\partial_{\alpha_{K}}B). \tag{5.3}\]
Equation 5.3 is true when \(\mathcal{H}_{w,z}\) is a genus-one Heegaard diagram by [3] and Proposition 5.11; see the discussion before the proof. We will reduce the proof of the higher-genus case to the genus-one case. To do so, we will first crop both \(B\) and \(B^{\prime}\), leaving only a portion of the domains near \(\alpha_{K}\) and the \(\alpha\)-arcs respectively; we can reduce the proof of the grading identification to a claim involving some quantities of the cropped domains; later on, we will extend the cropped domains to domains
in genus-one Heegaard diagrams, where we can use the grading identification to derive the desired claim on the cropped domains. (See Figure 28 for an illustration of the cropping and extending procedures.)
We spell out the cropping procedure. Let \(N\) be a closed subset in \(\overline{\Sigma^{\prime}}\) given by the union of three subsets \(R\cup S_{1}\cup S_{2}\) satisfying the following requirements:
1. \(R\) is a collar neighborhood of \(\partial\overline{\Sigma^{\prime}}\), and \(S_{i}\) (\(i=1,2\)) is a neighborhood of \(\alpha_{i}^{a}\) homeomorphic to \([0,1]\times[0,1]\) where \(\alpha_{i}^{a}\) is identified with \([0,1]\times\{0\}\);
2. \(\beta\)-curves do not intersect \(R\);
3. if a \(\beta\)-curve intersects some \(S_{i}\) (\(i=1,2\)), the intersections are arcs of the form \(\{p\}\times[0,1]\) for some \(p\in[0,1]\).
One can think about \(N\) as the image under the collapsing map of a slightly larger neighborhood than the one specified in Definition 4.13 (Step 1). Abusing the notation, we will also use \(N\) to denote the inverse image of \(N\) in \(\Sigma\) under the collapsing map.
Let \(B_{N}=B\cap N\) and let \(B_{N^{c}}=\overline{B-B_{N}}\). Then \(B=B_{N}+B_{N^{c}}\). Similarly we can define \(B_{N}^{\prime}\) and \(B_{N^{c}}^{\prime}\) and have \(B^{\prime}=B_{N}^{\prime}+B_{N^{c}}^{\prime}\). Let \(x_{i}^{a}\) (\(i=1,2\)) be the component of \(\boldsymbol{x}_{i}\) on the \(\alpha\)-arcs and let \(\hat{\boldsymbol{x}}_{i}\) denote the remaining components; we have \(\boldsymbol{x}_{i}=\hat{\boldsymbol{x}}_{i}\cup\{x_{i}^{a}\}\). Similarly, we have \(\boldsymbol{x}_{i}\otimes y_{i}=\hat{\boldsymbol{x}}_{i}\cup\{x_{i}^{a}\otimes y _{i}\}\). Now we claim that to prove Equation 5.3, we only need to prove it over \(N\), i.e., proving the following equation:
\[\begin{split}-e(B_{N}^{\prime})-& n_{x_{1}^{a}}(B_{N}^{ \prime})-n_{x_{2}^{a}}(B_{N}^{\prime})+m(\partial_{\alpha_{K}}B_{N})\\ &=-e(B_{N})-n_{x_{1}^{a}\otimes y_{1}}(B_{N})-n_{x_{2}^{a} \otimes y_{2}}(B_{N})+n_{z}(B_{N})+s(\partial_{\alpha_{K}}B_{N}).\end{split} \tag{5.4}\]
Figure 28. The cropping and extending procedures. The lower row is obtained from the upper row by the collapsing operation.
The claim follows from Equations 5.5–5.7 below.
\[-e(B^{\prime})-n_{\boldsymbol{x}_{1}}(B^{\prime})-n_{\boldsymbol{x}_{2 }}(B^{\prime})+ m(\partial_{\alpha_{K}}B)=-e(B^{\prime}_{N^{c}})-n_{\hat{ \boldsymbol{x}}_{1}}(B^{\prime}_{N^{c}})-n_{\hat{\boldsymbol{x}}_{2}}(B^{ \prime}_{N^{c}})\] \[-e(B^{\prime}_{N})-n_{x_{1}^{a}}(B^{\prime}_{N})-n_{x_{2}^{a}}(B^{ \prime}_{N})+m(\partial_{\alpha_{K}}B_{N}) \tag{5.5}\]
Here we used \(\partial_{\alpha_{K}}B_{N}=\partial_{\alpha_{K}}B\) and the additivity of the other terms.
\[-e(B)-n_{\boldsymbol{x}_{1}\otimes y_{1}}(B)-n_{\boldsymbol{x}_{2 }\otimes y_{2}}(B)+n_{z}(B)+s(\partial_{\alpha_{K}}B)=-e(B_{N^{c}})-n_{\hat{ \boldsymbol{x}}_{1}}(B_{N^{c}})\] \[-n_{\hat{\boldsymbol{x}}_{2}}(B_{N^{c}})-e(B_{N})-n_{x_{1}^{a} \otimes y_{1}}(B_{N})-n_{x_{2}^{a}\otimes y_{2}}(B_{N})+n_{z}(B_{N})+s( \partial_{\alpha_{K}}B_{N}) \tag{5.6}\]
Here we used \(n_{z}(B)=n_{z}(B_{N})\) and \(\partial_{\alpha_{K}}B_{N}=\partial_{\alpha_{K}}B\).
\[-e(B^{\prime}_{N^{c}})-n_{\hat{\boldsymbol{x}}_{1}}(B^{\prime}_{N^{c}})-n_{ \hat{\boldsymbol{x}}_{2}}(B^{\prime}_{N^{c}})=-e(B_{N^{c}})-n_{\hat{ \boldsymbol{x}}_{1}}(B_{N^{c}})-n_{\hat{\boldsymbol{x}}_{2}}(B_{N^{c}}) \tag{5.7}\]
Here we used the identification of \(B_{N^{c}}\) and \(B^{\prime}_{N^{c}}\) under the collapsing map.
With Equations 5.5–5.7, it is clear that Equation 5.3 is equivalent to Equation 5.4.
We need to further process the domains \(B_{N}\) and \(B^{\prime}_{N}\) before we can appeal to the genus-one case.
**Lemma 5.19**.: \(B_{N}=B_{main}+B_{sub}\) _where \(B_{main}\) is an immersed disk in \(\Sigma\) such that \(\partial_{\alpha_{K}}B_{main}=\partial_{\alpha_{K}}B\) and \(B_{sub}\) is a domain whose boundaries consist of arcs in \(\partial N\) or connected components of \((\beta\)-curves \(\cap N)\). In particular, the above decomposition also induces a decomposition \(B^{\prime}_{N}=B^{\prime}_{main}+B^{\prime}_{sub}\) via applying the collapsing map._
Proof of Lemma 5.19.: Recall that \(B\) is a positive domain. By [10, Lemma \(4.1^{\prime}\)]9, there is a smooth map \(u:S\to\Sigma\times[0,1]\times\mathbb{R}\) representing the homology class \(B\) such that \(u^{-1}(\alpha_{K}\times\{1\}\times\mathbb{R})\) consists of a single arc in \(\partial S\) and the map \(\pi_{\Sigma}\circ u\) is a branched covering. Let \(D_{main}\) be the connected component of \((\pi_{\Sigma}\circ u)^{-1}(N)\) that contains the boundary arc of \(S\) that maps to \(\alpha_{K}\). Then up to shrinking \(N\), we may assume \(D_{main}\) is homeomorphic to a disk. Let the domain \(B_{main}\) be the image of \(\pi_{\Sigma}\circ u\) restricted to \(D_{main}\). Let \(B_{sub}=B-B_{main}\). By construction, \(\partial_{\alpha_{K}}B_{main}=\partial_{\alpha_{K}}B\). Therefore, the boundaries of the regions in \(B_{sub}\) do not involve \(\alpha_{K}\) and hence must consist of arcs on \(\partial N\) and the \(\beta\)-curves.
Footnote 9: Strictly speaking, we need a version of [10, Lemma \(4.1^{\prime}\)] where the \(\alpha\) curves are only immersed, but this can be proved the same way as if the \(\alpha\) curves are embedded.
**Lemma 5.20**.: _Equation 5.4 is equivalent to the following equation:_
\[-e(B^{\prime}_{main})-n_{x_{1}^{a}}(B^{\prime}_{main})-n_{x_{2}^{a }}(B^{\prime}_{main})+m(\partial_{\alpha_{K}}B_{main})\] \[\quad=-e(B_{main})-n_{x_{1}^{a}\otimes y_{1}}(B_{main})-n_{x_{2}^ {a}\otimes y_{2}}(B_{main})+n_{z}(B_{main})+s(\partial_{\alpha_{K}}B_{main}). \tag{5.8}\]
Proof of Lemma 5.20.: The lemma will follow from verifying that \(B^{\prime}_{sub}\) and \(B_{sub}\) contribute equally to the left- and right-hand sides of Equation 5.4, respectively.
First, \(-e(B^{\prime}_{sub})=-e(B_{sub})+n_{z}(B_{sub})\) since \(B^{\prime}_{sub}\) is obtained from \(B_{sub}\) by removing \(n_{z}(B_{sub})\) many disks near \(z\) (and doing collapses that do not affect the Euler measures). Secondly, \(n_{x_{i}^{a}}(B^{\prime}_{sub})=n_{x_{i}^{a}\otimes y_{i}}(B_{sub})\) for \(i=1,2\). To see this, note that \(x_{i}^{a}\otimes y_{i}\) is either in the interior of \(B_{sub}\) or on some beta arc that appears on \(\partial B_{sub}\); the collapsing map sends \(x_{i}^{a}\otimes y_{i}\) to \(x_{i}^{a}\), which lies in the interior of \(B^{\prime}_{sub}\) or on some beta-arc boundary correspondingly; the local multiplicities of \(B_{sub}\) at \(x_{i}^{a}\otimes y_{i}\) and the local multiplicities of \(B^{\prime}_{sub}\) at \(x_{i}^{a}\) are the same. Finally, \(m(\partial_{\alpha_{K}}B_{main})=m(\partial_{\alpha_{K}}B_{N})\) since \(\partial_{\alpha_{K}}B_{main}=\partial_{\alpha_{K}}B_{N}\). Lemma 5.20 follows readily from combining these observations and the following two equations:
\[-e(B^{\prime}_{N})- n_{x_{1}^{a}}(B^{\prime}_{N})-n_{x_{2}^{a}}(B^{\prime}_{N})+m( \partial_{\alpha_{K}}B_{N})\] \[=-e(B^{\prime}_{main})-n_{x_{1}^{a}}(B^{\prime}_{main})-n_{x_{2}^ {a}}(B^{\prime}_{main})+m(\partial_{\alpha_{K}}B_{main})\] \[-e(B^{\prime}_{sub})-n_{x_{1}^{a}}(B^{\prime}_{sub})-n_{x_{2}^{a} }(B^{\prime}_{sub}) \tag{5.9}\]
\[-e(B_{N})-n_{x_{1}^{a}\otimes y_{1}}(B_{N})-n_{x_{2}^{a}\otimes y_{2}}(B_{N })+n_{z}(B_{N})+s(\partial_{\alpha_{K}}B_{N})\] \[=-e(B_{main})-n_{x_{1}^{a}\otimes y_{1}}(B_{main})-n_{x_{2}^{a} \otimes y_{2}}(B_{main})+n_{z}(B_{main})+s(\partial_{\alpha_{K}}B_{main})\] \[-e(B_{sub})-n_{x_{1}^{a}\otimes y_{1}}(B_{sub})-n_{x_{2}^{a} \otimes y_{2}}(B_{sub})+n_{z}(B_{sub}) \tag{5.10}\]
Next, we prove Equation 5.8. We say \((B_{main},N)\) is _extendable to a genus-one Heegaard diagram_ if there exists a genus-one bordered Heegaard diagram \(\mathcal{H}_{1}\) and a domain \(\tilde{B}\) in \(\mathcal{H}_{1}(\alpha_{K})\) connecting a pair of intersection points such that the cropped domain \(\tilde{B}_{N}\) can be identified with \(B_{main}\) for some suitably chosen region \(N\) in \(\mathcal{H}_{1}(\alpha_{K})\). In this case, \(B^{\prime}_{main}\) can be identified with the image \(\tilde{B}^{\prime}_{N}\) of \(\tilde{B}_{N}\) under the collapsing map. Moreover, the \(\beta\) curve in \(\mathcal{H}_{1}\) is allowed to be immersed, in which case we require that \(\beta\) and \(\alpha_{K}^{1}\) induce linearly independent homology classes so that one can define a \(z\)-grading on \(CFK_{\mathcal{R}}(\mathcal{H}_{1}(\alpha_{K}))\) as in Definition 3.8.
Equation 5.8 holds as long as \((B_{main},N)\) is extendable to a genus-one Heegaard diagram: Equation 5.3 holds for \(\tilde{B}\) and \(\tilde{B}^{\prime}\) since this is in the genus-one case and hence Equation 5.4 holds for \(\tilde{B}_{N}\) and \(\tilde{B}^{\prime}_{N}\); as \(\tilde{B}_{N}\) and \(\tilde{B}^{\prime}_{N}\) are identified with \(B_{main}\) and \(B^{\prime}_{main}\) respectively, Equation 5.8 is true10. Therefore, we are left to prove the following lemma to show the \(z\)-gradings on \(gCFK^{-}(\mathcal{H}_{w,z}(\alpha_{K}))\) and \(CFA^{-}(\mathcal{H}_{w,z})\boxtimes\widehat{CFD}(\alpha_{K})\) are the same.
Footnote 10: Strictly speaking, when the \(\beta\) curve is immersed, we need a version of Equation 5.3 that takes the \(s(\partial_{\beta}B)\) terms into account, but this is a straightforward modification; the validity of this modified equation follows from Proposition 5.11 and an extension of the grading identification in [Che23] to include the case where the \(\beta\)-curve is immersed, which uses the same proof as when \(\beta\) is embedded.
**Lemma 5.21**.: \((B_{main},N)\) _is extendable to a genus-one Heegaard diagram._
Proof of Lemma 5.21.: We can embed \(N\) in a genus-one doubly-pointed Riemann surface \(\widetilde{\Sigma}\). (The \(w\) base point can be placed arbitrarily and will not affect our discussion since we are dealing with the \(z\)-grading.) Recall \(B_{main}\) is an immersed disk \(f:D_{main}\to N\). In particular, \(\partial D_{main}\) can be decomposed into the union of two connected sub-arcs \(b_{1}\cup b_{2}\), where \(b_{1}\) is mapped to \(\alpha_{K}\) and \(b_{2}\) is mapped to \(\beta\) arcs and \(\partial N\) alternately. We simply perturb \(f\) near the portion of \(\partial D_{main}\) that is mapped to \(\partial N\) to obtain a new map \(\tilde{f}:D_{main}\to\widetilde{\Sigma}\) so that \(\tilde{f}(b_{2})\) is an immersed arc and that \(\tilde{f}(D_{main})\) contains \(f(D_{main})\) as a subdomain. Then, we extend \(\tilde{f}(b_{2})\) to a closed \(\beta\)-curve (which is possibly immersed). Now, the doubly-pointed genus-one Riemann surface, the newly constructed \(\beta\)-curve, and \(\alpha_{K}\) constitute a genus-one Heegaard diagram and we can take \(\tilde{B}\) to be \(\tilde{f}(D_{main})\).
The above discussion finishes the identification of the \(z\)-gradings. Next, we show the \(w\)-gradings can also be identified. Equivalently, we show the Alexander gradings are identified since \(A=\frac{1}{2}(gr_{w}-gr_{z})\) and we already know the \(z\)-gradings are the same. Again, we will only need to show the relative gradings are the same since we will normalize both gradings via the same rule.
The corresponding proof in [10] can be adapted to the current setting even though our Heegaard diagram might have a higher genus. More specifically, using the previous notations, let \(\mathbf{x}_{1}\otimes y_{1}\) and \(\mathbf{x}_{2}\otimes y_{2}\) be two generators of \(gCFK^{-}(\mathcal{H}_{w,z}(\alpha_{K}))\) and let \(B\in\pi_{2}(\mathbf{x}_{1}\otimes y_{1},\mathbf{x}_{2}\otimes y_{2})\) be a domain; we no longer require \(B\) to be positive, but for convenience we assume \(n_{z}(B)=0\). Then the Alexander grading difference of \(\mathbf{x}_{1}\otimes y_{1}\) and \(\mathbf{x}_{2}\otimes y_{2}\) in \(gCFK^{-}(\mathcal{H}_{w,z}(\alpha_{K}))\) is:
\[A(\mathbf{x}_{2}\otimes y_{2})-A(\mathbf{x}_{1}\otimes y_{1})=n_{w}(B)\]
Next we show the corresponding Alexander grading difference in \(CFA^{-}(\mathcal{H}_{w,z})\boxtimes\widehat{CFD}(\alpha_{K})\) is also equal to \(n_{w}(B)\). Let \(B^{\prime}\in\pi_{2}(\mathbf{x}_{1},\mathbf{x}_{2})\) be the domain obtained from \(B\) by applying the collapsing map. We will use \(\tilde{gr}_{A}\) to denote the grading function on \(CFA^{-}(\mathcal{H}_{w,z})\) that consists of the \(Spin^{c}\) component and the Alexander component. (See [11, Section 11.4] for details on the grading function.) Then
\[\tilde{gr}_{A}(\mathbf{x}_{2})=\tilde{gr}_{A}(\mathbf{x}_{1})\cdot([\partial^{\partial }B^{\prime}],n_{w}(B^{\prime})) \tag{5.11}\]
Let \(\tilde{gr}_{D}\) denote the grading function on \(\widehat{CFD}(\alpha_{K})\) consisting of the \(Spin^{c}\) component and the Alexander component; note that in this case the value of the Alexander component of \(\tilde{gr}_{D}\) is always zero. The boundary \(\partial_{\alpha_{K}}(B)\) determines a sequence of type D operations connecting \(y_{1}\) to \(y_{2}\), giving rise to
\[\tilde{gr}_{D}(y_{2})=(-[\partial_{\alpha_{K}}B],0)\cdot\tilde{gr}_{D}(y_{1}) \tag{5.12}\]
As before, we have \([\partial_{\alpha_{K}}B]=[\partial^{\partial}B^{\prime}]\). So, combining Equation 5.11 and 5.12, we have
\[\tilde{gr}_{A}(\mathbf{x}_{2})\tilde{gr}_{D}(y_{2})=\tilde{gr}_{A}(\mathbf{x}_{1}) \tilde{gr}_{D}(y_{1})\cdot(0,n_{w}(B^{\prime}))\]
This implies the Alexander grading difference computed using the box-tensor product is equal to \(n_{w}(B^{\prime})\), which is equal to \(n_{w}(B)\) since the collapsing map preserves the multiplicity of the domain at \(w\).
In establishing the grading correspondence, the version of the knot chain complex is not important. In particular, we have the following.
Proof of the main theorem, with gradings.: The chain homotopy equivalence in Theorem 1.1 established in Subsection 5.1 also preserves the \(w\)- and \(z\)-gradings by Theorem 5.18.
## 6. (1,1) Patterns
### \((1,1)\) diagrams
Theorem 1.1 is particularly useful for patterns admitting a genus one Heegaard diagram \(\mathcal{H}_{w,z}\) because in this case the paired diagram \(\mathcal{H}_{w,z}(\alpha_{K})\) associated to a satellite knot is genus one and computing the Floer chain complex from this diagram is combinatorial. We will give examples involving patterns of this form in Section 6.3, but first we will review notation for diagrams for these patterns and show that one of our hypotheses may be dropped in this setting.
By definition, a \((1,1)\) pattern is a pattern admitting a genus-one doubly-pointed bordered Heegaard diagram. Equivalently, a pattern is \((1,1)\) if it admits a so-called \((1,1)\) diagram defined in [10]. This is a more flexible object to work with compared to a genus-one bordered Heegaard diagram, and we now recall the definition. A \((1,1)\) diagram is a six-tuple \((T^{2},\lambda,\mu,\beta,w,z)\) consisting of a torus \(T^{2}\), three closed curves \(\mu\), \(\lambda\), and \(\beta\) embedded on \(T^{2}\), and two base points \(w,z\in T^{2}\) such that:
1. \(\mu\) and \(\lambda\) intersect geometrically once;
2. \(\beta\) is isotopic to \(\mu\) in \(T^{2}\);
3. \(w\) and \(z\) are in the complement of \(\mu\), \(\lambda\), and \(\beta\).
A \((1,1)\) diagram encodes a pattern knot \(P\) as follows. Attaching a two-handle to \(T^{2}\times[0,1]\) along \(\beta\times\{1\}\subset T^{2}\times\{1\}\) and filling in the resulting \(S^{2}\) boundary component with a \(3\)-ball produces a solid torus \(S^{1}\times D^{2}\). The boundary \(\partial(S^{1}\times D^{2})\) is parametrized by \((\mu,\lambda)\). Let \(l_{\beta}\) be an oriented arc on \(\partial(S^{1}\times D^{2})\) connecting \(z\) to \(w\) in the complement of \(\beta\), and let \(l_{\alpha}\) be an oriented arc on \(\partial(S^{1}\times D^{2})\) connecting \(w\) to \(z\) in the complement of \(\mu\) and \(\lambda\). Then \(P\) is obtained from \(l_{\alpha}\cup l_{\beta}\) by pushing \(l_{\beta}\) into the interior of the solid torus. We remark that our convention in this paper is that a (1,1) diagram gives the boundary of the solid torus as viewed from inside the solid torus, so pushing into the solid torus means pushing out of the page.
Any doubly pointed genus-one bordered diagram determines a \((1,1)\)-diagram by filling in the boundary with a disk and extending the \(\alpha\) arcs across that disk to form the intersecting closed curves \(\mu\) and \(\lambda\). Conversely, it is shown in [10, Section 2.4] that one can construct a genus-one bordered Heegaard diagram from a \((1,1)\) diagram by reversing this process, possibly after isotoping \(\beta\). Just as a doubly pointed Heegaard diagram can be paired with an immersed curve, one can pair a \((1,1)\) diagram with an immersed curve by identifying the punctured torus containing the immersed curve with a neighborhood of \(\mu\cup\lambda\). For a \((1,1)\) diagram obtained directly from a doubly pointed genus-one bordered diagram it is clear that pairing a given immersed curve with either diagram yields the same result. Moreover, if a \((1,1)\) diagram is isotopic to one coming from a bordered diagram, the diagram obtained by pairing an immersed curve with the \((1,1)\) diagram is isotopic to the diagram obtained by pairing the immersed curve with the bordered diagram (we can perform the same isotopy of \(\beta\) in the paired diagram). It follows that we can use pairing diagrams of \((1,1)\) diagrams with immersed multicurves, in place of the pairing diagrams of bordered diagrams with immersed curves, to compute the knot Floer chain complex of satellite knots. The corresponding statement for knot Floer chain complexes over \(\mathbb{F}[U]\) is in [10, Theorem 1.2], and it holds for \(\mathbb{F}[U,V]/UV\) in view of the result in the present paper.
### Removing the \(z\)-passable assumption
One additional advantage of (1,1) patterns is that the admissibility assumption on \(\alpha_{im}\) in Theorem 1.1 can be relaxed.
**Theorem 6.1**.: _Let \(P\) be a pattern knot admitting a genus-one doubly pointed bordered Heegaard diagram \(\mathcal{H}_{w,z}\), and let \(\alpha_{K}\) denote an immersed multicurve for the knot complement of a knot \(K\) in the 3-sphere. Then \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}))\) is bigraded chain homotopy equivalent to a knot Floer chain complex of \(P(K)\)._
Proof.: If \(\alpha_{K}\) is \(z\)-passable, this follows from Theorem 1.1. If \(\alpha_{K}\) is not \(z\)-passable, there is a \(z\)-passable multicurve \(\alpha^{\prime}_{K}\) obtained from \(\alpha_{K}\) via the finger moves described in Proposition 4.11. Then by Theorem 1.1, \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha^{\prime}_{K}))\) is chain
homotopy equivalent to \(CFK_{\mathcal{R}}(P(K))\). We claim there is a chain isomorphism between \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha^{\prime}_{K}))\) and \(CFK_{\mathcal{R}}(\mathcal{H}_{w,z}(\alpha_{K}))\), which proves the theorem. To prove the claim, first note that the generators of both chain complexes are in one-to-one correspondence; this is because the finger moves that transform \(\alpha_{K}\) to \(\alpha^{\prime}_{K}\) are supported away from the intersection points. For genus-one Heegaard diagrams, the differentials of the Floer chain complexes count immersed bigons. Note the finger moves are actually supported away from the \(\beta\)-curve and do not create any teardrops with \(n_{z}=0\) or \(n_{w}=0\), and hence they deform immersed bigons on \(\mathcal{H}_{w,z}(\alpha_{K})\) to those on \(\mathcal{H}_{w,z}(\alpha^{\prime}_{K})\), setting up a one-to-one correspondence. The claimed isomorphism now follows from the one-to-one correspondences on the generators and differentials.
### One-bridge braids
One-bridge braids are \((1,1)\) patterns, and they include cables and Berge-Gabai knots as special cases. They were first studied by Gabai [10] and Berge [1] from the perspective of Dehn surgeries. The knot Floer homology of cable knots has been studied extensively in the literature [1, 1, 2, 3, 4, 5, 6, 7, 8, 9]. The knot Floer homology of Berge-Gabai satellite knots was studied by Hom-Lidman-Vafaee in [13], where they gave a necessary and sufficient condition for such satellite knots to be L-space knots. In this subsection, we apply the main theorem to study the knot Floer chain complexes of one-bridge-braid satellite knots.
**Definition 6.2**.: A knot \(P\subset S^{1}\times D^{2}\) is called a one-bridge braid if it is isotopic to a union of two arcs \(\delta\cup\gamma\) such that (1) \(\delta\) is embedded in \(\partial(S^{1}\times D^{2})\) and is transverse to every meridian \(\star\times\partial D^{2}\), and (2) \(\gamma\) is properly embedded in a meridian disk \(\star\times D^{2}\).
We regard two one-bridge braids as equivalent if they are isotopic within the solid torus. Our convention follows that in [13].
Every one-bridge braid is isotopic to a braid \(B(p,q,b)\) specified as follows. Let \(p\), \(q\), \(b\) be integers such that \(0\leq b\leq p-1\). Let \(B_{p}\) be the braid group with \(p\) strands and let \(\{\sigma_{i}\}_{i=1}^{p-1}\) denote the generators of \(B_{p}\). Then \(B(p,q,b)\) is defined to be the braid closure of \((\sigma_{p-1}\sigma_{p-2}\cdots\sigma_{1})^{q}(\sigma_{b}\cdots\sigma_{2} \sigma_{1})\). We only consider those \(p\), \(q\), and \(b\) such that \(B(p,q,b)\) is a knot. See Figure 29 (left) for an example. Note that we could restrict the value of \(b\) to be strictly less than \(p-1\), since \(B(p,q,p-1)\) is isotopic to \(B(p,q+1,0)\); however, we find it convenient to allow presentations with \(b=p-1\) so that it will be easier to introduce a different way of describing one-bridge braids that will be useful for us later.
Instead of specifying a one-bridge braid by a triple \((p,q,b)\), a one-bridge braid with a single component can also be specified (non-uniquely) using the winding number \(p\) and a slope \(m\), as we now describe. For each \(p\) we exclude slopes in the set \(\mathcal{X}_{p}=\{\frac{a}{b}|a,b\in\mathbb{Z},1\leq b<p\}\); for any other slope \(m\) we define a knot \(B(p;m)\) in the solid torus by first describing its projection to the boundary torus, which we identify with \((\mathbb{R}/\mathbb{Z})^{2}\) such that \(\{0\}\times(\mathbb{R}/\mathbb{Z})\) is a meridian. The projection consists of two embedded segments connecting the points \(w^{\prime}=(\epsilon,m\epsilon)\) and \(z^{\prime}=(\epsilon,m\epsilon-pm)\), a vertical segment \(\gamma_{p;m}\) along the curve \(\{\epsilon\}\times(\mathbb{R}/\mathbb{Z})\) that has \(w^{\prime}\) as its top endpoint and \(z^{\prime}\) as its bottom endpoint, and a curve segment \(\delta_{p;m}\) of slope \(m\) that wraps around the torus in the horizontal direction \(p\) times and has \(w^{\prime}\) as its right endpoint and \(z^{\prime}\) as its left endpoint. The knot \(B(p;m)\) is obtained by pushing the arc
\(\gamma_{p;m}\) into the solid torus; it is immediate from Definition 6.2 that the result is a one-bridge braid. See Figure 29 (right) for an example. Note that the slopes in \(\mathcal{X}_{p}\) are excluded because they are precisely the slopes for which the curve segment \(\delta_{p;m}\) defined above returns to its starting point before wrapping \(p\) full times around the torus and is thus not embedded.
To relate the two descriptions of a one-component \(1\)-bridge braid, we can divide the torus with the projection described above into two pieces. The strip \([0,2\epsilon]\times(\mathbb{R}/\mathbb{Z})\) is called the bridge region and contains a projection of the braid \(\sigma_{b}\cdots\sigma_{2}\sigma_{1}\), and the rest of the torus is called the twist region and contains the projection of \((\sigma_{p-1}\sigma_{p-2}\cdots\sigma_{1})^{q}\). We will define \(q(p,m)\) and \(b(p,m)\) to be the values of \(q\) and \(b\) associated with the one-bridge braid \(B(p;m)\); that is, they are defined so that \(B(p;m)=B(p,q(p,m),b(p,m))\). It is straightforward to check that \(q(p,m)=\lfloor pm\rfloor\). We obtain \(b(p,m)\) by counting the intersections of \(\delta_{p;m}\) with the interior of \(\gamma_{p;m}\).
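Since the passage from a slope \(m\) to the pair \((q(p,m),b(p,m))\) is used repeatedly below, a small computational sketch may help. The snippet below is purely illustrative and not part of the construction: the function names are ours, and the closed-form count used for \(b(p,m)\) (comparing fractional parts) is our reading of the intersection count with the bridge segment described above; it assumes \(m\notin\mathcal{X}_{p}\).

```python
from fractions import Fraction
from math import floor

def q_of_slope(p, m):
    # q(p, m) = floor(p * m), as noted above
    return floor(p * m)

def b_of_slope(p, m):
    # b(p, m) counts the interior intersection points x_i (0 < i < p) that
    # lie on the vertical bridge segment between x_0 = w' and x_p = z'.
    # In terms of fractional parts this amounts to counting 1 <= j <= p-1
    # with frac(j*m) < frac(p*m); this closed form is our reading of the
    # projection and requires m to avoid the excluded set X_p.
    frac = lambda x: x - floor(x)
    return sum(1 for j in range(1, p) if frac(j * m) < frac(p * m))

# Example from Figure 29: B(4; 9/16) = B(4, 2, 1)
p, m = 4, Fraction(9, 16)
print(q_of_slope(p, m), b_of_slope(p, m))  # prints: 2 1
```

Running this on the slope of Figure 29 returns \((q,b)=(2,1)\), consistent with \(B(4;\frac{9}{16})=B(4,2,1)\).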
It is clear that a pair \((p,m)\) for any positive \(p\) and any slope \(m\) in \(\mathbb{R}\setminus\mathcal{X}_{p}\) gives rise to a \(1\)-bridge braid with one component. It is less obvious that any single-component \(1\)-bridge braid can be represented in this way, but this is indeed true as we now show.
**Lemma 6.3**.: _For any \(p>0,q\in\mathbb{Z}\), and \(0\leq b\leq p-1\) for which \(B(p,q,b)\) is a knot, there exists a slope \(m\) such that \(q(p,m)=q\) and \(b(p,m)=b\)._
Proof.: Because \(q=q(p,m)=\lfloor pm\rfloor\) we can restrict to slopes \(m\) in \([\frac{q}{p},\frac{q+1}{p})\). We must find a slope in this interval for which \(b(p,m)=b\). We will describe how \(b(p,m)\) changes as \(m\) increases from \(q(p,m)=q\), showing that it attains every value \(b\) for which \(B(p,q,b)\) is a knot.
We first observe that \(b(p,m)\) is a locally constant function which jumps at points in \(\mathcal{X}_{p}\). We will label the intersections of \(\{\epsilon\}\times\mathbb{R}/\mathbb{Z}\) with \(\delta_{p,m}\) by \(x_{0},x_{1},\ldots,x_{p}\), moving leftward along \(\delta_{p,m}\) from \(x_{0}=w^{\prime}\) to \(x_{p}=z^{\prime}\). The function \(b(p,m)\) counts the number of \(0<i<p\) for which \(x_{i}\) lies on the vertical segment between \(x_{0}\) and \(x_{p}\). As \(m\) varies continuously the \(x_{i}\) all move continuously, so the number of \(x_{i}\)
Figure 29. Left: the one-bridge braid \(B(4,2,1)\). Right: A projection of \(B(4;\frac{9}{16})\) to the boundary torus determined by \(p=4\) and the slope \(m=\frac{9}{16}\). Note that the torus is drawn as a square such that the vertical direction gives the meridian of the solid torus, and we take the interior of the solid torus to be above the page. From this figure we can observe that \(B(4;\frac{9}{16})=B(4,2,1)\).
between \(x_{0}\) and \(x_{p}\) can only change at a slope for which some \(x_{k}\) coincides with either \(x_{0}\) or \(x_{p}\). This in turn happens exactly when either \(km\) or \((p-k)m\) is an integer, which means that \(m\) is in \(\mathcal{X}_{p}\). We can also see that \(b(p,m)\) is non-decreasing on the interval \([\frac{q}{p},\frac{q+1}{p})\) by observing that when the slope increases by \(\Delta m>0\) the point \(x_{i}\) moves downward by \(i\Delta m\). Since any given \(x_{i}\) moves downwards, it will never leave the interval between \(x_{0}\) and \(x_{p}\) by passing \(x_{0}\), and because \(x_{i}\) moves downward slower than \(x_{p}\) it will never leave the interval by passing \(x_{p}\). In other words, as \(m\) increases from \(\frac{q}{p}\) to \(\frac{q+1}{p}\) the vertical segment from \(x_{0}\) to \(x_{p}\) grows and swallows more \(x_{i}\), and once in the interval between \(x_{0}\) and \(x_{p}\) no \(x_{i}\) ever escapes this growing interval before \(x_{p}\) returns to \(x_{0}\) at slope \(m=\frac{q+1}{p}\). At each slope \(n/k\) in \(\mathcal{X}_{p}\) the count \(b(p,m)\) must increase since \(x_{k}\) coincides with \(x_{0}\) and thus \(x_{k}\) enters the vertical segment at this slope. In fact, \(b(p,m)\) increases by an even number at these points since if \(x_{k}\) coincides with \(x_{0}\) then \(x_{p-k}\) coincides with \(x_{p}\). Thus \(b(p,m)\) is a non-decreasing locally constant function on \([\frac{q}{p},\frac{q+1}{p})\setminus\mathcal{X}_{p}\) that is constant mod 2. The function may jump by more than two; in fact it is straightforward to check that at \(m\in\mathcal{X}_{p}\) it increases by twice the number of pairs \((a,b)\) with \(a,b\in\mathbb{Z}\) and \(1\leq b<p\) for which \(m=\frac{a}{b}\).
To see that \(b(p,m)\) realizes every value for which the corresponding one-bridge braid is a knot, we must consider the degenerate projections arising from slopes \(m\) in \(\mathcal{X}_{p}\). For each such slope \(m\), by taking different perturbations of the non-embedded arc \(\delta_{p,m}\) we can construct a sequence of one-bridge braid projections that realizes all values of \(b\) between \(b(p,m-\epsilon)\) and \(b(p,m+\epsilon)\) for small \(\epsilon\). Consider a slope \(m=n/k\) in \(\mathcal{X}_{p}\) (with \(n\) and \(k\) relatively prime), and let \(\ell=\lfloor\frac{p}{k}\rfloor\). With \(x_{i}\) defined as above we have that \(x_{i}\) coincides with \(x_{j}\) if and only if \(i\equiv j\pmod{k}\). We first assume that \(m\) is not a multiple of \(\frac{1}{p}\), so that \(x_{0}\) and \(x_{p}\) do not coincide. On one extreme we will perturb \(\delta_{p,m}\) by sliding the \(x_{i}\) points off each other so that within each group of nearby points \(x_{j}\) is above \(x_{i}\) if \(j>i\) (see the leftmost projection in Figure 30 for an example). This is the ordering of the \(x_{i}\) that would arise from a slope slightly smaller than \(m\), so the knot arising from this projection is isotopic to \(B(p;m-\epsilon)\), in particular it is \(B(p,q(p,m),b(p,m-\epsilon))\). For the next projection we swap \(x_{0}\) with \(x_{k}\), extending the vertical arc \(\gamma_{p,m}\) upward to reach the new \(x_{0}\) and perturbing the portion of \(\delta_{p,m}\) leaving \(x_{k}\) to the right downward along with \(x_{k}\), as shown in the second projection in Figure 30 (the pieces of \(\delta_{p,m}\) to the left of \(x_{0}\) and \(x_{k}\) are unaffected). This clearly adds one new crossing with \(\gamma_{p,m}\), so the value of \(b\) for the corresponding one-bridge braid increases by one. We also observe that this change splits off a closed curve containing a portion of \(\delta_{p,m}\). More precisely, if we keep the labels on the remaining \(x_{i}\) the same, the perturbation of \(\delta_{p,m}\) now has a closed component that connects \(x_{1},x_{2},\cdots,x_{k}\) and a component that connects \(x_{0}\), \(x_{k+1}\), \(x_{k+2},\ldots,x_{p}\). It follows that \(B(p,q(p,m),b(p,m-\epsilon)+1)\) is a two-component link. If \(\ell>1\) we can repeat this procedure, creating a new projection by swapping \(x_{0}\) with the point directly above it (now \(x_{2k}\)), increasing \(b\) by one and splitting off one more closed component that passes through the points \(x_{k+1},x_{k+2},\ldots,x_{2k}\). We continue in this way, adding one to \(b\) and increasing the number of link components by one at each step, until \(x_{0}\) has moved above all other \(x_{i}\) with \(i\equiv 0\pmod{k}\); at this point we have a projection of \(B(p,q(p,m),b(p,m-\epsilon)+\ell)\) which has \(\ell+1\) components. We now continue our sequence of projections by sliding \(x_{p}\) downward, interchanging \(x_{p}\) with one other point at a time (first \(x_{p-k}\), then \(x_{p-2k}\), etc.). Each new projection adds a crossing that increases \(b\) by one and decreases the number
of link components by one. At the last step \(x_{p}\) is below all other \(x_{i}\) with \(i\equiv p\pmod{k}\) and the link has a single component; see the rightmost projection in Figure 30 for an example. It is clear that this projection is isotopic to the one obtained from a slope slightly larger than \(m\), so the last projection in the family arising from \(m\) is a projection of \(B(p,q(p,m),b(p,m-\epsilon)+2\ell)=B(p,q(p,m),b(p,m+\epsilon))\). When \(m=\frac{q}{p}\) is in \(\mathcal{X}_{p}\) we construct a similar family but we start with a projection with the maximum number of components. Setting \(\ell=\gcd(p,q)\), we construct a projection of an \(\ell\)-component one-bridge braid link by following a line of slope \(m\) and closing up the curve and starting a new component each time a curve returns to its starting point. Perturbing the components off each other, it is easy to see that this projection gives the torus link \(B(p,q,0)=T(p,q)\). From this projection we can introduce a vertical segment between the first and last point on one of the closed components and expand this, adding crossings with other components. Each time a crossing is added we get a projection of a one-bridge braid where \(b\) has increased by one and the number of components has decreased by one, and when we arrive at the single-component link \(B(p,q,\ell-1)\) it is clear that this projection is isotopic to that arising from a slope slightly larger than \(m=\frac{q}{p}\).
To summarize, by varying the slope \(m\) from \(\frac{q}{p}\) to \(\frac{q+1}{p}\) and considering the families of projections described above at the degenerate slopes in \(\mathcal{X}_{p}\) we obtain a projection of \(B(p,q,b)\) for every value of \(b\) with \(0\leq b<p\), and moreover the one-bridge braids \(B(p,q,b)\) that only occur at degenerate slopes are precisely those with more than one component. It follows that for any knot \(B(p,q,b)\) there is an interval of slopes \(m\) for which \(b(p,m)=b\).
_Remark 6.4_.: While we will not give a closed formula to find an appropriate slope \(m\) given any \(p\), \(q\), and \(b\), it is clear from the proof of Lemma 6.3 that there is a simple procedure to find the interval of slopes for each value of \(b\). We remove any slopes in \(\mathcal{X}_{p}\) from the interval \([\frac{q}{p},\frac{q+1}{p})\) and then determine \(b(p,m)\) on the remaining intervals as follows: the value in the leftmost region is the number of pairs in \(\{(a,b)|a,b\in\mathbb{Z},0<b<p\}\) for which \(\frac{a}{b}=\frac{q}{p}\), and at each slope \(m\in\mathcal{X}_{p}\) the value increases by twice the number of pairs in \(\{(a,b)|a,b\in\mathbb{Z},0<b<p\}\) for which \(\frac{a}{b}=m\).
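To make the procedure in this remark concrete, the following sketch (again illustrative only; the helper names are ours, and the jump rule is taken exactly as stated above) lists the sub-intervals of \([\frac{q}{p},\frac{q+1}{p})\) cut out by \(\mathcal{X}_{p}\) together with the value of \(b(p,m)\) on each, which lets one pick a slope \(m\) realizing a given knot \(B(p,q,b)\).

```python
from fractions import Fraction

def b_intervals(p, q):
    # Sub-intervals of [q/p, (q+1)/p) determined by the excluded slopes X_p,
    # with the value of b(p, m) on each, following the procedure of Remark 6.4.
    lo, hi = Fraction(q, p), Fraction(q + 1, p)
    # excluded slopes a/den with 1 <= den < p lying strictly between lo and hi
    cuts = sorted({Fraction(a, den) for den in range(1, p)
                   for a in range(q * den // p, (q + 1) * den // p + 2)
                   if lo < Fraction(a, den) < hi})
    # number of pairs (a, den) with 0 < den < p and a/den equal to the slope s
    pairs = lambda s: sum(1 for den in range(1, p) if (s * den).denominator == 1)
    intervals, b, left = [], pairs(lo), lo
    for c in cuts + [hi]:
        intervals.append(((left, c), b))
        if c != hi:
            b += 2 * pairs(c)   # the value jumps by twice the number of pairs
        left = c
    return intervals

# Example: p = 4, q = 2
print(b_intervals(4, 2))
```

For \(p=4\), \(q=2\) this returns \(b=1\) on \([\frac{1}{2},\frac{2}{3})\) and \(b=3\) on \((\frac{2}{3},\frac{3}{4})\); these are exactly the values of \(b\) for which \(B(4,2,b)\) has a single component.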
Figure 30. A family of one-bridge braids associated with the degenerate slope \(m=\frac{1}{2}=\frac{2}{4}\) and winding number \(p=5\). The leftmost is the projection of a knot that is isotopic to \(B(p;m-\epsilon)\) for a sufficiently small \(\epsilon>0\) and the rightmost is the projection of a knot that is isotopic to \(B(p;m+\epsilon)\). The value of \(b\) associated with the one-bridge braid increases by one at each step, and the number of link components first increases and then decreases so that only the first and last projection give knots.
Any single component one-bridge braid in \(S^{1}\times D^{2}\) is a (1,1) pattern. To prove Theorem 1.3 in the following section, we will make use of a particular (1,1) diagram for such a one-bridge braid pattern \(B(p,q,b)\). The construction of this diagram uses a choice of \(m\) for which \(B(p;m)=B(p,q,b)\), and we will denote this diagram \(\mathcal{H}_{p;m}\). This diagram is isotopic to another diagram \(\mathcal{H}^{\prime}_{p;m}\), which we will describe first.
The diagram \(\mathcal{H}^{\prime}_{p;m}=(T^{2},\lambda^{\prime},\mu^{\prime},\beta^{\prime},w^{\prime},z^{\prime})\) is defined using the projection for \(B(p;m)\) described earlier in this section. We need to choose \(\beta^{\prime}\) and \(\mu^{\prime}\) homologous to meridians of the solid torus while \(\lambda^{\prime}\) will be homologous to a longitude. The basepoints \(w^{\prime}\) and \(z^{\prime}\) are the points defined above. Our goal is to choose \(\beta^{\prime}\) to be disjoint from the arc \(\gamma_{p,m}\) and to choose \(\lambda^{\prime}\) and \(\mu^{\prime}\) to be disjoint from the arc \(\delta_{p,m}\). We can take \(\beta^{\prime}\) to be the curve \(\{0\}\times(\mathbb{R}/\mathbb{Z})\), for it is in the complement of \(\gamma_{p,m}\) and is isotopic to the meridian of the solid torus. We now consider the curves \(\mu=\{\frac{1}{2}\}\times(\mathbb{R}/\mathbb{Z})\) and \(\lambda=(\mathbb{R}/\mathbb{Z})\times\{\frac{1}{2}\}\); these are isotopic to the meridian and the longitude of the solid torus, respectively, as desired, but they intersect the arc \(\delta_{p,m}\). We modify these curves by performing finger moves along \(\delta_{p,m}\) in order to eliminate all intersections with \(\delta_{p,m}\). More concretely, whenever there is an intersection between \(\mu\) or \(\lambda\) and \(\delta_{p,m}\), we slide that intersection to the left along \(\delta_{p,m}\) until it passes over the endpoint \(z^{\prime}\) of \(\delta_{p,m}\). We define \(\mu^{\prime}\) and \(\lambda^{\prime}\) to be the curves resulting from these finger moves. It is clear that the diagram \(\mathcal{H}^{\prime}_{p;m}\) encodes \(B(p;m)\) since the knot determined by the diagram is the union of \(\gamma_{p,m}\) and \(\delta_{p,m}\) with \(\gamma_{p,m}\) pushed into the interior of the solid torus. An example of a diagram of this form is given on the left of Figure 31.
In \(\mathcal{H}^{\prime}_{p;m}\) the curve \(\beta^{\prime}\) is simple but the curves \(\mu^{\prime}\) and \(\lambda^{\prime}\) are complicated. We can isotope this diagram to produce a second diagram \(\mathcal{H}_{p;m}=(T^{2},\lambda,\mu,\beta,w,z)\) in which the opposite is true. This is accomplished by sliding \(z^{\prime}\) to the right along the arc \(\delta_{p,m}\) until it reaches the point \(z=(-\epsilon,-m\epsilon)\). It is clear from the way \(\lambda^{\prime}\) and \(\mu^{\prime}\) were constructed that this transformation takes these curves back to \(\lambda\) and \(\mu\). Let \(\beta\) denote the image of \(\beta^{\prime}\) under this transformation; that is, \(\beta\) is the result of modifying \(\beta^{\prime}\) by finger moves pushing every intersection of \(\beta^{\prime}\) with \(\delta_{p,m}\) rightward along \(\delta_{p,m}\) until the intersection occurs in a \((2\epsilon)\)-neighborhood of the right endpoint of \(l^{\prime}_{\alpha}\), which is the point \(w=w^{\prime}=(\epsilon,m\epsilon)\). Note that in \(\mathcal{H}_{p;m}\) the basepoints \(z\) and \(w\) are near each other and the midpoint between them is \((0,0)\). An example of a diagram \(\mathcal{H}_{p;m}\) is shown on the right of Figure 31. Since \(\mathcal{H}_{p;m}\) is isotopic to \(\mathcal{H}^{\prime}_{p;m}\), it is still a (1,1) diagram for \(B(p,q,b)\).
### Immersed curves for 1-bridge braid satellites
Given a knot \(K\) in the 3-sphere, we use \(K_{p,q,b}\) to denote the satellite knot whose pattern knot is \(B(p,q,b)\) and whose companion knot is \(K\). Let \(\gamma_{K}\) and \(\gamma_{K_{p,q,b}}\) be the immersed multicurves associated with \(K\) and \(K_{p,q,b}\) respectively. Our goal is to describe a way to obtain \(\gamma_{K_{p,q,b}}\) from \(\gamma_{K}\). To do so, we will pass to the lifts11 of immersed curves in the universal covering space \(\widetilde{T}_{\bullet}\) of the marked torus \(T_{\bullet}\); \(\widetilde{T}_{\bullet}\) is \(\mathbb{R}^{2}\) with marked points at the integer points \(\mathbb{Z}^{2}\). We require the lifts of the immersed curves to be symmetric in the sense that they are invariant under \(180^{\circ}\) rotation about \((0,\frac{1}{2})\). The lifts of the immersed curves of \(K\) and \(K_{p,q,b}\) are related by a planar transform \(f_{p,q,b}:\widetilde{T}_{\bullet}\to\widetilde{T}_{\bullet}\)
defined up to isotopy, which we will now construct. In fact, it is most convenient to construct a planar transformation \(f_{p;m}:\widetilde{T}_{\bullet}\to\widetilde{T}_{\bullet}\) determined by \(p\) and a slope \(m\in\mathbb{R}\setminus\mathcal{X}_{p}\), but we will see that up to isotopy \(f_{p;m}\) depends only on \(p\), \(q(p,m)\) and \(b(p,m)\), so we can define \(f_{p,q,b}\) to be \(f_{p;m}\) for any \(m\) such that \(q(p,m)=q\) and \(b(p,m)=b\).
To define \(f_{p;m}:\widetilde{T}_{\bullet}\to\widetilde{T}_{\bullet}\), we first define a map \(g_{p;m}:\mathbb{R}^{2}\to\mathbb{R}^{2}\) that does not fix the integer lattice. The map \(g_{p;m}\) is periodic with horizontal period \(p\), by which we mean that if \(g_{p;m}(x,y)=(x^{\prime},y^{\prime})\) then \(g_{p;m}(x+p,y)=(x^{\prime}+p,y^{\prime})\), so it suffices to define \(g_{p;m}\) on the strip \([-\frac{1}{2},p-\frac{1}{2}]\times\mathbb{R}\). On this strip \(g_{p;m}\) is given by an isotopy that fixes the boundary of the strip and slides each integer lattice point \((a,b)\) with \(a\neq 0\) to the left along a line of slope \(m\) until it hits the vertical line \(\{0\}\times\mathbb{R}\), i.e., it stops at \((0,b-am)\). Note that in general the points along the vertical line \(\{0\}\times\mathbb{R}\) that are the images of the integer lattice points in \([-\frac{1}{2},p-\frac{1}{2}]\times\mathbb{R}\) are not evenly spaced. We now define \(f_{p;m}\) by composing \(g_{p;m}\) with three isotopies that together take the image of the lattice back to the lattice:
1. An isotopy that shifts the images of the lattice points under \(g_{p;m}\) vertically, preserving the vertical ordering of these points and fixing the points at \((0,n)\) for \(n\in\mathbb{Z}\), so that they are evenly spaced--that is, so that they lie in \(\{(0,\frac{n}{p})\}_{n\in\mathbb{Z}}\);
2. A horizontal compression by a factor of \(p\) taking the point \((x,y)\) to \((\frac{x}{p},y)\); and
3. A vertical stretching by a factor of \(p\) taking the point \((x,y)\) to the point \((x,py)\).
Note that \(f_{p;m}\) defines a bijection on the integer lattice even though it takes strips of width \(p\) to strips of width \(1\), and \(f_{p;m}\) is periodic in the sense that if \(f_{p;m}(x,y)=(x^{\prime},y^{\prime})\) then \(f_{p;m}(x+p,y)=(x^{\prime}+1,y^{\prime})\). Because of the periodicity of \(f_{p;m}\) and
of \(\gamma_{K}\) and \(\gamma_{K_{p,q,b}}\) it is sufficient to consider the restriction of \(f_{p;m}\) to one strip of width \(p\), which we may view as a map
\[f_{p;m}:[-\frac{1}{2},p-\frac{1}{2}]\times\mathbb{R}\to[-\frac{1}{2p},1-\frac{1 }{2p}]\times\mathbb{R}.\]
Although we chose a slope \(m\) to define the map \(f_{p;m}\), we observe that different choices of \(m\) within the same component of \(\mathbb{R}\setminus\mathcal{X}_{p}\) determine isotopic maps. Given two slopes \(m_{0}\) and \(m_{1}\) in the same component we can vary the slope \(m\) from \(m_{0}\) to \(m_{1}\) and note that the vertical ordering on the images of the lattice points under \(g_{p;m}\) never changes; if it did, there would be some slope for which two lattice points map to the same point, but this only happens for slopes in \(\mathcal{X}_{p}\). Thus we can define \(f_{p,q,b}\) to be \(f_{p;m}\) for any \(m\) for which \(B(p;m)=B(p,q,b)\).
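Unpacking the three isotopies above gives an explicit description of the induced bijection on lattice points in one period: under our reading of the construction, the point \((a,n)\) with \(0\leq a<p\) is sent to \((0,\,p\lfloor n-am\rfloor+r)\), where \(r\) is the position of the fractional part of \(-am\) in the increasing list of fractional parts of \(-a^{\prime}m\) for \(0\leq a^{\prime}<p\). The sketch below is illustrative only (the function name is ours, and \(m\) must avoid \(\mathcal{X}_{p}\) so that these fractional parts are distinct).

```python
from fractions import Fraction
from math import floor

def f_image(p, m, a, n):
    # Image of the lattice point (a, n), 0 <= a < p, under f_{p;m}:
    # g_{p;m} slides (a, n) to (0, n - a*m); the p images in each unit
    # interval are then evenly spaced (preserving order, fixing integer
    # heights), compressed horizontally, and stretched vertically by p.
    t = n - a * m
    fracs = sorted((-k * m) - floor(-k * m) for k in range(p))
    r = fracs.index((-a * m) - floor(-a * m))
    return (0, p * floor(t) + r)

# Example with p = 4 and m = 9/16, the slope used in Figure 32:
m = Fraction(9, 16)
for a in range(4):
    print((a, 0), '->', f_image(4, m, a, 0))
# (0,0)->(0,0), (1,0)->(0,-2), (2,0)->(0,-5), (3,0)->(0,-7)
```

For instance, with \(p=4\) and \(m=\frac{9}{16}\) the points \((0,0),(1,0),(2,0),(3,0)\) are sent to heights \(0,-2,-5,-7\); together with the periodicity \(f_{p;m}(x+p,y)=f_{p;m}(x,y)+(1,0)\) this determines the map on the whole lattice.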
We can now restate the method for obtaining \(\gamma_{K_{p,q,b}}\) from \(\gamma_{K}\) mentioned in the introduction:
**Theorem 1.3**.: _Let \(\gamma_{K}\) and \(\gamma_{K_{p,q,b}}\) be the immersed multicurves associated with \(K\) and \(K_{p,q,b}\) respectively. Let \(\tilde{\gamma}_{K}\) and \(\tilde{\gamma}_{K_{p,q,b}}\) be the lifts of \(\gamma_{K}\) and \(\gamma_{K_{p,q,b}}\) to \(\tilde{T}_{\bullet}\) respectively. Then \(\tilde{\gamma}_{K_{p,q,b}}\) is homotopic to \(f_{p,q,b}(\tilde{\gamma}_{K})\)._
Proof.: Fixing a slope \(m\) for which \(B(p;m)=B(p,q,b)\), we consider the (1,1) diagram \(\mathcal{H}_{p;m}\) defined in the previous section. Consider the doubly pointed diagram
Figure 32. A lift of the \((1,1)\) diagram \(\mathcal{H}_{4;9/16}\) for \(B(4,2,1)\). The green curve is the lift \(\tilde{\beta}\) of \(\beta\). The red vertical lines are lifts of \(\mu\) and the orange horizontal lines are lifts of \(\lambda\). Lifts of \(l^{\prime}_{\alpha}\) are shown in gray (the portion over which \(z\) slides when constructing \(\mathcal{H}_{4;9/16}\) from \(\mathcal{H}^{\prime}_{4;9/16}\) is dashed).
\(\mathcal{H}_{p;m}(\gamma_{K})=(T^{2},\gamma_{K},\beta,w,z)\) obtained by pairing \(\mathcal{H}_{p;m}\) and \(\gamma_{K}\). By Theorem 6.1, the knot Floer chain complex \(CFK_{\mathcal{R}}(\mathcal{H}_{p;m}(\gamma_{K}))\) is chain homotopy equivalent to \(CFK_{\mathcal{R}}(K_{p,q,b})\).
The complex \(CFK_{\mathcal{R}}(\mathcal{H}_{p;m}(\gamma_{K}))\) can be computed in the universal cover \(\mathbb{R}^{2}\) of \(T^{2}\) (marked with lifts of \(w\) and \(z\)) by taking the Floer complex of \(\tilde{\gamma}_{K}\) and \(\tilde{\beta}\) where \(\tilde{\beta}\) and \(\tilde{\gamma}_{K}\) are lifts of \(\beta\) and \(\gamma_{K}\) to \(\widetilde{T}_{\bullet}\). We pick the lift \(\tilde{\beta}\) of \(\beta\) so that it is isotopic to \(\{0\}\times\mathbb{R}\) if we ignore the lifts of \(z\). An example of the lift of \(\mathcal{H}_{p;m}\) to \(\mathbb{R}^{2}\), with the single lift \(\tilde{\beta}\) of \(\beta\) and all lifts of \(\lambda\) and \(\mu\), is shown in Figure 32. Lifts of \(\delta_{p;m}\) are also shown in the figure. For any \(K\), \(\tilde{\gamma}_{K}\) will be a horizontally periodic curve lying in a neighborhood of the lifts of \(\lambda\) and \(\mu\). Note that from the construction of \(\beta\) it is clear that \(\tilde{\beta}\) will never pass the vertical line \(\{p-1\}\times\mathbb{R}\), so all intersection points lie in the strip \(\left[-\frac{1}{2},p-\frac{1}{2}\right]\times\mathbb{R}\).
If we were to isotope the lift of \(\mathcal{H}_{p;m}\) by sliding each lift of \(z\) leftward along the lifts of \(\delta_{p;m}\) to the left endpoints of those arcs we would recover a lift of \(\mathcal{H}^{\prime}_{p;m}\). Instead we will consider a different transformation of the plane that slides each pair of nearby lifts of \(w\) and \(z\) together leftward along the lifts of \(\delta_{p;m}\) until the midpoint between them reaches a vertical line \(\{np\}\times\mathbb{R}\) for some integer \(n\). This transformation agrees with the map \(g_{p;m}\) up to isotopy. By applying appropriate vertical shifts and horizontal and vertical scaling as well, we realize the map \(f_{p;m}\). Clearly this transformation of the plane, if applied to both \(\tilde{\gamma}_{K}\) and \(\tilde{\beta}\), does not affect the complex computed from the diagram. The image of \(\tilde{\beta}\) under this transformation is \(\tilde{\beta}^{\prime}=\{0\}\times\mathbb{R}\). We thus have that the complex obtained from \(f_{p;m}(\tilde{\gamma}_{K})\) by pairing with \(\{0\}\times\mathbb{R}\) agrees with the complex \(CFK_{\mathcal{R}}(K_{p,q,b})\). Note that \(f_{p;m}(\tilde{\gamma}_{K})\) is periodic and, possibly after a homotopy, intersects each line \(\{n+\frac{1}{2}\}\times\mathbb{R}\) exactly once. But pairing with \(\{0\}\times\mathbb{R}\) gives a bijection between homotopy classes of such curves and homotopy equivalence classes of complexes over \(\mathcal{R}\) (this follows from [1, Theorem 1.2], or from [10, Theorem 4.11] and the well understood relationship between complexes over \(\mathcal{R}\) with rank one horizontal and vertical homology and type D structures over the torus algebra). Thus since \(f_{p;m}(\tilde{\gamma}_{K})\) and \(\tilde{\gamma}_{K_{p,q,b}}\) determine the same complex we must have that \(f_{p;m}(\tilde{\gamma}_{K})\) is homotopic to \(\tilde{\gamma}_{K_{p,q,b}}\).
An example illustrating Theorem 1.3 is shown in Figure 33. Let \(K\) be the right-handed trefoil \(T_{2,3}\) and consider the satellite knot \(K_{4,2,1}\); we fix the slope \(m=\frac{9}{16}\), which is consistent with these values of \(q\) and \(b\). The figure shows a lift of the pairing diagram \(\mathcal{H}_{4;9/16}(\gamma_{K})\). The planar transformation \(f_{4,2,1}=f_{4;9/16}\) takes \(\tilde{\beta}\) to a vertical line with \(z\)'s on the left and \(w\)'s on the right, and the image of \(\tilde{\gamma}_{K}\) under this transformation, one period of which is shown on the right in the figure, is homotopic to \(\tilde{\gamma}_{K_{4,2,1}}\) as a curve in \(\mathbb{R}^{2}\setminus\mathbb{Z}^{2}\).
### L-space slopes, \(\tau\), and \(\epsilon\) for one-bridge braid satellites
Theorem 1.3 gives uniform access to several results in the literature. It generalizes the cabling transformation of immersed curves by the second author and Watson [10], which corresponds to the case when \(b=0\). This cabling transformation gave simple proofs of earlier cabling formulas for the \(\tau\)-invariant and the \(\epsilon\)-invariant [1, Theorems 1 and 2] and of an L-space knot criterion for cables [12]. We can now extend these results, with essentially the same proofs, to all one-bridge braid satellites; see Theorem 6.5 below for the L-space knot criterion, Theorem 6.6 for the \(\epsilon\) formula, and Theorem 6.8 for the \(\tau\) formula. In addition to cables, other special cases of one-bridge braid patterns have been studied previously. In [10],
Hom-Lidman-Vafaee proved the aforementioned L-space criterion for Berge-Gabai knots, which form a proper subset of one-bridge braids. In Example 1.4 of [16], Hom gave a sufficient condition for satellite knots with 1-bridge-braid patterns to be L-space knots. Theorem 6.5 can be viewed as a generalization of both these results. Theorem 6.8 also recovers a recent formula of Bodish for \(\tau\) of a family of (1,1) satellites [1, Theorem 1.1]; note that the pattern denoted \(P_{p,1}\) there is the one-bridge braid \(B(p,1,2)\).
We first state the L-space knot criterion for one-bridge braid satellites.
**Theorem 6.5**.: \(K_{p,q,b}\) _is an \(L\)-space knot if and only if \(K\) is an \(L\)-space knot and \(\frac{q}{p}\geq 2g(K)-1\)._
Proof.: The proof is a straightforward generalization of the proof of Theorem 9 in [10], using Theorem 1.3 instead of the cabling transformation. The key idea is that an L-space knot is one for which the immersed curve moves monotonically downward from the first to the last time it intersects the vertical line through the punctures, i.e., there is no vertical backtracking. The planar transformation \(f_{p;m}\) compresses multiple periods of the periodic curve \(\tilde{\gamma}_{K}\) by sliding along lines of slope \(m\), and there will be no vertical backtracking in the result if and only if \(\gamma_{K}\) has no vertical backtracking and the highest point of \(\tilde{\gamma}_{K}\) along one vertical line ends up below the lowest point of \(\tilde{\gamma}_{K}\) on the previous vertical line; this last condition occurs exactly when \(m>2g(K)-1\). Finally, we observe that \(m\in[\frac{q}{p},\frac{q+1}{p})\) is greater than \(2g(K)-1\) if and only if \(\frac{q}{p}\geq 2g(K)-1\). (Note that when \(\frac{q}{p}=2g(K)-1\), \(m\) must be greater than \(\frac{q}{p}\) in order for \(B(p,q,b)\) to be a knot.)
Figure 33. The planar transform \(f_{4,2,1}\) acting on the immersed curve of the right-handed trefoil \(T_{2,3}\). On the left is a lift of the pairing diagram \(\mathcal{H}_{4;9/16}(\gamma_{K})\) to \(\mathbb{R}^{2}\), restricted to \(\left[-\frac{1}{2},4-\frac{1}{2}\right]\times\mathbb{R}\). Applying \(f_{4,2,1}\) pulls the green curve straight and the image of \(\tilde{\gamma}_{K}\) is homotopic to the curve on the right. This curve (repeated horizontally) is \(\tilde{\gamma}_{K_{4,2,1}}\).
We next derive a formula for \(\epsilon\) of one-bridge braid satellites. Recall that \(\tilde{\gamma}_{K}\) has one non-compact component which is homotopic to \(\mathbb{R}\times\{0\}\) if the punctures are ignored and that we view as being oriented left to right. Also recall that \(\epsilon\) records whether this component turns downward (\(\epsilon=1\)), turns upward (\(\epsilon=-1\)), or continues straight (\(\epsilon=0\)) after the first time (moving left to right) it crosses the vertical line \(\{0\}\times\mathbb{R}\).
**Theorem 6.6**.: _If \(\epsilon(K)=\pm 1\) then \(\epsilon(K_{p,q,b})=\epsilon(K)\). If \(\epsilon(K)=0\) then_
\[\epsilon(K_{p,q,b})=\begin{cases}1&\text{ if }q>1\text{ or if }q=1\text{ and }b>0\\ 0&\text{ if }q\in\{0,-1\}\text{ or if }(q,b)\in\{(1,0),(-2,p-1)\}\\ -1&\text{ if }q<-2\text{ or if }q=-2\text{ and }b<p-1\end{cases}\]
Proof.: The proof is essentially the same as the proof of Theorem 3 in [10]. If the non-compact component of \(\tilde{\gamma}_{K}\) turns either upward or downward, it is clear that this property is preserved by the operation \(f_{p;m}\) for any \(m\). If \(\epsilon(K)=0\), then the relevant component of \(\tilde{\gamma}_{K}\) is a horizontal line. In this case, \(\epsilon(K_{p,q,b})=0\) if and only if all the lattice points initially above \(\tilde{\gamma}_{K}\) remain above all lattice points initially below \(\tilde{\gamma}_{K}\) after applying \(f_{p;m}\); since the lattice points moving the most are those on the line \(\{p-1\}\times\mathbb{R}\), which move vertically by \((p-1)m\), this means that \(-\frac{1}{p-1}<m<\frac{1}{p-1}\). For other slopes \(\tilde{\gamma}_{K_{p,q,b}}\) turns downward or upward depending on whether \(m\) is positive or negative. Finally, since the only point of \(\mathcal{X}_{p}\) in \((-\frac{1}{p-1},\frac{1}{p-1})\) is \(0\), it is simple to check that \((q(p,m),b(p,m))\) is \((-2,p-1)\) on \((-\frac{1}{p-1},-\frac{1}{p})\), \((-1,0)\) on \((-\frac{1}{p},0)\), \((0,p-1)\) on \((0,\frac{1}{p})\), and \((1,0)\) on \((\frac{1}{p},\frac{1}{p-1})\).
Note that \(B(p,1,0)\) and \(B(p,0,p-1)\) are both isotopic to the torus knot \(T(p,1)\) in the boundary of the solid torus and \(B(p,-1,0)\) and \(B(p,-2,p-1)\) are both isotopic to \(T(p,-1)\), so the only satellites with one bridge braid patterns for which \(\epsilon=0\) are \((p,\pm 1)\)-cables.
Finally, we compute \(\tau\) for one-bridge braid satellites. Recall that \(\tau(K)\) measures the height of the first intersection of the non-trivial component of \(\tilde{\gamma}_{K}\) with the vertical line \(\{0\}\times\mathbb{R}\); the first intersection occurs between heights \(\tau(K)\) and \(\tau(K)+1\), while by symmetry the last intersection with \(\{0\}\times\mathbb{R}\) occurs between heights \(-\tau(K)\) and \(-\tau(K)+1\). It follows that \(2\tau-1\) is the difference between the height of the lattice point immediately below the first intersection and the height of the lattice point immediately above the last intersection. It is not difficult to identify points on \(f_{p;m}(\tilde{\gamma}_{K})\) that give the first and last intersection with \(\{0\}\times\mathbb{R}\). The only step that requires some care is computing the height difference between these points; the following lemma will be helpful. We use \(y(p)\) to denote the \(y\)-coordinate of a point \(p\) in \(\mathbb{R}^{2}\).
**Lemma 6.7**.: _For any \(p>0\), \(q\geq 0\) and \(b\in\{0,\dots,p-1\}\) for which \(B(p,q,b)\) is a knot, if \(A=(0,0)\) and \(B=(p-1,0)\) then_
\[y(f_{p,q,b}(A))-y(f_{p,q,b}(B))=pq-q+b.\]
Proof.: Choose a slope \(m\) in \(\mathbb{R}\setminus\mathcal{X}_{p}\) such that \(f_{p,q,b}=f_{p;m}\); note that \(m>0\) since \(q\geq 0\). The difference in height between \(f_{p;m}(A)\) and \(f_{p;m}(B)\) is one less than the number of points of \(\{0\}\times\mathbb{Z}\) between \(f_{p;m}(A)\) and \(f_{p;m}(B)\), inclusive. This latter quantity is the same as the number of integer lattice points contained in the parallelogram (with boundary included) with vertices at \(g_{p;m}(A)=A=(0,0)\)
\(g_{p;m}(B)=(0,-m(p-1))\), \(B=(p-1,0)\), and \(C=(p-1,m(p-1))\), since the lattice points in this parallelogram are precisely those that map to either \(g_{p;m}(A)\), \(g_{p;m}(B)\), or the interval between them under \(g_{p;m}\).
To count lattice points in the parallelogram with corners at \(A\), \(g_{p;m}(B)\), \(B\), and \(C\), we first count lattice points in the larger parallelogram with corners at \(A^{\prime}=(-1,0)\), \(B^{\prime}=(-1,-mp)\), \(B\), and \(C^{\prime}=(p-1,pm)\) (see Figure 34). We claim that this closed region contains \(q(p+1)+b+2\) lattice points. To see this, note that if we apply the transformation \(g_{p;m}\) translated leftward by one unit (that is, the transformation \(g^{\prime}_{p;m}\) defined such that \(g^{\prime}_{p;m}(x,y)\) is one unit to the left of \(g_{p;m}(x+1,y)\)) then no lattice points enter or leave this parallelogram and all lattice points in the parallelogram apart from those on its right edge end up on the left edge of the parallelogram. Since \(m\in[\frac{q}{p},\frac{q+1}{p})\), \(B^{\prime}\) lies between the lattice points at \((-1,-q)\) and \((-1,-q-1)\). From the method of computing \(b(p,m)\) described in Section 6.3, the number of these points ending up between \(B^{\prime}\) and \((-1,-q)\) is \(b\). In addition, for each \(-q\leq i\leq-1\) there are \(p\) lattice points that end up on the half open segment \(\{(-1,i+t)\}_{t\in[0,1)}\). Combining these along with the point \(A^{\prime}\), we see that there are \(qp+b+1\) lattice points taken to the left edge of the parallelogram by \(g^{\prime}_{p;m}\). Adding the \(q+1\) lattice points on the right edge gives \(pq+q+b+2\). To obtain
Figure 34. The number of lattice points in the blue parallelogram is the same as the number of lattice points between \(f_{p;m}(A)\) and \(f_{p;m}(B)\), inclusive. We count these lattice points by finding the number in the larger green parallelogram, which can be more simply stated in terms of \(p\), \(q\), and \(b\), and subtracting the excess.
the number of lattice points in the smaller parallelogram, we remove from this the \(q+1\) lattice points on the left edge from \(A^{\prime}\) to \(B^{\prime}\). We also subtract the number of lattice points in the trapezoid with corners \(A^{\prime}\), \(A\), \(C\), and \(C^{\prime}\), not counting \(A^{\prime}\) or any along the edge \(AC\); this trapezoid intersects the \(q\) horizontal lines \(\mathbb{R}\times\{i\}\) for \(1\leq i\leq q\) and intersects each in a half open interval of length one that must contain exactly one lattice point, so the trapezoid contains \(q\) lattice points. Thus the smaller parallelogram contains \(pq-q+b+1\) lattice points, and subtracting one gives the height difference.
Note that, by vertical translation invariance, the formula in Lemma 6.7 also gives the height difference between \(f_{p,q,b}((0,n))\) and \(f_{p,q,b}((p-1,n))\) for any integer \(n\).
**Theorem 6.8**.: _If \(\epsilon(K)=\pm 1\) then \(\tau(K_{p,q,b})=p\tau(K)+\frac{(p-1)(q\mp 1)+b}{2}\). If \(\epsilon(K)=0\) then_
\[\tau(K_{p,q,b})=\begin{cases}\frac{(p-1)(q-1)+b}{2}&\text{if $q>1$ or if $q=1$ and $b>0$}\\ 0&\text{if $q\in\{0,-1\}$ or if $(q,b)\in\{(1,0),(-2,p-1)\}$}\\ \frac{(p-1)(q+1)+b}{2}&\text{if $q<-2$ or if $q=-2$ and $b<p-1$}\end{cases}\]
Proof.: The proof is similar to that of Theorem 4 in [10]. We first assume that \(q\geq 0\); the case of \(q<0\) follows from this by taking mirrors. We consider cases based on the value of \(\epsilon(K)\).
If \(\epsilon(K)>0\), consider the points \(A=(0,\tau(K))\), \(B=(p-1,\tau(K))\), and \(B^{\prime}=(p-1,1-\tau(K))\). When the non-compact component of \(\tilde{\gamma}_{K}\) is pulled tight the first intersection with \(\{0\}\times\mathbb{R}\) occurs just above \(A\) and the last intersection with \(\{p-1\}\times\mathbb{R}\) occurs just below \(B^{\prime}\); it is clear that after applying \(f_{p,q,b}\) the first intersection with \(\{0\}\times\mathbb{R}\) occurs just above \(f_{p,q,b}(A)\) and the last intersection with \(\{0\}\times\mathbb{R}\) occurs just below \(f_{p,q,b}(B^{\prime})\); see Figure 35. We now have that
\[2\tau(K_{p,q,b})-1 =y(f_{p,q,b}(A))-y(f_{p,q,b}(B^{\prime}))\] \[=\left[y(f_{p,q,b}(A))-y(f_{p,q,b}(B))\right]+\left[y(f_{p,q,b}(B ))-y(f_{p,q,b}(B^{\prime}))\right].\]
The first term in this sum is \(pq-q+b\) by Lemma 6.7 and the second term is \(p(2\tau(K)-1)\) since \(f_{p,q,b}\) scales the distance between lattice points in the same column by a factor of \(p\). Solving for \(\tau(K_{p,q,b})\) gives
\[\tau(K_{p,q,b})=p\tau(K)+\frac{(p-1)(q-1)+b}{2}.\]
If \(\epsilon(K)<0\), we instead consider the points \(A=(0,\tau(K)+1)\), \(B=(p-1,\tau(K)+1)\), and \(B^{\prime}=(p-1,-\tau(K))\). When the non-compact component of \(\tilde{\gamma}_{K}\) is pulled tight the first intersection with \(\{0\}\times\mathbb{R}\) occurs just _below_\(A\) and the last intersection with \(\{p-1\}\times\mathbb{R}\) occurs just _above_\(B^{\prime}\); it is clear that after applying \(f_{p,q,b}\) the first intersection with \(\{0\}\times\mathbb{R}\) occurs just below \(f_{p,q,b}(A)\) and the last intersection with \(\{0\}\times\mathbb{R}\) occurs just above \(f_{p,q,b}(B^{\prime})\); see Figure 35. It follows that
\[2\tau(K_{p,q,b})+1 =y(f_{p,q,b}(A))-y(f_{p,q,b}(B^{\prime}))\] \[=\left[y(f_{p,q,b}(A))-y(f_{p,q,b}(B))\right]+\left[y(f_{p,q,b}(B ))-y(f_{p,q,b}(B^{\prime}))\right].\]
Once again the first term in this sum is \(pq-q+b\) by Lemma 6.7 and the second term is now \(p(2\tau(K)+1)\). Solving for \(\tau(K_{p,q,b})\) gives
\[\tau(K_{p,q,b})=p\tau(K)+\frac{(p-1)(q+1)+b}{2}.\]
If \(\epsilon(K)=0\), consider the points \(A=(0,0)\), \(B=(p-1,0)\), and \(B^{\prime}=(p-1,1)\). The non-compact component of \(\tilde{\gamma}_{K}\) is homotopic to the horizontal line that passes just above \(A\) and \(B\). If \(q=0\) or \((q,b)=(1,0)\) then \(f_{p,q,b}=f_{p;m}\) for some slope \(m\) with \(0<m<\frac{1}{p-1}\); in this case it is clear that no lattice points cross the horizontal line just above \(A\) when \(g_{p;m}\) is applied, so the image under \(f_{p;m}\) of this line is still homotopic to a horizontal line and \(\tau(K_{p,q,b})=0\). If \(q>1\) or if \(q=1\) and \(b>0\) then \(f_{p,q,b}=f_{p;m}\) for some \(m>\frac{1}{p-1}\). We can homotope the non-compact component of \(\tilde{\gamma}_{K}\) so that it passes just above \(A\) and just below \(B^{\prime}\), as in Figure 35, and it is clear that the first intersection with \(\{0\}\times\mathbb{R}\) of the image under \(f_{p;m}\) occurs just above \(f_{p;m}(A)\) and the last occurs just below \(f_{p;m}(B^{\prime})\), so
\[2\tau(K_{p,q,b})-1 =y(f_{p,q,b}(A))-y(f_{p,q,b}(B^{\prime}))\] \[=[y(f_{p,q,b}(A))-y(f_{p,q,b}(B))]+[y(f_{p,q,b}(B))-y(f_{p,q,b}(B^ {\prime}))]\] \[=[pq-q+b]+[-p].\]
From this we find that \(\tau(K_{p,q,b})=\frac{(p-1)(q-1)+b}{2}\).
Finally, we check the case of \(q\leq 0\) using the fact that the mirror of \(B(p,q,b)\) is \(B(p,-q-1,p-b-1)\) and the fact that mirroring flips the sign of \(\tau\) and \(\epsilon\). If \(\epsilon(K)=\pm 1\) then \(\epsilon(\overline{K})=\mp 1\) and
\[\tau(K_{p,q,b}) =-\tau(\overline{K}_{p,-q-1,p-b-1})=-p\tau(\overline{K})-\frac{(p -1)((-q-1)\pm 1)+(p-b-1)}{2}\] \[=p\tau(K)-\frac{(p-1)(-q\pm 1)-b}{2}=p\tau(K)+\frac{(p-1)(q\mp 1)+b }{2}\]
as desired. A similar computation, which we omit, checks the case that \(\epsilon(K)=0\) when \(q<0\).
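As a worked illustration of Lemma 6.7 and Theorem 6.8, return to the satellite \(K_{4,2,1}\) of Figure 33, where the companion \(K=T_{2,3}\) has the standard values \(\tau(K)=1\) and \(\epsilon(K)=1\). Lemma 6.7 gives
\[y(f_{4,2,1}(A))-y(f_{4,2,1}(B))=pq-q+b=8-2+1=7,\]
and Theorem 6.8, with the upper sign since \(\epsilon(K)=1\), gives
\[\tau(K_{4,2,1})=4\cdot 1+\frac{(4-1)(2-1)+1}{2}=4+2=6,\]
while Theorem 6.6 gives \(\epsilon(K_{4,2,1})=\epsilon(K)=1\).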
### Mazur satellites
We have seen that for one-bridge braid patterns, the immersed curve invariant for a satellite knot can be obtained from that of the companion knot by performing a diffeomorphism in a covering space. Unfortunately this is not always the case, even for \((1,1)\)-satellites. Consider the Mazur pattern \(M\), pictured along with its genus one doubly pointed Heegaard diagram in Figure 36. We can use Theorem 1.1 to compute the complex \(CFK_{\mathcal{R}}\) associated with \(M(T_{2,3})\), the Mazur satellite of the right-handed trefoil; the resulting complex is shown in Figure 37. The immersed multicurves representing this complex, also shown in the figure, can be obtained from the complex following the algorithm in [1]. Note that the resulting curve has more than one component, even though the curve for the trefoil is connected, indicating that there is no hope of recovering this curve by a plane transformation. It is an interesting question whether there is some more complicated geometric operation to recover the immersed multicurve for a Mazur satellite directly from the immersed multicurve for the companion, although we do not pursue this in the present paper.
Despite this difficulty, Theorem 1.1 can be useful in performing computations with the Mazur pattern and other (1,1) patterns. We will demonstrate this by reproving the \(\epsilon\)-invariant and \(\tau\)-invariant formulas for Mazur satellite knots derived by Levine [16]. Levine derived these formulas by first computing
the bimodule \(\widehat{CFDD}(X_{M})\) of the exterior \(X_{M}\) of the Mazur pattern using the arc-slides algorithm that is developed in [14] and is implemented in a Python script by [11]. In theory, it suffices to analyze \(\widehat{CFDD}(X_{M})\boxtimes\widehat{CFA}(S^{3}\setminus\nu(K))\)
Figure 36. A doubly pointed bordered Heegaard diagram (right) for the Mazur pattern (left).
Figure 35. The first intersection of the non-compact component of \(\tilde{\gamma}_{K}\) with \(\{0\}\times\mathbb{R}\) and the last intersection with \(\{p-1\}\times\mathbb{R}\) give rise to the first and last intersection with \(\{0\}\times\mathbb{R}\) after applying \(f=f_{p,q,b}\)
to derive both formulas. However, this approach is hindered by its computational complexity since \(\widehat{CFDD}(X_{M})\) is large. Instead, by taking box-tensor products of the bimodule with appropriate bordered invariants, Levine partially computed the hat-version knot Floer chain complexes of Mazur satellite knots \(M(K)\) to obtain the \(\tau\)-invariant formula, and then further deduced the \(\epsilon\)-invariant of \(M(K)\) by computing and examining the hat-version knot Floer chain complexes of \((2,\pm 1)\)-cables of the Mazur satellite knots. In this subsection, we present an alternate proof for both formulas using the immersed-curve technique. While our approach is ultimately built on computing \(\widehat{CFDD}(X_{M})\boxtimes\widehat{CFA}(S^{3}\backslash\nu(K))\) too, we circumvent the complexity in obtaining the bimodule and in performing and simplifying the box-tensor product. Instead we use Theorem 1.1 to analyze the \(\mathbb{F}[U,V]/UV\)-version knot Floer chain complexes of Mazur satellite knots using immersed curve diagrams.
**Theorem 6.9** (Theorem 1.4 of [11]).: _Let \(M\) be the Mazur pattern. Then_
\[\epsilon(M(K))=\begin{cases}0&\epsilon(K)=0\\ 1&\text{otherwise},\end{cases}\]
Figure 37. (Top left): A lift of the immersed genus one Heegaard diagram obtained by pairing a (1,1)-diagram for the Mazur pattern with the immersed curve for the right handed trefoil. (Bottom): The chain complex \(CFK_{\mathcal{R}}\) computed from this diagram. (Top right): The immersed multicurve representing this complex.
_and_
\[\tau(M(K))=\begin{cases}\tau(K)+1&\tau(K)>0\text{ or }\epsilon(K)=-1\\ \tau(K)&\text{otherwise}.\end{cases}\]
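For example, for the right-handed and left-handed trefoils, with the standard values \(\tau(T_{2,\pm 3})=\pm 1\) and \(\epsilon(T_{2,\pm 3})=\pm 1\), Theorem 6.9 gives
\[\epsilon(M(T_{2,3}))=\epsilon(M(T_{2,-3}))=1,\qquad\tau(M(T_{2,3}))=1+1=2,\qquad\tau(M(T_{2,-3}))=-1+1=0,\]
while for any companion with \(\epsilon(K)=0\), such as the unknot, \(\epsilon(M(K))=\tau(M(K))=0\).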
In the proof of Theorem 6.9, we will examine the knot Floer chain complex defined from the pairing diagram of the bordered Heegaard diagram \(\mathcal{H}_{M}\) for \(M\) shown in Figure 36 and the immersed curve \(\alpha_{K}\) of the companion knot \(K\). For convenience, we will be working with the lifts of the curves in the universal cover of the doubly-marked torus; the curves will be denoted by \(\tilde{\beta}_{M}\) and \(\tilde{\alpha}_{K}\) throughout the proof.
Moreover, we only need the portion of \(\alpha_{K}\) that indicates the values of \(\tau(K)\) and \(\epsilon(K)\). Specifically, in the universal cover \(\mathbb{R}^{2}\), the lines \(\mathbb{Z}\times\mathbb{R}\) divide \(\tilde{\alpha}_{K}\) into segments, and there is only one segment connecting \(\{i\}\times\mathbb{R}\) and \(\{i+1\}\times\mathbb{R}\) for each \(i\); we call it _the unstable segment_ as it corresponds to the unstable chain (defined in [1, Theorem 11.26]). The sign of the slope of the unstable segment is equal to the sign of \(\tau(K)\), and the minimum number of intersection points between the unstable segment and the horizontal grid lines \(\mathbb{R}\times\mathbb{Z}\) is equal to \(2|\tau(K)|\). The invariant \(\epsilon(K)\) can be read off from the segment immediately following the unstable segment as we traverse the unstable segment from left to right: if the next segment is a cap that turns right, \(\epsilon(K)=1\); if the next segment is a cap that turns left, then \(\epsilon(K)=-1\); otherwise, \(\epsilon(K)=0\). Note that by the symmetry of knot Floer chain complexes, the segment preceding the unstable segment is determined by the segment after the unstable segment. Apart from the unstable segment and the segments preceding and following it, the rest of \(\alpha_{K}\) will not affect the proof of Theorem 6.9.
Proof of Theorem 6.9.: When \(\epsilon(K)=0\), \(K\) is \(\epsilon\)-equivalent to the unknot \(U\), and hence the \(\tau\)- and \(\epsilon\)-invariants of \(M(K)\) coincide with those of \(M(U)\) [1, Definition 1 and Proposition 4]. Since \(M(U)\) is isotopic to the unknot, \(\epsilon(M(K))=0\) and \(\tau(M(K))=0\).
When \(\epsilon(K)\neq 0\), we will discuss the case \(\epsilon(K)=1\) and the case \(\epsilon(K)=-1\) separately. Within each case, we will further separate the discussion into two sub-cases depending on the value of \(\tau(K)\). We shall inspect the chain complex defined from the pairing diagram, and at the cost of an isotopy of the curves in the pairing diagram, we may assume every chain complex is reduced, i.e., each bigon in the pairing diagram contributing to a differential will cross some base point. The differentials obtained from bigons crossing \(z\) are called the _vertical differentials_, and the arrows are labeled by a power of \(V\) specifying the multiplicity of \(z\). Likewise, _horizontal differentials_ refer to those obtained from bigons crossing \(w\) and the arrows are labeled by an appropriate power of \(U\).
We begin with the case \(\epsilon(K)=1\). When \(\tau(K)>0\), the pairing diagram is shown in Figure 38. By direct inspection, the intersection points \(\{x_{i}\}_{i=1}^{2n+1}\) (with the vertical differentials) form a non-acyclic subcomplex of the hat-version knot Floer chain complex \(\widehat{CFK}(\mathcal{H}_{M}(\alpha_{K}))\): the cycle \(\sum_{i=0}^{n}x_{2i+1}\) survives in the homology \(\widehat{HF}(S^{3})\), and this cycle is the distinguished element of some vertically simplified basis in the sense of [1, Section 2.3]. Note that there is a horizontal arrow from \(y_{i}\) to \(x_{i}\) for each odd \(i\), which we write \(\partial^{horz}(y_{i})=Ux_{i}\). Since \(U(\sum_{i=0}^{n}x_{2i+1})=\partial^{horz}(\sum_{i=0}^{n}y_{2i+1})\), the distinguished element \(\sum_{i=0}^{n}x_{2i+1}\) is a boundary with respect to the horizontal differential. Therefore, \(\epsilon(M(K))=1\)
by [12, Lemma 3.2 and Definition 3.4]. Also, from the aforementioned subcomplex, it is standard to see that the \(\tau\)-invariant of \(M(K)\) is equal to the Alexander grading of \(x_{1}\), which we denote by \(A(x_{1})\). To get \(A(x_{1})\), we apply an algorithm given in [10, Lemma 4.1]. Specifically, up to an isotopy, the planar pairing diagram admits an involution (which swaps the \(w\)'s and \(z\)'s) given by a 180-degree rotation about a center point \(c\in\tilde{\beta}_{M}\cap\tilde{\alpha}_{K}\). Let \(l_{c,x_{1}}\) denote the path on \(\tilde{\beta}_{M}\) from \(c\) to \(x_{1}\), and let \(\delta_{w,z}\) denote the set of equivariant short arcs in the complement of \(\tilde{\alpha}_{K}\) connecting \(w\) to \(z\) within each fundamental region. Then \(A(x_{1})\) is equal to the algebraic intersection number \(l_{c,x_{1}}\cdot\delta_{w,z}\). To express \(l_{c,x_{1}}\cdot\delta_{w,z}\) in terms of \(\tau(K)\), note that the
Figure 38. The pairing diagram and relevant differentials for \(M(K)\) when \(\epsilon(K)=1\) and \(\tau(K)>0\). As an illustration for finding the differentials, some bigons contributing to the listed differentials are shaded. The y-coordinates of the horizontal grid lines are labeled.
unstable segment stretches across \(2\tau(K)\) fundamental regions vertically, with its midpoint sharing the same height with \(c\). For a clearer discussion, we parametrize the plane so that \(c\) is the origin \((0,0)\), the side length of each fundamental region is \(1\); see Figure 38. Observe that \(l_{c,x_{1}}\) consists of \(\tau(K)-1\) copies of lifts of \(\beta_{M}\), together with a path from the point \((0,\tau(K)-1)\) to \(x_{1}\). Each copy of \(\beta_{M}\) intersects the \(\delta_{w,z}\)'s algebraically once, and the path from the point \((0,\tau(K)-1)\) to \(x_{1}\) intersects the \(\delta_{w,z}\)'s algebraically twice; in sum, \(l_{c,x_{1}}\cdot\delta_{w,z}=\tau(K)+1\). So, \(\tau(M(K))=\tau(K)+1\). When \(\tau(K)\leq 0\), the corresponding pairing diagram is shown in Figure 39. The intersection points \(\{x_{i}\}_{i=1}^{2n+1}\) generate a sub-complex of \(\widehat{CFK}(\mathcal{H}_{M}(\alpha_{K}))\): the cycle represented by \(x_{2n+1}\) survives in \(\widehat{HF}(S^{3})\) and is the distinguished element of a vertically simplified basis. One can see that \(x_{2n+1}\) is a boundary with respect to the horizontal differential since \(\partial^{horz}(\sum_{i=1}^{k}y_{i})=U^{2}x_{2n+1}\), implying \(\epsilon(M(K))=1\).
Figure 39. The pairing diagram and relevant differentials for \(M(K)\) when \(\epsilon(K)=1\) and \(\tau(K)\leq 0\).
Figure 40. The pairing diagrams and relevant differentials for \(M(K)\) when \(\epsilon(K)=-1\), separated into two cases by \(\tau(K)\).
by [11, Lemma 3.2 and Definition 3.4]. Using a similar argument as in the previous case, one may show \(\tau(M(K))=A(x_{2n+1})=l_{c,x_{2n+1}}\cdot\delta_{w,z}=\tau(K)\).
When \(\epsilon(K)=-1\), the proof is similar to the case when \(\epsilon(K)=1\). The pairing diagrams are given in Figure 40. When \(\tau(K)\geq 0\), \(\sum_{i=0}^{n}x_{2i+1}\) is the distinguished element of some vertically simplified basis. Since \(\partial^{horz}(\sum_{i=0}^{n}y_{2i+1})=U\sum_{i=0}^{n}x_{2i+1}\), we have \(\epsilon(M(K))=1\). For the \(\tau\)-invariant, we have \(\tau(M(K))=A(x_{1})=l_{c,x_{1}}\cdot\delta_{w,z}=\tau(K)+1\). When \(\tau(K)<0\), \(x_{2n+1}\) is the distinguished element of some vertically simplified basis. As \(x_{2n+1}=U\partial^{horz}(y)\), we know \(\epsilon(M(K))=1\). Finally, \(\tau(M(K))=A(x_{2n+1})=l_{c,x_{2n+1}}\cdot\delta_{w,z}=\tau(K)+1\).
|
2305.00546 | Making Changes in Webpages Discoverable: A Change-Text Search Interface
for Web Archives | Webpages change over time, and web archives hold copies of historical
versions of webpages. Users of web archives, such as journalists, want to find
and view changes on webpages over time. However, the current search interfaces
for web archives do not support this task. For the web archives that include a
full-text search feature, multiple versions of the same webpage that match the
search query are shown individually without enumerating changes, or are grouped
together in a way that hides changes. We present a change text search engine
that allows users to find changes in webpages. We describe the implementation
of the search engine backend and frontend, including a tool that allows users
to view the changes between two webpage versions in context as an animation. We
evaluate the search engine with U.S. federal environmental webpages that
changed between 2016 and 2020. The change text search results page can clearly
show when terms and phrases were added or removed from webpages. The inverted
index can also be queried to identify salient and frequently deleted terms in a
corpus. | Lesley Frew, Michael L. Nelson, Michele C. Weigle | 2023-04-30T18:16:06Z | http://arxiv.org/abs/2305.00546v1 | # Making Changes in Webpages Discoverable: A Change-Text Search Interface for Web Archives
###### Abstract.
Webpages change over time, and web archives hold copies of historical versions of webpages. Users of web archives, such as journalists, want to find and view changes on webpages over time. However, the current search interfaces for web archives do not support this task. For the web archives that include a full-text search feature, multiple versions of the same webpage that match the search query are shown individually without enumerating changes, or are grouped together in a way that hides changes. We present a change text search engine that allows users to find changes in webpages. We describe the implementation of the search engine backend and frontend, including a tool that allows users to view the changes between two webpage versions in context as an animation. We evaluate the search engine with U.S. federal environmental webpages that changed between 2016 and 2020. The change text search results page can clearly show when terms and phrases were added or removed from webpages. The inverted index can also be queried to identify salient and frequently deleted terms in a corpus.
Web archives, Information retrieval, Government documents, Versioned document collections
one matching page version at the Portuguese Web Archive,3 but this web archive actually holds additional matching versions of this webpage that are not included in the search results. Figure 2(c) shows a search of the 2016 End of Term archive on the Internet Archive's Wayback Machine,4 yet the version linked in the search result is from January 24, 2017, after inauguration day. Later, we will present a more effective search results page for changes in webpages, showing that the deletion occurred between February and March 2017.
Footnote 3: [https://arquivo.pt/](https://arquivo.pt/)
Footnote 4: [https://web.archive.org/EndOfTerm2016WebCrawls/search/](https://web.archive.org/EndOfTerm2016WebCrawls/search/)
In this work, we contribute a temporal search engine architecture and interface for web archive collections that allows users to search for changes on webpages and view those changes in context. The main contribution of the search engine architecture is the calculation of changed terms between mementos. The main contributions of the search
Figure 1. Two examples of web archive URI lookup of [https://www.niehs.nih.gov/health/topics/agents/index.cfm](https://www.niehs.nih.gov/health/topics/agents/index.cfm). URI lookup does not allow the user to search by page text.
(a) Archive-It collection SERP matching the query in Figure 3(a), showing title, URI, date, text snippet, metadata, replay link, and link to additional captures. Two captures matching the query are shown individually.
interface are a search engine that can query for changes, a search engine results page that shows multiple versions of a page as one result in a meaningful way, a sliding difference viewer that allows users to examine a page's text change over multiple time periods, and a difference animation that allows users to see changes in context.
The change text search engine backend was evaluated with the Environmental Data and Governance Initiative's (EDGI) federal environmental webpages data set (Nerner et al., 2016). Users can view the webpage changes on a search engine results page snippet. The indexing process can identify additional terms beyond those identified in the original EDGI study, and the temporal granularity of the deletions can be increased compared to that study's original four-year window.
## 2. Background and Related Work
The Memento Protocol (Memento et al., 2016) is the standard HTTP content negotiation protocol for web archives. Users request a specific webpage according to its URI on the live web, also known as its URI-R, along with a datetime. Memento-compliant web archive servers will query for and return the archived version of the webpage closest to the requested datetime. Each archived version of a webpage can be accessed directly with its own URI, also known as a URI-M. The archived versions of webpages are also known as _mementos_. Finally, a listing of all archived versions of a webpage on a specific server can be queried. This listing is called a _TimeMap_, or URI-T. Mementos are commonly referred to as being a part of the past web, while a still-working URI-R would belong to the live web. There are multiple web archives, including national web archives like the Portuguese Web Archive, subscription-based web archives for organizations like Archive-It, and more comprehensive web archives like the Internet Archive's Wayback Machine. While the Memento Protocol is standard in web archives, there is no standard level of support for full-text search. Web archives contain multiple versions of pages
Figure 3. Some web archives have full-text search with various filtering features over small collections or over the entire web archive. None of the search interfaces allow for users to search for terms that have been removed from webpages.
over time, yet existing full-text temporal search engines for web archives do not successfully address how to present multiple versions of the same page.
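To make the Memento negotiation described above concrete, the following is a minimal Python sketch (using the `requests` library) against the Wayback Machine's Memento-compliant endpoints; the URI-R and datetime shown are illustrative examples only.

```python
import requests

urir = "https://www.niehs.nih.gov/health/topics/agents/index.cfm"  # example URI-R

# Datetime negotiation: request the memento (URI-M) closest to a desired
# datetime by sending an Accept-Datetime header; the archive redirects to it.
resp = requests.get(
    "https://web.archive.org/web/" + urir,
    headers={"Accept-Datetime": "Wed, 05 Oct 2016 00:00:00 GMT"},
)
print("Selected URI-M:", resp.url)

# TimeMap (URI-T): an enumeration of all mementos of the URI-R at this archive.
timemap = requests.get("https://web.archive.org/web/timemap/link/" + urir)
print(timemap.text.splitlines()[:5])
```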
Web archives contain a vast amount of untapped potential for analyzing change over time. Webpages change at different rates according to features such as their domain and the depth of the URL [(3)]. Existing data sets of webpage crawls commonly used for academic purposes, such as ClueWeb [(28)] and Common Crawl,5 aim to collect snapshots of a large amount of unique URLs. Since these data sets did not aim to collect multiple versions of the same page, there is no guarantee of any particular page being crawled regularly, which would hinder the ability to analyze change over time. Additionally, in order for an analysis of webpage change to be of consequence, the changes on those webpages need to be meaningful. Clearly, examining topic-based change over time using web archives is limited because searching for pages with specific changes is not currently possible.
Footnote 5: [https://commoncrawl.org/](https://commoncrawl.org/)
In order to capitalize on using web archives to examine change over time, temporal search interfaces must be extended to support searching for term and phrase changes. These search interfaces will also need to effectively show the changes to the users. Finally, the search tool will need to be able to find meaningful changes in a data set of webpages relevant to existing digital humanities research.
### Temporal Search Engines for Web Archives
Full-text search is not a feature available in every web archive. Some web archives, such as the Portuguese Web Archive[(25)], the UK Web Archive,6 and collections in Archive-It,7 do support full-text search. Each of these systems is independent; there is no full-text search engine that can search all of the versions of a webpage across multiple web archives. One of the major challenges in creating a temporal search engine for web archives is that each archived web page may have multiple versions. Ready-made information retrieval systems for versioned document collections do not exist. Indexing, presenting search results, and displaying changes in context all need to be addressed when creating such a system.
Footnote 6: [https://www.webarchive.org.uk/ukwa/](https://www.webarchive.org.uk/ukwa/)
How versioned document collections are indexed is vital to the discovery of changes between versions. Berberich et al. [(5)] developed a temporal coalescing framework that orders all versions of a document over time and then assigns each document a validity range rather than a single datetime. This allowed for the amount of document change to be quantified between versions. While documents with a certain similarity threshold were combined in the index to save space, the separate document versions were not combined in the search engine results page, and searching by change was not possible. While temporal date ranges were not implemented when Berberich developed the validity range framework, date range fields are now a standard part of Apache Lucene [(35)], the premier open-source search backend.8
Footnote 7: [https://archive-it.org/](https://archive-it.org/)
Footnote 8: [https://lucene.apache.org/](https://lucene.apache.org/)
A few temporal search engines have investigated how to present multiple versions of a page in the search results. Melo et al. [(25)] added a backend parameter to the Portuguese Web Archive search system to limit the number of versions displayed on a search results page. Kiesel et al. [(20)] performed a qualitative evaluation on their personal web archiving localized search system, and found that including every version of a web page introduced clutter into the search results pages that affected usability. Major [(24)] also identified repeated captures of the same URI in search results as problematic. On the other hand, Jackson et al. [(15)] chose to include every relevant version of a web page in their search results page ordered by time. This was because grouping web page versions would hide change over time. In all
of these systems, searching by change is not possible, and the only way to view the changes between versions is by manual inspection.
### Showing Change Between Document Versions
Temporal search engines for web archives do not incorporate the changes between versions of webpages into their results, but some other tools do exist to help users find and view changes on known pages. Sherratt et al. [34] created a tool to help users find changes in a specific web page's version history as a part of the GLAM Workbench web archives Jupyter notebook collection. Because this tool does not index any web page content, each URL must be searched individually. The tool also only includes a linear search option, which hinders its speed. The results highlight the query term in context, but not in context of the previous versions of the page. Another tool, WikiBlame,9 allows for users to search for changes in a specific Wikipedia page. While this tool does allow for binary search, it does not index any page content, so only one page can be searched at a time rather than an entire group of related pages. Wikipedia includes a differences tool as part of its version history viewer,10 allowing users to view changes in a static context.
Footnote 9: [http://wikipedia.ramselehof.de/wikiblame.php](http://wikipedia.ramselehof.de/wikiblame.php)
Footnote 10: [https://en.wikipedia.org/wiki/Help:Diff](https://en.wikipedia.org/wiki/Help:Diff)
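The binary-search idea offered by WikiBlame can be sketched for any archived page: given a chronologically ordered list of URI-Ms (for example, taken from a TimeMap), repeatedly probe the middle memento for the query term to locate the first capture in which the term no longer appears. The sketch below is a simplification that assumes a single present-to-absent transition and uses a naive substring test rather than proper text extraction.

```python
import requests

def term_present(urim: str, term: str) -> bool:
    """Naive check: fetch the memento and look for the term in its HTML."""
    return term.lower() in requests.get(urim).text.lower()

def first_memento_without(urims: list[str], term: str) -> str:
    """Binary search over chronologically ordered URI-Ms; assumes the term is
    present in the earliest memento and absent in the latest one."""
    lo, hi = 0, len(urims) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if term_present(urims[mid], term):
            lo = mid + 1   # the deletion happened after this capture
        else:
            hi = mid       # the deletion happened at or before this capture
    return urims[lo]
```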
The Internet Archive Wayback Machine allows users to compare two different versions of a webpage using its Changes tool.11 This tool, shown in Figure 4, was built upon the Environmental Data and Governance Initiative's Web-Monitoring-Diff suite.12 This service is not currently connected with any tool that allows users to search for changes across versions, and only lets users compare versions from the Internet Archive. There is currently no version of the Changes tool that allows users to compare versions of webpages that exist across multiple archives. The Changes tool is only able to compare two versions of a webpage at one time, and the comparison is currently presented in a side by side static context.
Footnote 11: [https://web.archive.org/web/changes/](https://web.archive.org/web/changes/)
Footnote 12: [https://github.com/edgi-govdata-archiving/web-monitoring-diff](https://github.com/edgi-govdata-archiving/web-monitoring-diff)
Other types of versioned document collections besides web archives have led to tools that allow for comparison for multiple document versions. For example, Henley et al. [13] developed a tool called Yestercode that allows programmers to use a slider to navigate between different versions of their code and display the differences between consecutive versions. The collaborative document writing tool DocuViz [38] aims to help users visualize how documents evolved, and Perez-Messina et al. [29] developed a tool to visualize the origin of text segments in collaborative documents.
Careful design choices are necessary to help users spot changes. Preattentive processing is a core technique employed during design to aid in rapid detection of changes, such as in estimation situations [12]. One system that animates changes in text as a complement to displaying the changes side-by-side is Diffamation [7]. This system shows all text changes between two versions of a document in parallel animation with navigation in order to help the user understand where, what, and how much text has changed. Another system that uses text animation to show differences between document versions is SlideDiff [8]. SlideDiff shows changes to text and media in slide presentation versions, even showing a simulated mouse cursor in order to draw the user's attention to the position of each of the changes and help the user infer the intent of the editor due to the animation appearing more life-like. These goals are also applicable to viewing and understanding edits on webpages. Jatowt et al. [17] created the Journey to the Past framework that included a browser that allowed users to search for changes in a web page across web archives and animate the differences that matched the query terms. The interface for this browser is similar to video replay, with buttons to help the user navigate between versions and control the animation replay. Because this framework does not include indexing, searching for a changed term or phrase must be done one page at a time, rather than at the collection level. Adar et al. [2] developed
the Zoetrope system to allow users to search individual web page versions for terms and show the differences over time in a stop-motion style of animation. This system relied on local crawling, and the queries relied on closest matches in the DOM rather than indexing; integrating full-text search with Zoetrope's visualization capabilities was identified as future work.
### Changes on Federal Websites
The archival of federal websites, especially at the end of a president's term, is an important task undertaken by multiple organizations. The End of Term Web Archive is created through a partnership between five organizations, including the Internet Archive and the Library of Congress (Wojcik et al., 2016; Wang et al., 2017). This web archive includes a full-text search feature,13 but each end of term crawl includes only one capture of each web page. Phillips et al. (Phillips et al., 2017) compared the 2008 and 2012 end of term collections to identify changes in crawl dates and webpage addresses, but individual terms were not analyzed. Nost et al. (Nost et al., 2017), on behalf of the Environmental Data and Governance Initiative (EDGI), compared the change in 56 pre-chosen environmental terms and phrases between 2016 and 2020 using the web archive holdings at the Internet Archive. They identified a list of approximately 10,000 webpages that had versions in both 2016 and 2020. The EDGI
Figure 4. The Internet Archive’s Wayback Machine Changes tool shows users the differences between two captures. It only works for captures at this one web archive, and there is no way to search the changes to find the dates and times.
analysis focused on highlighting collection-level trends, rather than creating an interface to allow users to navigate to changes in individual pages. Both the 2008/2012 and 2016/2020 analyses relied on changes between versions captured four years apart, but many additional versions of these pages exist across multiple web archives that have not been analyzed for changes.
## 3. Design Motivation
Developing a mental model of web archives and learning how to navigate the past web are not trivial tasks for many users. As recently as 2019, Abrams et al. (Abrams et al., 2019) found that users had difficulty distinguishing between whether they were on the live web or past web. They also found that users' lack of understanding of web archives hindered their success as much as an ineffective user interface. Full-text search is therefore an advanced feature that will only benefit users with a strong understanding of the past web.
One group of candidate users for full-text search in web archives is journalists. There has been no prior analysis of journalists' use of web archives (Kal
We searched Google News for the phrase "Wayback Machine" using the GNews Python API,14 and collected 500 articles matching this query on a biweekly basis from May to July, 2022. Out of the 500 collected articles, 106 of them used web archives as evidence. We manually categorized these articles by user task. The journalists stated how they used web archives in their articles, which is how we created the task codes. Similar to Teevan, one author coded the dataset. As shown in Figure 5, journalists use web archives to view unavailable pages, but they also frequently use web archives to view change over time. These users wanted to view term and phrase additions, deletions, and the associated content lifespan (Krishnan et al., 2020).
Footnote 14: [https://news.google.com/](https://news.google.com/), [https://github.com/ranahaani/GNews/](https://github.com/ranahaani/GNews/)
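A minimal sketch of the collection step with the GNews package (footnote 14) follows; the language, country, and per-call result cap shown here are assumptions rather than the study's exact settings, and the result keys follow the package's documented output format.

```python
from gnews import GNews  # https://github.com/ranahaani/GNews

# Query Google News for articles mentioning the Wayback Machine.
google_news = GNews(language="en", country="US", max_results=100)
articles = google_news.get_news('"Wayback Machine"')

for article in articles:
    # Each result is a dict; key names follow the gnews package's output.
    print(article.get("published date"), article.get("title"), article.get("url"))
```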
Currently, finding additions and deletions of terms and phrases is only possible by manually inspecting multiple versions of a webpage. Since journalists do have a strong mental model of web archives, their tasks would be made easier by a search engine that allowed them to find the changes in content over time.
While journalists are the only group of users we studied, they are not the only users who would benefit from such a tool. The IIPC Access Working Group (Han et al., 2020) identified additional professional users including researchers studying change over time in computational and humanities contexts, professionals investigating data evolution such as in tourism or real estate, and lawyers in civil trademark cases and patent cases.
Rather than treating the need to display multiple versions of webpages in search results as a hindrance, a temporal search engine for web archives should make finding and showing the changes between those versions one of its main features.
## 4. Architecture
The architecture for the change text search engine consists of three levels, as shown in Figure 6. First, webpages with multiple versions must be acquired. Next, the webpages are indexed for both text content and replay. The changes in the text content are calculated after the initial indexing. Finally, the user interface for a search engine provides the user with a way to discover the changes in the webpages, and replay those changes in context.
### Document Acquisition
The documents for the change text search engine need to be in WARC format. While users can replay public web archives' holdings, they cannot access the original WARC files. Indexing these public holdings into the change text search engine is therefore not possible without a tool that can turn a URI-M into a WARC file. Hypercane (Han et al., 2020) is a tool for interacting with web archive collections. One feature of the Hypercane tool is to _synthesize_ a WARC from a given URI-M. The WARC file output by Hypercane includes the original HTML of the web page in raw form, along with all available embedded resources. The HTML is necessary for creating an inverted index of the changes in page content, while the embedded resources are necessary for merged replay showing the differences between page versions in context. Users and organizations with existing WARC collections would not need to perform any additional document acquisition.
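Hypercane handles this synthesis in practice; as an illustration of the underlying idea only, the sketch below fetches the raw HTML of a memento (using the Wayback Machine's `id_` raw-content URL form) and wraps it in a WARC response record with the `warcio` library. The URI-M is an example, and this simplified version does not gather embedded resources as Hypercane does.

```python
import requests
from io import BytesIO
from warcio.warcwriter import WARCWriter
from warcio.statusandheaders import StatusAndHeaders

# Example URI-M; the "id_" flag asks the Wayback Machine for unrewritten content.
urim = ("https://web.archive.org/web/20161005000000id_/"
        "https://www.niehs.nih.gov/health/topics/agents/index.cfm")
resp = requests.get(urim)

with open("memento.warc.gz", "wb") as out:
    writer = WARCWriter(out, gzip=True)
    http_headers = StatusAndHeaders(
        "200 OK",
        [("Content-Type", resp.headers.get("Content-Type", "text/html"))],
        protocol="HTTP/1.1",
    )
    record = writer.create_warc_record(
        urim, "response", payload=BytesIO(resp.content), http_headers=http_headers
    )
    writer.write_record(record)
```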
### Indexing
The document collection must be indexed in order to support querying and replay. The first index generated is the Apache Lucene index that contains the tokenized content of the pages, along with additional page metadata. The UK Web Archive WARC indexer (Miller et al., 2020) is an existing tool that can properly index WARCs for Lucene. Apache Solr provides
an interface for working with Lucene.15 We extended the Solr schema originating from SolrWayback [9] to support new change text fields (for sets of added and deleted terms) prior to indexing. We wrote a change text calculation script to populate these new fields. This Java code interfaces directly with the Lucene index to calculate the sets of added and deleted terms as well as the document temporal validity ranges as defined by Berberich et al. [5]. Documents are grouped according to their canonicalized URL, which is an existing field populated during the initial indexing.
Footnote 15: [https://solr.apache.org/](https://solr.apache.org/)
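The authors' change-text calculation is implemented in Java against the Lucene index; the self-contained Python sketch below only illustrates the core idea of grouping captures by canonicalized URL, computing added and deleted term sets between consecutive versions, and assigning validity ranges in the spirit of Berberich et al. The toy data and field names are illustrative, not the system's actual schema.

```python
from collections import defaultdict

# Toy captures: (canonical URL, capture date, tokenized page text).
captures = [
    ("example.gov/page", "2016-10-05", "climate change adaptation plan".split()),
    ("example.gov/page", "2017-03-01", "adaptation plan".split()),
    ("example.gov/page", "2018-01-15", "resilience plan".split()),
]

by_url = defaultdict(list)
for url, dt, tokens in captures:
    by_url[url].append((dt, set(tokens)))

for url, versions in by_url.items():
    versions.sort(key=lambda v: v[0])      # chronological order per URL
    for i in range(1, len(versions)):
        prev_dt, prev_terms = versions[i - 1]
        curr_dt, curr_terms = versions[i]
        added = curr_terms - prev_terms     # terms appearing in the newer version
        deleted = prev_terms - curr_terms   # terms removed from the older version
        # The older version is "valid" from its capture date until the next capture.
        print(url, f"valid [{prev_dt} .. {curr_dt})",
              "added:", sorted(added), "deleted:", sorted(deleted))
```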
The second index generated is the PyWB [21] CDX that contains the information needed to replay the pages. Since PyWB's default behavior is to make a copy of all indexed WARCs in a single folder, this behavior was turned off in order to reduce disk space usage. The folders containing the WARCs can simply be added into the PyWB configuration file to enable successful replay without duplication.
### User Interface
A user can interact with the change text term index through a search interface, which we built using Solarium,16 a PHP Solr interface. A user may query for a deleted term, a deleted phrase, an added term, or an added phrase. Deleted terms and phrases can be detected whether the term is fully, or only partially, removed from the webpage. The type of change, along with the term or phrase, is transformed into a valid Lucene query over the appropriate change text fields, and then the search results are returned and displayed in a search engine results page, shown in Figure 7.
Footnote 16: [https://github.com/solariumphp/solarium](https://github.com/solariumphp/solarium)
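For illustration, an equivalent query can be issued from Python against Solr's select endpoint rather than through the PHP Solarium client used here; the core name and the change text field names below are hypothetical stand-ins for the schema described above.

```python
import requests

SOLR = "http://localhost:8983/solr/changetext/select"   # hypothetical core name

def search_changes(change_type, text, rows=10):
    # Map the UI choice onto the (hypothetical) change text fields.
    field = {"deleted term": "deleted_terms", "deleted phrase": "deleted_terms",
             "added term": "added_terms", "added phrase": "added_terms"}[change_type]
    # Quote multi-word input so Lucene treats it as a phrase query.
    q = f'{field}:"{text}"' if " " in text else f"{field}:{text}"
    params = {"q": q, "rows": rows, "wt": "json"}
    return requests.get(SOLR, params=params, timeout=30).json()["response"]["docs"]

docs = search_changes("deleted phrase", "endangered species")
```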
Figure 6. The architecture of the change text search engine. Level 1 consists of document acquisition. Level 2 consists of the documents and indices. Level 3 consists of the user interface.
Figure 8: The sliding difference viewer shows the term ’scientific’ was deleted from the page. [https://www.niehs.nih.gov/health/topics/agents/index.cfm](https://www.niehs.nih.gov/health/topics/agents/index.cfm) in 2017, along with the context of the deletion.
Figure 7: Change text search interface. 1, 2, and 4: individual replay links to page mementos; 3: the diff between the pre and post deletion versions; 5: content lifespan calculation; 6: the link to the sliding diff viewer across all indexed versions of the page; 7: the link to the deletion animation
The change text search interface allows the user to understand the differences between multiple versions of the same page, unlike the current web archive interfaces shown in Figure 2. Each search engine result shows information about when the term was added and/or removed, along with a snippet that uses hues to show the difference between the two consecutive versions most relevant to the query. The search engine result includes links to replay these versions individually via standard PyWB replay, a link to compare all page versions between the addition and deletion as shown in Figure 8, and a link to replay the pre- and post-deletion versions simultaneously as shown in Figure 9. In the first tool, the sliding difference tool, the user can skip past identical mementos with the fast forward and rewind buttons. We created this sliding difference viewer by using the plaintext indexed content from Lucene. The viewer functions using a Solarium query, the PHP difference library [6], and some additional JavaScript. The second tool, the dual replay tool, also lets the user skip past identical mementos with the fast forward and rewind buttons, and
shows an animation of the difference in context. This tool is shown in action at [https://youtu.be/qHSVvcubuYo](https://youtu.be/qHSVvcubuYo). The differences in the animation use hues. Both the animation itself along with the highlighted text colors lend themselves to pre-attentive processing of the changes. The animation jumps to each change in turn, in contrast to the static Changes Tool on the Wayback Machine. The animation is also meant to give the illusion of the changes happening in real time. In order to create this animation, we used EDGI's Python HTML difference library to calculate and combine the differences in a static context, and then extended the code to generate the HTML and JavaScript to animate the merged pages. We support successful difference calculation between page versions that originated from different web archives by using bannerless replay with PyWB. To draw the user's attention to each change, the page jumps to each change one-by-one, animates one change at a time, and pauses before jumping to the next change.
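A minimal sketch of how such merged difference markup can be produced is shown below, using Python's standard difflib rather than the EDGI library; the resulting `<del>`/`<ins>` spans are the elements that the highlighting and animation would target.

```python
from difflib import SequenceMatcher

def merged_diff_html(old_tokens, new_tokens):
    """Return HTML containing both versions, wrapping deletions in <del>
    and insertions in <ins> so they can be styled or animated."""
    out = []
    for op, i1, i2, j1, j2 in SequenceMatcher(None, old_tokens, new_tokens).get_opcodes():
        if op == "equal":
            out.append(" ".join(old_tokens[i1:i2]))
        else:
            if op in ("replace", "delete"):
                out.append("<del>" + " ".join(old_tokens[i1:i2]) + "</del>")
            if op in ("replace", "insert"):
                out.append("<ins>" + " ".join(new_tokens[j1:j2]) + "</ins>")
    return " ".join(out)

html = merged_diff_html("the scientific study of agents".split(),
                        "the study of agents".split())
```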
## 5. Evaluation
### Finding changes on federal webpages
Federal webpages with changes between 2016 and 2020, as calculated by EDGI, form an excellent data set for evaluation of a change text search engine. The EDGI data set consists of about 40,000 web page addresses. Approximately 10,000 of these web pages have versions in 2016 and 2020 at the Internet Archive. First, we expanded the data set to include more of the original pages by considering additional web archives. Next, we filtered the results so that only pages with successful HTTP status codes (HTTP 200 OK) are kept for indexing. Finally, we prioritized pages with known term changes, and we identified additional versions of these pages inside of the four-year window with salient changes for indexing.
EDGI examined each monitored page to determine if a capture from both the first half of 2016 and the first half of 2020 existed, which they defined as a paired-page sample. Since only about 10,000 pages of the 40,000 monitored pages had paired mementos at the Internet Archive, there are about 30,000 pages to examine for paired mementos using other web archives. We generated a TimeMap for each of these 30,000 web page addresses using MemGator (Beng et al., 2019), a memento aggregation service. Examining the TimeMaps, about 8,500 of the 30,000 pages do have newly found paired mementos. Interestingly, about half of these new pairs exist at the Internet Archive. It is possible that these mementos were added after some kind of delay or embargo period. The other pairs directly rely on web archives besides the Internet Archive for at least one of the mementos in the 2016/2020 pair.
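The pair-finding step can be sketched as follows: query an aggregator for a link-format TimeMap and check for captures in both halves of the window. The MemGator base URL below is a placeholder for a local instance, and the parsing is deliberately simplified.

```python
import re
import requests
from email.utils import parsedate_to_datetime

MEMGATOR = "http://localhost:1208/timemap/link/"   # placeholder for a local MemGator instance
MEMENTO_RE = re.compile(r'<([^>]+)>;\s*rel="[^"]*memento[^"]*";\s*datetime="([^"]+)"')

def mementos(urir):
    """Return (uri_m, datetime) pairs parsed from the aggregated link-format TimeMap."""
    resp = requests.get(MEMGATOR + urir, timeout=60)
    if not resp.ok:
        return []
    return [(urim, parsedate_to_datetime(dt)) for urim, dt in MEMENTO_RE.findall(resp.text)]

def paired(urir):
    """True if captures exist in both the first half of 2016 and the first half of 2020."""
    dates = [dt for _, dt in mementos(urir)]
    in_first_half = lambda year: any(d.year == year and d.month <= 6 for d in dates)
    return in_first_half(2016) and in_first_half(2020)
```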
In the original set of approximately 10,000 paired mementos, about 75% of the pairs have successful HTTP status codes for both 2016 and 2020, as shown in Figure 9(a). In the additional set of approximately 8,500 paired mementos, about 38% of the pairs have successful HTTP status codes for both 2016 and 2020, as shown in Figure 9(b). In all, there are about 11,000 pairs of 2016/2020 mementos that are candidates for indexing. EDGI collected their data in a way that minimized non-successful HTTP status codes by using the Internet Archive's CDX API.17 For the new pair set, other web archives do not have public CDX APIs and TimeMaps do not have any status code information included, which is what led to more pages in the new pair set having non-successful HTTP status codes.
Footnote 17: [https://github.com/internetarchive/wayback/tree/master/wayback-cdx-server](https://github.com/internetarchive/wayback/tree/master/wayback-cdx-server)
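For reference, a status-code filter of this kind against the Internet Archive's CDX API can be sketched as below, assuming the documented `from`, `to`, `filter`, and `output` parameters of the CDX server linked above; the example page address appears in Figure 8.

```python
import requests

CDX = "https://web.archive.org/cdx/search/cdx"

def ok_captures(urir, year):
    """List captures of urir from the first half of `year` with HTTP 200 status."""
    params = {
        "url": urir,
        "from": f"{year}0101",
        "to": f"{year}0630",
        "filter": "statuscode:200",
        "output": "json",
        "limit": 50,
    }
    resp = requests.get(CDX, params=params, timeout=60)
    rows = resp.json() if resp.text.strip() else []
    return rows[1:] if len(rows) > 1 else []    # first row is the field header

urir = "https://www.niehs.nih.gov/health/topics/agents/index.cfm"
pair_ok = bool(ok_captures(urir, 2016)) and bool(ok_captures(urir, 2020))
```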
The Wayback Machine holds both paired mementos in the original EDGI set. In contrast, the new pair set has mementos located at multiple web archives. In this new set, there are 3,563 mementos with successful status codes in both 2016 and 2020. Table 1 shows the web archives that hold these mementos, according to their TimeMaps. Some URI-Rs have multiple valid mementos in the time range of interest, which is the first half of 2016 and 2020. In the table, these URI-Rs are counted once for each web archive. The percentages represent the number of URI-Rs with at least one
Figure 10: Out of about 40,000 seed URI-Rs in the EDGI data set, approximately 11,000 have successful (200) HTTP status codes in both 2016 and 2020. We found an additional 3,500 mementos from multiple web archives with successful status codes.
Figure 11: Change text search result and difference views for the Environmental Agents page ([https://www.niehs.nih.gov/health/topics/agents/index.cfm](https://www.niehs.nih.gov/health/topics/agents/index.cfm)), showing the change text SERP, the addition memento (2010-10-05, 6 year lifespan), the pre- and post-deletion mementos, the sliding diff, the animated deletion, and the differences between the 2017-02-18 11:42:38 and 2017-03-20 02:43:43 versions.
memento found in the TimeMap in the specified time range, divided by 3,563. The percentages allow for comparison of holdings between archives and across years. Since each URI-R may have a memento at multiple archives, the sums of the columns are non-meaningful.
Firstly, the Library of Congress web archive holds mementos for many pages that are not available in the Wayback Machine in the first half of 2016. Future researchers examining U.S. federal websites should utilize the Library of Congress web archive in addition to the Wayback Machine. Secondly, it appears that Arquivo.pt conducted a crawl of U.S. federal websites in May 2016, and mementos with similar datetimes are not available in the Wayback Machine or at the Library of Congress. Using MemGator to query for aggregated listings of mementos across all web archives not only increases the amount of historical evidence available to analyze, but also sheds light on entire collections and the possible motivations behind the creation of those collections.
Indexing the paired 2016/2020 mementos is a starting point for determining additional versions of each page to index. The change text calculation script will determine all terms that have been added and removed between the two versions for each page indexed. Then, a binary search over the other versions of the page can be used to increase the temporal granularity of when a term was changed. Pages that were already identified as containing a term or phrase deletion by EDGI are prime candidates for early indexing. Since EDGI only tracked about 50 terms and phrases, additional meaningful terms can be identified from the change text calculations.
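The binary search over intermediate versions can be sketched as below; `has_term` stands for whatever lookup tests a single version (for example, a query against the Lucene index), and is an assumption of this sketch.

```python
def first_version_without(versions, term, has_term):
    """versions: chronologically sorted identifiers (e.g., URI-Ms) where the earliest
    version contains `term` and the latest does not. Returns the index of the first
    version in which the term is absent, using O(log n) lookups."""
    lo, hi = 0, len(versions) - 1      # invariant: term in versions[lo], absent from versions[hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if has_term(versions[mid], term):
            lo = mid
        else:
            hi = mid
    return hi

# has_term would query the index (or stored plaintext) for one memento at a time.
```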
The initial indexing consisted of a small set of 100 pairs of mementos. EDGI calculated that these pairs had complete or partial deletions of the terms sustainability, pollution, anthropogenic, or the phrase "endangered species." The next indexing set was larger, consisting of 1,000 pairs, or 10% of the original EDGI matched pairs. These pairs had complete or partial deletions of the terms and phrases: "toxic", "clean energy", "climate change", and "global warming". Many of the trends that emerged from the initial small sample persisted into the larger sample, so these trends are likely to persist throughout the entire data set.
Downloading the 1,000 pairs using Hypercane, part of Level 1 on Figure 6, took 41 hours. Initial Lucene indexing of the WARCs using the UKWA WARC indexer took 2.5 hours, and PyWB indexing took 40 minutes. The change text calculation script took no more than 10 seconds to generate the JSON updates to post to the Lucene index, and posting the data took no more than 5 seconds. These steps correspond to Level 2 on Figure 6.
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
Web Archive & \% (2016) & \% (2020) \\
\hline
webarchive.loc.gov & 59.81 & 60.93 \\
web.archive.org & 34.41 & 71.15 \\
wayback.archive-it.org & 12.41 & 10.36 \\
arquivo.pt & 5.95 & 3.59 \\
perma.cc & 0.39 & 0.39 \\
web.archive.org.au & 0.11 & 0.08 \\
swap.stanford.edu & 0.11 & 0.06 \\
wayback.vefsafn.is & 0.11 & 0.20 \\
waext.banq.qc.ca & 0.08 & 0.08 \\
www.webarchive.org.uk & 0.03 & 0.08 \\
archive.md & 0.03 & 0.06 \\
\hline
\end{tabular}
\end{table}
Table 1. Web archives with mementos in the first half of 2016 and 2020 for the 3,563 pages in the new pairing set with 200-to-200 status codes, as shown in the top right of Figure 9(b).
### Discussion
The change text search engine results page has increased functionality compared to other temporal search engines for web archives, like SolrWayback, as well as compared to the web monitoring strategy used by EDGI. The change text search engine groups multiple versions of a page in a way that allows the user to make sense of the differences between them, as shown in Figure 11. In SolrWayback, multiple identical versions of the same page will be shown in the search results page, and the version right after the deletion will not be a part of the search results. In contrast, the change text search engine results page only shows versions with meaningful change of the query term. The search engine results page can also be used to examine changes in context because of the diff snippet, and further detail is available in the sliding diff and animation tools linked in each result. Examining changes in context is not possible with the web monitoring strategy.
One of the major differences between indexing text content with Lucene and the deleted terms calculation completed by EDGI using Python is that the methods have completely different boilerplate removal techniques. Generally, the Lucene boilerplate removal strips more from the page than the EDGI technique. This meant that some terms were identified as deleted according to EDGI, but not according to the Lucene index. Other pages in Lucene indexed poorly due to the boilerplate removal, which is another difference in the deleted terms calculation. Some of the terms identified as deleted by EDGI were in various navigation page sections that were not stripped during boilerplate removal, so whether or not these should be named as deletions depends on the individual researcher's boilerplate removal preferences. Additionally, indexing all of the content with Lucene provides access to all of the deleted terms, while the EDGI web monitoring technique can only track pre-defined terms. The ability to find deleted terms without pre-defining them is powerful, which may outweigh poor indexing for some researchers.
While the original EDGI study only tracked 56 terms and phrases with environmental motivations, additional common deleted terms are now discoverable, as shown in Table 2. The original term list included terms like "safety", "transparency", "regulation", and "jobs", but "public", "access", "action", "development" and "science" are terms with similar meaning that were also commonly deleted. Many of the top deleted terms were stop words or temporal terms. Two of the original EDGI terms, "change" and "state", are stop words that have meaning within a federal environmental dataset.18
Footnote 18: [https://countwordsfree.com/stopwords](https://countwordsfree.com/stopwords)
The list of the newly found terms, in order by most deletions, is: national, support, public, program, resources, process, data, including, u.s, development, united, learn, department, action, access, work, impact, tools, areas, search, laboratory, technology, efforts, include, natural, science, planning, address, and open.
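A tally of this kind can be sketched with a simple counter over the per-pair deleted-term sets, as below; the stop-word and EDGI term lists are abbreviated placeholders, not the full lists used for Table 2.

```python
from collections import Counter

STOP_WORDS = {"about", "more", "which", "state", "change"}       # abbreviated placeholder list
EDGI_TERMS = {"climate", "clean", "impacts", "water", "toxic"}   # abbreviated placeholder list

def top_deleted(change_records, n=100):
    """change_records: iterable of deleted-term sets, one per indexed page pair."""
    counts = Counter()
    for deleted in change_records:
        counts.update(deleted)
    return counts.most_common(n)

def categorize(term):
    if term in STOP_WORDS:
        return "Stop words"
    if term.isdigit() or term in {"one", "year"}:
        return "Temporal"
    if term in EDGI_TERMS:
        return "EDGI"
    return "Newly found"

records = [{"climate", "public", "about"}, {"public", "science"}, {"about", "science"}]
categories = Counter(categorize(term) for term, _ in top_deleted(records))
```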
\begin{table}
\begin{tabular}{|c|c|c|}
\hline
Term Type & Examples & Count \\
\hline
Stop words & about, more, which, state & 49 \\
Temporal & 2015, 2, one, year & 13 \\
EDGI & climate, clean, impacts, water & 11 \\
Newly found & public, development, access, science & 29 \\
\hline
\end{tabular}
\end{table}
Table 2. Categorization of top 100 deleted terms in the 1,000 paired memento index sample.
Additional discovery of deleted phrases is possible by examining the search results for the new deleted terms. For example, "public" is a new deleted term comparable to the original term "transparency." Manual inspection of the search results for "public" showed that the term often comes up in the context of a deleted phrase. "Public comment" was a common context for the deleted term "public," and this usage is similar to the original term "transparency." Another more frequent context was the phrase "public health," which is actually more similar to the original term "safety" than "transparency."
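This kind of phrase discovery can be approximated by tallying the bigram contexts in which a term was deleted, as in the sketch below; the snippets shown are invented examples, not data from the index.

```python
from collections import Counter

def phrase_contexts(snippets, term):
    """Count bigrams containing `term` across deleted text snippets
    (each snippet is a list of lowercase tokens)."""
    contexts = Counter()
    for tokens in snippets:
        for a, b in zip(tokens, tokens[1:]):
            if term in (a, b):
                contexts[f"{a} {b}"] += 1
    return contexts.most_common()

snippets = [["public", "comment", "period"],
            ["protect", "public", "health"],
            ["public", "health", "advisory"]]
print(phrase_contexts(snippets, "public"))
# [('public health', 2), ('public comment', 1), ('protect public', 1)]
```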
The two additional tools for examining changes in detail, the sliding diff and the animation, each have advantages for different contexts. The sliding diff tool can show more than two versions of a page, while the animation can only show two versions of a page at a time. The sliding diff also shows the changes in a persistent manner, while the deleted text in the animation view disappears by nature. Another benefit of the sliding diff tool is that it loads extremely quickly, because only content is compared. The animation must load both mementos through replay, compute the HTML difference, and then load and show the animation, so it takes much longer. The sliding diff shows the blocks of text with changes over time well, but diffs hide content that has not changed. If the user wants to view the entire page in context, the animation shows the entire page, including text and images that have not changed. The animation also makes the differences pop out more strongly than the sliding diff tool. Both tools suffer when there are too many changes on the page, because there is too much highlighted and the animation is too slow.
The EDGI federal webpages dataset was both valuable and appropriate for evaluating this change text search engine. Documents created by the government are not entitled to privacy; in fact, these documents should be archived and made discoverable for the people. However, researchers must take great care when indexing web archive collections. Heterogeneous web archive collections may contain material that should be subject to stronger privacy protections, such as websites made by minors. Lin et al. (Lin et al., 2017) speculated that people are unaware their websites are archived because they never gave explicit consent, and other people rely on a lack of a web archive search function to provide them with privacy. Mackinnon (Mackinnon, 2018) confirmed both of these hypotheses in a user study. Both Lin et al. and Mackinnon suggest anonymization as a tool for researchers to study overall web archive collections without compromising the privacy of the content creators.
## 6. Conclusions and Future Work
Existing temporal search engines for web archives do not allow users to query for change over time. This paper presents a change text search engine, which allows users to find and view the changes in webpages. The search engine results page groups multiple versions of a page together without hiding the changes between these versions. In fact, the changes between the versions are the core reason why the grouping can occur, since the grouping represents the lifespan of a term on the page. A deletion animation shows changes in context, and a sliding difference viewer enables quick examination of the differences between many versions of a page. The inverted index contains valuable information about the most frequently deleted terms in the corpus.
One important aspect of future work regarding the architecture will include automating the indexing process. Kiesel et al. (Kiesel et al., 2017) automated their personal web archiving indexing for both Lucene and PyWB. The change text calculation script will also need to be automated for this system to function. Automating the dual Lucene/PyWB indexing was identified as a significant architecture advancement for the full-text search in web archives community by Jackson et al. (Jackson et al., 2018). Another improvement to the search engine code will involve expanded support for detecting added phrases when both terms are already present on the page separately, and identification of partially added terms.
Indexing additional earlier versions of the webpages will lead to important insights about when terms were added, such as during a prior Presidential administration. Earlier versions will also enable discovery of additional terms and phrases that were deleted in prior administrations. Viewing the index from a computational standpoint allowed for the calculation of new deleted terms. Additional work will include computation of new deleted phrases. Another aspect of future work will include categorizing the amount of change to a webpage. This work would make it possible to distinguish between different types of changes, such as an entire page rewrite aligned with an organization's new goals versus removals of blocks of content that remove public access to vital information.
The need for a change text search engine and the specific search engine result page features was justified by examining specific user tasks. Jayanetti et al. (2018) categorized users in web archives, including those who view multiple versions of the same page. Future work on the user interface will involve analyzing the tasks of these users to better determine what these users need. Future work will also include a user study of the search interface front end, including the animation, to evaluate whether users such as journalists can more successfully complete their information seeking tasks.
|
2309.11946 | Thermal tides in neutrally stratified atmospheres: Revisiting the
Earth's Precambrian rotational equilibrium | Rotational dynamics of the Earth, over geological timescales, have profoundly
affected local and global climatic evolution, probably contributing to the
evolution of life. To better retrieve the Earth's rotational history, and
motivated by the published hypothesis of a stabilized length of day during the
Precambrian, we examine the effect of thermal tides on the evolution of
planetary rotational motion. The hypothesized scenario is contingent upon
encountering a resonance in atmospheric Lamb waves, whereby an amplified
thermotidal torque cancels the opposing torque of the oceans and solid
interior, driving the Earth into a rotational equilibrium. With this scenario
in mind, we construct an ab initio model of thermal tides on rocky planets
describing a neutrally stratified atmosphere. The model takes into account
dissipative processes with Newtonian cooling and diffusive processes in the
planetary boundary layer. We retrieve from this model a closed-form solution
for the frequency-dependent tidal torque which captures the main spectral
features previously computed using 3D general circulation models. In
particular, under longwave heating, diffusive processes near the surface and
the delayed thermal response of the ground prove to be responsible for
attenuating, and possibly annihilating, the accelerating effect of the
thermotidal torque at the resonance. When applied to the Earth, our model
prediction suggests the occurrence of the Lamb resonance in the Phanerozoic,
but with an amplitude that is insufficient for the rotational equilibrium.
Interestingly, though our study was motivated by the Earth's history, the
generic tidal solution can be straightforwardly and efficiently applied in
exoplanetary settings. | Mohammad Farhat, Pierre Auclair-Desrotour, Gwenaël Boué, Russell Deitrick, Jacques Laskar | 2023-09-21T10:00:34Z | http://arxiv.org/abs/2309.11946v2 | Thermal tides in neutrally stratified atmospheres: Revisiting the Earth's Precambrian rotational equilibrium
###### Abstract
Rotational dynamics of the Earth, over geological timescales, have profoundly affected local and global climatic evolution, probably contributing to the evolution of life. To better retrieve the Earth's rotational history, and motivated by the published hypothesis of a stabilized length of day during the Precambrian, we examine the effect of thermal tides on the evolution of planetary rotational motion. The hypothesized scenario is contingent upon encountering a resonance in atmospheric Lamb waves, whereby an amplified thermotidal torque cancels the opposing torque of the oceans and solid interior, driving the Earth into a rotational equilibrium. With this scenario in mind, we construct an ab-initio model of thermal tides on rocky planets describing a neutrally stratified atmosphere. The model takes into account dissipative processes with Newtonian cooling and diffusive processes in the planetary boundary layer. We retrieve from this model a closed form solution for the frequency-dependent tidal torque which captures the main spectral features previously computed using 3D general circulation models. In particular, under longwave heating, diffusive processes near the surface and the delayed thermal response of the ground prove to be responsible for attenuating, and possibly annihilating, the accelerating effect of the thermotidal torque at the resonance. When applied to the Earth, our model prediction suggests the occurrence of the Lamb resonance in the Phanerozoic, but with an amplitude that is insufficient for the rotational equilibrium. Interestingly, though our study was motivated by the Earth's history, the generic tidal solution can be straightforwardly and efficiently applied in exoplanetary settings.
keywords: Atmospheric dynamics, Thermal tides, Earth's rotation, Precambrian Earth +
Footnote †: journal: Earth and Planetary Science Letters
## 1 Introduction
For present day Earth, the semi-diurnal atmospheric tide, driven by the thermal forcing of the Sun and generated via atmospheric pressure waves, describes the movement of atmospheric mass away from the substellar point. Consequently, mass culminates forming bulges on the nightside and the dayside, generating a torque that accelerates the Earth's rotation. As such, this thermally generated torque counteracts the luni-solar gravitational torque associated with the Earth's solid and oceanic tides. The latter components typically drive the closed system of the tidal players towards equilibrium states of orbital circularity, coplanarity, and synchronous rotation via dissipative mechanisms (e.g., Mignard, 1980; Hut, 1981). In contrast, the inclusion of the stellar flux as an external source of energy renders the system an open system where radiative energy is converted, by the atmosphere, into
mechanical deformation and gravitational potential energy. Though this competition between the torques is established on Earth, the thermotidal torque remains, at least currently, orders of magnitude smaller.
Interestingly though, this dominance of the gravitational torque over the thermal counterpart admits exceptions. The question of the potential amplification of the atmospheric tidal response initiated with Kelvin (1882), who invoked the theory of atmospheric tidal resonances, ushering in a stream of theoretical studies investigating the normal modes spectrum of the Earth's atmosphere (see Chapman and Lindzen, 1970, for a neat and authoritative historical overview). Studies of the Earth's tidal response spectrum advanced the theory of thermal tides for it to be applied to Venus (Goldreich and Soter, 1966; Gold and Soter, 1969; Ingersoll and Dobrovolskis, 1978; Dobrovolskis and Ingersoll, 1980; Correia and Laskar, 2001, 2003b; Correia et al., 2003), hot Jupiters (e.g., Arras and Socrates, 2010; Auclair-Desrotour and Leconte, 2018; Gu et al., 2019; Lee, 2020), and near-synchronous and Earth-like rocky exoplanets (Cunha et al., 2015; Leconte et al., 2015; Auclair-Desrotour et al., 2017, 2019). Namely, for planetary systems within the so-called habitable zone, the gravitational tidal torque diminishes in the regime near spin-orbit synchronization and becomes comparable in magnitude to the thermotidal torque. Consequently, the latter may actually prevent the planet from precisely reaching its destined synchronous state (Laskar and Correia, 2004; Correia and Laskar, 2010; Cunha et al., 2015; Leconte et al., 2015).
Going back to Earth, Holmberg (1952) suggested that the thermal tide at present is resonant, and the generated torque is equal in magnitude and opposite in sign to that generated by gravitational tides, thus placing the Earth into a rotational equilibrium with a stabilized spin rate. As this was proven to be untrue for present Earth (Chapman and Lindzen, 1970), Zahnle and Walker (1987) revived Holmberg's hypothesis by applying the resonance scenario of thermal tides to the distant past. Their suggestion relied on two factors needed to close the gap between the competing torques. The first is the occurrence of a resonance in atmospheric Lamb waves (e.g., Lindzen and Blake, 1972) - which we coin as a Lamb resonance - that characterizes the frequency overlap between the fundamental mode of atmospheric free oscillations and the semidiurnal forcing frequency. According to Zahnle and Walker (1987), this resonance occurred when the length of day (LOD) was around 21 hrs, exciting the thermotidal torque to large amplitudes. Secondly, the gravitational tidal torque must have been largely attenuated in the Precambrian. Recently, Bartlett and Stevenson (2016) revisited the equilibrium scenario and investigated the
Figure 1: Modeled histories of the rotational motion of the Earth. Plotted is the Earth’s LOD evolution in time over geological timescales for three models: _i)_ the model of Farhat et al. (2022), where the evolution is driven solely by oceanic and solid tidal dissipation; _ii)_ the model of Zahnle and Walker (1987), where the Lamb resonance is encountered for LOD\(\sim\)21 hr, forcing a rotational equilibrium on the Earth; _iii)_ the model of Bartlett and Stevenson (2016), which also adopts the equilibrium scenario, but further studies the effect of thermal noise, and the required temperature variation to escape the equilibrium. Three curves of the latter model correspond to different parameterizations of the gravitational tide. Plotted on top of the modelled histories are geological proxies of the LOD evolution that can be retrieved from [http://astrogeo.eu](http://astrogeo.eu).
effect of temperature fluctuations on the stability of the resonance trapping and the Earth's equilibrium. The authors concluded that the rotational stabilization could have lasted 1 billion years, only to be disrupted by a drastic deglaciation event (on the scale that follows the termination of a snowball Earth), thus allowing the LOD to increase again from \(\sim\)21 hr to its present value. Evidently, the occurrence of such a scenario has very significant implications on paleoclimatic studies, with the growing evidence on links between the evolving LOD and the evolution of Precambrian benthic life (e.g., Klatt et al., 2021).
We are fresh out of a study on the tidal evolution of the Earth-Moon system (Farhat et al., 2022), where we focused on modelling tidal dissipation in the Earth's paleo-oceans and solid interior. There we learned that the tidal response of the oceans, characterized by intermittent resonant excitations, is sufficient to explain the present rate of lunar recession and the estimated lunar age, and is in good agreement with the geological proxies on the lunar distance and the LOD, leaving little to no room for an interval of a rotational equilibrium (Figure 1).
On the other hand, major progress has been achieved in establishing the frequency spectrum of the thermotidal response of rocky planets with various approaches ranging from analytical models (Ingersoll and Dobrovolskis, 1978; Dobrovolskis and Ingersoll, 1980; Auclair-Desrotour et al., 2017, 2018), to parameterized models that capture essential spectral features (e.g., Correia and Laskar, 2001, 2003), to fully numerical efforts that relied on the advancing sophistication of general circulation models (GCM; e.g., Leconte et al., 2015; Auclair-Desrotour et al., 2019). The latter work presents, to our knowledge, the first and, to-date1, the only study to have numerically computed the planetary thermotidal torque in the high frequency regime, i.e. around the Lamb resonance (Lindzen and Blake, 1972). Of interest to us here are two perplexing results that Auclair-Desrotour et al. (2019) established: first, for planets near synchronization, the simplified Maxwellian models often used to characterize the thermotidal torque did not match the GCM simulated response; second, the torque at the Lamb resonance featured only a decelerating effect on the planet. Namely, it acts in the same direction as gravitational tides, and thus the effect required for the rotational stabilization disappeared.
Footnote 1: While this paper was under review, Wu et al. (2023) presented another GCM-computed spectrum for the Earth in the high frequency regime. We provide an elaborate discussion of their work’s results in a separate dedicated paper (Laskar et al., 2023).
More recently, while this work was under review, two studies on the Precambrian LOD stabilization were published. Mitchell and Kirscher (2023) compiled various geological proxies on the Precambrian LOD and established the best piecewise linear fit to this data compilation. The authors' analysis depicts that a Precambrian LOD of 19 hr was stabilized between 1 and 2 Ga. In parallel, Wu et al. (2023) also attempted to fit a fairly similar set of geological proxies, but using a simplified model of thermal tides. The authors conclude that the LOD was stabilized at \(\sim 19.5\) hr between 0.6 and 2.2 Ga, with a sustained very high mean surface temperature (\(40-55^{\circ}\)C). Although using different approaches, the two studies have thus arrived at similar conclusions. A closer look at the subset of geological data that favored this outcome, however, indicates that both studies heavily rely on three stromatolitic records from the Paleoproterozoic that were originally studied by Pannella (1972, 2018). These geological data have been, ever since, identified as unsuitable for precise quantitative interpretation (see e.g., Scrutton, 1978; Lambeck, 1980; Williams, 2000). To this end, we provide a more detailed analysis of the geological proxies of the LOD, and of the model presented by Wu et al. (2023) in a parallel paper dedicated to the matter (Laskar et al., 2023).
With a view to greater physical realism, we aim here to study, analytically, the frequency spectrum of the thermotidal torque, from first principles, interpolating between the low and high frequency regimes. Our motivation is two-fold: first, to provide a novel physical model for the planetary thermotidal torque that better matches the GCM-computed response, and that can be used
in planetary dynamical evolution studies; second, to apply this model to the Earth and attempt quantifying the amplitude of the torque at the Lamb resonance and explore the intriguing rotational equilibrium scenario.
## 2 Ab initio atmospheric dynamics
For an atmosphere enveloping a spherically symmetric planet, we define a reference frame co-rotating with the planet. In this frame, an atmospheric parcel is traced by its position vector \(\mathbf{r}\) in spherical coordinates \((r,\theta,\varphi)\), such that \(\theta\) is the colatitude, \(\varphi\) is the longitude, and the radial distance \(|\mathbf{r}|=R_{\rm p}+z\), where \(R_{\rm p}\) is the planet's radius and \(z\) is the parcel's atmospheric altitude. The atmosphere is characterized by the scalar fields of pressure \(p\), temperature \(T\), density \(\rho\), and the three-dimensional vectorial velocity field \(\mathbf{\mathcal{V}}\). Each of these fields varies in time and space, and is decomposed linearly into two terms: a background, equilibrium state field, subscripted with 0, and a tidally forced perturbation term of significantly smaller amplitude such that \(p=p_{0}+\delta p,\,T=T_{0}+\delta T,\,\rho=\rho_{0}+\delta\rho,\text{and}\, \mathbf{\mathcal{V}}=\mathbf{V}_{0}+\mathbf{V}\).
Our fiducial atmosphere is subject to the perturbative gravitational tidal potential \(U\) and the thermal forcing per unit mass \(J\). We shall define the latter component precisely in Section 2.2, but for now it suffices to say that \(J\) accounts for the net amount of heat, per unit mass, provided to the atmosphere, allowing for thermal losses driven by radiative dissipation. We take the latter effect into account by following the linear Newtonian cooling hypothesis2(Lindzen and McKenzie, 1967), where radiative losses, \(J_{\rm rad}\), are parameterized by the characteristic frequency \(\sigma_{0}\); namely \(J_{\rm rad}=p_{0}\sigma_{0}/(\kappa\rho_{0}T_{0})\delta T\), where \(\kappa=(\Gamma_{1}-1)/\Gamma_{1}=0.285\) and \(\Gamma_{1}\) is the adiabatic exponent. Similar to Leconte et al. (2015), we associate with \(\sigma_{0}\) a radiative cooling timescale \(\tau_{\rm rad}=4\pi/\sigma_{0}\).
Footnote 2: noting that surface friction is another dissipative mechanism as discussed by Lindzen and Blake (1972).
### The vertical structure of tidal dynamics
We are interested in providing a closed form solution for the frequency3 dependence of the thermotidal torque, which results from tidally driven atmospheric mass redistribution. By virtue of the hydrostatic approximation, this mass redistribution is encoded in the vertical profile of pressure. As such, it is required to solve for the vertical structure of tidal dynamics. With fellow non-theoreticians in mind, we delegate the detailed development of the governing system of equations describing the tidal response of the atmosphere to Appendix A. Therein, we employ the classical system of primitive equations describing momentum and mass conservation (e.g., Siebert, 1961; Chapman and Lindzen, 1970), atmospheric heat transfer augmented with linear radiative transfer à la Lindzen and McKenzie (1967), and the ideal gas law, all formulated in a dimensionless form.
Footnote 3: The frequency in this case being the tidal forcing frequency \(\sigma\), typically a linear function of the planet’s spin rate \(\Omega\) and the stellar perturber’s mean motion \(n_{\star}\). The semi-diurnal tidal frequency, for instance, \(\sigma_{22}=2(\Omega-n_{\star})\).
Aided by the so-called traditional approximation (e.g., Unno et al., 1989, see also A), the analytical treatment of the said system is feasible as it decomposes into two parts describing, separately, the horizontal and vertical structures of tidal flows. The former part is completely described by the eigenvalue-eigenfunction problem defined as Laplace's tidal equation (Laplace, 1798; Lee and Saio, 1997):
\[\mathcal{L}^{m,\nu}\Theta_{n}^{m,\nu}=-\Lambda_{n}^{m,\nu}\Theta_{n}^{m,\nu}, \tag{1}\]
where the set of Hough functions \(\{\Theta_{n}^{m,\nu}\}\) serves as the solution (Hough, 1898), \(\{\Lambda_{n}^{m,\nu}\}\) is the associated set of eigenvalues, \(\mathcal{L}^{m,\nu}\) is a horizontal operator defined in Eq.(A.23) of Appendix A, while \(\nu=2\Omega/\sigma\), where \(\Omega\) is the rotational velocity of the planet and \(\sigma\) is the tidal forcing frequency. In the tidal system under study, the variables and functions \((\delta p,\delta\rho,\delta T,\mathbf{V},J,\Theta,\Lambda)\) are written in the Fourier domain using the longitudinal order \(m\) and frequency \(\sigma\) (Eq.A.20), and expanded in horizontal Hough modes with index \(n\) (Eq.A.21). We denote hereafter their coefficients \(f_{n}^{m,\nu}\) by \(f_{n}\) to
lighten the expressions. This horizontal structure of tidal dynamics is merely coupled to the vertical structure via the set of eigenvalues \(\{\Lambda_{n}^{m,\nu}\}\). To construct these sets of eigenfunctions-eigenvalues we use the spectral method laid out by Wang et al. (2016).
The vertical structure on the other hand requires a more elaborate manipulation of the governing system, a procedure that we detail in Appendix B. The outcome is a wave-like equation that describes vertical thermotidal dynamics and reads as:
\[\frac{d^{2}\Psi_{n}}{dx^{2}}+\hat{k}_{n}^{2}\Psi_{n}=\Phi^{-1}C_{n}. \tag{2}\]
Here, as is the common practice (e.g., Siebert, 1961; Chapman and Lindzen, 1970), we use the reduced altitude \(x=\int_{0}^{z}dz/H(z)\) as the vertical coordinate, where the pressure scale height \(H(z)=\mathcal{R}_{\text{s}}T_{0}(z)/g\); \(\mathcal{R}_{\text{s}}\) being the specific gas constant and \(g\) the gravitational acceleration. The quantity \(\Psi_{n}(x)\) is a calculation variable from which, once solved for, all the tidal scalar and vectorial quantities follow (Appendix C). The vertical wave number \(\hat{k}_{n}(x)\) is defined via
\[\hat{k}_{n}^{2}(x)=-\frac{1}{4}\left\{\left(1-\frac{i\kappa}{\alpha-i}(\gamma-1)\right)\cdots\right\}, \tag{3}\]
Leaving the solution of the wave equation to the following sections, what is left for us to quantify the mass redistribution and compute the resulting tidal torque is to retrieve the vertical profile of pressure given the solution of the wave equation, \(\Psi_{n}(x)\). In A.3, we derive the vertical profiles of all the tidal variables, and specifically for the dimensionless pressure anomaly we obtain:
\[\tilde{\delta p}_{n}(x)=\frac{1}{i\beta\Lambda_{n}}\left(\frac{d\tilde{G}_{n}}{ dx}-\tilde{G}_{n}\right)+\frac{1}{\beta}\left(1+\frac{1}{\beta\Lambda_{n}}\frac{d}{ dx}\right)\tilde{U}_{n}, \tag{7}\]
where \(\tilde{\delta p}_{n}(x)=\delta p_{n}(x)/p_{0}\), and the calculation variable \(G_{n}(x)=\Psi_{n}(x)\Phi(x)\).
### The thermal forcing profile
To solve the non-homogeneous wave equation (2), it is necessary to define a vertical profile for the tidal heating power per unit mass \(\tilde{J}_{n}\) (or equivalently in dimensional form, \(J_{n}\)). We adopt a vertical tidal heating profile of the form
\[J_{n}(x)=J_{\rm s}e^{-b_{\rm J}x}, \tag{8}\]
where \(J_{\rm s}\) is the heat absorbed at the surface and \(b_{\rm J}\) is a decay rate that characterizes the exponential decay of heating along the vertical coordinate. As we are after a generic planetary model, this functional form of \(J_{n}\) allows the distribution of heat to vary between the Dirac distribution adopted by Dobrovolskis and Ingersoll (1980) where \(b_{\rm J}\rightarrow\infty\), and a uniform distribution where the whole air column is uniformly heated (\(b_{\rm J}=0\)).
To determine \(J_{\rm s}\), we invoke its dependence on the total vertically propagating flux \(\delta F_{\rm tot}\) by computing the energy budget over the air column. The net input of energy corresponds to the difference between the amount of flux absorbed by the column and associated with a local increase of thermal energy, and the amount that escapes into space or into the mean flows defining the background profiles. We quantify the fraction of energy transferred to the atmosphere and that is available for tidal dynamics by \(\alpha_{\rm A}\), where \(0\leq\alpha_{\rm A}\leq 1\); the rest of the flux amounting to \(1-\alpha_{\rm A}\) escapes the thermotidal interplay. We thus have
\[\int_{0}^{\infty}J(x)\rho_{0}(x)H(x)dx=\alpha_{\rm A}\delta F_{\rm tot}. \tag{9}\]
To define \(\delta F_{\rm tot}\), we establish the flux budget for a small thermal perturbation at the planetary surface. We start with \(\delta F_{\rm inc}\), a variation of the effective incident stellar flux, after the reflected component has been removed. \(\delta F_{\rm inc}\) generates a variation \(\delta T_{\rm s}\) in the surface temperature \(T_{\rm s}\). The proportionality between \(\delta F_{\rm inc}\) and \(\delta T_{\rm s}\) is parameterized by \(\tau_{\rm bl}\), a characteristic diffusion timescale of the ground and atmospheric surface thermal responses. We detail on this proportionality in A.3, but for now it suffices to state that \(\tau_{\rm bl}\) is a function of the thermal inertia budgets in the ground, \(I_{\rm gr}\), and the atmosphere \(I_{\rm atm}\). We associate with \(\tau_{\rm bl}\) the frequency \(\sigma_{\rm bl}=\tau_{\rm bl}^{-1}\), a characteristic frequency that reflects the thermal properties of the diffusive boundary layer. It will serve as another free parameter of our tidal model, besides the Newtonian cooling frequency \(\sigma_{0}\), and the atmospheric opacity parameter \(\alpha_{\rm A}\). In analogy to \(\alpha=\sigma/\sigma_{0}\), we define the dimensionless parameter for the boundary layer \(\zeta=\sqrt{|\sigma|\tau_{\rm bl}}=\sqrt{|\sigma|/\sigma_{\rm bl}}\).
By virtue of the power budget balance established in D, we define the total propagating flux \(\delta F_{\rm tot}\) as
\[\delta F_{\rm tot}=\delta F_{\rm inc}\left[1-\mu_{\rm gr}\zeta\frac{1+si}{1+(1 +si)\zeta}\right]. \tag{10}\]
Here, \(s={\rm sign}(\sigma)\), and \(\mu_{\rm gr}\) is a dimensionless characteristic function weighing the relative contribution of ground thermal inertia to the total inertia budget; namely \(\mu_{\rm gr}=I_{\rm gr}/(I_{\rm gr}+I_{\rm atm})\).
The generic form of the flux in Eq.(10) clearly depicts two asymptotic regimes of thermotidal forcing:
* Ignoring the surface layer effects associated with the term on the right, i.e. setting \(\zeta=\mu_{\rm gr}=0\), leaves us with thermotidal heating that is purely attributed to the direct atmospheric absorption of the incident flux. This limit can be used to describe the present understanding of thermotidal forcing on Earth where, to first order, direct insolation absorption in the shortwave by ozone and water vapor appears sufficient to explain the observed tidal amplitudes in barometric measurements (e.g., Chapman and Lindzen, 1970; Schindelegger and Ray, 2014). Nevertheless, it is
noteworthy that the observed tidal phases of pressure maxima could not be explained by this direct absorption, a discrepancy later attributed to an additional semidiurnal forcing, namely that of latent heat release associated with cloud and raindrop formation (e.g., Lindzen, 1978; Hagan and Forbes, 2002; Sakazaki et al., 2017).
* Allowing for the surface layer term on the other hand (\(\zeta\neq 0\), \(\mu_{\rm gr}\neq 0\)) places us in the limit where the ground radiation in the infrared and heat exchange processes occurring in the vicinity of the surface would dominate the thermotidal heating. The total tidal forcing in this case is non-synchronous with the incident flux due to the delayed thermal response of the ground, which here is a function of \(\tau_{\rm bl}\). This limit better describes dry Venus-like planets, as is the fiducial setting studied using GCMs in Leconte et al. (2015) and Auclair-Desrotour et al. (2019).
Finally, as we are interested in the semi-diurnal tidal response, we decompose the thermal forcing in E to obtain the amplitude of the quadrupolar component as \(\delta F_{\rm inc}=\delta F_{22}=(\sqrt{30\pi}/16)F_{\star}\), where \(F_{\star}=L_{\star}/4\pi a_{\rm p}^{2}\), \(L_{\star}\) being the stellar luminosity, and \(a_{\rm p}\) the star-planet distance.
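To make these forcing amplitudes concrete, the short Python sketch below evaluates \(\delta F_{22}\) for the present Earth-Sun configuration (using standard values of the solar luminosity and astronomical unit, which are not quoted in this section) and the complex attenuation factor of Eq. (10) in the synchronous and ground-dominated limits discussed above.

```python
import numpy as np

# Quadrupolar flux component: dF22 = (sqrt(30*pi)/16) * F_star, with F_star = L_star / (4*pi*a_p^2).
# Standard solar values are assumed below; they are not quoted in this section.
L_sun = 3.828e26      # W
au = 1.496e11         # m
F_star = L_sun / (4.0 * np.pi * au**2)
dF22 = np.sqrt(30.0 * np.pi) / 16.0 * F_star
print(f"F_star = {F_star:.0f} W/m^2, dF22 = {dF22:.0f} W/m^2")

# Complex attenuation factor of Eq. (10): 1 - mu_gr*zeta*(1 + s*i)/(1 + (1 + s*i)*zeta)
def attenuation(zeta, mu_gr, s=1):
    return 1.0 - mu_gr * zeta * (1 + 1j * s) / (1 + (1 + 1j * s) * zeta)

for zeta in (0.0, 0.1, 1.0, 10.0, 1e3):
    f = attenuation(zeta, mu_gr=1.0)
    print(f"zeta = {zeta:6.1f}:  |factor| = {abs(f):.3f},  phase = {np.degrees(np.angle(f)):+7.1f} deg")
```

In the synchronous limit (\(\zeta\to 0\)) the full flux is available to the atmosphere, while for a ground-dominated inertia budget (\(\mu_{\rm gr}\to 1\), \(\zeta\gg 1\)) the propagating flux is suppressed and phase-shifted, consistent with the two regimes listed above.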
## 3 The tidal response
### The tidal torque in the neutral stratification limit
Under the defined forcing in the previous section, to solve the wave equation analytically, a choice has to be made on the Brunt-Vaisala frequency, \(N_{\rm B}\) (Eq.4), which describes the strength of atmospheric buoyancy forces and consequently the resulting vertical temperature profile. Earlier analytical solutions have been obtained in the limit of an isothermal atmosphere (Lindzen and McKenzie, 1967; Auclair-Desrotour et al., 2019), in which case the scale height \(H\) becomes independent of the vertical coordinate, and by virtue of Eq. (4), \(N_{\rm B}^{2}=\kappa g/H=\rm const\).
Motivated by the Earth's atmosphere, where the massive troposphere (\(\sim\)80% of atmospheric mass) controls the tidal mass redistribution, we derive next an analytical solution in a different, and perhaps more realistic limit. Namely, the limit corresponding to the case of a neutrally stratified atmosphere, where \(N_{\rm B}^{2}=0\). In fact, \(N_{\rm B}^{2}\) can be expressed in terms of the potential temperature \(\Theta_{0}\)(e.g., Section 2.10 of Vallis, 2017):
\[N_{\rm B}^{2}=\frac{g}{H}\frac{d\ln\Theta_{0}}{dx}, \tag{11}\]
whereby the stability of the atmosphere is controlled by the slope of \(\Theta_{0}\). That said, atmospheric temperature measurements (e.g., Figures 2.1-2.3 of Pierrehumbert, 2010) clearly depict that the troposphere is characterized by a negative temperature gradient, and a very weak potential temperature gradient, which is closer to an idealised adiabatic profile than it is to an idealised isothermal profile. Moreover, the heating in the troposphere generates strong convection and efficient turbulent stirring, thus enhancing energy transfer and driving the layer towards an adiabatic temperature profile. As such, the temperature profile being adiabatic would prohibit the propagation of buoyancy-restored gravity waves, which compose the baroclinic component of the atmospheric tidal response (e.g., Gerkema and Zimmerman, 2008). This leaves the atmosphere with the barotropic component of the tidal flow, a feature consistent with tidal dynamics under the shallow water approximation (A).
Hereafter, we focus on the thermotidal heating as the only tidal perturber, and we ignore the much weaker gravitational potential \(\tilde{U}\). It follows, in the neutral stratification limit, that \(\gamma=0\) (Table 1), and the vertical wavenumber (Eq. 3) reduces to4
Footnote 4: It is noteworthy that the wavenumber in the neutral stratification limit is no longer dependent on the horizontal structure.
\[\hat{k}^{2}=\left[\frac{1+\kappa+i\alpha}{2(\alpha-i)}\right]^{2}. \tag{12}\]
It also follows that the background profiles of the scalar variables read as (Auclair-Desrotour et al., 2017a):
\[p_{0}(x) =p_{0}(0)e^{-x},\ \ \ \ \rho_{0}(x)=\frac{p_{0}(0)}{gH(0)}e^{( \kappa-1)x},\] \[T_{0}(x) =\frac{gH(0)}{\mathcal{R}_{\rm s}}e^{-\kappa x}. \tag{13}\]
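These profiles can be checked symbolically against the hydrostatic balance written in the reduced altitude, \(dp_{0}/dx=-\rho_{0}gH\), together with the definitions \(H=p_{0}/(\rho_{0}g)\) and \(T_{0}=p_{0}/(\rho_{0}\mathcal{R}_{\rm s})\); a minimal sketch of that check:

```python
import sympy as sp

x, kappa, g, H0, p00, Rs = sp.symbols('x kappa g H_0 p_00 R_s', positive=True)

# Background profiles of Eq. (13)
p0   = p00 * sp.exp(-x)
rho0 = p00 / (g * H0) * sp.exp((kappa - 1) * x)

# Scale height and temperature implied by the ideal-gas/hydrostatic definitions
H  = p0 / (rho0 * g)
T0 = p0 / (rho0 * Rs)

print(sp.simplify(H - H0 * sp.exp(-kappa * x)))            # -> 0, i.e. H(x) = H(0) e^{-kappa x}
print(sp.simplify(T0 - g * H0 / Rs * sp.exp(-kappa * x)))  # -> 0, third relation of Eq. (13)
print(sp.simplify(sp.diff(p0, x) + rho0 * g * H))          # -> 0, hydrostatic balance dp0/dx = -rho0 g H
```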
We thus obtain for the heating profile (using Eqs. 8, 9, and 10)
\[J_{\rm s}=\delta F_{22}\frac{\alpha_{\rm A}g(b_{\rm J}+1)}{p_{0}(0)}\!\!\left[1 -\mu_{\rm gr}\zeta\frac{1+si}{1+(1+si)\zeta}\right]. \tag{14}\]
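The closed form of \(J_{\rm s}\) in Eq. (14) follows from inserting the heating profile of Eq. (8) and the background profiles of Eq. (13) into the column energy budget of Eq. (9), with \(\delta F_{\rm tot}\) given by Eq. (10); a sketch of this step, keeping \(s={\rm sign}(\sigma)\) symbolic:

```python
import sympy as sp

x, bJ, g, p00, H0, kappa = sp.symbols('x b_J g p_00 H_0 kappa', positive=True)
alphaA, dF22, mu, zeta = sp.symbols('alpha_A deltaF_22 mu_gr zeta', positive=True)
s = sp.symbols('s', real=True)
Js = sp.symbols('J_s')

J    = Js * sp.exp(-bJ * x)                              # heating profile, Eq. (8)
rho0 = p00 / (g * H0) * sp.exp((kappa - 1) * x)          # background density, Eq. (13)
H    = H0 * sp.exp(-kappa * x)                           # scale height, Eq. (13)

dFtot = dF22 * (1 - mu * zeta * (1 + s * sp.I) / (1 + (1 + s * sp.I) * zeta))   # Eq. (10)

# Energy budget of Eq. (9): integral_0^inf J rho0 H dx = alpha_A * dFtot
column_heating = sp.integrate(sp.simplify(J * rho0 * H), (x, 0, sp.oo))
Js_solution = sp.solve(sp.Eq(column_heating, alphaA * dFtot), Js)[0]

# Compare with the closed form of Eq. (14)
Js_expected = dF22 * alphaA * g * (bJ + 1) / p00 \
    * (1 - mu * zeta * (1 + s * sp.I) / (1 + (1 + s * sp.I) * zeta))
print(sp.simplify(Js_solution - Js_expected))            # -> 0
```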
As such, the wave equation (2) is rewritten as
\[\frac{d^{2}\Psi_{n}}{dx^{2}}+\hat{k}^{2}\Psi_{n}=\mathcal{A}_{n}e^{-\mathcal{B} x}, \tag{15}\]
where the complex functions \(\mathcal{A}_{n}\) and \(\mathcal{B}\) are defined as:
\[\mathcal{A}_{n}=\frac{\kappa\Lambda_{n}}{R_{\rm p}^{2}\sigma^{3} }\frac{\alpha^{2}+i\alpha}{\alpha^{2}+1}J_{\rm s}, \tag{16}\] \[\mathcal{B}=\frac{1}{2(\alpha^{2}+1)}\left[(2b_{\rm J}+1)( \alpha^{2}+1)-\kappa+i\kappa\alpha\right]. \tag{17}\]
The wave equation (15) admits the general solution
\[\Psi_{n}(x)=c_{1}e^{ikx}+c_{2}e^{-ikx}+\frac{\mathcal{A}_{n}}{\mathcal{B}^{2} +\hat{k}^{2}}e^{-\mathcal{B}x}. \tag{18}\]
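One can verify directly that the last term of Eq. (18) is a particular solution of Eq. (15); a short symbolic check:

```python
import sympy as sp

x = sp.symbols('x')
A, B, k = sp.symbols('A B k')   # stand-ins for the complex constants A_n, B and k_hat

Psi_p = A / (B**2 + k**2) * sp.exp(-B * x)   # particular term of Eq. (18)
residual = sp.diff(Psi_p, x, 2) + k**2 * Psi_p - A * sp.exp(-B * x)
print(sp.simplify(residual))                 # -> 0
```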
We consider the following two boundary conditions:
* First, the energy of tidal flows, \(\mathcal{W}\), should be bounded as \(x\rightarrow\infty\). In F, we derive the expression of the tidal energy following Wilkes (1949), and it scales as \(\mathcal{W}\propto|\Psi|^{2}|\Phi|^{2}\). Accordingly, the non-divergence of the flow condition is satisfied if one sets \(c_{2}=0\) and takes the proper sign of the wavenumber (Eq. 12), namely: \[\hat{k}=\frac{1}{2(\alpha^{2}+1)}\left[\kappa\alpha+i(1+\kappa+\alpha^{2}) \right].\] (19)
* The second condition is the natural wall condition imposed by the ground interface, which enforces \(\tilde{V}_{r;n}(x=0)=0\). We derive the expression of the profile of the vertical velocity in C, and by virtue of Eq.(C.6), this condition allows us to write \(c_{1}\) in the form: \[c_{1}=\frac{\mathcal{A}_{n}}{\mathcal{B}^{2}+\hat{k}^{2}}\times\frac{\mathcal{B}-\frac{1}{2}\left(1+\frac{i\kappa}{\alpha-i}\right)-\beta\Lambda_{n}+1}{i\hat{k}+\frac{1}{2}\left(1+\frac{i\kappa}{\alpha-i}\right)+\beta\Lambda_{n}-1}.\] (20)
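The sign choice in Eq. (19) above can also be checked symbolically: the expression is exactly the bracketed term of Eq. (12) written with a real denominator, and its imaginary part is strictly positive, which is what guarantees a bounded, decaying \(e^{i\hat{k}x}\); a minimal sketch:

```python
import sympy as sp

alpha, kappa = sp.symbols('alpha kappa', positive=True)

k_eq12 = (1 + kappa + sp.I * alpha) / (2 * (alpha - sp.I))                       # bracket of Eq. (12)
k_eq19 = (kappa * alpha + sp.I * (1 + kappa + alpha**2)) / (2 * (alpha**2 + 1))  # Eq. (19)

print(sp.simplify(sp.expand_complex(k_eq12 - k_eq19)))   # -> 0: Eq. (19) is the rationalized form
print(sp.simplify(sp.im(sp.expand_complex(k_eq19))))     # -> (alpha**2 + kappa + 1)/(2*(alpha**2 + 1)) > 0
```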
Under these boundary conditions, we are now fully geared to analytically compute the solution of the wave equation, \(\Psi_{n}(x)\) (or equivalently \(\tilde{G}_{n}(x)\)), but we are specifically interested in retrieving a closed form solution of the quadrupolar tidal torque. The latter takes the general form5 (G):
Footnote 5: We note that this form corresponds to the quadrupolar component of the torque about the spin axis, and it is only valid assuming a thin atmospheric layer under the hydrostatic approximation. In the case of a thick atmosphere, one should integrate the mass redistribution over the radial direction.
\[\mathcal{T}=\sqrt{\frac{6\pi}{5}}\frac{M_{\star}}{M_{\rm p}}\frac{R_{\rm p}^{ 6}}{a_{\rm p}^{3}}\Im\left\{\delta p_{\rm s}\right\}. \tag{21}\]
Here \(M_{\star}\) and \(M_{\rm p}\) designate the stellar and planetary masses respectively, and \(\Im\) refers to the imaginary part of a complex number, the latter in this case being the quadrupolar pressure anomaly at the surface \(\delta p_{\rm s}=\delta p_{2}^{2,\nu}(x=0)\). We further note that while this torque is computed for the atmosphere, it does act on the whole planet since the atmosphere is a thin layer that features no differential rotation with respect to the rest of the planet.
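For the Earth, Eq. (21) maps surface pressure anomalies to torques through a single multiplicative constant; the sketch below evaluates it with standard values of the solar and terrestrial masses, the Earth's radius and the astronomical unit (none of which are specified here), and recovers, to within rounding, the torque values quoted later in Section 4.1.

```python
import numpy as np

# Eq. (21): T = sqrt(6*pi/5) * (M_star/M_p) * R_p**6 / a_p**3 * Im{dp_s}
M_sun, M_earth = 1.989e30, 5.972e24     # kg (standard values, assumed)
R_p, a_p = 6.371e6, 1.496e11            # m

def torque(dp_im):
    """Quadrupolar thermotidal torque (N m) for an imaginary surface-pressure anomaly dp_im (Pa)."""
    return np.sqrt(6.0 * np.pi / 5.0) * (M_sun / M_earth) * R_p**6 / a_p**3 * dp_im

for dp in (224.0, 880.0, 2275.0):       # present value and the two thresholds discussed in Section 4.1
    print(f"Im(dp_s) = {dp:6.0f} Pa  ->  torque = {torque(dp):.2e} N m")
```

The first two lines give approximately \(2.9\times 10^{15}\) and \(1.1\times 10^{16}\) N m, matching the present and Precambrian-threshold torques quoted in Section 4.1.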
Taking the solution \(\Psi(0)\) of Eq. (18) (with \(c_{2}=0\) and \(c_{1}\) defined in Eq. 20), we retrieve \(\delta p_{\rm s}\) from Eq. (7). After straightforward, but rather tedious manipulations, we extract the imaginary part of the pressure anomaly and write it in the simplified form:
\[\Im\{\delta p_{\rm s}\}= \alpha_{\rm A}\delta F_{22}\frac{\kappa g\Lambda_{2}}{R_{\rm p}^{ 2}\sigma^{3}}\frac{(\mathcal{X}\alpha+\mathcal{Y})\alpha}{(1+2\zeta+2\zeta^{2 })}\] \[\times\underbrace{\left[(\kappa-\beta\Lambda_{2}+1)^{2}+\alpha^{2 }(\beta\Lambda_{2}-1)^{2}\right]^{-1}}_{\text{position of the Lamb resonance}}, \tag{22}\]
where we have defined the complex functions \(\mathcal{X}\) and \(\mathcal{Y}\) as \[\mathcal{X} =(\beta\Lambda_{2}-1)\left[2\zeta^{2}(1-\mu_{\rm gr})+\zeta(2-\mu_{ \rm gr})+1\right],\] \[\mathcal{Y} =-s\mu_{\rm gr}\zeta(\kappa-\beta\Lambda_{2}+1).\] (23) We note that we provide the full complex transfer function of the surface pressure anomaly, along with further analysis on its functional form in Appendix H. Before embarking on any results, we pause here for a few remarks on the provided closed form solution of the torque.
* The parameter \(\alpha_{\rm A}\), defined earlier (Eq.9) as the fraction of radiation actually absorbed by the atmosphere, can evidently be correlated with the typical transmission function of the atmosphere and therefore its optical depth. Presuming that thermotidal heating on Earth is driven by ozone and water vapor, \(\alpha_{\rm A}\) can then characterize the atmospheric opacity parameter in the visible. Explicitly showing this dependence would take us too far afield here, though we compute and infer estimates of \(\alpha_{\rm A}\) in Section 4.1 and Appendix I.
* The quadrupolar component of the equilibrium stellar flux, entering through a fraction of \(F_{\star}\) (E), is directly proportional to the stellar luminosity \(L_{\star}\). Standard models suggest that the Sun's luminosity was around 80% of its present value \(\sim\)3 Ga (Gough, 1981). Such luminosity evolution of Sun-like stars can be directly accommodated in the model if one were to study the evolution of the tidal torque with time.
* As we mentioned earlier, upon separating the horizontal and vertical structure of tidal dynamics, the only remaining coupling factor between the two structures is the eigenvalue of horizontal flows, \(\Lambda_{n}\), in our case reducing to the dominant fundamental mode \(\Lambda_{2}\). Noting that we have dropped the superscripts, we remind the reader that for the semidiurnal (\(m=2\)) response, \(\Lambda_{2}=\Lambda_{2}^{2,\nu}\), thus \(\Lambda\) is frequency-dependent in the general case. The Earth however, over its lifetime, lives in the asymptotic regime of \(\nu\approx 1\) since \(2\Omega\gg n_{\star}\), thus it is safe to assume that \(\Lambda_{2}\) is invariant over the geological history with a value of 11.159 that we compute using the spectral method of Wang et al. (2016).
* Of significance to us in the Precambrian rotational equilibrium hypothesis is the tidal frequency, and consequently the LOD, at which the Lamb resonance occurs. It is evident from the closed form solution (22) that the position of the resonance is controlled by the highlighted term. Had it not been for the introduced radiative losses, entering here through \(\alpha\), this term would have encountered a singularity at the spectral position of the resonance, i.e. for \(\beta\Lambda_{2}=1\). Here, however, the amplitude of the tidal peak is finite, and its position is a function of the planetary radius, gravitational acceleration, average surface temperature, eigenvalue of the fundamental Hough mode of horizontal flows, and the Newtonian cooling frequency. We detail further on this dependence in Section 4.2.
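A minimal numerical sketch of Eqs. (22)-(23) for the fiducial system of Section 3.1 is given below. Several quantities are not fully specified at this point and are therefore assumptions of the sketch: we take \(\kappa=2/7\) (dry diatomic gas), \(\alpha_{\rm A}=0.17\) and \(\sigma_{0}=10^{-6}\,{\rm s^{-1}}\) as representative values, keep \(\Lambda_{2}=11.159\) despite its frequency dependence, and infer \(\beta=gH_{0}/(R_{\rm p}\sigma)^{2}\) from the resonance condition \(\beta\Lambda_{2}=1\) combined with Eq. (24). The sketch is meant to illustrate the two opposite-signed peaks around the Lamb resonance, not to reproduce Figure 2 exactly.

```python
import numpy as np

# Fiducial Venus-like system of Section 3.1
G, M_sun, L_sun, au, R_earth = 6.674e-11, 1.989e30, 3.828e26, 1.496e11, 6.371e6
R_p, g, p_s, H0, a_p = 0.95 * R_earth, 8.87, 1.0e6, 1.0e4, 0.73 * au
n_star = np.sqrt(G * M_sun / a_p**3)                       # stellar mean motion
dF22 = np.sqrt(30.0 * np.pi) / 16.0 * L_sun / (4.0 * np.pi * a_p**2)

# Assumed parameters (see the caveats in the text above)
kappa, Lambda2, sigma0, alpha_A = 2.0 / 7.0, 11.159, 1.0e-6, 0.17

def im_dp_surface(omega, zeta=0.0, mu_gr=0.0):
    """Imaginary part of the surface pressure anomaly, Eqs. (22)-(23)."""
    sigma = 2.0 * n_star * omega
    s = np.sign(sigma)
    alpha = sigma / sigma0
    beta = g * H0 / (R_p * sigma)**2       # inferred, not quoted: gives beta*Lambda2 = 1 at the Eq. (24) frequency
    bL = beta * Lambda2
    X = (bL - 1.0) * (2.0 * zeta**2 * (1.0 - mu_gr) + zeta * (2.0 - mu_gr) + 1.0)
    Y = -s * mu_gr * zeta * (kappa - bL + 1.0)
    pref = alpha_A * dF22 * kappa * g * Lambda2 / (R_p**2 * sigma**3)
    lorentz = 1.0 / ((kappa - bL + 1.0)**2 + alpha**2 * (bL - 1.0)**2)
    return pref * (X * alpha + Y) * alpha / (1.0 + 2.0 * zeta + 2.0 * zeta**2) * lorentz

omega = np.linspace(150.0, 400.0, 40001)                   # high-frequency regime around the resonance
dp = im_dp_surface(omega) / p_s                            # normalized as in Figure 2
print(f"positive peak near omega = {omega[np.argmax(dp)]:.1f}, Im(dp)/p_s = {dp.max():.2e}")
print(f"negative peak near omega = {omega[np.argmin(dp)]:.1f}, Im(dp)/p_s = {dp.min():.2e}")
```

Non-zero values of \(\zeta\) and \(\mu_{\rm gr}\) activate the boundary-layer terms and produce the asymmetric behaviour discussed in Section 3.2.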
In Fig. 2, we plot the spectrum of the tidal
Figure 2: The spectrum of semi-diurnal atmospheric thermal tides. Plotted is the imaginary part of the normalized pressure anomaly (\(\delta\tilde{p}=\delta p/p_{\rm s}\); Eq.22) as a function of the normalized forcing frequency \(\omega=(\Omega-n_{\star})/n_{\star}=\sigma/2n_{\star}\), where \(n_{\star}\) is the mean motion of the stellar perturber. The planetary-stellar parameters are those of the fiducial planetary system defined in Section 3.1.
response for a fiducial system in terms of the normalized surface pressure anomaly over a wide range of tidal frequencies covering the low and high frequency regimes. The system describes a Venus-like dry planet (\(M_{\rm p}=0.815M_{\oplus},\,R_{\rm p}=0.95R_{\oplus},\,a_{\rm p}=0.73\,\)au, \(g=8.87\,\)m s\({}^{-2}\)), with a 10 bar atmosphere and a scale height at the surface \(H_{0}=10\) km, thermally forced by a solar-like star (\(M_{\star}=1M_{\odot}\), \(L_{\star}=1L_{\odot}\)). We further ignore here the thermal inertia in the ground and the atmosphere by taking \(\sigma_{\rm bl}\to\infty\), or \(\zeta\to 0\), thus assuming a synchronous response of the ground with the thermal excitation.
Key tidal response features are recovered in this spectrum: First, we obtain a tidal peak near synchronization (\(\omega=0\)) that generates a positive torque for \(\sigma>0\) and a negative torque for \(\sigma<0\), driving the planet in both cases away from its destined spin-orbit synchronization due to the effect of solid tides (e.g., Gold and Soter, 1969; Correia and Laskar, 2001; Leconte et al., 2015). The peak has often been modelled by a Maxwellian functional form, though this form does not always capture GCM-generated spectra when varying the planetary setup (e.g., Auclair-Desrotour et al., 2019). Second, we recover the Lamb resonance in the high frequency regime. The resonance is characterized here by two symmetric peaks of opposite signs. Thus upon passage through the resonance, the thermotidal torque shifts from being a rotational brake to being a rotational pump. In this work, we are more interested in the high frequency regime, thus we delegate further discussion and analysis on the low frequency tidal response to a forthcoming work, and we focus next on the Lamb resonance.
### The longwave heating limit: Breaking the symmetry of the Lamb resonance
We now allow for variations of the characteristic time scale associated with the boundary layer diffusive processes, \(\tau_{\rm bl}\) (Eq.D.12), or equivalently \(\sigma_{\rm bl}\). Variations in \(\sigma_{\rm bl}\) are physically driven by variations in the thermal conductive capacities of the ground and the atmosphere, and are significant when infrared ground emission and boundary layer turbulent processes contribute significantly to the thermotidal heating.
In such a case, the value of \(\sigma_{\rm bl}\) plays a significant role in the tidal response of the planet. Namely, the ratio \(\sigma/\sigma_{\rm bl}\) determines the angular delay of the ground temperature variations. For our study of the global tidal response, this frequency ratio determines whether the ground response is synchronous with the thermal excitation (when \(\sigma\ll\sigma_{\rm bl}\)), in which case thermal inertias vanish, the ground and the surface layer do not store energy, and the ground response is instantaneous; or whether, owing to the combined thermal inertias, the energy reservoir of the ground is large and the ground response lags the excitation, imposing another angular shift on the generated tidal bulge (when \(\sigma\gtrsim\sigma_{\rm bl}\)). We now extract from the analytical model the signature of \(\sigma_{\rm bl}\) in order to explain the Lamb resonance asymmetry, as opposed to its symmetry in Figure 2, observed in GCM simulations of an atmosphere forced by a longwave flux (Auclair-Desrotour et al., 2019).
In Figure 3, we plot the tidal spectrum around the Lamb resonance, in terms of the normalized pressure anomaly at the surface, for different values of \(\sigma_{\rm bl}\). For \(\sigma_{\rm bl}=5\times 10^{-2}\,s^{-1}\), the almost instantaneous response of the ground leaves us with two pressure peaks that are symmetric around the resonant frequency. Decreasing \(\sigma_{\rm bl}\) and allowing for a delayed ground response, the two pressure peaks of the resonance are attenuated in amplitude, but not with the same magnitude; namely, the amplitude damping is stronger against the positive pressure peak. Decreasing \(\sigma_{\rm bl}\) to \(10^{-5}\,s^{-1}\) in the panel in the middle, the positive pressure peak completely diminishes, leaving only the negative counterpart. Decreasing \(\sigma_{\rm bl}\) further, both peaks are amplified, thus the positive peak emerges again. However, the spectral position of the peaks is now opposite to what it was in the limit of an instantaneous ground response.
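For orientation, the \(\sigma_{\rm bl}\) values explored here translate into the dimensionless boundary-layer parameter \(\zeta=\sqrt{|\sigma|/\sigma_{\rm bl}}\) defined in Section 2.2; a short sketch evaluated at the semidiurnal frequency of the Lamb resonance of the fiducial system (the first two \(\sigma_{\rm bl}\) values are those quoted above; the others simply extend the trend):

```python
import numpy as np

G, M_sun, au = 6.674e-11, 1.989e30, 1.496e11
n_star = np.sqrt(G * M_sun / (0.73 * au)**3)       # mean motion of the fiducial system
sigma = 2.0 * n_star * 262.6                       # semidiurnal frequency at the Lamb resonance

for sigma_bl in (5e-2, 1e-5, 1e-7, 1e-9):
    zeta = np.sqrt(abs(sigma) / sigma_bl)
    print(f"sigma_bl = {sigma_bl:7.0e} s^-1  ->  zeta = {zeta:8.3g}")
```

Values \(\zeta\ll 1\) correspond to the near-instantaneous ground response of the first panel, while \(\zeta\gtrsim 1\) marks the delayed regime in which the positive pressure peak is attenuated.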
Given the direct proportionality between the tidal torque and the surface pressure anomaly (Eq.21), the effect of thermal inertia thus contributes to the rotational dynamics when encountering the Lamb resonance. If a planet is decelerating and is losing rotational angular momentum,
\(L_{\Omega}\), due to solid or oceanic gravitational tides, \(\omega\) decreases, and the planet encounters the resonance from the right. In the first panel of Figure 3, the thermotidal torque in this regime is also negative, thus it complements the effect of gravitational tides. When the resonance is encountered, the thermotidal torque shifts its sign to counteract the effect of gravitational tides, with an amplified effect in the vicinity of the resonance. However, with the introduction of thermal inertia into the linear theory of tides, the \(L_{\Omega}\)-pumping part of the atmospheric torque is attenuated, and for some values of \(\sigma_{\rm bl}\), it completely disappears. This modification of the analytical theory allows us to explain the asymmetry of the Lamb resonance depicted in the 3D GCM simulations of Auclair-Desrotour et al. (2019). In J, we show that we are able to recover from our model the essential features of the tidal spectrum computed in the mentioned simulations.
To understand the signature of the surface response further, in Figure 4, we generate snapshots of the tidal pressure variation in the equatorial plane, seen from a top view. The snapshots thus show the thermally induced atmospheric mass redistribution and the resulting tidal bulge, if any. To generate these plots, we compute the vertical profile of the pressure anomaly from Eq. (7), and augment it with the latitudinal and longitudinal dependencies from Eqs. (A.20-A.21). As the massive troposphere dominates the tidal mass redistribution, we use the mass-based vertical coordinate \(\varsigma=p/p_{\rm s}\) (i.e. \(x=-\ln\varsigma\), and \(\varsigma\) ranges between 1 at the surface and 0 in the uppermost layer).
In Figure 4, we show the tidal bulge as the planet passes through the Lamb resonance, for two values of \(\sigma_{\rm bl}\) that correspond to the limits of synchronous atmospheric absorption (top row), and a delayed thermal response in the ground (bottom row). First, the accumulation of mass and its culmination in a tidal bulge is indicated by the color red, with varying intensity depicting varying pressure amplitudes. In the case of synchronous atmospheric absorption, for \(\omega=253\), i.e. before encountering the resonance, the bulge leads the substellar point and acts to accelerate the planet's rotation. Increasing \(\omega\) and encountering the resonance, the bulge reorients smoothly towards lagging the substellar point thus decelerating the planet's rotation. This behavior is consistent with the established response spectrum in the first panel of Figure 3, and is relevant to the Earth's case, assuming that thermotidal heating
Figure 3: The (a-)symmetry of the Lamb resonance. Similar to Figure 2, plotted is the imaginary part of the normalized pressure anomaly (Eq.22), associated with the semidiurnal tide, as a function of the normalized forcing frequency \(\omega=(\Omega-n_{\star})/n_{\star}=\sigma/2n_{\star}\), for the same planetary-stellar parameters. We focus here on the high frequency regime around the Lamb resonance. Different panels correspond to different values of \(\sigma_{\rm bl}\), or different thermal inertias in the ground and the atmosphere. Allowing for thermal inertia results in a delayed ground response, of which the signature is clear in inducing an asymmetry in the spectral behavior around the resonance.
is predominantly driven by direct synchronous absorption. In the bottom row, the delayed response of the ground imposes another shift on the bulge: for the prescribed value of \(\sigma_{\rm bl}\), the passage through the resonance only amplifies the response, but the bulge barely leads the tidal vector, leaving us with a tidal torque that mainly complements the gravitational counterpart, as seen in the fourth panel of Figure 3.
From the preceding analysis, it appears quite natural that the effect of thermal inertias in the ground and the boundary layer should be accounted for when studying planetary rotational dynamics using the linear theory, especially under longwave forcing. The results also make it tempting to revisit these effects in the case of the dominant shortwave forcing on Earth, as they have often been ignored in the theory (e.g., Chapman and Lindzen, 1970) on the basis of the small-amplitude non-migrating tidal components they produce (e.g., Schindelegger and Ray, 2014).
## 4 A fixed Precambrian LOD for the Earth?
So where does all this leave us with the Precambrian rotational equilibrium hypothesis? The occurrence of this scenario hinges on several factors, the most significant of which is whether the Lamb resonance amplifies the thermotidal response while the opposing gravitational tide is attenuated. Consequently, to investigate the scenario, the two essential quantities that need to be well constrained are the amplitude of the thermotidal torque when the resonance is encountered, and the geological epoch of its occurrence. Having provided a closed form solution for the tidal torque, it is straightforward for us to investigate these elements.
### Was the resonance resonant enough? A parametric study
Constraints on the amplitude of the gravitational tide during the Precambrian are model
Figure 4: The thermally induced tidal bulge revealed. Shown are polar snapshots of the radial and longitudinal variations of the tidal pressure anomaly \(\delta p(x)\) in the equatorial plane. The snapshots are shown from a top view, and the troposphere is puffed in size by virtue of the used mass-based vertical coordinate \(\varsigma=p/p_{\rm s}\). The longitudinal axes are shown in increments of \(30^{\circ}\) with \(0^{\circ}\) at the substellar point, while the radial axes are in increments of \(0.25\). The profile of the pressure perturbation is also normalized by the exponentially decaying pressure background profile. Snapshots are taken at different spectral positions that cover the passage through the Lamb resonance, which specifically occurs at \(\omega=262.6\). In the top row, the response describes the limit of a planet with a synchronous atmospheric absorption, mimicking the Earth’s direct absorption by ozone and water vapor, and it shows the continuous movement of the bulge, function of \(\omega\), from lagging to leading the substellar point. In contrast, in the bottom row, and for the prescribed value of \(\sigma_{\rm bl}\), the delayed response of the ground forces the bulge to always lag the substellar point, thus acting to decelerate the planetary rotation.
dependent. The study in Zahnle and Walker (1987), and later in Bartlett and Stevenson (2016), relied on rotational deceleration estimates fitted to match the distribution of geological proxies available at the time (e.g., Lambeck, 1980). Specifically, the estimate of the Precambrian gravitational torque relied on the tidal rhythmite record preserved in the Weeli-Wolli Banded Iron formation (Walker and Zahnle, 1986). The record is fraught with multiple interpretations featuring different inferred values for the LOD (Williams, 1990, 2000), altogether different from a recent cyclostratigraphic inference that roughly has the same age (Lantink et al., 2022, see Figure 1 for the geological data points \(\sim\)2.45 Ga). Nevertheless, the claim of an attenuated Precambrian torque still holds, as the larger interval of the Precambrian is associated with a "dormant" gravitational torque phase, lacking any significant amplification in the oceanic tidal response, in contrast with the present state where the oceanic response lives in the vicinity of a spectral resonance (e.g., Farhat et al., 2022).
That said, we explore the atmospheric parameter space of our analytical model to check the potential outcomes of the torques' competition. Given that the dominant thermotidal forcing on Earth is the direct absorption of the incident flux, we consider the synchronous limit of \(\zeta\to 0\), whereby the Lamb resonance is symmetric (first panel of Figure 3; top row of Figure 4). In Figure 5, on a grid of values of our free parameters \(\sigma_{0}\) and \(\alpha_{\rm A}\), we contour the surface of the maximum value of the imaginary part of the positive pressure anomaly that is attained when the Lamb resonance is encountered. The two parameters have a similar signature on the tidal response. Moving vertically upwards and increasing the rate of Newtonian cooling typically attenuates the amplitude of the peak. For very high cooling rates corresponding to \(\sigma_{0}\gtrsim 10^{-4}\,s^{-1}\), we severely suppress the amplified pressure response around the resonance. Conversely, for values of \(\sigma_{0}\lesssim 10^{-6.5}\,s^{-1}\), we approach the adiabatic limit of the tidal model where the Lamb resonance becomes a singularity. A similar signature is associated with increasing the opacity parameter of the atmosphere.
On the contour surface, we highlight with the solid black isoline the pressure anomaly value required to generate a thermotidal torque of equal magnitude to the Precambrian gravitational tidal torque. The latter (\(\sim\)\(1.13\times 10^{16}\) N m) is roughly a quarter of the present gravitational torque (\(\sim\)\(4.51\times 10^{16}\) N m) (e.g., Zahnle and Walker, 1987; Farhat et al., 2022), thus requiring, via Eq.(21), \(\Im\{\delta p_{\rm s}\}\) on the order of 880 Pa6. This isoline bounds from below a
Figure 5: A parametric study of the tidal response. Plotted is a contoured surface of the amplitude of the imaginary part of the positive semidiurnal pressure anomaly at the Lamb resonance, over a grid of values of our free parameters \(\sigma_{0}\) and \(\alpha_{\rm A}\). The solid black isoline marks the level curve of \(\Im\{\delta p_{\rm s}\}=880\) Pa, and defines from below a region in \((\alpha_{\rm A},\sigma_{0})\)-space where the thermotidal response is sufficient to cancel the gravitational counterpart in the Precambrian. Analogously, the dashed isoline defines the threshold (\(\Im\{\delta p_{\rm s}\}=2275\) Pa) needed in the early Mesozoic, 250 Ma. The horizontal shaded area corresponds to typical values of the radiative cooling rate as described in the main text. The other shaded area defines the region of parameter space that yields the presently observed semi-diurnal tidal bulge. The gray area on the left covers the parametric region where the resonance features a lower pressure amplitude than the present.
parameter space where the thermal tide is sufficiently amplified upon the resonance encounter. It is noteworthy that this Precambrian value of the torque is the minimum throughout the Earth's history. We mark by the dashed isoline, for comparison, the threshold needed if the Lamb resonance is encountered in the Mesozoic (\(\Im\{\delta p_{\rm s}\}=2275\) Pa). The solid gray region on the left side of the parameter space is bounded by the isoline corresponding to the present value \(\Im\{\delta p_{\rm s}\}=224\) Pa (Schindelegger and Ray, 2014). Thus it defines to the left an area where the present thermal tide is stronger than it would be around the resonance.
We take this parametric study one step further to study whether typical values of the parameters \(\sigma_{0}\) and \(\alpha_{\rm A}\) can place the Earth's atmosphere in the identified regions. Stringent constraints on \(\sigma_{0}\) are hard to obtain for the Earth since \(\sigma_{0}\) is an effective parameter that in reality is dependent on altitude. Furthermore, in the linear theory of tides, we are forced to ignore the layer-to-layer radiative transfer and assume a gray body atmospheric radiation directly into space. However, radiative transfer can be consistently accommodated in numerical GCMs using the method of correlated k-distributions (e.g., Lacis and Oinas, 1991) as performed in Leconte et al. (2015) and in Auclair-Desrotour et al. (2019), both studies using the LMD GCM (Hourdin et al., 2006). In fact, Leconte et al. (2015) fitted their numerically obtained atmospheric torques to effective values of \(\sigma_{0}\) for various atmospheric parameters (see Table 1 of Leconte et al., 2015). The closest of these settings to the Earth yields a radiative cooling timescale \(\tau_{\rm rad}=32\) days. In contrast, Lindzen and Chapman (1968) and later Lindzen and Blake (1972) estimated the timescale to be on the order of 1 day. We presume that these estimates should encompass the possible effective values for the Earth's atmosphere, and we highlight with the horizontal shaded area the range of these values7.
Footnote 7: We note that in Leconte et al. (2015), the tidal frequency under study is that of the diurnal component, thus we multiply their \(\omega_{0}\) value by 2; i.e. \(\sigma_{0}=2\omega_{0}\).
Another constraint on the free parameters emerges from present in situ barometric observations of the semidiurnal (\(S_{2}\)) tidal response. We use the analysis of compilations of measurements performed in Haurwitz and Cowley (1973); Dai and Wang (1999); Covey et al. (2014) and Schindelegger and Ray (2014), which constrain the amplitude of the semi-diurnal surface pressure oscillation to within \(107-150\) Pa, occurring around 0945 LT. The narrow shaded area defines the region of parameter space that can explain these observables using the present semi-diurnal frequency, placing the opacity parameter in the region \(\alpha_{\rm A}\)\(\sim\)14%. In I, we compute estimates of the present value of \(\alpha_{\rm A}\) by studying distributions of heating rates that are obtained either by direct measurements of the Earth's atmosphere (Chapman and Lindzen, 1970), or using GCM simulations (Vichare and Rajaram, 2013). Our analysis of the data suggests that the efficiency parameter is around \(\alpha_{\rm A}\)\(\sim\)17\(-\)18%, which is consistent with the \(S_{2}\) constraint we obtain. Finally, it is also noteworthy how the plotted \(S_{2}\) constraint is insensitive to variations in \(\sigma_{0}\) over a wide interval, which prohibits the determination of the present value of \(\sigma_{0}\) using this constraint.
Evidently, the overlap of the parametric constraints lives outside the region where the thermotidal response is sufficient for the rotational equilibrium condition. The present thermotidal torque (\(2.89\times 10^{15}\) N m) needs to be amplified by a factor of 3.9 to reach the absolute minimum of the opposing gravitational torque8 in the Precambrian, and by a factor of 12.3 to reach the Mesozoic value. Our parametric exploration precludes these levels of amplification. It is important to also note that larger amplification factors would be required if one were focused on the modulus of the pressure oscillation, rather than its imaginary part. This derives from Figure H.8 where we show that the amplification in the imaginary part is almost half that of the modulus of the surface pressure oscillation.
Footnote 8: subject to the uncertainty of the present measurement of the semi-diurnal surface pressure oscillation discussed earlier
One can argue, however, that the constraints used derive from present measurements, and the likelihood of the scenario still hinges on possible atmospheric variations as we go backwards in time. Nonetheless, the radiative cooling timescale exhibits a strong dependence on the equilibrium temperature of the atmosphere (\(\sigma_{0}\propto T_{0}^{3}\); Auclair-Desrotour et al., 2017, Eq. 17). As such, a warmer planet in the past would yield a shorter cooling timescale, and consequently, more efficient damping of the resonant amplitude (see K). On the other hand, atmospheric compositional variations can change the opacity parameter of the atmosphere in the visible and the infrared. An increase of the opacity in the visible to \(\alpha_{\rm A}=24\%\) can indeed place the response beyond the Precambrian threshold for some values of \(\sigma_{0}\). An increase to four times the present value of \(\alpha_{\rm A}\) is required to cross the threshold in the Mesozoic. These increases, however, can be precluded, based on the fact that the Archean lacked a stratospheric ozone layer (e.g., Catling and Zahnle, 2020). In contrast, an increase in atmospheric opacity in the infrared, which accompanies the abundance of Precambrian greenhouse gases, delivers the opposite effect by attenuating the resonant tidal response, as we elaborate in K. Furthermore, the latter increase would also trigger the contribution of asynchronous tidal heating, which further attenuates the amplitude of the positive peak as we show in Section 3.2. Thus, with these analyses, it is unlikely that the resonance could have amplified the thermotidal response beyond the required threshold. This conclusion can be further regarded as conservative, since the employed linear model tends to overestimate the resonant amplification of the tidal response. This derives from the fact that, in the quasi-adiabatic regime, the model ignores the associated non-linearities of dissipative mechanisms. The remaining question is therefore: when did the Lamb resonance actually occur?
### The spectral position of the Lamb resonance
The spectral position of the Lamb resonance, or equivalently, the geological time of its occurrence, is identified in the analytical model via the denominator highlighted in Eq.(22). The latter is a function of \(\sigma\) and is dependent on the planetary radius, gravitational acceleration, eigenvalue of the fundamental Hough mode, the radiative cooling frequency, and the equilibrium temperature at the surface \(T_{\rm s}\), and is independent of \(\sigma_{\rm bl}\). Thus for the Earth, the resonance position is merely dependent on the equilibrium temperature at the surface and the radiative cooling frequency.
In Figure 6, we plot the dependence of the spectral position of the resonance, in terms of LOD, on \(T_{\rm s}\). The apparent single curve is actually a bundle of curves with different values of \(\sigma_{0}\), but the effect of the latter is unnoticeable (if one varies \(\sigma_{0}\) by two orders of magnitude, the resonant rotational period varies by a few minutes). As such, the resonant frequency is predominantly controlled by \(T_{\rm s}\), which allows us to take the adiabatic limit of Eq.(22), and straightforwardly derive the tidal frequency that minimizes the denominator. In terms of the rotational period, the position of the
Figure 6: The dependence of the resonant rotational period on the mean surface temperature. By virtue of Eq.(24), the LOD at which the Lamb resonance occurs scales as the inverse square root of the mean surface temperature. The gray shaded area highlights 95% confidence intervals for the past temperature evolution according to the carbon cycle model of Krissansen-Totton et al. (2018). The identified geological eras correspond to the LOD evolution model of Farhat et al. (2022). The overlap between the modelled temperature evolution and the black curve places the resonance occurrence in the early Mesozoic.
resonance then reads:
\[\text{LOD}_{\text{res}}=\frac{4\pi R_{\text{p}}}{\sqrt{\mathcal{R}_{\text{s}} \Lambda_{n}T_{\text{s}}}+2R_{\text{p}}n_{\star}}. \tag{24}\]
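Equation (24) is straightforward to evaluate; a minimal sketch for the Earth, assuming a specific gas constant of dry air \(\mathcal{R}_{\rm s}\approx 287\) J kg\({}^{-1}\) K\({}^{-1}\) (not quoted here) and the eigenvalue \(\Lambda_{2}=11.159\) given above:

```python
import numpy as np

R_p = 6.371e6                                  # m
Rs = 287.0                                     # J kg^-1 K^-1, dry air (assumed)
Lambda2 = 11.159                               # fundamental Hough eigenvalue quoted in the text
n_star = 2.0 * np.pi / (365.25 * 86400.0)      # Earth's mean motion

def lod_resonance(T_s):
    """Resonant length of day in hours, Eq. (24)."""
    return 4.0 * np.pi * R_p / (np.sqrt(Rs * Lambda2 * T_s) + 2.0 * R_p * n_star) / 3600.0

for T_s in (273.0, 288.0, 300.0, 343.0):
    print(f"T_s = {T_s:5.0f} K  ->  LOD_res = {lod_resonance(T_s):5.2f} hr")
```

With \(T_{\rm s}\) near the present global mean this gives roughly 23 hr, close to the 22.8 hr quoted below, and an increase of \(T_{\rm s}\) by some 55 K is indeed needed to bring the resonant period near 21 hr.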
The resonant rotational period thus scales as the inverse square root of the surface equilibrium temperature. However, the evolution of the latter for the early Earth is widely debated. For instance, marine oxygen isotopes have been interpreted to indicate Archean ocean temperatures around \(60-80^{\circ}\text{C}\)(e.g., Knauth, 2005; Robert and Chaussidon, 2006). This interpretation is in contrast with geochemical analysis using phosphates (e.g., Blake et al., 2010), geological evidence of Archean glacial deposits (e.g., de Wit and Furnes, 2016), geological carbon cycle models (e.g., Sleep and Zahnle, 2001; Krissansen-Totton et al., 2018), numerical results of 3D GCMs (e.g., Charnay et al., 2017), and the fact that solar luminosity was 10-25% lower during the Precambrian (e.g., Charnay et al., 2020), altogether predicting a temperate climate and moderate temperatures throughout the Earth's history.
We highlight with the gray shading on top of the curve the modelled mean surface temperature variations adopted from Krissansen-Totton et al. (2018). As the latter temperature evolution is established in the time domain, we use the LOD evolution in Farhat et al. (2022) to map from time-dependence to LOD-dependence, and we further identify the corresponding geological eras of the LOD evolution with the color shadings. Given the present day equilibrium surface temperature, the resonance occurs at LOD = 22.8 hr. This value is in agreement with the \(11.38\pm 0.16\) hr semi-diurnal period obtained by analyzing the spectrum of normal modes using pressure data on global scales (see Table 1 of Sakazaki and Hamilton, 2020, first symmetric gravity mode of wavenumber \(k=-2\)). In L, we compute the resonant rotational period assuming an isothermal profile of the atmosphere, and we show that it is roughly one hour less than that in the neutrally stratified limit, placing it closer to the 21.3 hr estimate of Zahnle and Walker (1987) and Bartlett and Stevenson (2016). We emphasize here, however, that the resonant period does not exactly mark the period at which the thermotidal torque is maximum. The latter occurs at the peaks surrounding the resonance (see Figures 2 and H.8), the difference between the two being dependent on the radiative cooling frequency.
Taking the LOD evolution model of Farhat et al. (2022) at face value, the temperature variations predicted in Krissansen-Totton et al. (2018) locate the resonance encounter in the Triassic, and not in the Precambrian. In fact, for the resonance to be encountered in the Precambrian, even in the latest eras of it, the resonant period should move to less than \(\sim\)21 hr, but this requires an increase in the equilibrium temperature of at least 55\({}^{\circ}\)C, which is inconsistent with the studies mentioned above. Such an increase in temperature would also increase \(\sigma_{0}\) by almost 19% (as we discuss in the previous section, \(\sigma_{0}\propto T_{0}^{3}\); see also K), reducing the radiative cooling timescale and prompting more efficient damping of the tidal amplitude at the resonance. Moreover, such an increase in temperature would most probably accompany increased greenhouse effects in the past, which in turn would increase the atmospheric absorption and thermotidal heating in the infrared. The latter would then place the Earth's atmosphere in the regime of asynchronous thermotidal heating studied in Section 3.2, whereby the accelerative peak of the torque is further attenuated.
## 5 Summary and Outlook
We were drawn to the problem of atmospheric thermal tides by the hypothesized scenario of a constant length of day on Earth during the Precambrian. Our motivation in investigating the scenario lies in its significant implications for paleoclimatic evolution, and the evident mismatch between LOD geological proxies and the predicted LOD evolution if this rotational equilibrium is surmised. The scenario hinges on the occurrence of a Lamb resonance in the atmosphere whereby an amplified thermotidal torque would cancel the opposing torque generated by solid and oceanic gravitational tides. Naturally, the atmospheric tidal torque comes in two flavors: it can either
pump or deplete the rotational angular momentum budget of the planet, depending on the orientation of the generated tidal bulge.
With this rotational equilibrium scenario in mind, we have developed a novel analytical model that describes the tidal response of thermally forced atmospheres on rocky planets. The model derivation is based on the secure ground of the first principles of linear atmospheric dynamics, studied under classical approximations that are commonly drawn in earlier analytical works and in more recent numerical frameworks. The distinct feature that we imposed in this model is that of neutral atmospheric stratification, which presents a more realistic description of the Earth's troposphere than the isothermal profile imposed in earlier analytical studies. In this limit, we derive from the model a closed form solution of the tidal torque that can be efficiently used to study the evolution of planetary rotational dynamics. We accommodate into the model dissipative thermal radiation via linear Newtonian cooling, and turbulent and diffusive processes related to thermal inertia budgets in the boundary layer and the ground. As such, the model can be used to study a planetary thermotidal response when heated either by direct synchronous absorption of the incident stellar flux, or by a delayed infrared radiation from the ground.
We probed the spectral behavior of the tidal torque using this developed model in the two aforementioned limits. In the limit of longwave heating flux, the inherently delayed thermal response in the planetary boundary layer maneuvers the tidal bulge in such a way that, for typical values of thermal inertia in the ground and atmosphere, the accelerating effect of the tidal torque at the Lamb resonance is attenuated, and possibly annihilated. In the case of the Earth, where we apply the opposite limit of shortwave thermotidal heating and ignore the attenuating effect of asynchronous forcing, the encounter of the resonance in the atmosphere is guaranteed, but the epoch of its occurrence and the tidal amplitude it generates are uncertain. As such, we attempted a cautious incursion into constraining them and learned that:
* Assuming that temperate climatic conditions have prevailed over the Earth's history, the resonance is likely to have occurred in the early Mesozoic, and not in the Precambrian. The early Mesozoic, unlike the Precambrian, is characterized by an amplified decelerating luni-solar gravitational torque.
* For judiciously constrained estimates of our atmospheric model parameters, the resonance does not amplify the accelerating thermotidal torque to a level comparable in magnitude to the gravitational counterpart.
These model predictions presume that thermotidal heating in the Earth has always been dominated by the shortwave. Compositional variations however, namely those associated with increased greenhouse contributions in the past would amplify the asynchronous thermotidal forcing in the longwave. The latter in turn, as we show in this work, further attenuates the accelerating flavor of the resonant torque. Exploring this end is certainly worthy of future efforts, but with the present indications at hand, we conclude that the occurrence of the rotational equilibrium is contingent upon a drastic increase in the Earth's surface temperature (\(\geq 55^{\circ}\)C), a long enough radiative cooling timescale (\(\geq 40\) days), an increase in the shortwave flux opacity of the atmosphere, and that infrared thermotidal heating remained negligible in the past. We cannot completely preclude these requirements when considered separately, especially given the uncertainty in reconstructing the Earth's temperature evolution in the Proterozoic. However, a warmer paleoclimate goes hand in hand with a shorter radiative cooling timescale, along with increased greenhouse gases that amplify the asynchronous thermotidal forcing. Both effects damp the accelerating flavor of the thermotidal torque. Put together, these indications suggest that the occurrence of the rotational equilibrium for the Earth is unlikely. To that end, future GCM simulations that properly model the Precambrian Earth to provide stringent constraints on our analytical predictions of the resonant amplification are certainly welcome.
Ultimately though, even if the locking into the
resonance did not occur, the effect of the thermotidal torque at the resonance remains a robust and significant feature, and it should be accommodated in future modelling attempts of the Earth's rotational evolution. Our model sets the table for efficiently studying such a complex interplay between several tidal players, both for the Earth and duly for its analogues. Interestingly, the question of the climatic response to the Lamb resonance, or similarly to oceanic tidal resonances, where abrupt and significant astronomical variations occur, largely remains an unexplored territory, perhaps requiring an armada of rigorous GCM simulations. This only leaves us with anticipated pleasure in weaving yet another thread in the rich tidal history of the Earth. Furthermore, we anticipate that the growing abundance of geological proxies, especially robust inferences associated with cyclostratigraphy, may help detect the whereabouts of these resonances and provide further constraints to our modeling efforts.
## Acknowledgments
M.F. expresses his gratitude to Kevin Heng for his hospitality at the LMU Munich Observatory where part of this work was completed. This work has been supported by the French Agence Nationale de la Recherche (AstroMeso ANR-19-CE31-0002-01) and by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (Advanced Grant AstroGeo-885250). This work was granted access to the HPC resources of MesoPSL financed by the Region Ile-de-France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche.
|
2310.20408 | Time-varying double-peaked emission lines following the sudden ignition
of the dormant galactic nucleus AT2017bcc | We present a pan-chromatic study of AT2017bcc, a nuclear transient that was
discovered in 2017 within the skymap of a reported burst-like gravitational
wave candidate, G274296. It was initially classified as a superluminous
supernova, and then reclassified as a candidate tidal disruption event. Its
optical light curve has since shown ongoing variability with a structure
function consistent with that of an active galactic nucleus, however earlier
data shows no variability for at least 10 years prior to the outburst in 2017.
The spectrum shows complex profiles in the broad Balmer lines: a central
component with a broad blue wing, and a boxy component with time-variable blue
and red shoulders. The H$\alpha$ emission profile is well modelled using a
circular accretion disc component, and a blue-shifted double Gaussian which may
indicate a partially obscured outflow. Weak narrow lines, together with the
previously flat light curve, suggest that this object represents a dormant
galactic nucleus which has recently been re-activated. Our time-series
modelling of the Balmer lines suggests that this is connected to a disturbance
in the disc morphology, and we speculate this could involve a sudden violent
event such as a tidal disruption event involving the central supermassive black
hole, though this cannot be confirmed, and given an estimated black hole mass
of $\gtrsim10^7-10^8$ M$_\odot$ instabilities in an existing disc may be more
likely. Although we find that the redshifts of AT2017bcc ($z=0.13$) and G274296
($z>0.42$) are inconsistent, this event adds to the growing diversity of both
nuclear transients and multi-messenger contaminants. | E. J. Ridley, M. Nicholl, C. A. Ward, P. K. Blanchard, R. Chornock, M. Fraser, S. Gomez, S. Mattila, S. R. Oates, G. Pratten, J. C. Runnoe, P. Schmidt, K. D. Alexander, M. Gromadzki, A. Lawrence, T. M. Reynolds, K. W. Smith, L. Wyrzykowski, A. Aamer, J. P. Anderson, S. Benetti, E. Berger, T. de Boer, K. C. Chambers, T. -W. Chen, H. Gao, C. P. Gutiérrez, C. Inserra, T. Kangas, G. Leloudas, E. A. Magnier, L. Makrygianni, T. Moore, T. E. Müller-Bravo, S. J. Smartt, K. V. Sokolovsky, R. Wainscoat, D. R. Young | 2023-10-31T12:31:55Z | http://arxiv.org/abs/2310.20408v2 | Time-varying double-peaked emission lines following the sudden ignition of the dormant galactic nucleus AT2017bcc
###### Abstract
We present a pan-chromatic study of AT2017bcc, a nuclear transient that was discovered in 2017 within the skymap of a reported burst-like gravitational wave candidate, G274296. It was initially classified as a superluminous supernova, and then reclassified as a candidate tidal disruption event. Its optical light curve has since shown ongoing variability with a structure function consistent with that of an active galactic nucleus, however earlier data shows no variability for at least 10 years prior to the outburst in 2017. The spectrum shows complex profiles in the broad Balmer lines: a central component with a broad blue wing, and a boxy component with time-variable blue and red shoulders. The H\(\alpha\) emission profile is well modelled using a circular accretion disc component, and a blue-shifted double Gaussian which may indicate a partially obscured outflow. Weak narrow lines, together with the previously flat light curve, suggest that this object represents a dormant galactic nucleus which has recently been re-activated. Our time-series modelling of the Balmer lines suggests that this is connected to a disturbance in the disc morphology, and we speculate this could involve a sudden violent event such as a tidal disruption event involving the central supermassive black hole. Although we find that the redshifts of AT2017bcc (\(z=0.13\)) and G274296 (\(z>0.42\)) are inconsistent, this event adds to the growing diversity of both nuclear transients and multi-messenger contaminants.
keywords: black hole physics - gravitational waves - galaxies: active - transients: tidal disruption events
## 1 Introduction
It is now established that most galaxies host a central supermassive black hole (SMBH) (Magorrian et al., 1998; Kormendy and Ho, 2013). Accretion onto the SMBH can allow a galactic nucleus to outshine its host. Flickering like candles, these active galactic nuclei (AGN) vary in luminosity over time as the rate of their accretion changes. This emission illuminates gas hundreds of light years from the SMBH, producing narrow-line spectral features. In some AGN, Doppler-broadened emission is also visible, arising from fast-moving gas surrounding the central accretion disc. In the optical, this is mainly visible in the Balmer lines. While almost all AGN show variable luminosity and narrow-line emission, they are categorised based on whether the broad-line region is visible. Type I AGN show both broad- and narrow-line features, while Type II AGN lack the broad-line component. This is attributed to a viewing-angle effect, with the broad line region obscured by a dusty torus in Type II AGN (Antonucci, 1993; Urry and Padovani, 1995).
Due to their persistent variable emission, AGN have historically been treated as contaminants during searches for transient objects. However, with the rise of wide-field sky surveys and automated source detection, we have been able to uncover the more unusual behaviour of nuclear emitters. For example, some AGN have been seen to transition from Type I and Type II, or vice versa, between epochs. These are known as changing-look or changing-state AGN and such objects challenge the paradigm that the difference between the types is purely viewing angle. Though we have observations dating back to the 1970s (Khachikian and Weedman, 1971; Antonucci and Cohen, 1983), they have been detected more frequently in the last decade (LaMassa et al., 2014; Runnoe et al., 2016; Lawrence et al., 2016; MacLeod et al., 2019; Ricci and Trakhtenbrot, 2022).
We have also learned that AGN are not the only source of luminous emission from galactic nuclei. All SMBHs can produce flares by tidally disrupting stars that wander too close, even those which are non-accreting or "quiescent". The existence of such tidal disruption events (TDEs) was proposed in the 1970s and 80s (Hills, 1975; Rees, 1988), and the first candidates were identified in the late 1990s (Komossa and Bade, 1999). Since then, we have detected almost 100 TDEs and broadly classified them based on whether they radiate in the X-ray (Auchettl et al., 2017; Sazonov et al., 2021), optical (Gezari
et al., 2012; Arcavi et al., 2014; Holoien et al., 2014; van Velzen et al., 2021), or both and if they produce a relativistic jet (Alexander et al., 2020; Andreoni et al., 2022; Cendes et al., 2022; Pasham et al., 2023). As with AGN, some of this diversity of emission is thought to be a result of viewing angle dependence, arising from a non-spherically-symmetric envelope of reprocessing material around the SMBH (Dai et al., 2018).
AGN accretion discs may also contain sources of gravitational wave (GW) emission. It is expected that galactic nuclei host a dense population of stellar-mass black holes, many of which reside in the plane of the central accretion disc (Morris, 1993). Binary black holes (BBHs) in such an environment would merge rapidly due to drag from the surrounding gas (McKernan et al., 2020). Shocks in the gas and super-Eddington accretion onto the black holes would also produce a fast, bright electromagnetic (EM) transient (Bartos et al., 2017; McKernan et al., 2019). Thus, these mergers could be multi-messenger events, producing both GW and EM emission. A candidate for such an event was reported by Graham et al. (2020), along with the proposal to monitor AGN when searching for future EM counterparts to GW - including BBH - detections.
In this paper, we present observations of a nuclear transient, AT2017bcc, which was first observed as a candidate EM counterpart to a GW detection. We explore the possibility that this is a genuine multi-messenger event by re-analysing the GW signal, finding that the GW and EM sources are likely unrelated. However, the EM source itself shows a number of unusual properties that provide insight into the diversity of AGN variability and nuclear transients, and may shed light on the nature of the AGN changing look phenomenon. We have obtained an extensive data set from X-ray to radio, including time series spectroscopy over a period of six years.
In particular, this event is a rare example (unique to our knowledge) of an AGN that is both changing state and a 'double-peaked emitter' (Storchi-Bergmann et al., 1993; Eracleous et al., 1994) with complex broad-line profiles that vary over time (Gezari et al., 2007). Double-peaked emitters often exhibit distinct blue and red shoulders in their broad Balmer lines, consistent with an accretion disc origin (Eracleous et al., 1995). Understanding the line profiles and their evolution in AT2017bcc provides new clues to the processes that have switched this AGN on after a period of quiescence.
The paper is structured as follows. In Section 2 we describe the discovery of AT2017bcc and our re-analysis of G274296. We detail our spectroscopic and photometric follow-up in Section 3. Using fits to archival photometry, we examine the host galaxy in Section 4. We interpret the multi-wavelength light curve in Section 5. In Section 6 we present our optical spectroscopy and model the emission profiles to infer physical parameters. Finally we discuss our results in Section 7, and conclude in Section 8. Unless otherwise stated, we adopt a flat \(\Lambda\)CDM cosmology with \(H_{0}=67.7\,\mathrm{km\,s^{-1}\,Mpc^{-1}}\), \(\Omega_{0}=0.31\) throughout (Planck Collaboration et al., 2020).
## 2 Discovery
On 17th February 2017, the LIGO collaboration reported the identification of a candidate signal, labelled G274296 (Shawhan et al., 2017), during real-time burst analysis. The signal was flagged by the coherent waveBurst (CWB; Drago et al., 2020) pipeline, which is designed to identify GW transients in detector streams without prior knowledge of a signal waveform. As such, G274296 did not obviously resemble the typical waveform "chirp" expected from a binary inspiral (though this could not be ruled out), but was still significant due to its low false alarm rate of \(\sim 1\) per 2 months.
During the subsequent search for EM counterparts, the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS; Chambers et al., 2016) discovered a bright nuclear transient on 19th February, then dubbed PS17bgn, in the galaxy SDSS J113152.97+295944.8 within the 90% contour of the G274296 skymap (Chambers et al., 2017). It was spectroscopically classified by the Public ESO Spectroscopic Survey of Transient Objects (PESSTO; Smartt et al., 2015) as a type II superluminous supernova (SLSN-II) due to its broad H\(\alpha\) emission at redshift \(z=0.133\), and corresponding absolute magnitude of \(-21\)(Taddia et al., 2017). The transient was then renamed to SN2017bcc and dropped from the counterpart search. Later re-examination by Sokolovsky et al. (2017) revealed blue optical and near-UV continuum, suggesting high temperatures. They also detected non-thermal X-ray emission. These features are characteristic of tidal disruption events (TDEs) and active galactic nucleus (AGN) flares, and though they do not rule out a SLSN-II origin, X-rays have not been detected in the majority of such objects.
Given the uncertainty in the classification of this transient (and the results of our own analysis), we will refer to this source using its pre-classification IAU name AT2017bcc.
### GW analysis
Since AT2017bcc was discovered during optical follow-up of a GW source, we here assess whether the EM transient could be the true counterpart to the GW signal.
As G274296 was identified by a GW pipeline using unmodeled analysis, a distance to the source could not be determined in real time. Burst-style gravitational wave signals are expected to originate from stellar core collapse, thought to be detectable at current GW sensitivity only at kiloparsec-scale distances (Gossan et al., 2016) (though collapse to a rapidly rotating black hole - a collapsar - may be detectable at \(\sim 100\,\mathrm{Mpc}\); Gossan et al., 2016). As such, it is unlikely that G274296 is a true burst source, as it would most likely have been accompanied by an easily identifiable nearby supernova. In any case, a GW burst from a stellar core collapse would not be detectable at the distance of AT2017bcc (625 Mpc) (Gossan et al., 2016). Therefore if G274296 did originate in this way, the EM and GW sources are not associated.
The alternative scenario is that the signal arose from a compact binary merger. A sufficiently massive system may enter the LIGO
Figure 1: Posterior probability density on redshift from waveform analysis of G274296. The vertical dashed lines indicate the \(1\sigma\) region (black) and the spectral redshift of AT2017bcc (red).
bandpass only in the final stages of merger, such that the distinctive chirp signal from the inspiral phase goes unseen. This is a plausible origin of an AGN flare-like counterpart, if the merger occurred within an accretion disc around an SMBH. Indeed, the only previous EM candidate for a merger in an AGN disc (Graham et al., 2020) corresponded to one of the most massive BBH mergers detected by LIGO to date, GW190521: a system with a total mass of \(\sim 150\,\mathrm{M}_{\odot}\) (Abbott et al., 2020). To determine the source properties of G274296 under the assumption of a compact binary merger, we carried out a full Bayesian analysis (Veitch et al., 2015) using two state-of-the-art waveform models, IMRPhenomXPHM (Pratten et al., 2021) and NRSur7dq4 (Varma et al., 2019), and found it to be consistent with a very massive compact binary with median source-frame progenitor masses of \(119(120)\,\mathrm{M}_{\odot}\) and \(77(73)\,\mathrm{M}_{\odot}\) using IMRPhenomXPHM (NRSur7dq4). In this scenario, due to the large component masses, the GW signal is very short in duration (\(\approx 1\) s) and dominated by the merger phase, emulating a burst signal, which is consistent with the initial detection by a burst pipeline.
This analysis also produced a luminosity distance posterior, shown in Figure 1. In this scenario, using the cosmological parameters from Ade et al. (2016) we find the median redshift of G274296 to be \(z_{\mathrm{GW}}=0.83^{+0.65}_{-0.41}\). The spectroscopic redshift of AT2017bcc, measured from its narrow [O iii] lines, is \(z_{\mathrm{EM}}=0.133\), which is outside the 99th percentile of the GW distance posterior. Thus we conclude that, in the absence of further models for the GW signal, AT2017bcc is not a valid EM counterpart for G274296. In the rest of this paper, we will analyse our extensive EM data set and the implications of this very unusual nuclear source for understanding extreme AGN variability.
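For readers who want to reproduce this kind of comparison, the mapping from a luminosity-distance posterior to redshift under a flat \(\Lambda\)CDM cosmology can be sketched as below. This is an illustrative outline only: the distance samples are placeholders standing in for the actual G274296 posterior, and the cosmology simply approximates the parameters adopted in this paper.

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM, z_at_value

# Approximate flat LambdaCDM cosmology (close to the parameters adopted here)
cosmo = FlatLambdaCDM(H0=67.7, Om0=0.31)

# Placeholder luminosity-distance samples (Mpc); in the real analysis these
# would be drawn from the GW parameter-estimation posterior for G274296
rng = np.random.default_rng(1)
dl_samples = rng.lognormal(mean=np.log(5000.0), sigma=0.5, size=500) * u.Mpc

# Numerically invert d_L(z) for each posterior sample
z_samples = np.array([float(z_at_value(cosmo.luminosity_distance, dl))
                      for dl in dl_samples])

z_em = 0.133  # spectroscopic redshift of AT2017bcc
print(f"median z_GW = {np.median(z_samples):.2f}")
print(f"posterior fraction with z < z_EM = {np.mean(z_samples < z_em):.3f}")
```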
## 3 Observations
### Photometry
We observed AT2017bcc during the first optical peak in the _g, r, i_ bands using KeplerCam on the 1.2-m telescope at the Fred Lawrence Whipple Observatory (FLWO), and in _g, r, i, z, J, H, K_ using the ALFOSC optical imager and spectrograph and NOTCam NIR imager and spectrograph on the 2.5-m Nordic Optical Telescope (NOT)
Figure 3: Optical spectroscopic evolution of AT2017bcc, containing 20 epochs from day one (\(19^{\mathrm{th}}\) February 2017) to day 2157 (\(15^{\mathrm{th}}\) January 2023) post-discovery. Each epoch is labelled with the rest-frame days since discovery and the telescope / observatory which took the data. All spectra are telluric-corrected, continuum-subtracted, and normalised for visual clarity. Some spectra have been smoothed using a Savitzky-Golay filter.
Figure 2: Optical and infra-red light curve of AT2017bcc. All magnitudes are corrected for galactic extinction and still include host light. Each point is the result of binning fluxes to a weekly cadence for visual clarity. Epochs below a \(5\sigma\) detection limit are plotted as upper limits with a lower opacity. The vertical dashed line indicates the discovery by Pan-STARRS in February 2017. The epochs of spectroscopy are marked below the magnitudes, with the same colors used in Figure 3.
at the Roque de los Muchachos Observatory. Bias subtraction and flat fielding were applied either using iraf or instrument-specific pipelines, and sky subtraction was applied to the NOTCam images. Aperture photometry was performed on all images using photutils (Bradley et al., 2021) with field star magnitudes from Pan-STARRS and the Two Micron All Sky Survey (2MASS; Skrutskie et al., 2006) to calculate the photometric zero-points. We used an aperture size of 5 arcsec to include flux from the compact host galaxy consistently (SDSS reports a Petrosian radius in the optical of \(\sim 2\) arcsec for the host).
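As a rough illustration of this step (not the actual reduction code), fixed-aperture photometry with photutils looks roughly like the sketch below; the pixel scale, zero-point, and the synthetic image are placeholders, with the real values coming from the instrument and from field-star calibration.

```python
import numpy as np
from photutils.aperture import CircularAperture, aperture_photometry

pixel_scale = 0.25      # arcsec per pixel (assumed)
zero_point = 27.0       # AB zero-point from field stars (assumed)

# Synthetic image standing in for a reduced frame: a Gaussian source plus noise
yy, xx = np.mgrid[0:256, 0:256]
x0, y0 = 128.0, 128.0
data = 500.0 * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * 3.0 ** 2))
data += np.random.normal(0.0, 1.0, data.shape)

# 5 arcsec radius aperture, chosen to enclose the compact host consistently
aperture = CircularAperture([(x0, y0)], r=5.0 / pixel_scale)
phot = aperture_photometry(data, aperture)

counts = float(phot["aperture_sum"][0])
mag_ab = zero_point - 2.5 * np.log10(counts)
print(f"aperture magnitude: {mag_ab:.2f} AB")
```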
We imaged AT2017bcc in _uvw1_, _uvw2_, _uvm2_, _U_, _B_, _V_ using the Ultraviolet/Optical Telescope (UVOT; Roming et al., 2005) on the Neil Gehrels _Swift_ Observatory. Count rates were obtained using the _Swift_ uvotsource tool and converted to magnitudes (in the AB system) using the UVOT photometric zero points (Breeveld et al., 2011).
We also downloaded reduced data from various public surveys covering 2005-2022. Images in _g, r, i, z_ were obtained from Pan-STARRS via the PS1 Image Cutout Service; and in _g, r, i_ from the Zwicky Transient Facility (ZTF; Bellm et al., 2019), via the NASA/IPAC Infrared Science Archive. These were analysed using photutils in order to use a consistent aperture on science images, rather than difference magnitudes, to avoid any unwanted effects from transient contamination in template images.
We acquired survey magnitudes for AT2017bcc in _c, o_ from the Asteroid Terrestrial-Impact Last Alert System (ATLAS; Tonry et al., 2018; Smith et al., 2020; Shingles et al., 2021) forced photometry server, using reduced rather than difference images; in \(V\) from the Catalina Real-time Transient Survey (CRTS; Djorgovski et al., 2011) cone search service; in _W1_, W2 from the Near-Earth Object Wide-field Infrared Survey Explorer (_NEOWISE_; Mainzer et al., 2011) via the NASA/IPAC Infrared Science Archive; and in _u, g, r, i, z_ from the Sloan Digital Sky Survey (SDSS; Alam et al., 2015).
Figure 2 shows all of our optical and infrared (IR) photometry in AB magnitudes. The _Swift_ UVOT counts had very large uncertainties so were not included in the light curve for visual clarity, but we do show them in Figure 5.
### X-ray data
X-ray observations were acquired using the X-ray Telescope (XRT) on board _Swift_ over 12 epochs from 2017-03-10 to 2018-03-14. Data were downloaded and analysed using the automated tools provided by the UK Swift Science Data Centre (Evans et al., 2007, 2009). We stacked all available data to produce a high-S/N X-ray spectrum from 0.3-10 keV with a total exposure of 20.5 ks, and fit this with both power-law and blackbody models. The power-law provides an adequate fit to the data, with a w-stat of 197 for 245 degrees of freedom. The blackbody fit is a poor visual match and provides a far inferior fit with a w-stat of 359.
The best-fitting photon index for our preferred power-law model is \(\Gamma=1.54^{+0.14}_{-0.11}\). This is consistent with the lower end of the distribution for AGN, which have a population averaged \(\langle\Gamma\rangle\approx 1.7-1.8\) (Tozzi et al., 2006; Winter et al., 2009). The fit does not require any intrinsic hydrogen column absorption. The model gives a mean unabsorbed flux \(F_{\rm X}=1.21\times 10^{-12}\) erg cm\({}^{-2}\) s\({}^{-1}\). We also produced a light curve binned by visit, finding some indication of fading by a factor \(\sim 2\) between 2017 and 2018; however, this is also comparable to the visit-to-visit scatter in the light curve. The count-rate light curve is given in Table A1.
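For orientation, the quoted mean unabsorbed flux can be translated into an approximate isotropic-equivalent luminosity with a couple of lines; this is a back-of-the-envelope sketch only (no K-correction), taking the 625 Mpc luminosity distance from Section 2.1.

```python
import numpy as np

F_X = 1.21e-12              # mean unabsorbed 0.3-10 keV flux, erg cm^-2 s^-1
d_L = 625.0 * 3.086e24      # luminosity distance in cm (625 Mpc, Section 2.1)

L_X = 4.0 * np.pi * d_L**2 * F_X   # isotropic-equivalent luminosity, erg s^-1
print(f"L_X ~ {L_X:.1e} erg/s")    # of order a few x 10^43 erg/s
```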
### Radio data
AT2017bcc was observed by the Jansky Very Large Array (VLA) over 4 logarithmically-spaced epochs between 2017-03-21 and 2018-01-17 from 3-25 GHz (PI: Alexander). These were reduced using the standard NRAO pipeline in Common Astronomy Software Applications (CASA; McMullin et al., 2007). There is no obvious variability at any frequency, with typical flux of \(\sim 300\,\mu\)Jy. If the radio emission were due to obscured on-going star-formation, the implied SFR would be \(\sim 9\) M\({}_{\odot}\) yr\({}^{-1}\), using the average flux at 5 GHz with the relation from Yun & Carilli (2002). Alternatively, this may indicate historical AGN activity.
### Spectra
We obtained 20 epochs of spectroscopy between 2017-02-19 and 2023-01-15. Five epochs were taken with EFOSC2 on the New Technology Telescope (NTT) as part of ePESSTO+ and reduced via the PESSTO pipeline (Smartt et al., 2015). We took six with the OSMOS spectrograph on the 2.4 m telescope at the MDM Observatory (Martini et al., 2011), three with ALFOSC on the NOT and six at the MMT Observatory (five with Bluechannel and one with Binospec), all of which were debiased, flat fielded, and cosmic ray corrected using either iraf or standard Python libraries. Finally, we acquired a spectrum with FORS2 on the Very Large Telescope (VLT) which was reduced using ESO Reflex. Relative flux calibration was achieved for all spectra using standard stars observed with the same instrument setups; calibrated spectra were then re-scaled to match contemporaneous photometry. Figure 3 shows these spectra, further corrected for telluric absorptions and continuum subtracted to emphasise the emission lines.
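The final re-scaling step (matching a relative-flux-calibrated spectrum to contemporaneous photometry) can be sketched crudely as follows; the top-hat filter, wavelength limits, and input arrays are placeholders rather than the actual filter curves and spectra used.

```python
import numpy as np

def synthetic_ab_mag(wave_aa, flux_cgs, band_lo, band_hi):
    """Crude synthetic AB magnitude through a top-hat filter.
    wave_aa in Angstroms, flux_cgs in erg/s/cm^2/A."""
    in_band = (wave_aa >= band_lo) & (wave_aa <= band_hi)
    w = wave_aa[in_band]
    f_nu = flux_cgs[in_band] * w**2 / 2.998e18   # F_lambda -> F_nu (cgs)
    return -2.5 * np.log10(f_nu.mean()) - 48.6

# Placeholder spectrum and contemporaneous survey magnitude
wave = np.linspace(4000.0, 9000.0, 5000)
flux = 1e-16 * np.ones_like(wave)
m_phot = 18.0                                    # e.g. r-band photometry

m_synth = synthetic_ab_mag(wave, flux, 5500.0, 6900.0)  # rough r-band limits
flux_scaled = flux * 10 ** (-0.4 * (m_phot - m_synth))  # re-scaled spectrum
```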
## 4 Host galaxy
We retrieved archival magnitudes of the host galaxy, SDSS J113152.98+295944.8 (or WISEA J113152.97+295944.9), from a
Figure 4: Fit to the archival spectral energy distribution of the AT2017bcc host galaxy using Prospector. The inset shows the implied star-formation history. Shaded areas show the 16th and 84th percentiles of the model posteriors.
number of catalogs. Optical data were obtained from the Sloan Digital Sky Survey (Alam et al., 2015) in the \(u,g,r,i,z\) bands. Near-infrared photometry was available from the 2 Micron All Sky Survey (2MASS; Skrutskie et al., 2006) in \(J,H,K\), and mid-IR from the Wide-field Infrared Survey Explorer (_WISE_; Wright et al., 2010) in the bands \(W1-3\). UV data were collected from the Galaxy Evolution Explorer (GALEX; Martin et al., 2005). In all cases, we used the default reported magnitudes from each instrument.
We modelled the resulting spectral energy distribution using Prospector (Johnson et al., 2021). We used the Prospector-\(\alpha\) model setup from Leja et al. (2017), where the free parameters are the total mass formed in the galaxy, the metallicity, the current specific star-formation rate in the last 50 Myr, the widths in time of five equal-mass bins to model the star-formation history, three parameters that control dust absorption by both the interstellar medium and stellar birth clouds, and three parameters governing dust emission. We refer the reader to Leja et al. (2017) for an in-depth discussion of these parameters and their degeneracies. In particular, the default model does not work well for galaxies with a substantial AGN contribution, so we fit additionally for the fraction of IR emission originating from an AGN torus. The model posteriors were explored using emcee (Foreman-Mackey et al., 2013), and the resulting corner plot is shown in Appendix B.
The resulting spectral energy distribution (SED) fit is shown in Figure 4. This is a red galaxy with an old stellar population, with most star-formation occurring \(\sim 9\) Gyr ago, and a present-day SFR of \(\sim 1\,\rm M_{\odot}\,yr^{-1}\). This is in contrast to the majority of TDE host galaxies, which typically have star-formation histories peaking \(\sim 0.1-1\) Gyr ago (French et al., 2020). The star-formation rate is significantly lower than that implied by the radio luminosity of this galaxy, suggesting the radio emission is unlikely to originate from obscured star formation.
The Prospector fit prefers a solar or slightly super-solar metallicity, \(\log(Z/Z_{\odot})=0.1^{+0.07}_{-0.15}\), and a stellar mass \(\log(M_{*}/\rm M_{\odot})=10.70\pm 0.08\). Assuming the mean bulge-to-total light ratio for this galaxy mass \(B/T\sim 0.7\)(Stone et al., 2018), this implies a SMBH mass of \(\approx 1.5\times 10^{8}\,\rm M_{\odot}\) using the relation of Kormendy & Ho (2013). An SMBH of this mass would disfavour a TDE origin for the variability in AT2017bcc, as this is above the Hills mass for direct capture (no disruption outside the event horizon) of a solar mass star (Hills, 1975), unless the BH is rapidly rotating and the star entered on a prograde orbit (Leloudas et al., 2016). However, the bulge mass of this galaxy is consistent with the most massive TDE host in Ramsden et al. (2022), and their flatter BH-bulge relation for TDE host galaxies would suggest a SMBH more like \(\sim 10^{7}\,\rm M_{\odot}\).
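The black-hole mass estimate quoted above can be reproduced to order of magnitude with a short calculation; the scaling-relation coefficients used below are an approximate form of the Kormendy & Ho (2013) relation and should be treated as illustrative rather than exact.

```python
log_Mstar = 10.70   # total stellar mass from the Prospector fit (log Msun)
BT = 0.7            # assumed mean bulge-to-total ratio (Stone et al. 2018)

M_bulge = BT * 10**log_Mstar

# Approximate Kormendy & Ho (2013) scaling (coefficients approximate):
# M_BH ~ 0.49e9 Msun * (M_bulge / 1e11 Msun)^1.17
M_BH = 0.49e9 * (M_bulge / 1e11) ** 1.17
print(f"M_BH ~ {M_BH:.1e} Msun")   # roughly 1.4e8 Msun, close to the quoted value
```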
Our fit does not require the existence of a powerful AGN prior to the onset of variability in 2017. The IR luminosity fraction is \(f_{\rm AGN}=0.08\pm 0.02\), i.e. a contribution of \(\lesssim 10\%\) to the total IR emission could arise from a pre-existing dusty torus. However, this contribution is above zero at \(4\sigma\), indicating that this structure likely does exist, even if it is not illuminated by a large accretion rate onto the central SMBH at that time. Overall, our SED model points to a largely dormant AGN with a SMBH mass in the range of a few \(\times 10^{7}-10^{8}\,\rm M_{\odot}\).
## 5 Photometric analysis
### Light curve
We have compiled a light curve for AT2017bcc consisting of over 15 years of survey photometry with almost unbroken coverage, shown in Figure 2. This photometry includes flux contributions from both the transient/variable nuclear source, and the host galaxy. We show in Figure 6 a light curve with host light (from our Prospector fit) subtracted, but we do not consider it here as these archival magnitudes may include different contributions from a variable nuclear source.
The light curve in Figure 2 shows two distinct phases separated by the discovery in early 2017. The first phase is one of quiescence, covered by SDSS, CRTS, and Pan-STARRS, which shows negligible variance in magnitude over \(\sim 10\) years. CRTS is particularly well sampled, showing little deviation (\(\sigma=0.1\) mag) from a \(C\)-band apparent magnitude of 18.2 (absolute magnitude of \(-21.0\)). Two epochs of _WISE_ data in 2010 are also consistent with the first _NEOWISE_ magnitudes in 2014, both reporting a \(W2\)-band magnitude of \(\sim 17.4\), suggesting very little activity at both optical and infrared wavelengths.
The IR luminosity begins to increase in 2015/2016, with an eventual rise time on the scale of a year. It is difficult to quantify the
Figure 5: Spectral energy distribution of AT2017bcc at multiple epochs. Each point is an average flux from a 3-month time bin centred on the labelled epoch, plotted at the filter’s effective wavelength. The archival SDSS, 2MASS, and WISE magnitudes are shown in red (labelled “Host”). The lower panel shows the same SEDs as multiples of the interpolated host luminosities.
Figure 6: Structure functions for AT2017bcc in \(g\), \(r\), \(i\), using photometry from 2014 onward with logarithmic time bins. The dashed lines show power-law fits to the structure functions, where \(A\), the variability amplitude at 1 year, and \(\gamma\), the power law exponent, are the parameters being optimised. The best-fit parameters for each band are as follows; \(g:A=0.24\), \(\gamma=0.30\); \(r:A=0.13\), \(\gamma=0.26\); \(i:A=0.13\), \(\gamma=0.37\).
rise time in the optical, but the Pan-STARRS magnitudes in the \(r\)- and \(i\)-bands remain constant from the end of the _CRTS_ coverage until the first epoch of _ATLAS_ photometry. This constrains the optical rise to the \(\sim 200\) day gap before the peak in 2017. The first peak in _NEOWISE_ occurs when the optical is already fading, potentially lagging the optical peak by \(\sim 6\) months. This implies a distance of \(\sim 10^{17}\) cm between the optical and IR emission regions, consistent with the radius of an AGN dusty torus (Hickox & Alexander, 2018).
The flare in early 2017 marks the beginning of a second, more active phase. This phase is well covered by ATLAS, _NEOWISE_, and ZTF which capture repeated flaring at irregular times and magnitudes. The first flare reaches a peak \(o\)-band magnitude of 17.5 in ATLAS, marking a rise of 0.5 mag from the previous year, and then fades back to a magnitude of 18.0 in mid-2018. The object then spends the next two years slowly rising to a second peak in late 2020. Notably, the variability is more pronounced in bluer bands: while the \(o\)-band returns to its previous peak magnitude, the second \(r\)- and \(g\)-band peaks appear to be brighter than the first. This is then followed by a faster fade and re-brightening throughout 2021 and 2022, and then a third peak in 2023 exceeding the luminosity of the first two.
The second phase rules out a supernova origin for AT2017bcc, as supernovae are not known to re-brighten so dramatically on these timescales. Some TDEs are known to re-brighten after the initial peak, though they do so on timescales shorter than a year, and have not been observed to rise back to the original peak luminosity. Additionally, a third peak in the light curve is unprecedented for the known sample of TDEs (Yao et al., 2023). However, AGN do exhibit repeated flaring. Therefore, we investigate whether the statistical properties of the variability in AT2017bcc are consistent with an AGN.
### Structure function
To compare the active phase of AT2017bcc's light curve to a typical AGN, we characterise its variability using the structure function,
Figure 8: Time-series evolution of the H\(\alpha\) emission profile of AT2017bcc. All epochs have been flux-corrected using survey photometry and telluric-corrected. The left and right panels show the same spectra with and without vertical offsets respectively for visual clarity.
Figure 7: Three representative epochs of spectroscopy for AT2017bcc. All spectra were taken at the MMT observatory, the 2017 and 2018 epochs using Bluechannel, and the 2020 epoch using Binospec. All epochs have been flux-corrected using survey photometry and telluric-corrected. Relevant emission and absorption lines are marked with dashed vertical lines.
following the definition provided by Schmidt et al. (2010). The variability \(V\) in a given time bin \(\Delta t\) is defined as
\[V(\Delta t)=\left\langle\sqrt{\frac{\pi}{2}}\,|\Delta m_{i,j}|-\sqrt{\sigma_{i}^{2}+\sigma_{j}^{2}}\right\rangle_{\Delta t}\,, \tag{1}\]
where \(\Delta m_{i,j}\) is the difference in magnitude between observations \(i\) and \(j\), \(\sigma_{i}\) and \(\sigma_{j}\) are the photometric errors on those magnitudes, and \(\langle\rangle_{\Delta t}\) signifies the average taken over all epoch pairs \(i\), \(j\) that fall in the bin \(\Delta t\). Figure 6 shows this function plotted for the \(g\), \(r\), \(i\) bands using photometry from 2014 onward. This covers the epoch of variability, as well as the ambiguous period after the CRTS coverage.
We fit these structure functions with a power law of the form \(V=A\,(\Delta t/1\,\mathrm{yr})^{\gamma}\), where \(A\), the variability amplitude at 1 year, and \(\gamma\), the power-law exponent, are the parameters being optimised. Focusing on the \(r\)-band to be consistent with Schmidt et al. (2010), we find that \(A=0.13,\gamma=0.26\). This lies within the quasi-stellar object (QSO) region of the parameter space described in Schmidt et al. (2010). The caveat to this analysis is that we have very few cycles of the variability on the longer timescales, so these results may change over a longer observational baseline.
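A simplified version of this calculation, operating on a single band of photometry, might look like the sketch below; the light curve here is synthetic, and the binning and normalisation choices are assumptions rather than the exact ones used for Figure 6.

```python
import numpy as np
from scipy.optimize import curve_fit

def structure_function(mjd, mag, mag_err, bins):
    """Schmidt et al. (2010)-style structure function, as in Eq. (1)."""
    i, j = np.triu_indices(len(mjd), k=1)          # all unique epoch pairs
    dt = np.abs(mjd[i] - mjd[j])
    dm = np.abs(mag[i] - mag[j])
    noise = np.sqrt(mag_err[i]**2 + mag_err[j]**2)
    v_pair = np.sqrt(np.pi / 2.0) * dm - noise

    which = np.digitize(dt, bins)
    centers = 0.5 * (bins[:-1] + bins[1:])
    V = np.array([v_pair[which == k + 1].mean() if np.any(which == k + 1)
                  else np.nan for k in range(len(centers))])
    return centers, V

def power_law(dt_days, A, gamma):
    return A * (dt_days / 365.25) ** gamma         # amplitude normalised at 1 yr

# Synthetic light curve standing in for the r-band photometry from 2014 onward
mjd = np.sort(np.random.uniform(56700, 59900, 300))
mag = 18.0 + 0.2 * np.sin(mjd / 400.0) + np.random.normal(0, 0.05, mjd.size)
mag_err = np.full(mjd.size, 0.05)

bins = np.logspace(0.5, 3.2, 12)                   # logarithmic time bins (days)
centers, V = structure_function(mjd, mag, mag_err, bins)
ok = np.isfinite(V) & (V > 0)
(A, gamma), _ = curve_fit(power_law, centers[ok], V[ok], p0=[0.1, 0.3])
print(f"A = {A:.2f}, gamma = {gamma:.2f}")
```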
### Spectral energy distribution
Due to the wide range of wavelengths probed by the photometry for this object, we are able to examine its UV-IR spectral energy distribution (SED) at multiple epochs. Figure 5 shows the SED for AT2017bcc at the epochs of the first peak in luminosity, the first minimum, and just after the second peak. The source can be seen clearly above the host at all three epochs, with the largest flux excess in the UV and in the mid-infrared. These features are characteristic of outputs from galactic nuclei, a population dominated by TDEs and AGN flares, with the mid-IR emission thought to originate from light echoes from pre-existing dust (Mattila et al., 2018; Kool et al., 2020; Jiang et al., 2021; Reynolds et al., 2022).
## 6 Spectroscopic analysis
### Spectral evolution and comparisons
The optical spectra of AT2017bcc show a blue continuum and broad, multi-component hydrogen emission. Figure 7 presents three representative epochs of spectroscopy: shortly after discovery, during the first minimum, and during the rise to the second peak in luminosity.
The most notable difference between these epochs is the evolution of the H\(\alpha\) line profile. The profile in 2017 shows two main components: an asymmetric, very broad component with an excess in flux on the red shoulder, and a narrower component which appears to be blue-shifted by \(\sim 500\) kms\({}^{-1}\). In 2018, after the initial flare has faded, the narrower component becomes distinctly asymmetric with a blue wing, and peaks at the rest wavelength of 6568 A. As the luminosity rises to a second peak in 2020, the broad component develops a blue shoulder as the profile becomes overall 'boxy' in shape.
Figure 8 shows a zoom-in on H\(\alpha\) and its subsequent development over the next 3 years. Between 2020 and 2021, the red shoulder of the broad component becomes stronger, such that the two wings are once again unequal in height. The relative heights of the blue and red shoulders do not appear to evolve further past 2021. Unfortunately, the spectral resolution of the data after 2020 is not sufficient to
Figure 10: Overplot in velocity space of the H\(\alpha\) and H\(\beta\) regions of the 2020 epoch of spectroscopy. The two regions have been normalised to the same height for visual comparison. The narrow [O iii] emission lines are marked to distinguish them from the H\(\beta\) profile.
Figure 9: Spectral comparison between two epochs of AT2017bcc (coloured black) and other nuclear transients. The comparison objects were selected from the literature for their similar features to AT2017bcc, and their high signal to noise ratios. All spectra have been normalised for visual clarity. The right panel shows a zoom-in on the H\(\alpha\) profiles with alternative normalisation, for ease of comparison.
Figure 11: Examples of the best-fit H\(\alpha\) disc and broad line models for 6 different spectral epochs of AT2017bcc. The fits were made to continuum-subtracted and telluric-corrected spectra. The residuals after subtracting both models and the narrow-line component are shown below each profile. A diagram of the emission regions is shown in Appendix F.
determine whether the narrower component is still asymmetric or simply blueshifted.
Figure 9 compares the 2017 and 2020 epochs of AT2017bcc to optical spectra from other nuclear transients. The early-time broad component of H\(\alpha\) resembles that of SDSS J1144+5602, a double-peaked AGN (Paris et al., 2017), though the shape of the wings is more similar to those of AT2019qiz, a nearby TDE (Nicholl et al., 2020; Hung et al., 2021). This same broad component in the later profile has much steeper sides, bearing more resemblance to the H\(\alpha\) profile of AT2018hyz, a unique TDE with double-peaked emission lines (Short et al., 2020; Gomez et al., 2020; Hung et al., 2020). In the case of AT2018hyz, this broad emission profile has been attributed to an exposed accretion disc.
The narrow component in the later spectrum has a steep drop in flux on the red side, and a shallower slope on the blue side. This is most similar to the central component and blue wing of H\(\alpha\) in SDSS J1144+5602 (though the asymmetry is much more pronounced in AT2017bcc). These comparisons may suggest that the H\(\alpha\) profile of AT2017bcc is comprised of a broad, accretion disc-like component similar to some TDEs as well as double-peaked AGN (Eracleous et al., 1994), which has evolved over time; and a narrower, asymmetric "shark fin" component similar to some double-peaked AGN.
The broad, double-peaked component can also be seen in the H\(\beta\) profile, with the same width in velocity space as in H\(\alpha\), as shown in Figure 10. There is also an asymmetry in the central component of the H\(\beta\) profile, though its blue shoulder is less smooth. Unlike most AGN, the narrow [O iii] emission at 4959 A and 5007 A is weak. The [O iii] flux appears comparable to the narrow component of H\(\beta\). As seen in SDSS J1144+5602, AGN typically have an [O iii] / H\(\beta\) flux ratio of \(\gg 1\) (Stern and Laor, 2012).
### Line profile fitting
We applied the circular accretion disc model from Chen and Halpern (1989) to the double-peaked spectra of AT2017bcc to determine the time-varying accretion disc properties. We first modelled the high S/N continuum-subtracted spectrum from 2020-01-28 with the following components: a circular accretion disc model for the H\(\alpha\) and H\(\beta\) broad emission line regions; the narrow emission lines from H\(\alpha\), H\(\beta\), [S ii] \(\lambda\)6717, 6731, [N ii] \(\lambda\)6550, 6575, [O i] \(\lambda\)6302, 6366 and [O iii] \(\lambda\)5007, 4959; and a broad two Gaussian component close to the H\(\alpha\) and H\(\beta\) narrow lines to describe the shark fin feature.
The disc models had 2 parameters in common for both the H\(\alpha\) and H\(\beta\) emission regions: inclination angle \(i\) where 0 degrees is
Figure 12: Time series of 4 key parameters from fitting the optical emission profile of AT2017bcc, described in Section 6.2. These show the inferred evolution of the accretion disc’s inclination angle, wind, and spiral arm.
face-on and 90 degrees is edge-on, and a local turbulent broadening parameter \(\sigma\) (km/s). Three disc parameters were allowed to differ for each of the H\(\alpha\) and H\(\beta\) emission regions: the emissivity power law index \(q\), and the inner and outer dimensionless gravitational radii of the disc \(\xi_{1}\) and \(\xi_{2}\). The simple circular disc model did not adequately describe the flat red shoulder of the 2018-2023 spectra, so we added a wind component to the model to increase the 'boxiness' of the double-peaked profile (Nguyen et al., 2018). The disc wind had 3 free parameters: the opening angle of the wind \(\theta\), the wind optical depth \(\tau\), and the optical depth normalisation \(t_{0}\) which affects the strength of the wind (Murray and Chiang, 1996; Flohic et al., 2012). Finally, we included a single spiral arm in the accretion disc with free parameters: amplitude \(A\) (expressed as a contrast ratio relative to the rest of the disc), orientation angle \(\phi\) (deg), width \(w\) (deg), and pitch angle \(\psi\) (deg). This was required to describe the flux ratio of the red and blue shoulders being \(>\)1, as is common amongst disc emitters (e.g. Storchi-Bergmann et al. (2003)).
The narrow lines were fitted with respect to 5 narrow line flux ratio parameters: [N ii] \(\lambda\)6583 /H\(\alpha\), [S ii] \(\lambda\)6731/H\(\alpha\), [O i] \(\lambda\)6366/H\(\alpha\), [O iii] \(\lambda\)5007/H\(\beta\); H\(\alpha\)/H\(\beta\). The [N ii], [S ii], [O i], and [O iii] doublet flux ratios were fixed to theoretical values of 2.95, 1.3, 0.33, and 2.88 respectively. The narrow lines were described by two component Gaussians of the same central wavelength with 3 free parameters which were common for all narrow lines: the width of the first Gaussian component \(\sigma_{1}\), the width of the second Gaussian component \(\sigma_{2}\), and the flux ratio of the two components \(f_{1}/f_{2}\). The shark fin feature was fitted with a double Gaussian, with each Gaussian having 2 free parameters which are separate for H\(\alpha\) and H\(\beta\): the width \(\sigma_{b}\) and the rest wavelength offset from the narrow H\(\alpha\) or H\(\beta\) line central wavelength \(\Delta\). The optimal amplitudes for the sets of narrow lines and broad lines were obtained by solving the covariance matrix for a given set of narrow line, broad line and disc model components, so they were not fitted as parameters during optimization.
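To make the non-disc pieces of this model concrete, a minimal numpy sketch of a two-Gaussian narrow line (shared centre, fixed flux ratio) and a double-Gaussian 'shark fin' component is given below; the parameter values are purely illustrative and are not the best-fit values from Tables 4 and 5.

```python
import numpy as np

C_KMS = 2.998e5  # speed of light in km/s

def gaussian_flux(wave, center, sigma_kms, flux):
    """Gaussian profile with integrated flux `flux` and width given in km/s."""
    sigma_aa = sigma_kms / C_KMS * center
    norm = flux / (sigma_aa * np.sqrt(2.0 * np.pi))
    return norm * np.exp(-0.5 * ((wave - center) / sigma_aa) ** 2)

def narrow_line(wave, center, sigma1, sigma2, flux_ratio, total_flux):
    """Two Gaussians sharing a centre; flux_ratio = f1/f2, total_flux = f1 + f2."""
    f2 = total_flux / (1.0 + flux_ratio)
    f1 = total_flux - f2
    return (gaussian_flux(wave, center, sigma1, f1)
            + gaussian_flux(wave, center, sigma2, f2))

def shark_fin(wave, line_center, sigmas, offsets, fluxes):
    """Double Gaussian offset (in Angstroms) from the narrow-line centre."""
    return sum(gaussian_flux(wave, line_center + d, s, f)
               for s, d, f in zip(sigmas, offsets, fluxes))

# Illustrative evaluation around rest-frame H-alpha
wave = np.linspace(6300.0, 6800.0, 2000)
model = (narrow_line(wave, 6563.0, 150.0, 400.0, 2.0, 1.0)
         + shark_fin(wave, 6563.0, (1500.0, 2500.0), (-15.0, -5.0), (3.0, 2.0)))
```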
We first found a reasonable initial fit using the least-squares optimisation implemented in Python using the scipy package. We then explored the posteriors using emcee (Foreman-Mackey et al., 2013) with 60 walkers initialised at the best-fit values from the least-squares fit, distributed according to the 1\(\sigma\) error found from the least-squares covariance matrix. The emcee fitting was run for 5000 iterations with a burn-in time of 4000 iterations.
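The least-squares-then-MCMC workflow described here can be scaffolded as in the sketch below, with a toy single-Gaussian model standing in for the full disc plus broad-line model; the walker count, iteration count, and burn-in follow the numbers quoted above, but the data, priors, and model are illustrative only.

```python
import numpy as np
import emcee
from scipy.optimize import least_squares

def toy_model(params, wave):
    """Stand-in for the full disc + broad-line model: a single Gaussian line."""
    amp, center, sigma = params
    return amp * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def residuals(params, wave, flux, err):
    return (flux - toy_model(params, wave)) / err

def log_prob(params, wave, flux, err):
    if params[0] <= 0 or params[2] <= 0:            # simple flat priors
        return -np.inf
    return -0.5 * np.sum(residuals(params, wave, flux, err) ** 2)

# Synthetic "spectrum" standing in for the continuum-subtracted data
rng = np.random.default_rng(0)
wave = np.linspace(6400.0, 6700.0, 500)
err = np.full(wave.size, 0.05)
flux = toy_model([1.0, 6563.0, 30.0], wave) + rng.normal(0.0, err)

# 1) least-squares fit for a starting point and an approximate covariance
ls = least_squares(residuals, x0=[0.8, 6560.0, 25.0], args=(wave, flux, err))
cov = np.linalg.inv(ls.jac.T @ ls.jac)
sigma = np.sqrt(np.diag(cov))

# 2) emcee: 60 walkers initialised around the least-squares solution
nwalkers, ndim = 60, ls.x.size
start = ls.x + sigma * rng.standard_normal((nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(wave, flux, err))
sampler.run_mcmc(start, 5000)
chain = sampler.get_chain(discard=4000, flat=True)   # burn-in discarded
print("posterior medians:", np.median(chain, axis=0))
```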
For the remainder of the spectra, we fixed the narrow line shapes and amplitudes to the best-fit values found from the 2020-01-28 spectrum, but left all broad-line and disc parameters free for re-fitting. The fitting procedure was repeated using the best-fit values from the emcee fitting of the 2020-01-28 spectrum to initialise the walkers, but with only 2500 iterations and a burn-in time of 2000 iterations.
All epochs were well-described by our disc+two-Gaussian central broad-line model (Figure 11). The best-fit H\(\alpha\) disc parameters are shown in Table 4 and the best-fit shark fin parameters are shown in Table 5. We note that the absolute flux of the central shark fin structure and the disc profile increased over time relative to the narrow line fluxes. The full width half maximum (FWHM) of the central outflow structure was approximately 70 A and showed some changes in morphology through the 6 years of data (Figure 11). Evolution in the shape of the shark fin profile, as described by the free parameters for the two-Gaussian model in Table 5, contributed substantially to changes in the appearance of the left shoulder of the disc profile. Most disc parameters did not evolve substantially over the 6 years of data, and changes to the width and boxiness of the profile from 2018-2023 could be accounted for solely by changes to the inclination angle and the spiral arm strength and phase. We caution that the various parameters needed to describe the spectra may lead to inevitable degeneracies, especially given the cross-contamination of the evolving shark fin-shaped broad line and the double-peaked disc profile.
The disc profiles had a constant emissivity power law index of \(q\sim 1.75\) for both H\(\alpha\) and H\(\beta\) emission regions and a constant turbulent broadening parameter of \(\sigma\sim 1267\) km/s. The inner radius stayed constant at \(\xi_{1}\sim 700\) for the H\(\alpha\) emission region while the outer radius was constant at \(\xi_{2}\sim 2000\). The best-fit inclination angle of the disc stayed within the range of \(75<i<90^{\circ}\) in 2017 before decreasing further to \(i\sim 45^{\circ}\) in later epochs, accounting for the decrease in the width of the double-peaked profile (Figure 12 a).
A modest wind of opening angle \(\theta\sim 0.8^{\circ}\) and optical depth \(\tau\sim 0.6\) was found to improve the fit to the boxy disc profiles over all epochs (Figure 12 b). The spiral arm parameters evolved over time, with the phase of the arm evolving from \(\phi\sim 25\) to \(\phi\sim 160\) and the amplitude increasing from \(A\sim 2.5\) to \(A\sim 11.5\) in later epochs when it became more essential to describe the relative flux of the red and blue peaks of the double-peaked profile (Figure 12 c). The gradual variation in these parameters over multi-year timescales matches the variability timescales reported in other double-peaked
Figure 13: BPT diagrams showing the best-fit narrow emission line ratios fit from modelling of the high S/N 2020-01-28 spectrum.
emitters (Schimoia et al., 2017). The spiral arm had a best-fit width of \(\sim 50^{\circ}\), and the pitch angle of the arm was approximately \(25^{\circ}\).
The narrow line ratios were well within the star-forming regions of the BPT diagram indicating no presence of long-lived AGN activity (Figure 13). We show only the [N ii] \(\lambda 6583\) /H\(\alpha\) and [O i] \(\lambda 6366\)/H\(\alpha\) BPT diagrams because the [S ii] \(\lambda 6731\)/H\(\alpha\) fitting may be subject to contamination from the edge of the red disc shoulder.
In order to see if a disc profile alone could be responsible for both the double-peaked structure and the central shark fin structure, we attempted to fit an alternative model for the 2020-01-28 spectrum which lacked the additional multi-Gaussian broad line for the shark fin component. This did not produce a high quality fit because the circular disc model was unable to account for both the shark fin feature and the blue shoulder of the double-peaked spectrum without the additional multi-Gaussian component.
In summary, our modeling of the H\(\alpha\) and H\(\beta\) broad-line regions finds that the apparent changes to the spectrum over time can be accounted for by: changes to the relative flux of the broad line and narrow line components, changes to the shark fin morphology, and gradual changes to the disc inclination angle along with the spiral arm location and amplitude.
## 7 Discussion
### Presence of an active SMBH
The repeated re-brightening of its light curve rules out a supernova or a 'typical' TDE as the origin of AT2017bcc. After the original flare in 2017, the luminosity faded for a year, and then rose again to match the first peak, and even exceeded this later on. Even if the first flare was caused by a one-off event, like a TDE, the years-long rise to the later peaks would far exceed the fall-back time for a plausible disruption (Guillochon and Ramirez-Ruiz, 2013; Coughlin and Nixon, 2022). Although repeating partial TDEs are possible and may have been observed (Payne et al., 2021; Wevers et al., 2023), the light curve of AT2017bcc appears stochastic rather than varying on an orbital period. The most plausible cause of this slower evolution is a gas reservoir in the form of an accretion disc around the central SMBH, i.e. a pre-existing but dormant AGN.
This scenario is also supported by a statistical treatment of the light curve. We measured the variability of the recent optical emission from AT2017bcc using its structure function. Power-law fits to this structure function showed best-fit parameters for the \(r\)-band of \(A=0.13,\gamma=0.26\). This is well within the region defined in Schmidt et al. (2010) which separates quasars from other variable sources.
We also find similarities between the spectra of AT2017bcc and those of double-peaked AGN, especially at later times. Once the initial flare had faded, the H\(\alpha\) emission profile developed a distinctly asymmetric peak resembling a "shark fin". This unusual profile is also seen in SDSS J1144+5602, a known quasar with double-peaked emission features (Paris et al., 2017).
Double-peaked AGN are a subclass of AGN, spectroscopically selected for their unique emission features. They make up a significant portion of known AGN, accounting for 16% of those observed with ZTF (Ward et al., 2023). Half of this sample show dramatic evolution in the heights of the red/blue peaks in their H\(\alpha\) profiles over time, attributed to the migration of hotspots in the accretion disc. This kind of evolution can also be seen in the wings of AT2017bcc's H\(\alpha\) profile, further implying that it is currently behaving as a double-peaked AGN. These objects typically host higher mass SMBHs and accrete at larger Eddington ratios, and are more likely to be X-ray and radio bright (Ward et al., 2023). AT2017bcc seems to represent the discovery of a double-peaked AGN undergoing a dramatic change in its observed properties after a long period of quiescence.
### Narrow line emission
Narrow-line [O iii] emission is a defining characteristic of AGN spectra. The strength of [O iii] emission around \(\sim 5000\) A relative to emission at other wavelengths, is used to categorise active galaxies. One of the most common distinctions is that between high-ionisation Seyferts and low-ionisation nuclear emission line regions (LINERs) (Stern and Laor, 2013). These groups are usually separated by their position on BPT diagrams (Baldwin et al., 1981), which compare [O iii] line strength to [N ii], [S ii], and [O i]. On the other hand, comparing [O iii] emission directly to broad H\(\alpha\) emission (Stern and Laor, 2012), or X-ray luminosity (Panessa et al., 2006), describes a continuum of AGN as opposed to a bimodal distribution.
In the case of AT2017bcc, its position on the BPT diagrams in Figure 13 indicates that its narrow line ratios are characteristic of a star-forming galaxy. It is likely that the observed narrow [O iii] emission did not purely arise from star formation, but any AGN contribution is presumably sub-dominant (otherwise the ratios would lie closer to the AGN region of the BPT diagram).
We also measured the relative strength of its [O iii] lines to its broad H\(\alpha\) line and X-ray emission. The relations in Stern and Laor (2012) and Panessa et al. (2006) provide useful points of reference for these measurements, so we have compared our results to them in Figure 14. It is clear from these metrics that AT2017bcc has much weaker narrow [O iii] emission than would be expected for a typical AGN: beyond 3\(\sigma\) for its H\(\alpha\) luminosity, and beyond 1\(\sigma\) for its X-ray luminosity.
### Evidence for phase change
Examples of AGN with weak or non-existent narrow emission lines have recently begun to emerge (Greenwell et al., 2021). These outliers may provide insight into transitional phases of AGN evolution. Two potential scenarios have been proposed to explain this transition.
The first is that the nucleus has been obscured by an influx of material caused by a recent galaxy merger. This cocoon prevents the radiation emitted by accretion onto the SMBH from exciting the narrow line regions further out in the galaxy. Eventually, the new material causes the AGN to accrete at a higher rate, increasing the emitted radiation pressure and expelling the obscuring material. There is then a short epoch (\(\sim 100\) yrs), as the newly unobscured light from the nucleus travels to the narrow line regions, when the AGN appears to have very weak [O iii] emission. If this were the case, we would expect to see signs of a recent merger in the host's morphology, and significant ongoing star formation caused by the material influx (Goulding and Alexander, 2009). As discussed in Section 4, AT2017bcc's host is a red galaxy with no significant ongoing star formation. Higher-resolution imaging could reveal traces of a recent merger, though given the SED fits this seems unlikely.
The second scenario is that the AGN activity has been recently triggered. This would not require a recent galaxy merger or a cocoon of obscuring material, but instead the serendipitous detection of a significant increase in the accretion rate. AT2017bcc showed negligible variation in luminosity for almost a decade of coverage, prior to the flare in 2017 and subsequent variability. It is well known that AGN go through phases of increased activity, thought to last on the order of \(\sim 10^{5}\) yrs (Schawinski et al., 2015). The increase in
luminosity and onset of significant variability in the last few years may mark the beginning of one of these active phases. The fact that the narrow [O iii] lines are weak also implies that the central SMBH had been quiescent for at least \(\sim 100\) years (the approximate light travel time to the narrow-line region).
### Origin of initial flare
Flares in galactic nuclei which defy classification are a relatively new class of object (Kankare et al., 2017; Frederick et al., 2021), and in the absence of a single physical explanation have been dubbed ambiguous nuclear transients (ANTs, Holoien et al. (2022)). Some ANTs have been attributed to gravitational microlensing (Lawrence et al., 2016; Bruce et al., 2017), but in the case of AT2017bcc the clear evolution in the spectral profile rules this out. Double-peaked emission features were also thought to arise from binary AGN (Gaskell et al., 1983). This hypothesis was refuted by Eracleous et al. (1997), and Kelley (2020) describes how double peaks would not be observable in such systems, so we do not consider it here.
We thus conclude that the initial flare most likely signals the beginning of an increase in the rate of accretion onto the central SMBH. This enhanced accretion rate is likely caused by a rapid influx of material, either in the form of infalling gas or a disrupted star. As discussed above, a significant influx of gas from a galaxy merger would have implications for the morphology of the galaxy and on-going star formation rate that are inconsistent with observations of AT2017bcc. Therefore, we explore the possibility that a TDE was the cause of the flare in 2017.
Observationally, the flare bears some resemblance to a TDE. Its SED shows a strong blue continuum, with excess emission in the infra-red. This is seen in both TDEs and AGN flares (Jiang et al., 2021). Spectral comparisons show that AT2017bcc's broad H\(\alpha\) emission at early times is most similar to TDEs in the literature, while it develops more AGN-like features after the initial flare. Interestingly, the only other source for which we could identify a shark fin asymmetry in the narrow component of H\(\alpha\) is ASASSN-14ko, a periodic nuclear transient in a galaxy hosting dual AGN (Tucker et al., 2021). The broad component in that source could also be modelled with an accretion disc profile, suggesting AT2017bcc and ASASSN-14ko likely share a similar geometry in their line-forming regions. This may strengthen the case for a TDE in both objects, however this does not appear to repeat periodically in AT2017bcc (at least for periods to which we are sensitive, i.e. \(\lesssim\) a few years). The persistence of the broad line region in the spectra, years after the initial flare, indicates that there may have been a dormant AGN-like accretion disc and broad line region already present. There is a precedent for this kind of event; PS16dtm was a TDE observed in an already active AGN (Blanchard et al., 2017). In that case, X-ray emission from the pre-existing AGN disappeared during the flare, suggesting a disruption or obscuration of the AGN disc.
This is supported by fits to the host galaxy's SED, which show a small (\(<10\%\)) AGN contribution to the IR luminosity. The host appears to contain an SMBH in the range of a few \(\times 10^{7}-10^{8}\) M\({}_{\odot}\), which straddles the Hills mass for a solar-type star. Thus a TDE origin would require either a rather massive star, or for the SMBH to exist in the lower (\(<10^{8}\) M\({}_{\odot}\)) part of this range, or to be rapidly rotating.
## 8 Conclusions
We have conducted an extensive study of a nuclear transient, AT2017bcc, discovered by Pan-STARRS (Chambers et al., 2017). As it was found during the counterpart search for a GW signal, G274296 (Shawhan et al., 2017), we explored the possibility that this was a genuine multi-messenger event. Although we found the two signals to be likely unrelated, AT2017bcc was unique enough to study in its own right. We thus presented photometric follow-up in the radio, UV, optical, infra-red, and X-ray as well as optical spectroscopy.
Modelling archival survey photometry with Prospector showed the host to be a red galaxy with an old stellar population, and suggested the presence of a dormant AGN with a SMBH mass of a few \(10^{7}-10^{8}\) M\({}_{\odot}\). Before it was discovered in 2017 the galaxy showed very little activity, varying in luminosity by \(<0.1\) mags in survey
Figure 14: Distributions of [O iii] to H\(\alpha\) (left) and X-ray (right) luminosities for AGN, with AT2017bcc marked in black. The distribution in H\(\alpha\) luminosity is shown in Stern & Laor (2012), where they observe a scatter of \(\sigma=0.36\) dex. The distribution in X-ray luminosity is described in Panessa et al. (2006), from which we infer a scatter of \(\sigma\sim 1\) dex.
photometry. This historical quiescence was also shown in the weak [O iii] emission in recent spectra, which implied that the narrow-line emitting regions of the AGN had not been illuminated for at least \(\sim 100\) years.
The flare which marked the discovery of AT2017bcc in 2017 was a long-lived nuclear transient with a strong blue continuum and broad H\(\alpha\) emission. Since then, it has re-brightened multiple times on variable timescales. In some bands the subsequent peaks were greater in luminosity than the first. We calculated the structure function of this variability, and found it to be consistent with the observed quasar population (Schmidt et al., 2010).
The broad spectral features resembled both TDEs (Nicholl et al., 2020; Short et al., 2020) and double-peaked AGN (Eracleous et al., 1994), with an asymmetrical central component and a boxy broad component. These features showed evolution in their shape for years after the first luminosity peak. We modelled this spectral evolution using a circular accretion disc with a wind and a single spiral arm, and a double Gaussian representing a partially obscured outflow. This analysis suggested that the changing profiles were driven by a precession of the disc's inclination, the strength and location of the spiral arm, and the morphology of the outflow.
We conclude that the counterpart search in 2017 serendipitously caught the re-ignition of an AGN which had been dormant for at least a century. A plausible cause of the boosted accretion onto the SMBH is a TDE, or another dramatic event which appears to have disturbed the pre-existing accretion disc.
## Acknowledgements
ER is supported by the Science and Technology Facilities Council (STFC).
MN is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 948381) and by UK Space Agency Grant No. ST/Y000692/1.
MF is supported by a Royal Society - Science Foundation Ireland University Research Fellowship.
SM acknowledges support from the Research Council of Finland project 350458.
GP gratefully acknowledges support from a Royal Society University Research Fellowship URFR17221500 and RFERE221015. GP and PS acknowledge support from STFC grant ST/V005677/1.
TMR acknowledges the financial support of the Vilho, Yrjo and Kalle Vaisala Foundation of the Finnish Academy of Science and Letters.
LW acknowledges funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 101004719 (OPTICON-RadioNET Pilot, ORP).
JPA acknowledges funding from ANID, Millennium Science Initiative, ICN12_009
T-W. Chen thanks the Max Planck Institute for Astrophysics for hosting her as a guest researcher.
CPG acknowledges financial support from the Secretary of Universities and Research (Government of Catalonia) and by the Horizon 2020 Research and Innovation Programme of the European Union under the Marie Sklodowska-Curie and the Beatriu de Pinos 2021 BP 00168 programme, from the Spanish Ministerio de Ciencia e Innovacion (MCIN) and the Agencia Estatal de Investigacion (AEI) 10.13039/501100011033 under the PID2020-115253GA-I00 HOST-FLOWS project, and the program Unidad de Excelencia Maria de Maeztu CEX2020-001058-M.
G.L. is supported by a research grant (19054) from VILLUM FONDEN.
The Pan-STARRS telescopes are supported by NASA Grants NNX12AR65G and NNX14AM74G.
Based on observations collected at the European Organisation for Astronomical Research in the Southern Hemisphere, Chile, as part of ePESSTO/ePESSTO+ (the extended/advanced Public ESO Spectroscopic Survey for Transient Objects Survey).
ATLAS is primarily funded through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575. The ATLAS science products are provided by the University of Hawaii, QUB, STScI, SAAO and Millennium Institute of Astrophysics in Chile.
This work is based in part on observations obtained at the MDM Observatory, operated by Dartmouth College, Columbia University, Ohio State University, Ohio University, and the University of Michigan.
This research has made use of data or software obtained from the Gravitational Wave Open Science Center (gwosc.org), a service of the LIGO Scientific Collaboration, the Virgo Collaboration, and KAGRA. This material is based upon work supported by NSF's LIGO Laboratory which is a major facility fully funded by the National Science Foundation, as well as the Science and Technology Facilities Council (STFC) of the United Kingdom, the Max-Planck-Society (MPS), and the State of Niedersachsen/Germany for support of the construction of Advanced LIGO and construction and operation of the GEO600 detector. Additional support for Advanced LIGO was provided by the Australian Research Council. Virgo is funded, through the European Gravitational Observatory (EGO), by the French Centre National de Recherche Scientifique (CNRS), the Italian Istituto Nazionale di Fisica Nucleare (INFN) and the Dutch Nikhef, with contributions by institutions from Belgium, Germany, Greece, Hungary, Ireland, Japan, Monaco, Poland, Portugal, Spain. KAGRA is supported by Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan Society for the Promotion of Science (JSPS) in Japan; National Research Foundation (NRF) and Ministry of Science and ICT (MSIT) in Korea; Academia Sinica (AS) and National Science and Technology Council (NSTC) in Taiwan.
Computations were performed using resources provided by Supercomputing Wales, funded by STFC grants ST/I006285/1 and ST/V001167/1 supporting the UK Involvement in the Operation of Advanced LIGO.
## Data availability
All data in this paper will be made publicly available via WISeREP (Yaron & Gal-Yam, 2012).
|
2307.16575 | Persistent homology analysis of type 2 diabetes genome-wide association
studies in protein-protein interaction networks | Genome-wide association studies (GWAS) involving increasing sample sizes have
identified hundreds of genetic variants associated with complex diseases, such
as type 2 diabetes (T2D); however, it is unclear how GWAS hits form unique
topological structures in protein-protein interaction (PPI) networks. Using
persistent homology, this study explores the evolution and persistence of the
topological features of T2D GWAS hits in the PPI network with increasing
P-value thresholds. We define an $n$-dimensional persistent disease module as a
higher-order generalization of the largest connected component (LCC). The
0-dimensional persistent T2D disease module is the LCC of the T2D GWAS hits,
which is significantly detected in the PPI network (196 nodes and 235 edges,
P$<$0.05). In the 1-dimensional homology group analysis, all 18 1-dimensional
holes (loops) of the T2D GWAS hits persist over all P-value thresholds. The
1-dimensional persistent T2D disease module comprising these 18 persistent
1-dimensional holes is significantly larger than that expected by chance (59
nodes and 83 edges, P$<$0.001), indicating a significant topological structure
in the PPI network. Our computational topology framework potentially possesses
broad applicability to other complex phenotypes in identifying topological
features that play an important role in disease pathobiology. | Euijun Song | 2023-07-31T11:17:30Z | http://arxiv.org/abs/2307.16575v2 | Persistent homology analysis of type 2 diabetes genome-wide association studies in protein-protein interaction networks
###### Abstract
Genome-wide association studies (GWAS) involving increasing sample sizes have identified hundreds of genetic variants associated with complex diseases, such as type 2 diabetes (T2D); however, it is unclear how GWAS hits form unique topological structures in protein-protein interaction (PPI) networks. Using persistent homology, this study explores the evolution and persistence of the topological features of T2D GWAS hits in the PPI network with increasing P-value thresholds. We define an \(n\)-dimensional persistent disease module as a higher-order generalization of the largest connected component (LCC). The 0-dimensional persistent T2D disease module is the LCC of the T2D GWAS hits, which is significantly detected in the PPI network (196 nodes and 235 edges, P\(<\)0.05). In the 1-dimensional homology group analysis, all 18 1-dimensional holes (loops) of the T2D GWAS hits persist over all P-value thresholds. The 1-dimensional persistent T2D disease module comprising these 18 persistent 1-dimensional holes is significantly larger than that expected by chance (59 nodes and 83 edges, P\(<\)0.001), indicating a significant topological structure in the PPI network. Our computational topology framework potentially possesses broad applicability to other complex phenotypes in identifying topological features that play an important role in disease pathobiology.
**Keywords:** persistent homology; topological data analysis; disease module; genome-wide association study; systems biology
## 1 Introduction
Understanding the genotype-phenotype relationships is challenging owing to their polygenicity and nonlinearity. Complex diseases result from interactions between diverse cellular processes and genes. Elucidating the genetic basis of complex diseases in the context of protein-protein interaction (PPI) networks is essential (Barabasi et al., 2011; Barrio-Hernandez et al., 2023). In the PPI network, genes (or gene products) that have similar biological functions are likely to interact closely with each other. Thus, genes associated with a specific phenotype tend to be clustered into a connected component called a _disease module_ in the PPI network (Goh et al., 2007). Disease modules that significantly overlap with each other exhibit similar pathobiological pathways, co-expression patterns, and clinical manifestations (Menche et al., 2015). This disease module concept is useful in identifying novel disease-disease or disease-drug relationships
[Menche et al., 2015, Guney et al., 2016], enabling the implementation of network-based drug repurposing for complex traits/diseases [Song et al., 2020].
Genome-wide association studies (GWAS) have identified numerous genetic variants associated with various complex diseases and can be used to characterize disease-associated modules in the PPI network [Barrio-Hernandez et al., 2023]. Genes associated with GWAS loci or GWAS hits tend to be mapped onto coherent network modules in the PPI network. As the P-value threshold increases from 0 to the standard genome-wide significance threshold of 5\(\times\)10\({}^{-8}\), the GWAS hits mapped to the PPI network tend to gradually form a single, connected component [Menche et al., 2015, Leopold and Loscalzo, 2018]. This largest connected component (LCC) of disease genes is occasionally called an _observable disease module_[Menche et al., 2015, Paci et al., 2021]. However, owing to the limited sample size and coverage of current GWAS data as well as the interactome incompleteness, disease-associated seed genes are often scattered in the PPI network. To detect disease modules, various seed-expanding and/or heuristic-based algorithms have been developed to expand and merge the scattered seed genes in the PPI network [Choobdar et al., 2019, Ghiassian et al., 2015, Vlaic et al., 2018, Wang and Loscalzo, 2018]. In addition, deep learning and graph neural network models have been used to predict disease-associated genes [Hou et al., 2022, Yang et al., 2019, Yang et al., 2022] and disease treatment mechanisms [Ruiz et al., 2021, Zitnik et al., 2018] in the context of biological networks. Most studies have focused on identifying connected components by mapping disease-associated seed genes onto the PPI network and expanding these seed genes. However, the mechanism by which GWAS hits are mapped onto the LCC or other unique topological structures in the PPI network as the P-value threshold increases remains unclear. Therefore, the topological features of GWAS hits mapped onto the PPI network warrant investigation.
One mathematical method for analyzing the topological features of complex networks is simplicial homology [Hatcher, 2002, Salnikov et al., 2019]. Simplicial homology is an algebraic topology tool used to analyze the topological features of a simplicial complex, which is a collection of higher-order interactions called simplices, including points (0-simplices), line segments (1-simplices), triangles (2-simplices), and higher-dimensional simplices. Simplicial homology can be used to examine the connectivity patterns within biological networks, such as gene-regulatory networks or brain connectivity networks. It can identify topological features, such as connected components (0-holes), loops (1-holes), voids (2-holes), and higher-dimensional holes in the data. For example, the LCC is the largest 0th homology class (connected component). Persistent homology is a method for capturing the persistence of simplicial homology features across multiple thresholds corresponding to a filtration of the simplicial complex [Salnikov et al., 2019, Otter et al., 2017]. It can identify important topological features that are persistent across different levels of interaction, rather than artifacts of noise or parameter uncertainty. Persistent homology features of biological networks potentially correspond to biologically relevant components that play a crucial role in disease mechanisms [Masoomy et al., 2021, Sizemore et al., 2019]. The mathematical details of simplicial complex and homology concepts are described in the Method section.
This study analyzes the persistent homology features of GWAS hits in the PPI network to identify important topological structures that potentially play a significant role in disease pathobiology. We analyze the simplicial homology features of GWAS hits in the PPI network as the P-value threshold increases from 0 to 5\(\times\)10\({}^{-8}\). For example, the LCC of the mapped GWAS hits, which is occasionally called an observable disease module, can be considered a connected component (0th homology class) that lives forever. This study aims to expand the LCC concept using higher-order topological structure analysis. We use GWAS
summary statistics data of type 2 diabetes (T2D) because T2D has undergone extensive genetic study across diverse ancestry populations with large sample sizes. GWAS with increasing sample sizes have recently identified more than 300 genetic loci associated with T2D [20]; however, many of these GWAS loci have small effect sizes of unclear pathobiological meaning. Therefore, this study systematically explores the evolution and persistence of the topological features of T2D GWAS hits in the PPI network as the P-value threshold increases from 0 to 5\(\times 10^{-8}\). We also analyze biological pathways, transcription factors, and microRNAs associated with the persistent homology features.
## 2 Methods
### Overview of the computational topology framework
This study analyzes the topological features of GWAS hits in the human PPI network. Using persistent homology, we systematically explore \(n\)-dimensional holes associated with a specific phenotype, as follows:
1. Map GWAS hits onto the human PPI network.
2. Using persistent homology, identify \(n\)-dimensional holes of GWAS hits in the PPI network, as the P-value threshold increases from 0 to 5\(\times 10^{-8}\).
3. Detect \(n\)_-th persistent disease modules_, which we define as unions of \(n\)-dimensional holes that live forever over all P-value thresholds.
4. Compute the statistical significance of \(n\)-th persistent disease modules by comparing the result with the randomized distribution of a set of randomly selected nodes in the PPI network.
Since the LCC can be considered a connected component (0th homology class) that lives forever, the \(n\)-th persistent disease module can be viewed as a higher-order generalization of the LCC concept. We test our computational framework using T2D GWAS summary statistics data, and perform functional enrichment analysis to validate the pathobiological significance of the persistent homology features.
### Consolidated human protein-protein interactome
We used a consolidated human PPI network constructed previously by Wang and Loscalzo [20, 21]. Briefly, the protein-protein interactome was compiled from various sources, including high-throughput yeast-two-hybrid studies, the Center for Cancer Systems Biology (CCSB) human interactome, binary PPIs from other laboratories, protein-protein co-complex interactions, signaling interactions, kinase-substrate interactions, and the Human Reference Interactome (HuRI) binary PPIs. This network possesses a scale-free topology [20]. The LCC of the protein-protein interactome, comprising 16,422 proteins (nodes) and 233,940 interactions (links), was used for the downstream analyses.
### T2D GWAS hits
We used a GWAS meta-analysis summary statistics dataset of 228,499 T2D cases and 1,178,783 controls encompassing multi-ancestral groups [21] (downloaded from the GWAS catalog [https://www.ebi.ac.uk/gwas/](https://www.ebi.ac.uk/gwas/)). The standard genome-wide significance threshold of 5\(\times 10^{-8}\) was applied.
Each genetic variant was annotated with the closest gene(s) via GWAS catalog gene-mapping data. Only GWAS loci that had been annotated with at most two genes were included. Some genes were linked to multiple GWAS loci with multiple P-values. To extract GWAS hits, for each gene, we assigned the lowest P-value from the different GWAS loci mapped onto that gene (Ratnakumar et al., 2020). We only considered genes (or proteins) in the human PPI network.
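In code, this per-gene assignment reduces to a split/explode and a group-by minimum. The following is a minimal sketch with pandas; the file path, the column names (`MAPPED_GENE`, `P-VALUE`) and the comma-separated multi-gene annotation format are assumptions modelled on the GWAS catalog download format, not the exact files used in this study.

```python
import pandas as pd

def extract_gwas_hits(assoc_file, ppi_genes, p_threshold=5e-8):
    """Assign to each mapped gene the lowest P-value among its GWAS loci.

    assoc_file : tab-separated associations with 'MAPPED_GENE' and 'P-VALUE'
                 columns (illustrative names following the GWAS catalog export).
    ppi_genes  : set of gene symbols present in the PPI network.
    Returns a dict gene -> lowest P-value, restricted to interactome genes.
    """
    df = pd.read_csv(assoc_file, sep="\t", usecols=["MAPPED_GENE", "P-VALUE"]).dropna()
    # Split multi-gene annotations and keep loci annotated with at most two genes.
    df["gene"] = df["MAPPED_GENE"].str.split(", ")
    df = df[df["gene"].str.len() <= 2].explode("gene")
    df["gene"] = df["gene"].str.strip()
    # Genome-wide significance and restriction to PPI network nodes.
    df = df[(df["P-VALUE"] <= p_threshold) & df["gene"].isin(ppi_genes)]
    # A gene linked to several loci keeps the smallest P-value.
    return df.groupby("gene")["P-VALUE"].min().to_dict()
```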
### Simplicial complex and homology theory
Here, we briefly describe the fundamentals of the simplicial complex and persistent homology theory (Hatcher, 2002; Salnikov et al., 2019; Otter et al., 2017). An \(n\)-dimensional simplex (\(n\)-simplex) is formed by \(n+1\) nodes
\[\sigma_{n}=\left(v_{0},v_{1},\ldots,v_{n}\right) \tag{1}\]
with an assigned orientation. For example, a 0-simplex is a vertex (node), a 1-simplex is an edge (link), and a 2-simplex is a triangle. An \(n^{\prime}\)-face of an \(n\)-simplex (\(n^{\prime}<n\)) is a proper subset of the nodes of the simplex with order \(n^{\prime}+1\). A simplicial complex \(K\) is a set of simplices closed under the inclusion of the faces of each simplex. Given a set of \(n\)-simplices of a simplicial complex \(K\), an \(n\)-dimensional chain (\(n\)-chain) is defined as a finite linear combination of \(n\)-simplices of \(K\), as follows:
\[c_{n}=\sum_{i}b_{i}\sigma_{n}^{\left(i\right)} \tag{2}\]
where \(b_{i}\in\mathbb{Z}/2\mathbb{Z}\). In this study, we restrict our analysis to homology with \(\mathbb{Z}/2\mathbb{Z}\) coefficients. The set of \(n\)-chains forms an abelian group denoted by \(C_{n}\) (\(n\)-chain group). For any \(n\)-simplex \(\sigma_{n}=\left(v_{0},v_{1},\ \ldots,\ v_{n}\right)\), the boundary operator \(\partial_{n}:C_{n}\to C_{n-1}\) is the homomorphism defined as follows:
\[\partial_{n}\left(\sigma_{n}\right)=\sum_{i=0}^{n}\left(-1\right)^{i}\left(v_ {0},\ldots,v_{i-1},\ v_{i+1},\ldots,v_{n}\right) \tag{3}\]
An \(n\)-chain is said to be an \(n\)-cycle if its boundary is zero; that is, elements of the subgroup \(Z_{n}:=\ker\partial_{n}\subseteq C_{n}\) are \(n\)-cycles. Similarly, elements of the subgroup \(B_{n}:=\text{im }\partial_{n+1}\subseteq C_{n}\) are said to be \(n\)-boundaries. Based on the definition of the boundary operator, it is obvious that any boundary has no boundary (i.e., \(\partial_{n}\partial_{n+1}=0\)). Thus, \(B_{n}\subseteq Z_{n}\subseteq C_{n}\). Hence, the \(n\)-th simplicial homology group \(H_{n}\) of the simplicial complex \(K\) can be defined as the quotient abelian group:
\[H_{n}\left(K\right):=Z_{n}/B_{n}=\ker\partial_{n}/\text{im }\partial_{n+1}. \tag{4}\]
The rank of the \(n\)-th homology group \(H_{n}\) is called the \(n\)-th Betti number \(\beta_{n}\). The \(n\)-th homology group \(H_{n}\) is isomorphic to \(\mathbb{Z}^{\beta_{n}}\), with the basis of independent \(n\)-cycles on \(Z_{n}\) modulo boundaries. Intuitively, it represents \(n\)-dimensional holes in the simplicial complex \(K\). For example, \(\beta_{0}\), \(\beta_{1}\), and \(\beta_{2}\) represent the number of connected components, loops, and voids, respectively.
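As a concrete illustration of these Betti numbers, the short snippet below builds a hollow triangle, for which \(\beta_{0}=1\) and \(\beta_{1}=1\), and then fills in the 2-simplex, which turns the loop into a boundary. GUDHI's `SimplexTree` is an illustrative choice of library here; the analysis in this study reports Ripser and NetworkX.

```python
import gudhi

# Hollow triangle: three vertices and three edges, no filled 2-simplex.
hollow = gudhi.SimplexTree()
for edge in ([0, 1], [1, 2], [0, 2]):
    hollow.insert(edge)          # faces (the vertices) are inserted automatically
hollow.compute_persistence()
print(hollow.betti_numbers())    # [1, 1]: one connected component, one loop

# Filled triangle: adding the 2-simplex [0, 1, 2] makes the loop a boundary.
filled = gudhi.SimplexTree()
filled.insert([0, 1, 2])
filled.compute_persistence()
print(filled.betti_numbers())    # beta_1 is now 0
```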
Persistent homology is a method for analyzing simplicial topological features at different resolutions of a given simplicial complex (Salnikov et al., 2019; Otter et al., 2017). Formally, a filtration of the simplicial complex \(K\) is a finite sequence of subcomplexes \(\left\{K_{i}\ |\ 0\leq i\leq m\right\}\) such that
\[\emptyset=K_{0}\subseteq K_{1}\subseteq\cdots\subseteq K_{m}=K. \tag{5}\]
For \(0\leq i\leq j\leq m\), the inclusion \(K_{i}\hookrightarrow K_{j}\) induces a homomorphism \(h_{n}^{i,\ j}:H_{n}\left(K_{i}\right)\to H_{n}\left(K_{j}\right)\), and the \(n\)-th persistent homology groups \(PH_{n}^{i,\ j}\) are defined as the images of these homomorphisms:
\[PH_{n}^{i,\ j}:=\text{im }h_{n}^{i,\ j}. \tag{6}\]
Intuitively, the \(n\)-th persistent homology groups represent \(n\)-dimensional holes that persist from \(K_{i}\) to \(K_{j}\). We can track when \(n\)-dimensional holes appear (birth) and disappear (death) at different threshold values of the filtration. Persistence diagrams, representations of persistent homology, can be constructed by plotting the birth and death sites of topological features.
### Persistent homology analysis of GWAS hits
In this study, the PPI network \(G=(V,E)\) is considered a simplicial complex \(K\): genes (or proteins) are regarded as 0-simplices (nodes), PPIs as 1-simplices (links), and higher-order connections (or cliques) as high-dimensional simplices. The T2D disease module was identified as the LCC of the PPI subnetwork induced by the T2D GWAS hits. The statistical significance of the LCC was calculated by comparing the observed LCC size with the randomized LCC distribution of a set of randomly selected nodes of the same size in a degree-preserving manner over 1,000 repetitions. The \(z\)-score was estimated as \(z=\frac{LCC_{obs}-\langle LCC\rangle_{rnd}}{\sigma_{rnd}}\), where \(LCC_{obs}\) is the observed LCC size, and \(\langle LCC\rangle_{rnd}\) and \(\sigma_{rnd}\) are the mean and SD of the randomized LCC distribution, respectively.
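A sketch of this LCC significance test with NetworkX is shown below. The log-binned degree matching used for the "degree-preserving" random node sets is an assumption about the binning; the original analysis may bin degrees differently.

```python
import random
from collections import defaultdict

import networkx as nx
import numpy as np

def lcc_size(G, nodes):
    """Size of the largest connected component induced by `nodes` (0 if empty)."""
    sub = G.subgraph(n for n in nodes if n in G)
    return max((len(c) for c in nx.connected_components(sub)), default=0)

def lcc_zscore(G, seeds, n_rand=1000, rng_seed=0):
    """Compare the observed LCC with degree-preserving random node sets."""
    rng = random.Random(rng_seed)
    # Bin nodes by log2(degree); each random pick is drawn from the seed's bin.
    bins = defaultdict(list)
    for node, deg in G.degree():
        bins[int(np.log2(deg + 1))].append(node)
    seeds = [s for s in seeds if s in G]
    observed = lcc_size(G, seeds)
    null = np.array([
        lcc_size(G, [rng.choice(bins[int(np.log2(G.degree(s) + 1))]) for s in seeds])
        for _ in range(n_rand)
    ], dtype=float)
    z = (observed - null.mean()) / null.std()
    p_empirical = float((null >= observed).mean())
    return observed, z, p_empirical
```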
Each T2D GWAS hit's P-value was used as a varying threshold to obtain a filtration of the PPI subnetwork induced by the T2D GWAS hits as a function of the P-value. As the threshold value increases from 0 to \(5\times 10^{-8}\), each node appears at the P-value assigned to that gene. Formally, we define the \(\delta\)-simplicial complex for the P-value threshold \(\delta\geq 0\) as follows:
\[\mathcal{W}_{\delta}:=\{\sigma=(v_{0},v_{1},\ldots,v_{k})\in K\ |\ \forall i\in\{0,1,\ldots,k\},\ p(v_{i})\leq\delta\} \tag{7}\]
where \(p(v)\in[0,1]\) is the GWAS hit P-value assigned to the node \(v\in V\). Using this \(\delta\)-simplicial complex, we define the filtration as \(\left\{\mathcal{W}_{\delta}\hookrightarrow\mathcal{W}_{\delta^{\prime}}\right\} _{0\leq\delta\leq\delta^{\prime}}\). We subsequently examined the persistent homology features (\(n\)-dimensional holes) of this filtration for each dimension as a function of the P-value threshold. Persistence diagrams were used to visualize the birth and death times of topological features. For each dimension, we also computed the Betti numbers (ranks) of the simplicial homology groups as a function of the P-value threshold.
We define an \(n\)_-th persistent disease module_ as a union of \(n\)-dimensional holes that live forever over all P-value thresholds. This definition is concordant with the conventional disease module concept - the 0th persistent disease module is the LCC, which is the persistent 0-dimensional hole (connected component) that lives forever. The statistical significance of the \(n\)-th persistent disease module was calculated by comparing the observed persistent disease module size with the randomized persistent disease module distribution of a set of randomly selected nodes of the same size in a degree-preserving manner. The persistent homology features of randomly selected nodes were analyzed over 1,000 repetitions. The network and homology analyses were performed using the NetworkX and Ripser packages of Python 3.8 ([https://www.python.org/](https://www.python.org/)), and networks were visualized using Cytoscape 3.9.1 ([https://cytoscape.org/](https://cytoscape.org/)). The core code for analyzing persistent homology is publicly available in our GitHub repository ([https://github.com/esong0/PHGWAS](https://github.com/esong0/PHGWAS)).
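The filtration of Eq. (7) can be reproduced with a simplex tree: every seed gene enters at its GWAS P-value, every interaction at the larger of its endpoint P-values, and cliques are filled in with the maximum over their faces. The sketch below uses GUDHI's `SimplexTree` rather than the Ripser package named above, so it illustrates the construction rather than reproducing the repository code; features whose death value is infinite are the generators of the candidate \(n\)-th persistent disease modules.

```python
import math

import gudhi
import networkx as nx

def gwas_persistence(G, p_value, p_max=5e-8, max_dim=2):
    """Persistent homology of GWAS hits under the P-value filtration of Eq. (7).

    G        : networkx.Graph, the PPI network.
    p_value  : dict gene -> GWAS P-value (genes with P <= p_max are kept).
    Returns the persistence diagram and, per dimension, the intervals that
    never die (the features persisting over all P-value thresholds).
    """
    hits = {g: p for g, p in p_value.items() if g in G and p <= p_max}
    sub = G.subgraph(hits)
    index = {g: i for i, g in enumerate(sub.nodes())}

    st = gudhi.SimplexTree()
    for g, i in index.items():
        st.insert([i], filtration=hits[g])              # vertices appear at p(v)
    for u, v in sub.edges():
        st.insert([index[u], index[v]], filtration=max(hits[u], hits[v]))
    st.expansion(max_dim)                               # clique (flag) complex expansion

    diagram = st.persistence(homology_coeff_field=2)    # Z/2Z coefficients as in the text
    forever = {
        d: [iv for iv in st.persistence_intervals_in_dimension(d) if math.isinf(iv[1])]
        for d in range(max_dim)
    }
    return diagram, forever
```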
### Functional enrichment analysis
To infer the biological significance of the persistent disease module, a pathway enrichment analysis was performed based on the Kyoto Encyclopedia of Genes and Genomes (KEGG) 2021 database using the GSEApy Python package [14] with the Enrichr web server [17]. In addition, the
transcription factor target enrichment analysis was conducted based on the ENCODE and ChEA Consensus databases. The microRNA target enrichment analysis was also performed based on the miRTarBase database. Adjusted P-values were computed using the Benjamini-Hochberg method, and statistical significance was set at P\(<\)0.05.
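This kind of over-representation query is a single call per library in GSEApy. The sketch below is illustrative: the Enrichr library identifiers shown are assumptions about which releases were queried, and the input is simply the list of module gene symbols.

```python
import gseapy as gp

ENRICHR_LIBRARIES = [
    "KEGG_2021_Human",                                # pathways
    "ENCODE_and_ChEA_Consensus_TFs_from_ChIP-X",      # transcription factor targets
    "miRTarBase_2017",                                # microRNA targets
]

def enrich_module(module_genes, libraries=ENRICHR_LIBRARIES, alpha=0.05):
    """Enrichr over-representation analysis of a persistent disease module."""
    significant = {}
    for lib in libraries:
        enr = gp.enrichr(gene_list=list(module_genes), gene_sets=lib, outdir=None)
        res = enr.results
        significant[lib] = (res[res["Adjusted P-value"] < alpha]
                            .sort_values("Adjusted P-value"))
    return significant
```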
## 3 Results
### The LCC of GWAS hits
We compiled the T2D GWAS hits using the large-scale T2D GWAS summary statistics data, 565 of which are present in the human PPI network. As the P-value threshold increased from 0 to 5\(\times\)10\({}^{-8}\), the LCC of the subnetwork induced by the T2D GWAS hits increased (Figure 1A). When the standard genome-wide significance threshold of 5\(\times\)10\({}^{-8}\) was applied, we identified the LCC comprising 196 nodes and 235 edges, which is significantly larger than that expected by chance (P=0.0487, Figure 1B). We defined a T2D observable disease module as this LCC of the T2D GWAS hits (Supplementary Table S1). Other connected components of the subnetwork induced by the T2D GWAS hits comprised \(\leq\)5 nodes, which were excluded from the downstream analyses.
### Persistent homology analysis
We examined how the topological features of the T2D disease module evolve and persist in the PPI network as the P-value threshold increases from 0 to 5\(\times\)10\({}^{-8}\). In our framework, each node appears at the P-value assigned to that gene. We used the P-value of each T2D GWAS hit as a varying threshold and determined the timing of the appearance (birth) and disappearance (death) of \(n\)-dimensional holes at different threshold values. In the 0th homology group (\(H_{0}\)) analysis, 61 0-dimensional holes (connected components) were identified, of which only one persisted over all P-value thresholds (Figure 2A). This persistent 0-dimensional
Figure 1: The LCC of the subnetwork induced by the T2D GWAS hits in the protein–protein interaction network. (A) The LCC size increases as the P-value threshold increases from 0 to 5\(\times\)10\({}^{-8}\). (B) The LCC of the T2D GWAS hits is significantly larger than that expected by chance. The red arrow indicates the observed LCC size. LCC, the largest connected component; T2D, type 2 diabetes; GWAS, genome-wide association study.
hole is the LCC, that is, the T2D observable disease module. In the 1st homology group (\(H_{1}\)) analysis, 18 1-dimensional holes (loops or 1-cycles) were identified, all of which persisted over all P-value thresholds (Figure 2A). No higher-dimensional hole (\(n\geq 2\)) existed in the T2D GWAS hit data. The Betti numbers (ranks) of the simplicial homology groups are shown in Figure 2B. As the P-value threshold increases, the number of 0-dimensional holes converges to 1 (i.e., the LCC), while the number of 1-dimensional holes increases and converges to 18.
We identified the \(n\)-th persistent disease modules, which were defined as unions of persistent \(n\)-dimensional holes that live forever over all P-value thresholds. The 0th persistent disease module is the LCC, which is the persistent 0-dimensional hole that lives forever. As shown in Figure 1B, the LCC is significantly larger than that expected by chance. Since the LCC concept has been extensively investigated in various complex diseases, this study focused on the 1st persistent disease module. In our T2D GWAS data analysis, we identified 18 persistent 1-dimensional holes (loops or 1-cycles), which constitute the 1st persistent T2D disease module comprising 59 nodes and 83 edges (Figure 3A). This 1st persistent T2D disease module is significantly larger than that expected by chance (P\(<\)0.001, Figure 3B), indicating a significant topological feature of the T2D GWAS hits in the PPI network. Since the smallest P-value in the T2D GWAS data is extremely small (3e-695, for rs7903146 in _TCF7L2_), we repeated our analysis using the log P-value scale. The same 61 0-dimensional holes and 18 1-dimensional holes were also identified (Supplementary Figure S1).
Figure 2: Persistent homology analysis of the T2D GWAS hits in the protein–protein interaction network. (A) The persistence diagram of the T2D GWAS hits. Using persistent homology, 0-dimensional holes (connected components, marked as blue dots) and 1-dimensional holes (loops, marked as orange dots) were identified as a function of the P-value threshold. The birth and death pairs of topological features are shown. (B) The Betti numbers of the simplicial homology groups as a function of the P-value threshold. T2D, type 2 diabetes; GWAS, genome-wide association study.
### Biological pathways, transcription factors, and microRNAs
To infer the pathobiological significance of the 1st persistent T2D disease module, we identified over-represented KEGG pathways. The top 10 enriched KEGG pathways included mTOR signaling, FoxO signaling, AMPK signaling, the longevity regulating pathway, PI3K-Akt signaling, the transcriptional mis-regulation pathway in cancer, and several cancer pathways (Figure 4A). In addition, the 1st persistent T2D disease module was enriched with targets of transcription factors, including UBTF, YY1, RUNX1, ZBTB7A, KLF4, RCOR1, GATA1, PBX3, E2F1, and CREB1 (Figure 4B). The 1st persistent T2D disease module was also enriched with targets of microRNAs, including hsa-miR-152-3p and hsa-miR-320a (Figure 4C).
## 4 Discussion
Using persistent homology, this study explored the evolution and persistence of the topological features of T2D GWAS hits in the PPI network as the P-value threshold increased from 0 to 5\(\times\)10\({}^{-8}\). The \(n\)-th persistent disease module was defined as a union of persistent \(n\)-dimensional holes that live forever over all P-value thresholds. This is a higher-order generalization of the conventional disease module concept. The
Figure 3: The 1st persistent T2D disease module in the protein–protein interaction network. (A) The 1st persistent T2D disease module comprising persistent 1-dimensional holes (loops) that live forever over all P-value thresholds. The P-values of the T2D GWAS hits are shown. (B) The 1st persistent T2D disease module is significantly larger than that expected by chance. The red arrow indicates the observed module size. T2D, type 2 diabetes; GWAS, genome-wide association study.
0th persistent T2D disease module is the LCC of the T2D GWAS hits, which is significantly larger than that expected by chance. In the 1st homology group analysis, all 18 1-dimensional holes (loops) of the T2D GWAS hits persist over all P-value thresholds. The 1st persistent T2D disease module comprising these 18 persistent 1-dimensional holes is significantly larger than that expected by chance, indicating a significant topological structure in the PPI network. The 1st persistent T2D disease module is enriched with the mTOR, FoxO, AMPK, and PI3K-Akt signaling pathways; longevity regulating pathway; and cancer pathways. The mechanisms of T2D, aging, and cancer are known to be closely related (Wei et al., 2017). The pathobiological significance of this persistent disease module remains to be validated experimentally.
Our computational topology framework potentially has broad applicability to other complex phenotypes. By analyzing the persistent homology features, the higher-order topological features that may be closely associated with a specific phenotype can be identified. We plan to expand this preliminary study to systematically analyze the topological features of the large-scale disease-gene networks (Menche et al., 2015; Guney et al., 2016). We expect that there are several mathematical ways to expand our persistent homology approach in PPI networks. The weighted topology (Baccini et al., 2022) of weighted PPI networks reflecting proteome-wide binding affinity and concentration information should provide more biologically plausible and reliable information. In addition, relational persistent homology (Stolz et al., 2023) may be a useful tool for dissecting multispecies data, such as multiomics data or multilayer biological networks.
Notwithstanding, this study has several potential limitations. While the conventional disease
Figure 4: Functional enrichment analysis of the 1st persistent T2D disease module. The 1st persistent T2D disease module was enriched by KEGG pathways (the top 10 pathways are shown) (A), targets of transcription factors (B), and targets of microRNAs (C). T2D, type 2 diabetes; KEGG, the Kyoto Encyclopedia of Genes and Genomes.
module concept typically relies on connected components (0th homology class) of disease seeds, the proposed persistent disease module concept is a higher-order generalization of the LCC. Hence, it is difficult to compare our homology approach directly with most other disease module identification algorithms, and it is essential to develop higher-order versions of seed-expanding algorithms to detect robust and reliable persistent disease modules. The role of seed connectors (Wang and Loscalzo, 2018) in homology features also warrants elucidation. As the uncertainty and incompleteness of GWAS and PPI network data are inevitable (Menche et al., 2015), how these errors and uncertainty affect the robustness of persistent disease modules remains unclear. Although no higher-dimensional hole (\(n\geq 2\)) was present in our T2D GWAS hit data, higher-order interactions may play a significant role in disease pathobiology. Dynamic topological data analysis approaches based on sequential data would provide more rigorous and robust results (Ciocanel et al., 2021). Tissue- or cell-type-specific networks (Greene et al., 2015) should provide more biological information regarding persistent disease modules. Determining whether oncogenic mutations perturb PPI or higher-order interactions in the PPI network is a worthwhile endeavor (Cheng et al., 2021).
## Acknowledgement
This study received no external funding. The author has no conflicts of interest to declare. The author would like to thank Dr. Ruisheng Wang for kindly sharing the consolidated human protein-protein interactome data and the anonymous reviewers for their valuable comments.
## Authorship Contribution
**Euijun Song:** Conceptualization, Methodology, Formal analysis, Investigation, Software, Visualization, Writing - original draft.
|
2301.13783 | An analytical approach to Bayesian evidence computation | The Bayesian evidence is a key tool in model selection, allowing a comparison
of models with different numbers of parameters. Its use in analysis of
cosmological models has been limited by difficulties in calculating it, with
current numerical algorithms requiring supercomputers. In this paper we give
exact formulae for the Bayesian evidence in the case of Gaussian likelihoods
with arbitrary correlations and top-hat priors, and approximate formulae for
the case of likelihood distributions with leading non-Gaussianities (skewness
and kurtosis). We apply these formulae to cosmological models with and without
isocurvature components, and compare with results we previously obtained using
numerical thermodynamic integration. We find that the results are of lower
precision than the thermodynamic integration, while still being good enough to
be useful. | Juan Garcia-Bellido | 2023-01-31T17:32:21Z | http://arxiv.org/abs/2301.13783v1 | # An analytical approach to Bayesian evidence computation
###### Abstract
The Bayesian evidence is a key tool in model selection, allowing a comparison of models with different numbers of parameters. Its use in analysis of cosmological models has been limited by difficulties in calculating it, with current numerical algorithms requiring supercomputers. In this paper we give exact formulae for the Bayesian evidence in the case of Gaussian likelihoods with arbitrary correlations and top-hat priors, and approximate formulae for the case of likelihood distributions with leading non-Gaussianities (skewness and kurtosis). We apply these formulae to cosmological models with and without isocurvature components, and compare with results we previously obtained using numerical thermodynamic integration. We find that the results are of lower precision than the thermodynamic integration, while still being good enough to be useful.
## 1 Introduction
Model selection refers to the statistical problem of deciding which model description of observational data is the best [1, 2]. It differs from parameter estimation, where the choice of a single model (i.e. choice of parameters to be varied) has already been made and the aim is to find their best-fitting values and ranges. While there have been widespread applications of parameter estimation techniques, usually likelihood fitting, to cosmological data, there has so far been quite limited application of model selection statistics [3, 4, 5]. This is unfortunate, as model selection techniques are necessary to robustly distinguish between models with different numbers of parameters, and many of the most interesting issues in cosmology concern the desirability or otherwise of incorporating additional parameters to describe new physical effects.
Within the context of Bayesian inference, model selection should be carried out using the _Bayesian evidence_[1, 2], which measures the probability of the model in light of the observational data (i.e. the average likelihood over the prior distribution). The Bayesian evidence associates a single number with each model, and the models can then be ranked in order of the evidence, with the ratios of those values interpreted as the relative probability of the models. This process sets up a desirable tension between model simplicity and ability to fit the data.
Use of the Bayesian evidence has so far been limited by difficulties in calculating it. The standard technique is thermodynamic integration [6, 7], which varies the temperature in a Monte Carlo Markov Chain (MCMC) approach in order that the distribution is sampled in a way covering both posterior and prior distributions. However, in recent work [5] we showed that in order to obtain sufficiently-accurate results in a cosmological context, around \(10^{7}\) likelihood evaluations are required per model. Such analyses are CPU-limited by the time needed to generate the predicted spectra to compare with the data, and this requirement pushes the problem into the supercomputer class (for comparison, parameter estimation runs typically employ \(10^{5}\) to \(10^{6}\) likelihood evaluations).
In this paper, we propose and exploit a new analytic method to compute the evidence based on an expansion of the likelihood distribution function. The method pre-supposes that the covariance of the posterior distribution has been obtained, for instance via an MCMC parameter estimation run, and in its
present form requires that the prior distributions of the parameters are uniform top-hat priors.1 While the method will not be applicable for general likelihood distributions, we include the leading non-gaussianities (skewness and kurtosis) in approximating the likelihood shape, with the expectation of obtaining good results whenever the likelihood distribution is sufficiently simple. Cosmological examples commonly exhibit likelihood distributions with only a single significant peak.
Footnote 1: An extension to Gaussian priors should be feasible, but not one to arbitrary priors.
We apply the method both to toy model examples and to genuine cosmological situations. In particular, we calculate the evidences for adiabatic and isocurvature models, which we previously computed using thermodynamic integration in Ref. [5]. We find that the discrepancies between the methods are typically no worse than 1 in \(\ln(\text{Evidence})\), meaning that the analytic method is somewhat less accurate than would be ideal, but is accurate enough to give a useful indication of model preference.
## 2 The Bayesian evidence
The posterior probability distribution \(\mathcal{P}(\theta,\mathcal{M}|\mathbf{D})\) for the parameters \(\theta\) of the model \(\mathcal{M}\), given the data \(\mathbf{D}\), is related to the likelihood function \(\mathcal{L}(\mathbf{D}|\theta,\mathcal{M})\) within a given set of prior distribution functions \(\pi(\theta,\mathcal{M})\) for the parameters of the model, by Bayes' theorem:
\[\mathcal{P}(\theta,\mathcal{M}|\mathbf{D})=\frac{\mathcal{L}(\mathbf{D}|\theta,\mathcal{M})\,\pi(\theta,\mathcal{M})}{E(\mathbf{D}|\mathcal{M})}\,, \tag{1}\]
where \(E\) is the Bayesian evidence, i.e. the average likelihood over the priors,
\[E(\mathbf{D}|\mathcal{M})=\int d\theta\ \mathcal{L}(\mathbf{D}|\theta, \mathcal{M})\,\pi(\theta,\mathcal{M})\,, \tag{2}\]
where \(\theta\) is a vector with \(n\)-components characterising the \(n\) independent parameters. The prior distribution function \(\pi\) contains all the information about the parameters before observing the data, i.e. our theoretical prejudices, our physical understanding of the model, and input from previous experiments.
In the case of a large number of parameters (\(n\gg 1\)), the evidence integral cannot be performed straightforwardly and must be obtained either numerically or via an analytic approximation. Amongst numerical methods the most popular is thermodynamic integration [6, 7] but this can be computationally extremely intensive [5]. The simplest analytical approximation is the Laplace approximation, valid when the distribution can be approximated by a multivariate Gaussian. This may hold when the quantity and quality of the data is optimal, but is likely to be valid only in limited cosmological circumstances.
The Bayesian evidence is of interest because it allows a comparison of models amongst an exclusive and exhaustive set \(\{\mathcal{M}_{i}\}_{i=1...N}\). We can compute the posterior probability for each hypothesis given the data \(\mathbf{D}\) using Bayes' theorem:
\[\mathcal{P}(\mathcal{M}_{i}|\mathbf{D})\propto E(\mathbf{D}|\mathcal{M}_{i}) \,\pi(\mathcal{M}_{i})\,, \tag{3}\]
where \(E(\mathbf{D}|\mathcal{M}_{i})\) is the evidence of the data under the model \(\mathcal{M}_{i}\), and \(\pi(\mathcal{M}_{i})\) is the prior probability of the \(i\)th model _before_ we see the data. The ratio of the evidences for the two competing models is called the _Bayes factor_[8]
\[B_{ij}=\frac{E(\mathbf{D}|\mathcal{M}_{i})}{E(\mathbf{D}|\mathcal{M}_{j})}\,, \tag{4}\]
and this is also equal to the ratio of the posterior model probabilities if we assume that we do not favour any model _a priori_, so that \(\pi(\mathcal{M}_{1})=\pi(\mathcal{M}_{2})=...=\pi(\mathcal{M}_{N})=1/N\).
The Bayes factor Eq. (4) provides a mathematical representation of Occam's razor, because more complex models tend to be less predictive, lowering their average likelihood in comparison to simpler, more predictive models. More complex models can only be favoured if they are able to provide a significantly improved fit to the data. In simple cases where models give vastly different maximum likelihoods there is no need to employ model selection techniques, but they are essential for properly discussing cases where the improvement
of fit is marginal. This latter situation is more or less inevitable whenever the possibility of requiring an additional parameter arises from new data, unless the new data is of vastly greater power than that preceding it; cosmological examples include the inclusion of spectral tilt, dark energy density variation, or the case explored later in this paper of trace isocurvature perturbations.
In this paper we will obtain an analytical formula which approximates the Bayesian evidence by considering the higher-order cumulants of the distribution in a systematic way. The advantage is that with these analytical formulae one can compute the evidence for a given model with an arbitrary number of parameters, given the hierarchy of cumulants of the distribution, assumed previously computed for the likelihood distribution function within the parameter estimation programme.
The evidence needs to be calculated to sufficient precision for robust conclusions to be drawn. The standard interpretational scale, due to Jeffreys [1] and summarized in Ref. [5], strengthens its verdict roughly each time the difference in \(\ln\)(Evidence) increases by one. The evidence therefore needs to be computed more accurately than this, with an uncertainty of 0.1 in \(\ln\)(Evidence) easily sufficient, and a factor two worse than that acceptable. This accuracy requirement ensures that the relative model probabilities are little changed by the uncertainty.
The first thing we need is to characterize the distribution function for the model with \(n\) parameters. Let \(f({\bf x})\) be this function, and let us assume that it is properly normalized,
\[\int_{-\infty}^{\infty}d^{n}{\bf x}\,f({\bf x})=1\,. \tag{5}\]
Then, the \(p\)-point correlation function is given by
\[\langle x_{i_{1}}\ldots x_{i_{p}}\rangle=\int_{-\infty}^{\infty}d^{n}{\bf x} \ x_{i_{1}}\ldots x_{i_{p}}\,f({\bf x})\,. \tag{6}\]
From this distribution function one can always construct the generating functional, \(\phi({\bf u})\), as the Fourier transform
\[\phi({\bf u})=\int_{-\infty}^{\infty}d^{n}{\bf x}\,e^{i\,{\bf u}\cdot{\bf x}} \,f({\bf x})\,. \tag{7}\]
This function can be expanded as
\[\phi({\bf u})=\exp\left[\sum_{p=1}^{\infty}\frac{i^{p}}{p!}\,A_{i_{1}\ldots i _{p}}\,u^{i_{1}}\ldots u^{i_{p}}\right]\,, \tag{8}\]
where \(A_{i_{1}\ldots i_{p}}\) are totally symmetric rank-\(p\) tensors. For instance, if we restrict ourselves to order 4, we can write
\[\phi({\bf u})=\exp\left[i\,\mu_{i}u_{i}-\frac{1}{2!}\,C_{ij}\,u_{i}u_{j}-\, \frac{i}{3!}\,B_{ijk}\,u_{i}u_{j}u_{k}+\,\frac{1}{4!}\,D_{ijkl}\,u_{i}u_{j}u_{ k}u_{l}+\cdots+\,\frac{i^{n}}{n!}\,A_{i_{1}\ldots i_{n}}\,u_{i_{1}}\ldots u_{i_{n}} \right]\,, \tag{9}\]
where \(\mu_{i}\) is the mean value of variable \(x_{i}\); \(C_{ij}\) is the covariance matrix; \(B_{ijk}\) is the trilinear matrix associated with the third cumulant or skewness; \(D_{ijkl}\) is the rank-4 tensor associated with the fourth cumulant or kurtosis, and \(A_{i_{1}\ldots i_{n}}\) is the rank-\(n\) tensor associated with the \(n\)-th cumulant. Their expressions in terms of \(n\)-point correlation functions can be obtained from Eq. (7), by realising that
\[\langle x_{i_{1}}\ldots x_{i_{n}}\rangle=(-i)^{n}\left.\frac{\partial^{n}\phi ({\bf u})}{\partial u_{i_{1}}\ldots\partial u_{i_{n}}}\right|_{{\bf u}=0}\,. \tag{10}\]
For instance, the first-order term gives
\[\langle x_{i}\rangle=(-i)\left.\frac{\partial\phi({\bf u})}{\partial u_{i}} \right|_{{\bf u}=0}=\mu_{i}\,. \tag{11}\]
The second-order correlation function gives
\[\langle x_{i}x_{j}\rangle=(-i)^{2}\left.\frac{\partial^{2}\phi({\bf u})}{\partial u _{i}\partial u_{j}}\right|_{{\bf u}=0}=C_{ij}+\mu_{i}\mu_{j}\,, \tag{12}\]
such that the covariance matrix is obtained, as usual, from
\[C_{ij}=\langle x_{i}x_{j}\rangle-\langle x_{i}\rangle\langle x_{j}\rangle\,.\]
The third-order correlation function gives
\[\langle x_{i}x_{j}x_{k}\rangle=(-i)^{3}\left.\frac{\partial^{3}\phi({\bf u})}{ \partial u_{i}\partial u_{j}\partial u_{k}}\right|_{{\bf u}=0}=B_{ijk}+\mu_{i }C_{jk}+\mu_{j}C_{ki}+\mu_{k}C_{ij}+\mu_{i}\mu_{j}\mu_{k}\,, \tag{13}\]
such that the skewness matrix is obtained from
\[B_{ijk}=\langle x_{i}x_{j}x_{k}\rangle-\langle x_{i}\rangle\langle x_{j}x_{k} \rangle-\langle x_{j}\rangle\langle x_{k}x_{i}\rangle-\langle x_{k}\rangle \langle x_{i}x_{j}\rangle+2\langle x_{i}\rangle\langle x_{j}\rangle\langle x_ {k}\rangle\,. \tag{14}\]
The fourth-order correlation function gives
\[\langle x_{i}x_{j}x_{k}x_{l}\rangle=(-i)^{4}\left.\frac{\partial^ {4}\phi({\bf u})}{\partial u_{i}\partial u_{j}\partial u_{k}\partial u_{l}} \right|_{{\bf u}=0} = D_{ijkl}+C_{ij}C_{kl}+C_{ik}C_{jl}+C_{il}C_{jk}\] \[+ B_{ijk}\mu_{l}+B_{ijl}\mu_{k}+B_{jkl}\mu_{i}+B_{ikl}\mu_{j}\] \[+ C_{ij}\mu_{k}\mu_{l}+C_{ik}\mu_{j}\mu_{l}+C_{il}\mu_{j}\mu_{k}\] \[+ C_{jk}\mu_{i}\mu_{l}+C_{jl}\mu_{i}\mu_{k}+C_{kl}\mu_{i}\mu_{j}\] \[+ \mu_{i}\mu_{j}\mu_{k}\mu_{l}\,,\]
such that the kurtosis matrix is obtained from
\[D_{ijkl} = \langle x_{i}x_{j}x_{k}x_{l}\rangle-\langle x_{i}x_{j}\rangle\langle x_{k}x_{l}\rangle-\langle x_{i}x_{k}\rangle\langle x_{j}x_{l}\rangle-\langle x_{i}x_{l}\rangle\langle x_{j}x_{k}\rangle\]
\[-\langle x_{i}x_{j}x_{k}\rangle\langle x_{l}\rangle-\langle x_{i}x_{j}x_{l}\rangle\langle x_{k}\rangle-\langle x_{i}x_{k}x_{l}\rangle\langle x_{j}\rangle-\langle x_{j}x_{k}x_{l}\rangle\langle x_{i}\rangle\]
\[+2\,\langle x_{i}x_{j}\rangle\langle x_{k}\rangle\langle x_{l}\rangle+2\,\langle x_{i}x_{k}\rangle\langle x_{j}\rangle\langle x_{l}\rangle+2\,\langle x_{i}x_{l}\rangle\langle x_{j}\rangle\langle x_{k}\rangle+2\,\langle x_{j}x_{k}\rangle\langle x_{i}\rangle\langle x_{l}\rangle\]
\[+2\,\langle x_{j}x_{l}\rangle\langle x_{i}\rangle\langle x_{k}\rangle+2\,\langle x_{k}x_{l}\rangle\langle x_{i}\rangle\langle x_{j}\rangle-6\,\langle x_{i}\rangle\langle x_{j}\rangle\langle x_{k}\rangle\langle x_{l}\rangle\,,\]
and so on, for the higher order cumulants.
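In practice, these cumulant tensors are estimated from the posterior samples of a parameter-estimation run rather than from \(\phi({\bf u})\) itself. A minimal numpy sketch, writing Eqs. (12)-(16) in terms of central moments of the chain (the Gaussian test chain at the end is only a self-check):

```python
import numpy as np

def cumulants(samples):
    """Mean, covariance, skewness and kurtosis tensors from posterior samples.

    samples: array of shape (n_samples, n_params).
    Returns (mu_i, C_ij, B_ijk, D_ijkl) following Eqs. (12)-(16), expressed
    through central moments of the chain.
    """
    x = np.asarray(samples, dtype=float)
    mu = x.mean(axis=0)
    y = x - mu                                            # centred samples
    C = y.T @ y / len(y)                                  # C_ij = <y_i y_j>
    B = np.einsum("si,sj,sk->ijk", y, y, y) / len(y)      # third cumulant (skewness)
    M4 = np.einsum("si,sj,sk,sl->ijkl", y, y, y, y) / len(y)
    D = (M4
         - np.einsum("ij,kl->ijkl", C, C)
         - np.einsum("ik,jl->ijkl", C, C)
         - np.einsum("il,jk->ijkl", C, C))                # fourth cumulant (kurtosis)
    return mu, C, B, D

# Self-check on a Gaussian chain: B and D should vanish within sampling noise.
rng = np.random.default_rng(0)
chain = rng.multivariate_normal([0.0, 1.0], [[1.0, 0.5], [0.5, 2.0]], size=100_000)
mu, C, B, D = cumulants(chain)
print(np.abs(B).max(), np.abs(D).max())   # both small compared with the entries of C
```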
## 3 The Gaussian approximation
Let us first evaluate the evidence for a multivariate Gaussian distribution, that is, one in which all the cumulants are zero except the covariance matrix \(C_{ij}\) and the means \(\mu_{i}\). In this case, the generating functional and the distribution are given by
\[\phi({\bf u})=\exp\Big{[}i\,\mu_{i}u_{i}-\frac{1}{2}\,C_{ij}\,u_{i}u_{j}\Big{]}\,, \tag{17}\]
\[f({\bf x})=\frac{1}{(2\pi)^{n}}\int_{-\infty}^{\infty}d^{n}{\bf u}\;e^{-i\,{\bf u }\cdot{\bf x}}\;\phi({\bf u}) \tag{18}\]
\[=\frac{1}{(2\pi)^{n/2}\sqrt{\det C}}\exp\Big{[}-\frac{1}{2}C_{ij}^{-1}(x_{i}- \mu_{i})(x_{j}-\mu_{j})\Big{]}\,, \tag{19}\]
which satisfies
\[\langle x_{i}\rangle=\mu_{i}\,,\qquad\quad\langle x_{i}x_{j}\rangle=C_{ij}+\mu_{i} \mu_{j}\,,\qquad\quad\langle x_{i}x_{j}x_{k}\rangle=\mu_{(i}C_{jk)}+\mu_{i}\mu_ {j}\mu_{k}\,,\quad\quad\ldots \tag{20}\]
where the subindices in parentheses, \((ijk)\), indicate a cyclic sum. Notice that all the \(n\)-point correlation functions can be written in terms of the first two moments of the distribution, and all the higher-order cumulants vanish.
### Centred priors
For initial calculations, we assume a top-hat prior and make the unrealistic assumption, to be lifted later, that it is centered at the mean value:
\[\pi(x,a)\equiv\left\{\begin{array}{ll}(2a)^{-1}&\qquad-a<x-\mu<a\,,\\ 0&\qquad\mbox{otherwise}\,.\end{array}\right. \tag{21}\]
Since the Fourier transform of a top-hat function is
\[\int_{-\infty}^{\infty}dx\,e^{iux}\,\pi(x,a)=\frac{\sin au}{au}\,\exp[i\mu u]\,,\]
we can write the evidence either way
\[E(a_{1},\ldots,a_{n}) = \int_{-\infty}^{\infty}d^{n}{\bf x}\,f({\bf x})\,\prod_{i=1}^{n} \pi(x_{i},a_{i})\ =\ \prod_{i=1}^{n}(2a_{i})^{-1}\!\int_{-a_{1}}^{a_{1}}dx_{1}\!\cdots\!\int_{-a_{ n}}^{a_{n}}dx_{n}\,f(\tilde{\bf x}) \tag{22}\] \[= \frac{1}{(2\pi)^{n}}\int_{-\infty}^{\infty}d^{n}{\bf u}\,\phi({ \bf u})\,\prod_{i=1}^{n}\frac{\sin a_{i}u_{i}}{a_{i}u_{i}}\,. \tag{23}\]
In Eq. (22) we integrate over the displaced coordinate, \(\tilde{x}_{i}\equiv x_{i}-\mu_{i}\), such that \(\langle\tilde{x}_{i}\rangle=0\) and \(\langle\tilde{x}_{i}\tilde{x}_{j}\rangle=C_{ij}\). From now on, we ignore the tildes, and assume we have moved to those coordinates. Note that the choice of prior is not crucial. We could have chosen a Gaussian prior, and the result would not be very different, except that the window functions, \(\sin z/z\), would then be Gaussians. Let us now perform the integration Eq. (22) in the case of 1, 2 and then \(n\) variables.
**1 variable**. Suppose the covariance is just \(C=\sigma^{2}\). The evidence is then
\[E(a)=\frac{1}{2a\,\sigma\sqrt{2\pi}}\int_{-a}^{a}dx\ e^{-\frac{x^{2}}{2 \sigma^{2}}}=\frac{1}{2\pi}\int_{-\infty}^{\infty}du\,\frac{\sin au}{au}\,e^{ -\frac{1}{2}\sigma^{2}u^{2}}=\frac{1}{2a}{\rm Erf}\Big{[}\frac{a}{\sigma\sqrt {2}}\Big{]}\,, \tag{24}\]
where \({\rm Erf}[x]\) is the error function, which asymptotes very quickly to one for \(x\geq 2\), or \(a\geq 3\sigma\). Therefore, the evidence of a model with a centred top-hat prior of width \(2a\) is well approximated by \((2a)^{-1}\). The wider the theoretical prior, the smaller the evidence, as expected.
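Eq. (24) is straightforward to verify numerically; a short scipy check with arbitrary test values, comparing the closed form, direct quadrature, and the wide-prior limit \((2a)^{-1}\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

sigma, a = 1.3, 2.0   # arbitrary test values

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

analytic = erf(a / (sigma * np.sqrt(2))) / (2 * a)                 # Eq. (24)
numeric, _ = quad(lambda x: gaussian(x, sigma) / (2 * a), -a, a)   # direct integration
print(analytic, numeric)    # agree to quadrature precision
print(1 / (2 * a))          # wide-prior limit, a >> sigma
```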
**2 variables**. Suppose we have two correlated variables, \(x_{1}\) and \(x_{2}\), with covariance matrix
\[C=\left(\begin{array}{cc}C_{11}&C_{12}\\ C_{12}&C_{22}\end{array}\right)=\left(\begin{array}{cc}\sigma_{1}^{2}&\rho \sigma_{1}\sigma_{2}\\ \rho\sigma_{1}\sigma_{2}&\sigma_{2}^{2}\end{array}\right)\,. \tag{25}\]
where the cross-correlation \(\rho\) is defined by
\[\rho=\frac{\langle x_{1}x_{2}\rangle}{\sqrt{\langle x_{1}^{2}\rangle\langle x _{2}^{2}\rangle}}=\frac{\langle x_{1}x_{2}\rangle}{\sigma_{1}\sigma_{2}}\,,\]
with \(\sigma_{1}\) and \(\sigma_{2}\) the corresponding quadratic dispersions. In this case, the normalized 2-dimensional distribution function is
\[f({\bf x})=\frac{1}{2\pi\sigma_{1}\sigma_{2}\sqrt{1-\rho^{2}}}\,\exp\Big{[} \frac{-1}{1-\rho^{2}}\Big{(}\frac{x_{1}^{2}}{2\sigma_{1}^{2}}-\frac{\rho x_{1} x_{2}}{\sigma_{1}\sigma_{2}}+\frac{x_{2}^{2}}{2\sigma_{2}^{2}}\Big{)}\Big{]}\,, \tag{26}\]
which has the property that integrating ("marginalizing") over one of the two variables leaves a properly-normalized Gaussian distribution for the remaining variable,
\[\int_{-\infty}^{\infty}dx_{2}\ f({\bf x})=\frac{1}{\sigma_{1}\sqrt{2\pi}}\,e^{ -\frac{x_{1}^{2}}{2\sigma_{1}^{2}}}\,. \tag{27}\]
Let us now evaluate the evidence Eq. (22) by integrating first over the prior in \(x_{2}\),
\[\frac{1}{2a_{2}}\int_{-a_{2}}^{a_{2}}dx_{2}\ f({\bf x})=\frac{e^{-\frac{x_{1} ^{2}}{2\sigma_{1}^{2}}}}{\sigma_{1}\sqrt{2\pi}}\cdot\frac{1}{4a_{2}}\left[{ \rm Erf}\Big{[}\frac{a_{2}\sigma_{1}+\rho\sigma_{2}\,x_{1}}{\sigma_{1}\sigma_ {2}\sqrt{2(1-\rho^{2})}}\Big{]}+{\rm Erf}\Big{[}\frac{a_{2}\sigma_{1}-\rho \sigma_{2}\,x_{1}}{\sigma_{1}\sigma_{2}\sqrt{2(1-\rho^{2})}}\Big{]}\right]\,. \tag{28}\]
The first term is the result we would have obtained if we had been marginalizing over \(x_{2}\); the second is a sum of error functions that still depend on \(x_{1}\), and modulates the marginalization. We can use the series expansion of the error function to second order,
\[\frac{1}{2}\Big{(}{\rm Erf}[a+x]+{\rm Erf}[a-x]\Big{)}={\rm Erf}[a]-\frac{2a \,x^{2}}{\sqrt{\pi}}\,e^{-a^{2}}+{\cal O}(x^{4})\,,\]
to write Eq. (28) to order \(x_{1}^{2}\) as
\[\frac{1}{2a_{2}}\int_{-a_{2}}^{a_{2}}dx_{2}\ f({\bf x})=\frac{e^{-\frac{x_{1}^{2}}{2\sigma_{1}^{2}}}}{\sigma_{1}\sqrt{2\pi}}\left[\frac{1}{2a_{2}}\,{\rm Erf}\Big{[}\frac{a_{2}}{\sigma_{2}\sqrt{2(1-\rho^{2})}}\Big{]}-\frac{\rho^{2}\,x_{1}^{2}\,e^{-\frac{a_{2}^{2}}{2\sigma_{2}^{2}(1-\rho^{2})}}}{2\sigma_{1}^{2}\sigma_{2}(1-\rho^{2})\sqrt{2\pi(1-\rho^{2})}}\right]\,. \tag{29}\]
Integrating now over the \(x_{1}\) prior, we finally obtain the evidence
\[E(a_{1},a_{2}) = \frac{1}{4a_{1}a_{2}}\int_{-a_{1}}^{a_{1}}dx_{1}\int_{-a_{2}}^{a _{2}}dx_{2}\ f({\bf x})\] \[= \frac{1}{4a_{1}a_{2}}\,{\rm Erf}\Big{[}\frac{a_{2}}{\sigma_{2} \sqrt{2(1-\rho^{2})}}\Big{]}{\rm Erf}\Big{[}\frac{a_{1}}{\sigma_{1}\sqrt{2}} \Big{]}\] \[- \frac{\rho^{2}\,e^{-\frac{a_{2}^{2}}{2\sigma_{2}^{2}(1-\rho^{2})} }}{2\sigma_{1}\sigma_{2}(1-\rho^{2})\sqrt{2\pi(1-\rho^{2})}}\,\frac{{\rm Erf} \Big{[}\frac{a_{1}}{\sigma_{1}\sqrt{2}}\Big{]}}{2a_{1}}+\frac{\rho^{2}\,e^{- \frac{a_{2}^{2}}{2\sigma_{2}^{2}(1-\rho^{2})}-\frac{a_{1}^{2}}{2\sigma_{1}^{2} }}}{4\pi\sigma_{1}^{2}\sigma_{2}\sqrt{1-\rho^{2}}}\,.\]
Note that in the limit of no cross-correlations, \(\rho\to 0\), the integral factorizes and we can write an exact expression for the evidence,
\[E(a_{1},a_{2}) = \frac{1}{4a_{1}a_{2}}\,\frac{1}{2\pi\sigma_{1}\sigma_{2}}\int_{-a _{1}}^{a_{1}}dx_{1}\int_{-a_{2}}^{a_{2}}dx_{2}\ e^{-\frac{x_{1}^{2}}{2\sigma_{1}^ {2}}-\frac{x_{2}^{2}}{2\sigma_{2}^{2}}} \tag{31}\] \[= \frac{1}{4\pi^{2}}\int_{-\infty}^{\infty}du_{1}\int_{-\infty}^{ \infty}du_{2}\,\frac{\sin a_{1}u_{1}}{a_{1}u_{1}}\,\frac{\sin a_{2}u_{2}}{a_{ 2}u_{2}}\,e^{-\frac{1}{2}\sigma_{1}^{2}u_{1}^{2}-\frac{1}{2}\sigma_{2}^{2}u_{2 }^{2}}\] (32) \[= \frac{1}{4a_{1}a_{2}}{\rm Erf}\Big{[}\frac{a_{1}}{\sigma_{1} \sqrt{2}}\Big{]}{\rm Erf}\Big{[}\frac{a_{2}}{\sigma_{2}\sqrt{2}}\Big{]}\,. \tag{33}\]
It happens, however, that even in the presence of cross-correlations, if the prior is wide (\(a_{i}\geq 2\sigma_{i}\)), then the terms proportional to exponentials are negligible and the evidence becomes, to very good approximation,
\[E(a_{1},a_{2})=\frac{1}{4a_{1}a_{2}}\,{\rm Erf}\Big{[}\frac{a_{2}}{\sigma_{2} \sqrt{2(1-\rho^{2})}}\Big{]}{\rm Erf}\Big{[}\frac{a_{1}}{\sigma_{1}\sqrt{2}} \Big{]}\,. \tag{34}\]
Moreover, in that case the error functions are very close to unity.
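The accuracy of the wide-prior expression Eq. (34) can be checked against a brute-force double integral of the bivariate Gaussian of Eq. (26); a sketch with scipy, using arbitrary test values of \(\sigma_{1,2}\), \(\rho\) and the prior half-widths:

```python
import numpy as np
from scipy.integrate import dblquad
from scipy.special import erf

s1, s2, rho = 1.0, 0.7, 0.6
a1, a2 = 2.5, 2.0          # centred top-hat half-widths (arbitrary test values)

def f(x1, x2):
    """Normalised bivariate Gaussian of Eq. (26)."""
    norm = 2 * np.pi * s1 * s2 * np.sqrt(1 - rho**2)
    q = (x1**2 / s1**2 - 2 * rho * x1 * x2 / (s1 * s2) + x2**2 / s2**2) / (2 * (1 - rho**2))
    return np.exp(-q) / norm

# Exact evidence by direct integration over the prior box.
exact, _ = dblquad(lambda x2, x1: f(x1, x2), -a1, a1, -a2, a2)
exact /= 4 * a1 * a2

# Eq. (34), dropping the exponentially small corrections.
approx = (erf(a2 / (s2 * np.sqrt(2 * (1 - rho**2))))
          * erf(a1 / (s1 * np.sqrt(2)))) / (4 * a1 * a2)
print(exact, approx)
```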
\(n\) **variables**. Suppose we have \(n\) correlated variables, \({\bf x}=(x_{1},\ldots,x_{n})\), with covariance matrix
\[C_{n}=\left(\begin{array}{cccc}C_{11}&C_{12}&\ldots&C_{1n}\\ C_{12}&C_{22}&\ldots&C_{2n}\\ \vdots&\vdots&\ddots&\vdots\\ C_{1n}&C_{2n}&\ldots&C_{nn}\end{array}\right)\,. \tag{35}\]
In that case, the probability distribution function can be expressed as
\[f({\bf x})=\frac{1}{(2\pi)^{n/2}\sqrt{\det C_{n}}}\exp\Big{[}-\frac{1}{2}{\bf x}^{\rm T}C_{n}^{-1}{\bf x}\Big{]}\,, \tag{36}\]
which has the property that marginalizing over the last variable, \(x_{n}\), we obtain a correlated probability distribution function for the \(n-1\) variables, \({\bf x}=(x_{1},\ldots,x_{n-1})\),
\[f({\bf x})=\frac{1}{(2\pi)^{(n-1)/2}\sqrt{\det C_{n-1}}}\exp\Big{[}-\frac{1}{2}{\bf x}^{\rm T}C_{n-1}^{-1}{\bf x}\Big{]}\,, \tag{37}\]
where the \(C_{n-1}\) covariance matrix is given by Eq. (35) without the last column and the last row.
We will now evaluate the evidence Eq. (22) for this multivariate Gaussian, starting with the integration over the last variable, \(x_{n}\),
\[\frac{1}{2a_{n}}\int_{-a_{n}}^{a_{n}}dx_{n}\ f({\bf x}) = \frac{1}{(2\pi)^{(n-1)/2}\sqrt{\det C_{n-1}}}\exp\Big{[}-\frac{1}{2}{\bf x}^{\rm T}C_{n-1}^{-1}{\bf x}\Big{]} \tag{38}\] \[\times\left\{\frac{1}{2a_{n}}\,{\rm Erf}\left[\frac{a_{n}}{\sqrt{2}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\right]+{\cal O}\Big{(}e^{-\frac{a_{n}^{2}\,\det C_{n-1}}{2\det C_{n}}}\Big{)}\right\}\,.\]
Integrating now over the next variable, \(x_{n-1}\), we find
\[\frac{1}{4a_{n}a_{n-1}}\int_{-a_{n}}^{a_{n}}dx_{n}\int_{-a_{n-1}}^{a_{n-1}}dx_{n-1}\ f({\bf x})=\frac{1}{(2\pi)^{(n-2)/2}\sqrt{\det C_{n-2}}}\exp\Big{[}-\frac{1}{2}\,{\bf x}^{\rm T}C_{n-2}^{-1}{\bf x}\Big{]}\]
\[\times\left\{\frac{1}{4a_{n}a_{n-1}}\,{\rm Erf}\left[\frac{a_{n}}{\sqrt{2}} \sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\right]\,{\rm Erf}\left[\frac{a_{n}}{ \sqrt{2}}\sqrt{\frac{\det C_{n-2}}{\det C_{n-1}}}\right]+{\cal O}\Big{(}e^{- \frac{a_{n}^{2}\,\det C_{n-1}}{2\det C_{n}}}\Big{)}\right\}\,. \tag{39}\]
Continuing the integration over the priors, we end up with the evidence for the \(n\)-dimensional distribution,
\[E(a_{1},\ldots,a_{n}) = \frac{1}{\prod_{p=1}^{n}2a_{p}}\int_{-a_{1}}^{a_{1}}\!\!\cdots\int _{-a_{n}}^{a_{n}}d^{n}{\bf x}\ f({\bf x}) \tag{40}\] \[= \prod_{p=1}^{n}\frac{1}{2a_{p}}{\rm Erf}\left[\frac{a_{p}}{\sqrt {2}}\sqrt{\frac{\det C_{p-1}}{\det C_{p}}}\right]+{\cal O}\left(\exp\Big{[}- \sum_{p=1}^{n}\frac{a_{p}^{2}\,\det C_{p-1}}{2\det C_{p}}\Big{]}\right)\,,\]
where the covariance matrices \(C_{p}\) are constructed as above, by eliminating the \(n-p\) last rows and columns, until we end up with \(C_{0}\equiv 1\). Note that the approximation is very good whenever \(\sum_{p=1}^{n}(a_{p}^{2}\,\det C_{p-1})/(2\det C_{p})\gg 1\), which is often the case. Note also that we recover the previous result Eq. (34) for the particular case \(n=2\).
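Eq. (40) is simple to implement: the covariance sub-determinants are those of the leading principal blocks of \(C_{n}\). The sketch below also cross-checks the formula with a Monte Carlo estimate of the prior average over the Gaussian (the covariance and half-widths are arbitrary test values); the two numbers should agree closely because the dropped terms are exponentially small.

```python
import numpy as np
from scipy.special import erf

def gaussian_evidence_centred(C, a):
    """Approximate evidence of Eq. (40) for a zero-mean Gaussian with covariance C
    and centred top-hat priors of half-widths a (exponentially small terms dropped)."""
    C = np.asarray(C, float)
    a = np.asarray(a, float)
    E = 1.0
    for p in range(1, len(a) + 1):
        det_p = np.linalg.det(C[:p, :p])
        det_pm1 = np.linalg.det(C[:p - 1, :p - 1]) if p > 1 else 1.0   # C_0 := 1
        E *= erf(a[p - 1] / np.sqrt(2) * np.sqrt(det_pm1 / det_p)) / (2 * a[p - 1])
    return E

# Monte Carlo cross-check: average the top-hat priors over the Gaussian.
rng = np.random.default_rng(1)
C = np.array([[1.0, 0.3, 0.1],
              [0.3, 0.8, 0.2],
              [0.1, 0.2, 1.5]])
a = np.array([2.0, 2.5, 3.0])
x = rng.multivariate_normal(np.zeros(3), C, size=500_000)
mc = np.mean(np.all(np.abs(x) < a, axis=1)) / np.prod(2 * a)
print(gaussian_evidence_centred(C, a), mc)
```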
In the limit that the cross-correlation between the \(n\) variables vanishes, the evidence (40) reduces to the exact result
\[E(a_{1},\ldots,a_{n})=\prod_{p=1}^{n}\frac{1}{2a_{p}}\mbox{Erf}\left[\frac{a_{ p}}{\sigma_{p}\sqrt{2}}\right]\,. \tag{41}\]
Note that the evidence Eq. (40) reflects correctly the limit in which we eliminate the need for a new variable \(x_{n}\), by making its prior vanish,
\[\lim_{a_{n}\to 0}\ E(a_{1},\ldots,a_{n})=E(a_{1},\ldots,a_{n-1})\,\frac{1}{ \sqrt{2\pi}}\,\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\,, \tag{42}\]
and thus we recover in that limit a properly-normalized distribution, \(f(x_{1},\ldots,x_{n})\to f(x_{1},\ldots,x_{n-1})\), while the inspection of the likelihood function alone would not have been able to give a reasonable answer.
On the other hand, in the case that our theoretical prejudice cannot assign a concrete prior to a given variable, we see that the evidence decreases as \(1/2a\) as \(a\) increases. Therefore, the Bayesian evidence seems to be a very good discriminator between theoretical priors, and penalizes including too many parameters, _a la_ Occam's razor.
### Uncentered priors
It is unlikely that the priors will actually be centred on the mean of the distribution, as the priors are not supposed to know what the data will tell us. We therefore need to generalize the above for uncentred priors. We continue to assume that the priors are top hats.
We also continue to assume for the moment that the probability distribution is well approximated by a Gaussian with mean value \(\mu\). We will then use displaced variables \(\tilde{x}_{i}=x_{i}-\mu_{i}\), and write the Gaussian distribution function as in Eq. (36). The normalized top-hat prior is now uncentered with respect to the mean value,
\[\pi(\tilde{x};a,b)\equiv\left\{\begin{array}{ll}(a+b)^{-1}&-a<\tilde{x}<b\,, \\ 0&\mbox{otherwise}\,.\end{array}\right. \tag{43}\]
For a single variable, the result is _exact_,
\[E(a;b)=\int_{-\infty}^{\infty}dx\,f(x)\,\pi(x;a,b)=\frac{1}{2a+2b}\left(\mbox {Erf}\left[\frac{a}{\sigma\sqrt{2}}\right]+\mbox{Erf}\left[\frac{b}{\sigma \sqrt{2}}\right]\right)\,. \tag{44}\]
where we are integrating over the displaced variable \(\tilde{x}\), from now on renamed as \(x\). Note that we recover the result Eq. (24) for the centered prior case in the limit \(b\to a\).
For two variables, with distribution function Eq. (26), the uncentered Bayesian evidence is
\[E(a_{1},a_{2};b_{1},b_{2}) = \frac{1}{(a_{1}+b_{1})(a_{2}+b_{2})}\int_{-a_{1}}^{b_{1}}dx_{1}\, \int_{-a_{2}}^{b_{2}}dx_{2}\,f(x_{1},x_{2})\] \[= \frac{1}{(2a_{1}+2b_{1})(2a_{2}+2b_{2})}\left\{\left(\mbox{Erf} \left[\frac{a_{1}}{\sigma_{1}\sqrt{2}}\right]+\mbox{Erf}\left[\frac{b_{1}}{ \sigma_{1}\sqrt{2}}\right]\right)\right.\] \[\left.\times\left(\mbox{Erf}\left[\frac{a_{2}}{\sigma_{2}\sqrt{2 (1-\rho^{2})}}\right]+\mbox{Erf}\left[\frac{b_{2}}{\sigma_{2}\sqrt{2(1-\rho^{2 })}}\right]\right)\right.\] \[\left.-\,\frac{\rho}{2\pi\sqrt{1-\rho^{2}}}\left(e^{-\frac{a_{1} ^{2}}{2\sigma_{1}^{2}}}-e^{-\frac{b_{1}^{2}}{2\sigma_{1}^{2}}}\right)\left(e^ {-\frac{a_{2}^{2}}{2\sigma_{2}^{2}(1-\rho^{2})}}+e^{-\frac{b_{2}^{2}}{2\sigma_ {2}^{2}(1-\rho^{2})}}\right)\right\}\]
The evidence for the multiple-variable case Eq. (36) is
\[E({\bf a},{\bf b})=\int_{-\infty}^{\infty}d^{n}{\bf x}\,f({\bf x})\,\prod_{i=1}^{ n}\,\pi(x_{i};a_{i},b_{i})=\prod_{i=1}^{n}(a_{i}+b_{i})^{-1}\!\int_{-a_{1}}^{b_{1}}d \tilde{x}_{1}\!\cdots\!\int_{-a_{n}}^{b_{n}}d\tilde{x}_{n}\,f(\tilde{\bf x})\,. \tag{47}\]
Let us now evaluate it for the multivariate Gaussian Eq. (36), starting with the integration over the last variable, \(x_{n}\),
\[\frac{1}{a_{n}+b_{n}}\int_{-a_{n}}^{b_{n}}dx_{n}\ f({\bf x})\ =\ \frac{1}{(2\pi)^{(n-1)/2}\sqrt{\det C_{n-1}}}\exp\left[-\,\frac{1}{2}{\bf x}^{T}\,C_{n-1}^{-1}{\bf x}\right]\frac{1}{(2a_{n}+2b_{n})}\] \[\times\left\{{\rm Erf}\left[\frac{a_{n}}{\sqrt{2}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\right]+{\rm Erf}\left[\frac{b_{n}}{\sqrt{2}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\right]+{\cal O}\left(e^{-\frac{a_{n}^{2}\det C_{n-1}}{2\det C_{n}}}+e^{-\frac{b_{n}^{2}\det C_{n-1}}{2\det C_{n}}}\right)\right\} \tag{48}\]
Integrating now over the next variable, \(x_{n-1}\), we find
\[\frac{1}{(a_{n}+b_{n})(a_{n-1}+b_{n-1})}\int_{-a_{n}}^{b_{n}}dx_{n}\int_{-a_{n-1}}^{b_{n-1}}dx_{n-1}\ f({\bf x})=\] \[\frac{1}{(2\pi)^{(n-2)/2}\sqrt{\det C_{n-2}}}\exp\Big{[}-\,\frac{1}{2}\,{\bf x}^{T}\,C_{n-2}^{-1}{\bf x}\Big{]}\ \frac{1}{(2a_{n}+2b_{n})(2a_{n-1}+2b_{n-1})} \tag{49}\] \[\times\left\{\left({\rm Erf}\left[\frac{a_{n}}{\sqrt{2}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\right]+{\rm Erf}\left[\frac{b_{n}}{\sqrt{2}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\right]\right)\right.\] (50) \[\times\left({\rm Erf}\left[\frac{a_{n-1}}{\sqrt{2}}\sqrt{\frac{\det C_{n-2}}{\det C_{n-1}}}\right]+{\rm Erf}\left[\frac{b_{n-1}}{\sqrt{2}}\sqrt{\frac{\det C_{n-2}}{\det C_{n-1}}}\right]\right)\] (51) \[+\ \left.{\cal O}\left(e^{-\frac{a_{n}^{2}\det C_{n-1}}{2\det C_{n}}}+e^{-\frac{b_{n}^{2}\det C_{n-1}}{2\det C_{n}}}\right)\times\left(e^{-\frac{a_{n-1}^{2}\det C_{n-2}}{2\det C_{n-1}}}+e^{-\frac{b_{n-1}^{2}\det C_{n-2}}{2\det C_{n-1}}}\right)\right\}\,.\]
Continuing the integration over the priors, we end up with the evidence for the \(n\)-dimensional distribution,
\[E({\bf a},{\bf b}) = \frac{1}{\prod_{p=1}^{n}(a_{p}+b_{p})}\int_{-a_{1}}^{b_{1}}\!\! \cdots\!\int_{-a_{n}}^{b_{n}}d^{n}{\bf x}\ f({\bf x})\] \[= \prod_{p=1}^{n}\frac{1}{(2a_{p}+2b_{p})}\left({\rm Erf}\left[ \frac{a_{p}}{\sqrt{2}}\sqrt{\frac{\det C_{p-1}}{\det C_{p}}}\right]+{\rm Erf} \left[\frac{b_{p}}{\sqrt{2}}\sqrt{\frac{\det C_{p-1}}{\det C_{p}}}\right]\right)\] \[\qquad\qquad\qquad+\ {\cal O}\left(\prod_{p=1}^{n}\left[\exp\Big{(}-\, \frac{a_{p}^{2}\,\det C_{p-1}}{2\det C_{p}}\Big{)}+\exp\Big{(}-\,\frac{b_{p}^{ 2}\,\det C_{p-1}}{2\det C_{p}}\Big{)}\right]\right)\,,\]
where the covariance matrices \(C_{p}\) are constructed as above, by eliminating the \(n-p\) last rows and columns, until \(C_{0}\equiv 1\). Note that the approximation is very good whenever the exponents are large, \(\sum_{p=1}^{n}(a_{p}^{2}\,\det C_{p-1})/(2\det C_{p})\gg 1\), which is often the case. Note also that we recover the expression of the evidence for the centered priors Eq. (40) in the limit \(b\to a\).
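The product-of-error-functions structure of Eq. (52) is straightforward to implement once the nested determinants \(\det C_{p}\) are computed from the leading \(p\times p\) blocks of the covariance matrix. The sketch below is one possible implementation for a unit-normalized Gaussian likelihood; the covariance matrix and prior ranges are made-up examples.

```python
# Sketch of the approximate evidence of Eq. (52) for a correlated Gaussian
# likelihood (normalized to unity) with uncentered top-hat priors (-a_p, b_p).
# C_p is the leading p x p block of the covariance matrix, with det C_0 = 1.
import numpy as np
from scipy.special import erf

def log_evidence_tophat(C, a, b):
    """ln of the product-of-Erf approximation, Eq. (52)."""
    n = len(a)
    dets = [1.0] + [np.linalg.det(C[:p, :p]) for p in range(1, n + 1)]
    logE = 0.0
    for p in range(1, n + 1):
        r = np.sqrt(dets[p - 1] / dets[p])      # sqrt(det C_{p-1} / det C_p)
        term = erf(a[p - 1] * r / np.sqrt(2)) + erf(b[p - 1] * r / np.sqrt(2))
        logE += np.log(term) - np.log(2 * (a[p - 1] + b[p - 1]))
    return logE

C = np.array([[1.0, 0.4], [0.4, 0.5]])          # example covariance matrix
a = np.array([2.0, 1.5])                        # distances from the mean to the lower prior edges
b = np.array([3.0, 2.5])                        # distances from the mean to the upper prior edges
print(log_evidence_tophat(C, a, b))
```

For a likelihood normalized to its maximum instead, Eq. (54) simply adds \(\ln{\cal L}_{\rm max}+\frac{n}{2}\ln(2\pi)+\frac{1}{2}\ln\det C_{n}\) to this quantity.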
Let us now evaluate the evidence for a distribution normalized to the maximum of the likelihood distribution,
\[f({\bf x})={\cal L}_{\rm max}\exp\left[-\,\frac{1}{2}{\bf x}^{T}\,C_{n}^{-1}{\bf x}\right] \tag{53}\]
In this case, the evidence is given by Eq. (52), multiplied by a factor \({\cal L}_{\rm max}\times(2\pi)^{n/2}\sqrt{\det C_{n}}\) from the normalization. We can then evaluate the logarithm of the evidence, ignoring the exponentially-small corrections, as
\[\ln E = \ln{\cal L}_{\rm max}+\frac{n}{2}\ln(2\pi)+\frac{1}{2}\ln\det C_{n }-\sum_{p=1}^{n}\ln(2a_{p}+2b_{p}) \tag{54}\] \[+\sum_{p=1}^{n}\ln\left({\rm Erf}\left[\frac{a_{p}}{\sqrt{2}} \sqrt{\frac{\det C_{p-1}}{\det C_{p}}}\right]+{\rm Erf}\left[\frac{b_{p}}{ \sqrt{2}}\sqrt{\frac{\det C_{p-1}}{\det C_{p}}}\right]\right)\,.\]
**Uncorrelated case**. Suppose we have a multivariate Gaussian distribution without correlations between variables, i.e. \(C_{ij}=\sigma_{i}^{2}\delta_{ij}\) is a diagonal matrix; then the evidence reads _exactly_,
\[E({\bf a},{\bf b})=\frac{1}{\prod_{p=1}^{n}(a_{p}+b_{p})}\int_{-a_{1}}^{b_{1}} \!\cdots\int_{-a_{n}}^{b_{n}}d^{n}{\bf x}\ f({\bf x})=\prod_{p=1}^{n}\frac{1}{2 (a_{p}+b_{p})}\left({\rm Erf}\left[\frac{a_{p}}{\sigma_{p}\sqrt{2}}\right]+{ \rm Erf}\left[\frac{b_{p}}{\sigma_{p}\sqrt{2}}\right]\right)\,, \tag{55}\]
where \(\sigma_{p}\) are the dispersions of each variable \(\tilde{x}_{p}\), and thus the logarithm of the evidence becomes
\[\ln E=\ln{\cal L}_{\rm max}+\frac{n}{2}\ln(2\pi)+\sum_{p=1}^{n}\ln\sigma_{p}- \sum_{p=1}^{n}\ln(2a_{p}+2b_{p})+\sum_{p=1}^{n}\ln\left({\rm Erf}\left[\frac{ a_{p}}{\sigma_{p}\sqrt{2}}\right]+{\rm Erf}\left[\frac{b_{p}}{\sigma_{p}\sqrt{2}} \right]\right) \tag{56}\]
**Laplace approximation**. The Laplace approximation to the evidence assumes the distribution is a correlated Gaussian, and that the priors are large enough so that the whole distribution fits easily inside them, in which case the error functions are approximately unity and do not contribute to the evidence; from Eq. (54) we now have
\[\ln E=\ln{\cal L}_{\rm max}+\frac{n}{2}\ln(2\pi)+\frac{1}{2}\ln\det C_{n}- \sum_{p=1}^{n}\ln\Delta\theta_{p}\,, \tag{57}\]
where \(\Delta\theta_{p}=a_{p}+b_{p}\) is the parameter interval associated with the prior. In the next section we will compare the different approximations.
## 4 Non-Gaussian corrections
The advantage of this method is that one can perform a systematic computation of the evidence of a given model with its own priors, given an arbitrary set of moments of the distribution. Here we will consider the first two beyond the covariance matrix, i.e. the skewness and the kurtosis terms, see Eq. (9).
### Skewness
Let us start with the first correction to the Gaussian approximation, the trilinear term \(B_{ijk}\). For this, we write the generating functional (9) as
\[\phi({\bf u})=\exp\left[i\,\mu_{i}u_{i}-\frac{1}{2!}\,C_{ij}\,u_{i}u_{j}-\, \frac{i}{3!}\,B_{ijk}\,u_{i}u_{j}u_{k}\right]\,. \tag{58}\]
By performing a change of variable, \(u_{i}=y_{i}-i\,C_{ik}^{-1}(x_{k}-\mu_{k})\), we can evaluate the Fourier transform integral and obtain the properly-normalized probability distribution function
\[f({\bf x}) = \frac{1}{(2\pi)^{n/2}\sqrt{\det C_{n}}}\exp\left[-\,\frac{1}{2}{ \bf x}^{\tau}C_{n}^{-1}{\bf x}\right] \tag{59}\] \[\times\left(1-\frac{1}{2}B_{ijk}\,C_{ij}^{-1}C_{kl}^{-1}\,x_{l}+ \frac{1}{6}B_{ijk}\,C_{il}^{-1}C_{jm}^{-1}C_{kn}^{-1}\,x_{l}x_{m}x_{n}\right)\,,\]
where \(x_{k}\) are the displaced coordinates \((x_{k}-\mu_{k})\). This skewed distribution function satisfies
\[\langle x_{i}\rangle=0\,,\hskip 28.452756pt\langle x_{i}x_{j}\rangle=C_{ij}\,, \hskip 28.452756pt\langle x_{i}x_{j}x_{k}\rangle=B_{ijk}\,,\hskip 28.452756pt \langle x_{i}x_{j}x_{k}x_{l}\rangle=0\,,\hskip 28.452756pt\ldots \tag{60}\]
as can be confirmed by direct evaluation. Let us now compute the evidence Eq. (22) for this skewed model. Since the extra terms in the parentheses of Eq. (59) are both odd functions of \(x\), when integrating over an even range like that of the centered top-hat prior Eq. (21), their contribution to the evidence vanishes, and thus the final evidence for the skewed model does not differ from that of the Gaussian model Eq. (40). If the prior is off-centered with respect to the mean, e.g. like in Eq. (43), then the contribution of the odd terms to the evidence would not vanish. Let us evaluate their contribution.
For a single variable (\(n=1\)), the correctly-normalized likelihood function can be written as
\[f(x)=\frac{e^{-x^{2}/2\sigma^{2}}}{\sigma\sqrt{2\pi}}\,\left(1-\frac{B\,x}{2 \sigma^{4}}+\frac{B\,x^{3}}{6\sigma^{6}}\right)\,,\]
satisfying \(\langle x\rangle=0\), \(\langle x^{2}\rangle=\sigma^{2}\), \(\langle x^{3}\rangle=B\), and the Bayesian integral can be computed _exactly_ as
\[E(a,b)=\frac{1}{2a+2b}\left({\rm Erf}\left[\frac{a}{\sigma\sqrt{2}}\right]+{ \rm Erf}\left[\frac{b}{\sigma\sqrt{2}}\right]\right)-\frac{B\sigma^{-3}}{6 \sqrt{2\pi}}\left[\left(1-\frac{a^{2}}{\sigma^{2}}\right)e^{-\frac{a^{2}}{2 \sigma^{2}}}-\left(1-\frac{b^{2}}{\sigma^{2}}\right)e^{-\frac{b^{2}}{2\sigma^{ 2}}}\right]\frac{1}{a+b}\,. \tag{61}\]
Note that for even (centered) priors, with \(b=a\), the evidence reduces to Eq. (24).
For an arbitrary number of variables, the computation is more complicated. Let us start with the \(n\)-th variable and, in order to compute the integral, let us define the auxiliary function
\[g(\lambda) = \int_{-a_{n}}^{b_{n}}dx_{n}\,x_{n}\,\frac{\exp\left[-\,\frac{ \lambda}{2}{\bf x}^{\tau}C_{n}^{-1}{\bf x}\right]}{(2\pi)^{n/2}\sqrt{\det C_{ n}}}\ =\ \frac{\exp\left[-\,\frac{1}{2}{\bf x}^{\tau}C_{n-1}^{-1}{\bf x}\right]}{(2\pi)^{(n- 1)/2}\sqrt{\det C_{n-1}}}\times \tag{62}\] \[\times\frac{1}{\lambda\sqrt{2\pi}}\,\left(\exp\left[-\frac{ \lambda a_{n}^{2}}{2}\frac{\det C_{n-1}}{\det C_{n}}\right]-\exp\left[-\frac{ \lambda b_{n}^{2}}{2}\frac{\det C_{n-1}}{\det C_{n}}\right]\right)\,,\]
such that, using \({\rm Erf}^{\prime}[x]=\frac{2}{\sqrt{\pi}}\,e^{-x^{2}}\),
\[-2g^{\prime}(\lambda=1)=\int_{-a_{n}}^{b_{n}}dx_{n}\,x_{n}\,\frac{({\bf x}^{ \tau}C_{n}^{-1}{\bf x})\,\exp\left[-\,\frac{1}{2}{\bf x}^{\tau}C_{n-1}^{-1}{ \bf x}\right]}{(2\pi)^{n/2}\sqrt{\det C_{n}}}\ =\ \frac{\exp\left[-\,\frac{1}{2}{\bf x}^{\tau}C_{n-1}^{-1}{ \bf x}\right]}{(2\pi)^{(n-1)/2}\sqrt{\det C_{n-1}}}\times\]
\[\times\frac{1}{\sqrt{2\pi}}\,\left\{\left(2+a_{n}^{2}\,\frac{\det C_{n-1}}{ \det C_{n}}\right)\exp\left[-\,\frac{a_{n}^{2}}{2}\frac{\det C_{n-1}}{\det C _{n}}\right]-\left(2+b_{n}^{2}\,\frac{\det C_{n-1}}{\det C_{n}}\right)\exp \left[-\,\frac{b_{n}^{2}}{2}\frac{\det C_{n-1}}{\det C_{n}}\right]\right\}\,. \tag{63}\]
Therefore, with the use of Eq. (63), the integral of the skewness-corrected distribution function Eq. (59) over the \(x_{n}\) uncentered prior, becomes
\[\int_{-a_{n}}^{b_{n}}dx_{n}\ f({\bf x})=\frac{\exp\left[-\,\frac{1}{2}{\bf x} ^{\tau}C_{n-1}^{-1}{\bf x}\right]}{(2\pi)^{(n-1)/2}\sqrt{\det C_{n-1}}}\ \left\{\frac{1}{2}\,\left({\rm Erf}\left[\frac{a_{n}}{\sqrt{2}}\sqrt{\frac{ \det C_{n-1}}{\det C_{n}}}\right]+{\rm Erf}\left[\frac{b_{n}}{\sqrt{2}}\sqrt{ \frac{\det C_{n-1}}{\det C_{n}}}\right]\right)\right.\]
\[-\left.\frac{1}{6}B_{ijn}\,C_{ij}^{-1}\frac{1}{\sqrt{2\pi}}\sqrt{\frac{\det C_{ n-1}}{\det C_{n}}}\left[\left(1-a_{n}^{2}\,\frac{\det C_{n-1}}{\det C_{n}}\right)e^{- \frac{a_{n}^{2}\,\det C_{n-1}}{2\det C_{n}}}-\left(1-b_{n}^{2}\,\frac{\det C_{ n-1}}{\det C_{n}}\right)e^{-\frac{b_{n}^{2}\,\det C_{n-1}}{2\det C_{n}}}\right]\right\}\,, \tag{64}\]
Let us define two new functions,
\[E_{i}(a_{i},b_{i}) = \frac{1}{2}\,\left(\mbox{Erf}\left[\frac{a_{i}}{\sqrt{2}}\sqrt{\frac{\det C_{i-1}}{\det C_{i}}}\right]+\mbox{Erf}\left[\frac{b_{i}}{\sqrt{2}}\sqrt{\frac{\det C_{i-1}}{\det C_{i}}}\right]\right)\,, \tag{65}\] \[F_{i}(a_{i},b_{i}) = \frac{1}{6\sqrt{2\pi}}\sqrt{\frac{\det C_{i-1}}{\det C_{i}}}\left[\left(1-a_{i}^{2}\frac{\det C_{i-1}}{\det C_{i}}\right)e^{-\frac{a_{i}^{2}\,\det C_{i-1}}{2\det C_{i}}}-\left(1-b_{i}^{2}\frac{\det C_{i-1}}{\det C_{i}}\right)e^{-\frac{b_{i}^{2}\,\det C_{i-1}}{2\det C_{i}}}\right]\,.\]
Integrating iteratively over \(x_{n-1},\ldots,x_{1}\), we end up with the Bayesian evidence for the third-order-corrected probability distribution function \(f({\bf x})\),
\[E({\bf a},{\bf b})=\prod_{p=1}^{n}\,\frac{E_{p}(a_{p},b_{p})}{(a_{p}+b_{p})} \,\left[1-\sum_{k=1}^{n}\,B_{ijk}\,C_{ij}^{-1}\,\frac{F_{k}(a_{k},b_{k})}{E_{k }(a_{k},b_{k})}\right]\,. \tag{66}\]
Unless \(B_{ijk}\,C_{ij}^{-1}\) is very large, the correction to the error function is exponentially suppressed, and we do not expect significant departures from the Gaussian case Eq. (40). Note also that if the prior is symmetric, it is easy to see that the skewness part of the integral vanishes, \(F_{k}(a_{k},b_{k})\to 0\), as can be checked explicitly by taking \(b_{k}\to a_{k}\).
### Kurtosis
The next correction beyond skewness is the fourth order moment or kurtosis, given by the \(D_{ijkl}\) term in Eq. (9). Let us ignore for the moment the third order skewness and write
\[\phi({\bf u})=\exp\left[i\,\mu_{i}u_{i}-\frac{1}{2!}\,C_{ij}\,u_{i}u_{j}+\, \frac{1}{4!}\,D_{ijkl}\,u_{i}u_{j}u_{k}u_{l}\right]\,. \tag{67}\]
By performing the same change of variables, \(u_{i}=y_{i}-i\,C_{ik}^{-1}(x_{k}-\mu_{k})\), we can now compute the Fourier transform and obtain the properly-normalized probability distribution function
\[f({\bf x}) = \frac{1}{(2\pi)^{n/2}\sqrt{\det C_{n}}}\exp\left[-\,\frac{1}{2}{ \bf x}^{T}C_{n}^{-1}{\bf x}\right]\left(1+\frac{1}{8}D_{ijkl}\,C_{ij}^{-1}C_{ kl}^{-1}\right. \tag{68}\] \[\left.-\frac{1}{4}D_{ijkl}\,C_{ij}^{-1}C_{km}^{-1}C_{ln}^{-1}\,x_ {m}x_{n}+\frac{1}{24}D_{ijkl}\,C_{im}^{-1}C_{jn}^{-1}C_{kp}^{-1}C_{lq}^{-1}\,x_ {m}x_{n}x_{p}x_{q}\right)\,.\]
Performing the integrals, it is easy to see that this distribution satisfies
\[\langle x_{i}x_{j}\rangle=C_{ij}\,,\qquad\quad\langle x_{i}x_{j}x_{k}x_{l} \rangle=D_{ijkl}+C_{ij}C_{kl}+C_{ik}C_{jl}+C_{il}C_{jk}\,,\ \ \ \ \ldots \tag{69}\]
Note that in order for the new likelihood distribution (68) to be positive definite, it is required that \(D_{ijkl}C_{ij}^{-1}C_{kl}^{-1}<4\), and if we impose that there is only one maximum at the center, then it must satisfy \(D_{ijkl}C_{ij}^{-1}C_{kl}^{-1}<2\). These conditions impose bounds on the maximum possible deviation of the evidence from that of a Gaussian.
Let us now compute the evidence Eq. (22) for this kurtosis model. The extra terms in the parentheses of Eq. (68) are both even functions of \(x\), and we cannot ignore them, even for centered priors.
For a single variable (\(n=1\)), the correctly-normalized likelihood function can be written as
\[f(x)=\frac{e^{-\frac{x^{2}}{2\sigma^{2}}}}{\sigma\sqrt{2\pi}}\left(1+\frac{D}{8\sigma^{4}}-\frac{D\,x^{2}}{4\sigma^{6}}+\frac{D\,x^{4}}{24\sigma^{8}}\right)\,,\]
satisfying \(\langle x\rangle=0\), \(\langle x^{2}\rangle=\sigma^{2}\), \(\langle x^{3}\rangle=0\), \(\langle x^{4}\rangle=D+3\sigma^{4}\), etc. The Bayesian integral can be computed _exactly_ as
\[E(a,b)=\frac{1}{2a+2b}\left(\mbox{Erf}\left[\frac{a}{\sigma\sqrt{2}}\right]+ \mbox{Erf}\left[\frac{b}{\sigma\sqrt{2}}\right]\right)+\frac{D\sigma^{-4}}{8 \sqrt{2\pi}}\left(\frac{a}{\sigma}\Big{(}1-\frac{a^{2}}{3\sigma^{2}}\Big{)}\,e ^{-\frac{a^{2}}{2\sigma^{2}}}+\frac{b}{\sigma}\Big{(}1-\frac{b^{2}}{3\sigma^{ 2}}\Big{)}\,e^{-\frac{b^{2}}{2\sigma^{2}}}\right)\frac{1}{a+b}\,. \tag{70}\]
For arbitrary number of variables, the computation is again much more complicated. Let us start with the \(n\)-th variable and, in order to compute the first integral, let us define a new auxiliary function
\[h(\lambda) = \int_{-a_{n}}^{b_{n}}dx_{n}\,\frac{\exp\left[-\frac{1}{2}{\bf x}^{ T}C_{n}^{-1}{\bf x}\right]}{(2\pi)^{n/2}\sqrt{\det C_{n}}}\ =\ \frac{\exp\left[-\frac{1}{2}{\bf x}^{ T}C_{n-1}^{-1}{\bf x}\right]}{(2\pi)^{(n-1)/2}\sqrt{\det C_{n-1}}}\times \tag{71}\] \[\times\frac{1}{2\sqrt{\lambda}}\,\left({\rm Erf}\left[\frac{a_{n} \sqrt{\lambda}}{\sqrt{2}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\right]+{\rm Erf }\left[\frac{b_{n}\sqrt{\lambda}}{\sqrt{2}}\sqrt{\frac{\det C_{n-1}}{\det C_{ n}}}\right]\right)\,,\]
such that,
\[-2h^{\prime}(\lambda=1) = \int_{-a_{n}}^{b_{n}}dx_{n}\,\frac{({\bf x}^{ T}C_{n}^{-1}{\bf x})\,\exp\left[-\frac{1}{2}{\bf x}^{ T}C_{n-1}^{-1}{\bf x}\right]}{(2\pi)^{n/2}\sqrt{\det C_{n}}}\ =\ \frac{\exp\left[-\frac{1}{2}{\bf x}^{ T}C_{n-1}^{-1}{\bf x}\right]}{(2\pi)^{(n-1)/2}\sqrt{\det C_{n-1}}}\times\] \[\times\left\{\frac{1}{2}\,\left({\rm Erf}\left[\frac{a_{n}}{\sqrt {2}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\right]+{\rm Erf}\left[\frac{b_{n}} {\sqrt{2}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\right]\right)\right.\] \[- \left.\frac{1}{\sqrt{2\pi}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}} \,\left(a_{n}\,\exp\left[-\frac{a_{n}^{2}}{2}\frac{\det C_{n-1}}{\det C_{n}} \right]+b_{n}\,\exp\left[-\frac{b_{n}^{2}}{2}\frac{\det C_{n-1}}{\det C_{n}} \right]\right)\right\}\,.\]
\[4h^{\prime\prime}(\lambda=1) = \int_{-a_{n}}^{b_{n}}dx_{n}\,\frac{({\bf x}^{ T}C_{n}^{-1}{\bf x})^{2}\,\exp\left[-\frac{1}{2}{\bf x}^{ T}C_{n}^{-1}{\bf x}\right]}{(2\pi)^{n}\sqrt{\det C_{n}}}\ =\ \frac{\exp\left[-\frac{1}{2}{\bf x}^{ T}C_{n-1}^{-1}{\bf x}\right]}{(2\pi)^{(n-1)/2}\sqrt{\det C_{n-1}}}\times\] \[\times\left\{\frac{3}{2}\,\left({\rm Erf}\left[\frac{a_{n}}{\sqrt {2}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\right]+{\rm Erf}\left[\frac{b_{n}} {\sqrt{2}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\right]\right)\right.\] \[- \left.\frac{3}{\sqrt{2\pi}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}} \,\left(a_{n}\,\exp\left[-\frac{a_{n}^{2}}{2}\frac{\det C_{n-1}}{\det C_{n}} \right]+b_{n}\,\exp\left[-\frac{b_{n}^{2}}{2}\frac{\det C_{n-1}}{\det C_{n}} \right]\right)\right.\] \[- \left.\frac{a_{n}^{2}}{\sqrt{2\pi}}\left(\frac{\det C_{n-1}}{\det C _{n}}\right)^{3/2}\left(a_{n}\,\exp\left[-\frac{a_{n}^{2}}{2}\frac{\det C_{n-1 }}{\det C_{n}}\right]+b_{n}\,\exp\left[-\frac{b_{n}^{2}}{2}\frac{\det C_{n-1}} {\det C_{n}}\right]\right)\right\}\,.\]
Therefore, with the use of Eqs. (72) and (73), the integral of the kurtosis-corrected distribution function (68) over the \(x_{n}\) prior, becomes
\[\int_{-a_{n}}^{b_{n}}dx_{n}\ f({\bf x})=\frac{\exp\left[-\frac{1}{2}{\bf x}^{ T}C_{n-1}^{-1}{\bf x}\right]}{(2\pi)^{(n-1)/2}\sqrt{\det C_{n-1}}}\left\{\frac{1}{2}\, \left({\rm Erf}\left[\frac{a_{n}}{\sqrt{2}}\sqrt{\frac{\det C_{n-1}}{\det C_{n} }}\right]+{\rm Erf}\left[\frac{b_{n}}{\sqrt{2}}\sqrt{\frac{\det C_{n-1}}{\det C _{n}}}\right]\right)\right.+ \tag{74}\] \[+ \left.\frac{1}{8}D_{ijkl}\,C_{ij}^{-1}C_{kl}^{-1}\frac{1}{\sqrt{2 \pi}}\sqrt{\frac{\det C_{n-1}}{\det C_{n}}}\left[a_{n}\bigg{(}1-\frac{a_{n}^{2 }}{3}\frac{\det C_{n-1}}{\det C_{n}}\bigg{)}\,e^{-\frac{a_{n}^{2}\,\det C_{n-1 }}{2\det C_{n}}}+b_{n}\bigg{(}1-\frac{b_{n}^{2}}{3}\frac{\det C_{n-1}}{\det C _{n}}\bigg{)}\,e^{-\frac{b_{n}^{2}\,\det C_{n-1}}{2\det C_{n}}}\right]\right\}\,.\]
We can now define a new function
\[G_{i}(a_{i},b_{i})=\frac{1}{8\sqrt{2\pi}}\sqrt{\frac{\det C_{i-1}}{\det C_{i}}} \left[a_{i}\left(1-\frac{a_{i}^{2}}{3}\frac{\det C_{i-1}}{\det C_{i}}\right)e^{ -\frac{a_{i}^{2}\,\det C_{i-1}}{2\det C_{i}}}-b_{i}\left(1-\frac{b_{i}^{2}}{3} \frac{\det C_{i-1}}{\det C_{i}}\right)e^{-\frac{b_{i}^{2}\,\det C_{i-1}}{2\det C _{i}}}\right]\,. \tag{75}\]
Integrating iteratively over \(x_{n-1},\ldots,x_{1}\), we end up with the Bayesian evidence for the fourth-order-corrected probability distribution function \(f({\bf x})\),
\[E({\bf a},{\bf b})=\prod_{p=1}^{n}\,\frac{E_{p}(a_{p},b_{p})}{(a_{p}+b_{p})}\, \left[1+D_{ijkl}\,C_{ij}^{-1}\,C_{kl}^{-1}\,\sum_{m=1}^{n}\,\frac{G_{m}(a_{m},b _{m})}{E_{m}(a_{m},b_{m})}\right]\,. \tag{76}\]
So, unless \(D_{ijkl}\,C_{ij}^{-1}C_{kl}^{-1}\) is very large, the correction to the error function is exponentially suppressed, and we do not expect significant departures from the Gaussian case, Eq. (40).
In order to compare models it is customary to compute the logarithm of the evidence. Let us assume that we are given a likelihood distribution function normalized by the maximum likelihood, and with corrections up to fourth order,
\[f({\bf x})={\cal L}_{\rm max}\exp\left[-\frac{1}{2}{\bf x}^{T}C_{n}^{-1}{\bf x }\right]\bigg{(}1+\frac{1}{8}D_{ijkl}\,C_{ij}^{-1}C_{kl}^{-1}\bigg{)}^{-1} \bigg{(}1-\frac{1}{2}B_{ijk}\,C_{ij}^{-1}C_{kl}^{-1}\,x_{l}+\frac{1}{6}B_{ijk} \,C_{il}^{-1}C_{jm}^{-1}C_{kn}^{-1}\,x_{l}x_{m}x_{n}\]
\[+\ \frac{1}{8}D_{ijkl}\,C_{ij}^{-1}C_{kl}^{-1}-\frac{1}{4}D_{ijkl}\,C_{ij}^{-1} C_{km}^{-1}C_{ln}^{-1}\,x_{m}x_{n}+\frac{1}{24}D_{ijkl}\,C_{im}^{-1}C_{jn}^{-1}C_{kp} ^{-1}C_{lq}^{-1}\,x_{m}x_{n}x_{p}x_{q}\bigg{)}. \tag{77}\]
Note that it is normalized so that the maximum corresponds to the mean-centered distribution, i.e. \({\bf x}=0\). In this case, the evidence of the normalized distribution is given by
\[E({\bf a},{\bf b})={\cal L}_{\rm max}\ (2\pi)^{n/2}\sqrt{\det C_{n}}\left(1+ \frac{1}{8}D_{ijkl}\,C_{ij}^{-1}C_{kl}^{-1}\right)^{-1}\times \tag{78}\]
\[\prod_{p=1}^{n}\,\frac{E_{p}(a_{p},b_{p})}{(a_{p}+b_{p})}\,\left[1-\sum_{k=1}^ {n}\,B_{ijk}\,C_{ij}^{-1}\,\frac{F_{k}(a_{k},b_{k})}{E_{k}(a_{k},b_{k})}+D_{ ijkl}\,C_{ij}^{-1}\,C_{kl}^{-1}\,\sum_{m=1}^{n}\,\frac{G_{m}(a_{m},b_{m})}{E_{ m}(a_{m},b_{m})}\right]\,.\]
We can then evaluate the logarithm of the evidence by
\[\ln E = \ln{\cal L}_{\rm max}+\frac{n}{2}\ln(2\pi)+\frac{1}{2}\ln\det C _{n}-\ln\left(1+\frac{1}{8}D_{ijkl}\,C_{ij}^{-1}C_{kl}^{-1}\right)-\sum_{p=1} ^{n}\ln(2a_{p}+2b_{p})\] \[+\ \sum_{p=1}^{n}\ln\left({\rm Erf}\left[\frac{a_{p}}{\sqrt{2}} \sqrt{\frac{\det C_{p-1}}{\det C_{p}}}\right]+{\rm Erf}\left[\frac{b_{p}}{ \sqrt{2}}\sqrt{\frac{\det C_{p-1}}{\det C_{p}}}\right]\right)\] \[+\ \ln\left(1-\sum_{k=1}^{n}\,B_{ijk}\,C_{ij}^{-1}\,\frac{F_{k}(a_ {k},b_{k})}{E_{k}(a_{k},b_{k})}+D_{ijkl}\,C_{ij}^{-1}\,C_{kl}^{-1}\,\sum_{m=1} ^{n}\,\frac{G_{m}(a_{m},b_{m})}{E_{m}(a_{m},b_{m})}\right)\,.\]
Note that the condition \(D_{ijkl}C_{ij}^{-1}C_{kl}^{-1}<2\) constrains the maximum amount that the kurtosis corrections can contribute to the evidence.
**Uncorrelated case**. In the case where the likelihood distribution has no correlations among the different variables, the _exact_ expression for the Bayesian evidence is
\[\ln E=\ln{\cal L}_{\rm max}+\frac{n}{2}\ln(2\pi)+\sum_{p=1}^{n}\ln\sigma_{p}- \sum_{p=1}^{n}\ln(2a_{p}+2b_{p})+\sum_{p=1}^{n}\ln\left({\rm Erf}\left[\frac{a _{p}}{\sigma_{p}\sqrt{2}}\right]+{\rm Erf}\left[\frac{b_{p}}{\sigma_{p}\sqrt{ 2}}\right]\right) \tag{80}\]
\[-\ \ln\left(1+\frac{1}{8}D_{iijj}\,\sigma_{i}^{-2}\sigma_{j}^{-2}\right)+\ \ln\left(1-\sum_{k=1}^{n}\,B_{ iik}\,\sigma_{k}^{-2}\,\frac{F_{k}(a_{k},b_{k})}{E_{k}(a_{k},b_{k})}+D_{iijj}\, \sigma_{i}^{-2}\sigma_{j}^{-2}\,\sum_{m=1}^{n}\,\frac{G_{m}(a_{m},b_{m})}{E_{ m}(a_{m},b_{m})}\right)\,,\]
where \(\sigma_{p}\) are the corresponding dispersions of variables \(x_{p}\), and the functions \(E_{i},F_{i}\) and \(G_{i}\) are the corresponding limiting functions of Eqs. (65) and (75) for uncorrelated matrices.
## 5 Model comparison
Finally we turn to specific applications of the formalism discussed above. Initially we will carry out some toy model tests of its performance, and then examine real cosmological applications for which we previously obtained results by thermodynamic integration [5].
### A baby-toy model comparison
We begin with a very simple two-dimensional toy model. The purpose of this section is to illustrate the ineffectiveness of thermodynamic integration and to give an indication of the performance of the method we propose here. In addition, the two-dimensional model is simple enough to allow a brute-force direct numerical integration of the evidence, which lets us check the accuracy at the same time. We use the following two forms of likelihood:
\[\mathcal{L}_{g}(x,y) = \exp\left[-\frac{2x^{2}-2(y-1)^{2}-xy}{2}\right] \tag{81}\] \[\mathcal{L}_{ng}(x,y) = \exp\left[-\frac{2x^{2}-2(y-1)^{2}-xy}{2}\right]+\exp\left[-\frac {2x^{2}-2y^{2}-3xy}{2}\right] \tag{82}\]
The subscripts \(g\) and \(ng\) indicate the Gaussian and non-Gaussian cases respectively.
Firstly, we calculate the evidence by the analytical method using Eqs. (56) and (80) and covariance
Figure 1: This figure shows the calculated evidence as a function of the number of likelihood evaluations. Note that the horizontal axis is logarithmic. The solid line corresponds to the thermodynamic integration. The dotted line and dot-dashed lines are the analytical methods with and without non-Gaussian corrections applied. The horizontal dashed line is the number obtained by the direct integration. The upper two panels correspond to \(\mathcal{L}_{g}\), while the lower two to \(\mathcal{L}_{ng}\). The left-hand side panels correspond to wide flat priors of \((-7,10)\) on both parameters, while the right-hand side to the narrow priors of \((-2,3)\) on both parameters. See text for discussion.
matrices inferred from sampling the likelihood using the vanilla Metropolis-Hastings algorithm with fixed proposal widths. Chains ranging from a few to several million samples were used. We also calculate the evidence using the thermodynamic integration algorithm explained in Ref. [5]. Again, we vary the algorithm parameters to get evidence values of varying accuracy. The resulting evidence as a function of the number of likelihood evaluations is plotted in Figure 1, together with the correct value inferred by direct numerical integration. The number of likelihood evaluations is crucial as this is the time-limiting step in cosmological parameter estimation and model comparison exercises. The results are what could have been anticipated. We note that the size of the prior does not seem to be of crucial importance. This is comforting, given that the analytical method requires knowledge of the _true_ covariance information, while we can only supply a covariance matrix estimated from the prior-truncated likelihood. We also note that the thermodynamic integration converges to the correct value in all cases. However, it does so only after very many likelihood evaluations; typically about a million or so even for a two-dimensional problem. The analytical method becomes limited by systematics already by about ten thousand samples. For the Gaussian case, there is no systematic by construction, while the non-Gaussian case suffers a systematic of about \(0.1\) in \(\ln E\). The non-Gaussian correction reduces the error by about a half and thus correctly estimates the uncertainty associated with the purely Gaussian approximation. In the case of wide priors, the only non-Gaussian correction of an appreciable size is \(\ln(1+D_{ijkl}C_{ij}^{-1}C_{kl}^{-1}/8)\).
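The following is a minimal sketch of this procedure for a generic two-dimensional problem: draw samples with a vanilla Metropolis-Hastings walker, estimate the covariance from the chain, and insert it into the Gaussian-approximation evidence of Eq. (54). The placeholder log-likelihood, proposal width, and prior box are illustrative choices and not the exact toy models of Eqs. (81)-(82).

```python
# Sketch: sample a likelihood with Metropolis-Hastings, estimate the covariance
# from the chain, and evaluate the Gaussian-approximation evidence of Eq. (54).
import numpy as np
from scipy.special import erf

def loglike(theta):
    # placeholder 2D log-likelihood with lnL_max = 0 at the origin (an assumption)
    C = np.array([[1.0, 0.3], [0.3, 0.5]])
    return -0.5 * theta @ np.linalg.solve(C, theta)

rng = np.random.default_rng(0)
lo, hi = np.array([-7.0, -7.0]), np.array([10.0, 10.0])   # flat prior box
theta, logp = np.zeros(2), loglike(np.zeros(2))
chain = []
for _ in range(50_000):
    prop = theta + 0.5 * rng.standard_normal(2)            # fixed proposal width
    if np.all(prop > lo) and np.all(prop < hi):
        lp = loglike(prop)
        if np.log(rng.uniform()) < lp - logp:
            theta, logp = prop, lp
    chain.append(theta.copy())
chain = np.array(chain)

mu, C = chain.mean(axis=0), np.cov(chain.T)                # covariance from the chain
a, b = mu - lo, hi - mu                                     # uncentered prior edges, cf. Eq. (43)
n = len(mu)
dets = [1.0] + [np.linalg.det(C[:p, :p]) for p in range(1, n + 1)]
lnE = loglike(mu) + 0.5 * n * np.log(2 * np.pi) + 0.5 * np.log(dets[n])
for p in range(1, n + 1):
    r = np.sqrt(dets[p - 1] / dets[p])
    lnE += np.log(erf(a[p-1]*r/np.sqrt(2)) + erf(b[p-1]*r/np.sqrt(2))) - np.log(2*(a[p-1]+b[p-1]))
print(lnE)
```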
### A toy model comparison
We now proceed by calculating the Bayesian evidence for simple toy models with 5 and 6 parameters, shown in Table 1. The purpose is to compare results with those obtained from thermodynamic integration again, but this time using a model that bears more resemblance to a typical problem one encounters in cosmology.
Beginning with the five-parameter model, we assume first that it has an uncorrelated multivariate Gaussian likelihood distribution. In this case the aim is to test the thermodynamic integration method, which gives \(\ln E_{\rm toy5}^{\rm num}=-8.65\pm 0.03\), while the exact expression gives \(\ln E_{\rm toy5}^{\rm ana}=-8.66\). Therefore, we conclude that the thermodynamic integration method is rather good in obtaining the correct evidence of the model. The Laplace approximation Eq. (57) also fares well for uncorrelated distributions, \(\ln E_{\rm toy5}^{\rm Lap}=-8.67\).
We now consider a likelihood function with a correlated covariance matrix \(C_{ij}\), with the same mean values and dispersions as the previous case, but with significant correlations. The analytic formula needed, Eq. (54), is no longer exact,2 and gives \(\ln E_{\rm toy5c}^{\rm ana}=-7.32\). For comparison thermodynamic integration gives \(\ln E_{\rm toy5c}^{\rm num}=-7.28\pm 0.06\), again in perfect agreement within errors. In this case the Laplace approximation fails significantly, \(\ln E_{\rm toy5c}^{\rm Lap}=-6.89\), the reason being that the correlations chosen bring the posterior into significant contact with the edges of the priors.
Footnote 2: One could rotate the parameter basis to remove the correlations, but then the priors wouldn’t be top-hats.
Let us now return to the uncorrelated case and include a new parameter, \(x_{6}\), as in Table 1, and evaluate the different evidences that appear because of this new parameter, in order to see the sensitivity to systematic errors in the evaluation of the Bayesian evidence and their effects on model comparison. The numerical
\begin{table}
\begin{tabular}{c c c c} \hline Parameter & Mean & Prior Range & Model \\ \hline \(x_{1}\) & \(0.022\) & \([0.0001,0.044]\) & toy5,toy6 \\ \(x_{2}\) & \(0.12\) & \([0.001,0.3]\) & toy5,toy6 \\ \(x_{3}\) & \(1.04\) & \([0.8,1.4]\) & toy5,toy6 \\ \(x_{4}\) & \(0.1\) & \([0.01,0.3]\) & toy5,toy6 \\ \(x_{5}\) & \(3.1\) & \([2.6,3.6]\) & toy5,toy6 \\ \(x_{6}\) & \(0.98\) & \([0.5,1.5]\) & toy6 \\ \hline \end{tabular}
\end{table}
Table 1: The parameters used in the analytical evaluation of the toy model evidences, with 5 and 6 parameters respectively. The maximum likelihood of the toy models is taken (arbitrarily) to be \(\mathcal{L}_{\rm max}=1\).
result is \(\ln E_{\rm toy6}^{\rm num}=-10.75\pm 0.03\), while the exact analytical expression gives \(\ln E_{\rm toy6}^{\rm ana}=-10.74\), in perfect agreement, within errors. The Laplace approximation Eq. (57) again fares well for uncorrelated distributions, \(\ln E_{\rm toy6}^{\rm Lap}=-10.74\).
When the likelihood function has large correlations, and the priors are not too large, the naive Laplace approximation, Eq. (57), fares less well than the analytical approximation, Eq. (54).
### A real model comparison
In this subsection we will make use of the results obtained in Ref. [5], where we evaluated the evidence for 5- and 6-parameter adiabatic models, and for three 10-parameter mixed adiabatic plus isocurvature models. The prior ranges used are given in Table 2. The latter models give a marginally better fit to the data but require more parameters, which is exactly the situation where model selection techniques are needed to draw robust conclusions. In Ref. [5] we used thermodynamic integration to compute the evidence and showed that the isocurvature models were less favoured than the adiabatic ones, but only at a mild significance level.3
Footnote 3: Recently Trotta [9] used a different technique to analyze a restricted class of isocurvature models featuring just one extra parameter, and found it highly disfavoured. The different conclusion is primarily due to the very different prior he chose on the isocurvature amplitude, such that almost all the models under the prior are dominated by isocurvature modes and in poor agreement with the data.
Beginning with the simplest adiabatic model, which uses the Harrison-Zel'dovich spectrum, we have used the analytical formulae above, Eq. (54), together with the covariance matrix provided by the cosmoMC programme [10], and obtained \(\ln E_{\rm ad}^{\rm ana}=-854.07\), while the thermodynamical integration gave \(\ln E_{\rm ad}^{\rm num}=-854.1\pm 0.1\)[5]. The agreement is excellent; this is because the distribution function for the adiabatic model is rather well approximated by a Gaussian, and the priors are rather large, so the formula Eq. (54) is very close to that obtained in the Laplace approximation, \(\ln E_{\rm ad}^{\rm Lap}=-854.08\).
However, the analytic method fares less well for the adiabatic model with varying \(n_{\rm s}\), with both the analytic and Laplace methods giving \(\ln E_{\rm AD-n_{s}}=-853.4\), while the numerical method gives the smaller value \(-854.1\), a discrepancy of nearly unity.
Turning now to the isocurvature cases, we found an extremely good result for the CDI model, obtaining from Eq. (54) the value \(\ln E_{\rm cdi}^{\rm ana}=-855.08\), while the thermodynamical integration gives \(\ln E_{\rm cdi}^{\rm num}=-855.1\pm 0.1\). This is surprising, given the relatively large non-Gaussianities for at least three variables: \(n_{\rm iso}\), \(\beta\) and \(\delta_{\rm cor}\), whose priors are _not_ centered with respect to the mean. However, the NID case shows considerably worse agreement, with a discrepancy of 0.6. That suggests that the closeness of the CDI comparison is to some extent a statistical fluke, with the underlying method less accurate.
A summary of the different models can be found in Table 3.
\begin{table}
\begin{tabular}{c c c c} \hline Parameter & Mean & Prior Range & Model \\ \hline \(\omega_{\rm b}\) & 0.022 & \([0.018,0.032]\) & AD-HZ,AD-\(n_{\rm s}\),ISO \\ \(\omega_{\rm dm}\) & 0.12 & \([0.04,0.16]\) & AD-HZ,AD-\(n_{\rm s}\),ISO \\ \(\theta\) & 1.04 & \([0.98,1.10]\) & AD-HZ,AD-\(n_{\rm s}\),ISO \\ \(\tau\) & 0.17 & \([0,0.5]\) & AD-HZ,AD-\(n_{\rm s}\),ISO \\ \(\ln[10^{10}{\cal R}_{\rm rad}]\) & 3.1 & \([2.6,4.2]\) & AD-HZ,AD-\(n_{\rm s}\),ISO \\ \(n_{\rm s}\) & 1.0 & \([0.8,1.2]\) & AD-\(n_{\rm s}\),ISO \\ \(n_{\rm iso}\) & 1.5 & \([0,3]\) & ISO \\ \(\delta_{\rm cor}\) & 1.5 & \([-0.14,0.4]\) & ISO \\ \(\sqrt{\alpha}\) & 0 & \([-1,1]\) & ISO \\ \(\beta\) & 0 & \([-1,1]\) & ISO \\ \hline \end{tabular}
\end{table}
Table 2: The parameters used in the models; see Ref. [5] for nomenclature and other details. For the AD-HZ model \(n_{\rm s}\) was fixed to 1 and \(n_{\rm iso}\), \(\delta_{\rm cor}\), \(\alpha\) and \(\beta\) were fixed to 0. In the AD-\(n_{\rm s}\) model, \(n_{\rm s}\) also varies. Every isocurvature model holds the same priors for the whole set of parameters.
### Savage-Dickey method
Another numerical method for evidence calculation is the Savage-Dickey method, first described in Ref. [11] and recently used in Ref. [9]. This technique allows one to calculate the evidence ratio of two models from a simple and quick analysis of the Markov chains used for parameter estimation, provided that the models are nested; i.e., that one of them is included in the parameter space of the other. For instance, the AD model is nested within the AD-\(n_{\rm s}\) model, and the AD and AD-\(n_{\rm s}\) models are both nested within the CDI, NID and NIV ones. In the context of Markov chains, the Savage-Dickey method is essentially a measure of how much time the sampler spends in the nested model, weighted by the respective volumes of the two models. When the outer model has extra parameters, this method relies on approximating the nested model as a model with negligibly narrow priors in the directions of the extra parameters. We note, however, that when many extra parameters are present, this method must fail for reasons similar to those for which grid-based parameter estimation approaches fail for models with many parameters. The MCMC parameter estimation simply does not have high enough dynamic range to probe the two models given the large prior volume ratio.
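For concreteness, a hedged sketch of the Savage-Dickey estimate for two models that differ by a single parameter is given below; the bin half-width and variable names are illustrative choices rather than the settings used here.

```python
# Savage-Dickey density ratio for nested models differing by one parameter psi,
# with a flat prior on [psi_lo, psi_hi] and the nested model at psi = psi0.
# The posterior density at psi0 is estimated by the fraction of chain samples
# in a narrow bin, i.e. by "how much time the sampler spends" near the nested model.
import numpy as np

def savage_dickey_log_ratio(psi_samples, psi0, psi_lo, psi_hi, half_width=0.01):
    in_bin = np.mean(np.abs(psi_samples - psi0) < half_width)
    post_density = in_bin / (2 * half_width)          # histogram estimate of p(psi0 | data)
    prior_density = 1.0 / (psi_hi - psi_lo)           # flat prior pi(psi0)
    return np.log(post_density / prior_density)       # ln(E_nested / E_full)

# e.g. ln(E_AD / E_AD-ns) from the n_s column of an AD-ns chain (prior [0.8, 1.2], nested at n_s = 1):
# print(savage_dickey_log_ratio(ns_samples, psi0=1.0, psi_lo=0.8, psi_hi=1.2))
```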
The AD and AD-\(n_{\rm s}\) models differ by one parameter. Using the same AD-\(n_{\rm s}\) samples as for the analytic method (i.e., the samples from which we extracted the covariance matrix), we obtained \(\ln(E_{\rm AD}/E_{\rm AD-n_{s}})=0.03\). The result from the precise thermodynamical integration, \(\ln(E_{\rm AD}/E_{\rm AD-n_{s}})=0\pm 0.1\), is in excellent agreement. The AD-\(n_{\rm s}\) and CDI (or NID, NIV) models differ by four parameters. With most simple choices of parametrization (including in particular the isocurvature and cross-correlation tilts), the AD-\(n_{\rm s}\) model is not a point, but a hypersurface within the parameter space of the isocurvature models (i.e. \(\alpha=0\), with the other three parameters acting as dummy, unconstrained parameters which do not affect the evidence). In these cases, the evidence ratios given by the Savage-Dickey method do not converge as the priors of the extra parameters are tightened up around the nested model, although they match the thermodynamically-determined values to within a unit of \(\ln E\).
## 6 Discussion and Conclusions
We have developed an analytical formalism for computing the Bayesian evidence in the case of an arbitrary likelihood distribution with a hierarchy of non-Gaussian corrections, and with arbitrary top-hat priors, centered or uncentered. This analysis can be of great help for the problem of model comparison in the present context of cosmology, where observational data is still unable to rule out most extensions of the standard model based on the \(\Lambda\)CDM inflationary paradigm.
As an application of the exact and approximate formulae obtained for the Bayesian evidence of a model with approximately Gaussian likelihood distributions, we have compared the value predicted analytically with that computed with a time-consuming algorithm based on the thermodynamical integration approach. The values obtained analytically agree surprisingly well with those obtained numerically. While one can estimate the magnitude of the higher order corrections for the analytical formulae, it is very difficult to
\begin{table}
\begin{tabular}{l c c c c} \hline Model & \(\ln{\cal L}^{\rm max}\) & \(\ln E^{\rm num}\) & \(\ln E^{\rm ana}\) & \(\ln E^{\rm Lap}\) \\ \hline toy5 & \(0\) & \(-8.65\pm 0.03\) & \(-8.66\) & \(-8.67\) \\ toy5c & \(0\) & \(-7.28\pm 0.06\) & \(-7.32\) & \(-6.89\) \\ toy6 & \(0\) & \(-10.75\pm 0.03\) & \(-10.74\) & \(-10.74\) \\ toy6c & \(0\) & \(-9.73\pm 0.06\) & \(-9.71\) & \(-9.63\) \\ \hline AD & \(-840.78\) & \(-854.1\pm 0.1\) & \(-854.1\) & \(-854.1\) \\ AD-\(n_{\rm s}\) & \(-838.50\) & \(-854.1\pm 0.1\) & \(-853.4\) & \(-853.4\) \\ CDI & \(-838.05\) & \(-855.1\pm 0.2\) & \(-855.1\) & \(-854.5\) \\ NID & \(-836.60\) & \(-855.1\pm 0.2\) & \(-854.5\) & \(-854.5\) \\ NIV & \(-842.53\) & \(-855.1\pm 0.3\) & \(-854.9\) & \(-854.9\) \\ \hline \end{tabular}
\end{table}
Table 3: The different models, both toy and real, with their maximum likelihoods and evidences.
estimate the systematic effects of the numerical approach. Thus, with this analytical method we can test for systematics in the thermodynamical integration approach. So far, the values obtained agree, so it seems that the numerical approach is a good tool for estimating the evidence. However, it takes considerable effort and machine time to do the correct evaluation, and therefore, we propose the use of the analytical estimate, whose corrections are well under control, in the sense that one can compute the next order corrections and show that they are small.
**Note added:** Many years after my work was finished, a book appeared [12] which thoroughly discussed Bayesian Methods in Cosmology.
|
2305.19759 | Simple yet Effective Code-Switching Language Identification with
Multitask Pre-Training and Transfer Learning | Code-switching, also called code-mixing, is the linguistics phenomenon where
in casual settings, multilingual speakers mix words from different languages in
one utterance. Due to its spontaneous nature, code-switching is extremely
low-resource, which makes it a challenging problem for language and speech
processing tasks. In such contexts, Code-Switching Language Identification
(CSLID) becomes a difficult but necessary task if we want to maximally leverage
existing monolingual tools for other tasks. In this work, we propose two novel
approaches toward improving language identification accuracy on an
English-Mandarin child-directed speech dataset. Our methods include a stacked
Residual CNN+GRU model and a multitask pre-training approach to use Automatic
Speech Recognition (ASR) as an auxiliary task for CSLID. Due to the
low-resource nature of code-switching, we also employ careful silver data
creation using monolingual corpora in both languages and up-sampling as data
augmentation. We focus on English-Mandarin code-switched data, but our method
works on any language pair. Our best model achieves a balanced accuracy of
0.781 on a real English-Mandarin code-switching child-directed speech corpus
and outperforms the previous baseline by 55.3%. | Shuyue Stella Li, Cihan Xiao, Tianjian Li, Bismarck Odoom | 2023-05-31T11:43:16Z | http://arxiv.org/abs/2305.19759v1 | # Simple yet Effective Code-Switching Language Identification
###### Abstract
Code-switching, also called code-mixing, is the linguistics phenomenon where in casual settings, multilingual speakers mix words from different languages in one utterance. Due to its spontaneous nature, code-switching is extremely low-resource, which makes it a challenging problem for language and speech processing tasks. In such contexts, Code-Switching Language Identification (CSLID) becomes a difficult but necessary task if we want to maximally leverage existing monolingual tools for other tasks. In this work, we propose two novel approaches toward improving language identification accuracy on an English-Mandarin child-directed speech dataset. Our methods include a stacked Residual CNN+GRU model and a multitask pre-training approach to use Automatic Speech Recognition (ASR) as an auxiliary task for CSLID. Due to the low-resource nature of code-switching, we also employ careful silver data creation using monolingual corpora in both languages and up-sampling as data augmentation. We focus on English-Mandarin code-switched data, but our method works on any language pair. Our best model achieves a balanced accuracy of 0.781 on a real English-Mandarin code-switching child-directed speech corpus and outperforms the previous baseline by 55.3%.
**Index Terms**: multilingual, code-switching, low-resource, language identification
## 1 Introduction
With more than 6000 languages still alive today, there are more people speaking more than one language (whether from birth or through late acquisition) than monolingual speakers (Marian and Shook, 2012). When multilingual speakers who share two or more of the same languages engage in a conversation, they naturally tend to switch languages spontaneously. Code-switching allows bilingual speakers to express their intentions more freely and to be better understood (Heredia and Altarriba, 2001). With the development of machine learning and neural networks, language and speech processing for most high-resource monolingual languages is highly effective. However, it is non-trivial to adapt the monolingual tools to multilingual and code-switching tasks. Additionally, as demonstrated in our later experiments, even large models trained on multilingual data such as Whisper (Radford et al., 2022) and XLSR (Babu et al., 2021; Conneau et al., 2020) are limited in processing code-switched data between two high-resource languages. Therefore, an effective approach to leverage existing monolingual or multilingual pre-trained speech and language models and other NLP tools is to identify the language in each segment of code-switched speech or text.
In this work, we focus on the language identification task for code-switched English and Mandarin speech data collected in Singapore during a child-directed activity. Singapore is a language-dense region where four primary languages are spoken - English, Malay, Mandarin, and Tamil - and almost all Singaporeans are bilingual or
Figure 1: “Split-then-Process” Pipeline. The red dotted box is the main focus of our study. Given a segmented utterance from a child-directed domain, the language of each segment is identified by our system. This can potentially be useful for a range of downstream processing tasks that leverages existing monolingual or multilingual tools.
multilingual. The language diversity in the region contributes to the wide dialectal variations of the code-switched data and increases the difficulty of speech-processing tasks. The child-directed characteristic of the data makes the problem unique in that both the content domain and the speech style deviate from standard datasets and models. Domain mismatch problems have been addressed by data augmentation Sun et al. (2021) or unsupervised adversarial training Wang et al. (2018) approaches in Automatic Speech Recognition (ASR) and gradual fine-tuning (GFT) in text-based settings Xu et al. (2021). We adopt both data augmentation and GFT to the speech CSLID task in this work to improve the robustness of our system.
As illustrated in Figure 1, the main objective of our model is to identify the language of a segment of speech, so that monolingual or multilingual models can be more effectively used for downstream tasks. With English and Mandarin being the languages with the largest number of speakers in Singapore and high-resource languages in the world, Code-Switching Language Identification is a crucial step in the "Split-then-Process" pipeline. Possible downstream tasks that could benefit from a robust language identification system include speech recognition, speech synthesis, or speech translation. We leverage monolingual data such as AISHELL Mandarin (Bu et al., 2017) and LibriSpeech English (Panayotov et al., 2015) to build a CSLID model that is robust to both domain and dialectal variations. The main contributions of our work are summarized as follows:
* We propose two systems for code-switching language identification, a Residual CNN with BiRNN network (CRNN) and an Attention-based Multitask Training Model with combined ASR and CSLID loss. The systems can be easily extended to any language pair.
* We investigate the effect of pre-training with data augmentation from monolingual sources and the effect of fine-tuning with out-of-domain code-switched data, concluding that data balance is more crucial than domain similarity.
* We demonstrate that small and efficient architectures with effective data augmentation can be extremely successful in the CSLID task, outperforming massive multilingual pre-trained language models (PLM). Our system placed 2nd in a challenge featuring an English-Mandarin code-switching child-directed speech corpus [reference redacted for review], and we make our code publicly available1 for further explorations in the field of code-switching speech processing.
Footnote 1: We make the project open source at [link hidden for review].
## 2 Related work
With the increase in globalization and the growing population of bilingual and multilingual speakers, there is an emerging need for better language technologies for code-switching languages. Due to its spontaneous nature, code-switching happens mostly in colloquial settings, which makes data collection difficult. Code-switching is also a complex sociocultural linguistic phenomenon that depends on a combination of factors including topic, formality, and speaker intent Mabule (2015); Nilep (2006). Code-switching can happen at different levels of the utterance (intersentential, intrasentential, intra-word) Myers-Scotton (1989). All the above characteristics make code-switching a fascinatingly diverse and challenging topic of study. In both text and speech processing, CSLID is a crucial step for downstream tasks such as text normalization for text-to-speech synthesis Magnhat et al. (2022), part-of-speech tagging Solorio and Liu (2008), speech translation Weller et al. (2022), and speech recognition Zhang et al. (2021, 2022); Zhou et al. (2022); Sreeram and Sinha (2020).
### Multidialectal Code-Switching
Code-switching speech processing faces the issue of dialectal variations. In Singapore, Mandarin, Hokkien, and Cantonese are the major Chinese dialects with the most speakers, along with Teochew, Hakka, and Hainanese Gupta and Yeok (1995). Chowdhury et al. (2021) proposed an end-to-end attention-based conformer architecture for multi-dialectal Arabic ASR. Rivera (2019) built an acoustic model for code-switching detection among Arabic dialects. However, there is a lack of sufficient research on code-mixing between non-standard Mandarin and non-standard English, which is the focus of our study.
### Code-Switching Language Identification
Convolutional Neural Networks (CNN) are widely adopted in speech processing because spectrograms or filter banks serve as the first feature extraction step of speech signal processing in monolingual tasks Ganapathy et al. (2014).
Deep Neural Networks (DNN) [21] and phoneme units-based Hidden Markov Model (HMM) with Support Vector Machine (SVM) classifier [13, 14] have also been used for CSLID. Additionally, CSLID is often integrated into ASR systems as an auxiliary task to improve the ASR performance [15, 16]. However, these approaches have a different focus from our current study, which aims to improve the CSLID performance for a range of speech-processing tasks.
### Data Augmentation & Multilingual PLMs
Various data augmentation techniques have been used for code-switching, but they mostly focus on text processing tasks [20, 11]. Some work uses text-based data augmentation for speech tasks [13, 14]. [1] uses monolingual English and Arabic speech data for the code-switched ASR task. However, there is little prior work on synthetically generating code-switched speech data from monolingual sources. In our work, we segment monolingual speech data at the sub-utterance level to simulate code-switched speech as data augmentation.
Additionally, with the recent development of massively multilingual pre-trained speech and language models such as mSLAM [1], Whisper [15] and XLS-R [1, 17], it is easier to leverage monolingual data for multilingual tasks. The use of multilingual PLMs for code-switching tasks in the text domain has proven to be successful [16], but it has not been widely used in the speech setting due to limited data and costly training. In our work, we use the multilingual PLMs as a zero-shot baseline with which we compare our parameter-efficient models.
## 3 Methodology
In this section, we first propose three systems for the CSLID task and describe the architectural design tailored to the different characteristics of the data and model. Then, we introduce our data augmentation method leveraging out-of-domain code-switching data with a GFT schedule to improve upon the pre-train-fine-tune paradigm.
### CRNN
The CRNN model, inspired by [1], is a stack of Residual CNNs and RNNs. We utilize the power of CNNs to extract features directly from the spectral domain and use an RNN to capture temporal dependencies. We use bi-GRU [1] layers for the RNN component of the model because they have fewer parameters, making the model faster to train and less prone to over-fitting. As illustrated in Figure 2(a), our CRNN model is a simplified version of [1] with 3 CNN layers to extract acoustic features and 5 GRU layers with hidden dimension 512 to learn features for language identification. We apply a linear classifier to the last hidden state of the RNN.
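A minimal PyTorch sketch of this architecture is given below. Only the overall layout (a small residual CNN stack, a 5-layer bidirectional GRU with hidden size 512, and a linear classifier on the final step) follows the description above; kernel sizes, channel counts, and other hyperparameters are our own illustrative choices.

```python
# Sketch of a CRNN LID model: residual CNN blocks over (time, mel) filterbanks,
# a 5-layer bidirectional GRU (hidden 512), and a linear LID classifier.
import torch
import torch.nn as nn

class ResidualCNNBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        return x + self.conv2(self.act(self.norm(self.conv1(x))))

class CRNN(nn.Module):
    def __init__(self, n_mels=80, hidden=512, n_langs=2):
        super().__init__()
        self.stem = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.cnn = nn.Sequential(*[ResidualCNNBlock(32) for _ in range(3)])
        self.rnn = nn.GRU(input_size=32 * n_mels, hidden_size=hidden,
                          num_layers=5, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_langs)

    def forward(self, fbank):                 # fbank: (batch, time, n_mels)
        x = self.stem(fbank.unsqueeze(1))     # (batch, 32, time, n_mels)
        x = self.cnn(x)
        x = x.permute(0, 2, 1, 3).flatten(2)  # (batch, time, 32 * n_mels)
        out, _ = self.rnn(x)
        return self.classifier(out[:, -1])    # LID logits from the final time step

model = CRNN()
logits = model(torch.randn(4, 200, 80))       # e.g. 4 utterances, 200 frames each
```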
### Multi-Task Learning (MTL)
To enhance the model's ability to extract acoustic features, we train a model via multitask learning with a joint Connectionist Temporal Classification (CTC) and LID loss, as illustrated in Figure 3(b). The architecture of the model is based on a Conformer encoder, along with a linear layer for CTC decoding and an LSTM + linear layer for LID decoding.
Figure 3: Multi-task Learning Model
Figure 2: CRNN Model
Similar to the CRNN model, we conduct phased training to first pre-train the Conformer model with the joint loss on the monolingual corpora and then fine-tune the model on the MERLIon and SEAME datasets with only the LID loss. This approach aims to better adapt the model to the target classification task.
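The sketch below illustrates this two-headed layout, using torchaudio's Conformer as a stand-in encoder. The encoder size, vocabulary size, and LSTM width are illustrative assumptions rather than the exact configuration used in our experiments.

```python
# Sketch of the multitask model: a shared Conformer encoder feeding (i) a linear
# CTC head over the phoneme vocabulary and (ii) an LSTM + linear LID head.
import torch
import torch.nn as nn
from torchaudio.models import Conformer

class MTLModel(nn.Module):
    def __init__(self, n_mels=80, vocab_size=200, n_langs=2):
        super().__init__()
        self.encoder = Conformer(input_dim=n_mels, num_heads=4, ffn_dim=256,
                                 num_layers=8, depthwise_conv_kernel_size=31)
        self.ctc_head = nn.Linear(n_mels, vocab_size)         # frame-level phoneme logits
        self.lid_lstm = nn.LSTM(n_mels, 256, batch_first=True)
        self.lid_head = nn.Linear(256, n_langs)               # utterance-level LID logits

    def forward(self, feats, lengths):                        # feats: (B, T, n_mels)
        enc, enc_lengths = self.encoder(feats, lengths)
        ctc_logits = self.ctc_head(enc).log_softmax(-1)       # (B, T, vocab)
        _, (h, _) = self.lid_lstm(enc)
        lid_logits = self.lid_head(h[-1])                     # (B, n_langs)
        return ctc_logits, enc_lengths, lid_logits

# feats = torch.randn(4, 300, 80); lengths = torch.full((4,), 300)
# ctc_logits, out_lens, lid_logits = MTLModel()(feats, lengths)
```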
### Multilingual PLMs
Being pre-trained on multiple languages, massively multilingual PLMs are a powerful tool for cross-lingual tasks. We want to understand the out-of-the-box ability of PLMs to process code-switching sentences by comparing the zero-shot CSLID performance of Whisper (Radford et al., 2022) and XLSR (Babu et al., 2021; Conneau et al., 2020) against the more parameter-efficient models we introduce in this work. For Whisper, we use the detect_language() method from the model class, passing in CutSets with a max duration of 50. For XLSR, we perform two-way zero-shot classification using wav2vec2-xls-r-300m with a LID head. The LID head is a 2-layer Bidirectional GRU with a linear layer.
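As an illustration, a hedged sketch of the Whisper zero-shot baseline is shown below: the built-in language detection is run on a padded or trimmed segment, and the returned probabilities are renormalized over the two candidate languages. The model size and file path are placeholders; in practice the segments are batched via lhotse CutSets as described above.

```python
# Zero-shot two-way LID with Whisper's built-in language detection.
import whisper

model = whisper.load_model("large-v2")
audio = whisper.pad_or_trim(whisper.load_audio("segment.wav"))
mel = whisper.log_mel_spectrogram(audio).to(model.device)
_, probs = model.detect_language(mel)          # dict of language code -> probability

p_en, p_zh = probs["en"], probs["zh"]
pred = "en" if p_en >= p_zh else "zh"          # restrict the decision to the two candidates
print(pred, p_en / (p_en + p_zh), p_zh / (p_en + p_zh))
```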
### Data Augmentation
Child-directed English-Mandarin code-switching is an extremely low-resource problem. As such, we propose a data augmentation method that takes advantage of any additional data from a similar distribution to improve the performance of the model. The target in-domain data - MERLIon - is unbalanced such that the ratio of English to Mandarin labels is about 4:1. In addition to up-sampling the Mandarin utterances during training, our proposed data augmentation approach mixes in the SEAME code-switching dataset (described in detail in Section 4.1), which has more Mandarin utterances than English ones. Lastly, we propose a gradual fine-tuning schedule for smooth domain adaptation, as described in Table 1 below (Xu et al., 2021): we up-sample the Mandarin utterances in the MERLIon dataset and vary the ratio of Mandarin to English in the sampled SEAME dataset to ensure a smooth transition to the real Mandarin-English ratio of the development set. The gradual FT terminates with a stage that uses only the MERLIon dataset (with Mandarin up-sampled) and no out-of-domain SEAME data. All the experiments described in Table 1 fine-tune the model checkpoint pre-trained on monolingual Mandarin and English speech.
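A simple sketch of how such a schedule can be assembled is shown below, with the per-stage up-sampling factors and SEAME-to-MERLIon ratios following Table 1. Utterances are represented as plain lists and durations are approximated by counts, so this is an illustration of the mixing logic rather than our exact data loader.

```python
# Sketch of the gradual fine-tuning schedule: up-sample MERLIon Mandarin and mix
# in a shrinking amount of out-of-domain SEAME data, ending with MERLIon only.
import random

# (MERLIon zh up-sampling factor, SEAME-to-MERLIon ratio) per stage, cf. Table 1
SCHEDULE = [(1, 1.0), (2, 0.5), (2, 0.2), (3, 0.0)]

def stage_training_set(stage, merlion_en, merlion_zh, seame, rng=random.Random(0)):
    upsample, seame_ratio = SCHEDULE[stage]
    cuts = list(merlion_en) + list(merlion_zh) * upsample
    n_seame = int(seame_ratio * len(cuts))                  # count as a proxy for duration
    cuts += rng.sample(seame, min(n_seame, len(seame)))
    rng.shuffle(cuts)
    return cuts

# for stage in range(len(SCHEDULE)):
#     train(model, stage_training_set(stage, merlion_en, merlion_zh, seame))
```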
## 4 Experiments
### Dataset & Metric
We use multiple monolingual English and Mandarin and code-switched English-Mandarin datasets in our experiments, including LibriSpeech (Panayotov et al., 2015), National Speech Corpus of Singapore (NSC) (Koh et al., 2019), AISHELL (Bu et al., 2017), SEAME (Lyu et al., 2010), and MERLIon (Chua et al., 2023). Table 2 reports the language and size of each dataset. Note that not all datasets are used for each experiment. The MERLIon dataset is split into training and development sets, and we refer to the train split of the MERLIon dataset as "MERLIon" in our system descriptions.
**Metric.** The MERLIon dataset contains roughly 25 hours of English speech and 5 hours of Mandarin speech. Due to this severe data imbalance, we use the Balanced Accuracy (BAC), which is the average of the recall obtained for each label class, rather than the absolute accuracy as the metric to evaluate our systems. In the submission to the English-Mandarin code-switching task, the evaluation also reports the Equal Error Rate (EER),
\begin{table}
\begin{tabular}{c|c c c c|c c c c|c c c|c} \hline \multirow{2}{*}{**Stage**} & \multicolumn{3}{c|}{**MERLion (M)**} & \multicolumn{3}{c|}{**SEAME (S)**} & \multicolumn{3}{c|}{**Total**} & \multicolumn{3}{c}{**Ratios**} \\ \cline{2-13} & zh & en & total & zh/en & zh & en & total & zh/en & zh & en & total & S/M & zh/en \\ \hline
1 & 5.4 (1) & 21.6 & 27.0 & 0.2 & 17.9 & 8.9 & 26.8 & 2.0 & 23.2 & 30.6 & 53.8 & 1.0 & 0.76 \\
2 & 10.7 (2) & 21.6 & 32.4 & 0.5 & 10.7 & 5.4 & 16.1 & 2.0 & 21.4 & 27.0 & 48.4 & 0.5 & 0.79 \\
3 & 10.7 (2) & 21.6 & 32.4 & 0.5 & 4.5 & 2.2 & 6.7 & 2.0 & 15.2 & 23.9 & 39.1 & 0.2 & 0.64 \\
4 & 16.1 (3) & 21.6 & 37.7 & 0.7 & 0.0 & 0.0 & 0.0 & - & 16.1 & 21.6 & 37.7 & 0.0 & 0.74 \\ \hline \end{tabular}
\end{table}
Table 1: Gradual FT Schedule. Values inside parentheses are up-sampling ratios for the MERLIon zh utterances.
\begin{table}
\begin{tabular}{l|c c} \hline Dataset & Language & Length (hr) \\ \hline LibriSpeech-clean & en (US) & 100 \\ NSC & en (SG) & 100 \\ AISHELL & zh & 200 \\ SEAME & en-zh & 100 \\ MERLIon & en-zh & 30 \\ \hline \end{tabular}
\end{table}
Table 2: Datasets used in our experiments.
which is defined as the error rate at the operating point where the false acceptance rate equals the false rejection rate.
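Both metrics are easy to compute from system outputs; the sketch below uses scikit-learn for BAC and a standard ROC-based approximation for the EER. The toy labels and scores are placeholders.

```python
# Balanced accuracy (mean per-class recall) from hard predictions and EER from
# continuous English-vs-Mandarin scores via the ROC curve.
import numpy as np
from sklearn.metrics import balanced_accuracy_score, roc_curve

y_true = np.array([1, 1, 0, 0, 1, 0])                 # 1 = English, 0 = Mandarin
y_pred = np.array([1, 0, 0, 0, 1, 1])
scores = np.array([0.9, 0.4, 0.2, 0.1, 0.8, 0.6])     # P(English)

bac = balanced_accuracy_score(y_true, y_pred)

fpr, tpr, _ = roc_curve(y_true, scores)
fnr = 1 - tpr
eer = fpr[np.nanargmin(np.abs(fnr - fpr))]            # rate where FAR and FRR are closest

print(f"BAC={bac:.3f}  EER={eer:.3f}")
```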
**Baseline.** The baseline over which we attempt to improve is the system developed by the task organizers, which consists of an end-to-end conformer model trained on the same available data (Chua et al., 2023). This system has a BAC of 50.32% and an EER of 22.13%.
### Preprocessing
We preprocess the data using lhotse2, a Python toolkit designed for speech and audio data preparation. We standardize the sample rate of all audio recordings to 16kHz by downsampling utterances in the development and test datasets with sample rates \(>\) 16kHz. Prior to training, we extract 80-dimensional filterbank (fbank) features from the speech recordings and apply speed perturbation with factors of 0.9 and 1.1. During training, we use on-the-fly SpecAug (Park et al., 2019) augmentation on the extracted filter bank features with a time-warping factor of 80.
Footnote 2: [https://github.com/lhotse-speech/lhotse](https://github.com/lhotse-speech/lhotse)
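For illustration, the acoustic front-end can be sketched as follows, with torchaudio standing in for the lhotse-based implementation; the masking parameters are illustrative assumptions, and speed perturbation and time warping are omitted for brevity.

```python
# Sketch of the acoustic feature pipeline: resample to 16 kHz, extract 80-dim
# filterbank features, and apply SpecAugment-style frequency/time masking.
import torch
import torchaudio

wav, sr = torchaudio.load("utt.wav")
wav = wav.mean(dim=0, keepdim=True)                      # downmix to mono
if sr != 16000:
    wav = torchaudio.functional.resample(wav, sr, 16000)

fbank = torchaudio.compliance.kaldi.fbank(wav, num_mel_bins=80,
                                          sample_frequency=16000)   # (frames, 80)

specaug = torch.nn.Sequential(
    torchaudio.transforms.FrequencyMasking(freq_mask_param=27),
    torchaudio.transforms.TimeMasking(time_mask_param=100),
)
augmented = specaug(fbank.T.unsqueeze(0)).squeeze(0).T   # mask along freq and time axes
```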
To train the model jointly with an ASR CTC loss, we first tokenize and romanize the bilingual transcripts, using space-delimited word-level tokenization for the monolingual English transcripts (LibriSpeech and NSC) and the monolingual Mandarin transcripts in AISHELL, as these transcripts were pre-tokenized and separated by spaces. For the occasional code-switched Mandarin words in NSC, we remove the special tags and keep only the content of the Mandarin words. The SEAME dataset contains a portion of untokenized Mandarin transcripts; hence, we tokenize all Mandarin text sequences with length \(>4\) using the Mandarin word segmentation tool jieba3. Additionally, to reduce the size of the model, we adopt a pronunciation lexicon, utilizing the CMU dictionary for English word-to-phoneme conversion and the python-pinyin-jyutping-sentence tool for generating the pinyin for Mandarin words4. To enhance the model's ability to capture the lexical information in the training data, we add a suffix "_cn" to Mandarin phonemes.
Footnote 3: [https://github.com/Language-Tools/python-pinyin-jyutping-sentence](https://github.com/Language-Tools/python-pinyin-jyutping-sentence)
### Experimental Setup
We follow the pre-train-fine-tune paradigm for most experiments, except for the zero-shot PLM baseline and the ablation experiments that investigate the effect of pre-training. In the pre-training stage, we use the monolingual datasets (LibriSpeech, AISHELL, and NSC), and in the fine-tuning stage, we use the code-switched datasets (SEAME and MERLIon).
Table 3 shows the experiments conducted in our study along with the pre-training and fine-tuning datasets and the fine-tuning method. For this set of experiments, the FT methods are as follows: 1-stage FT means fine-tuning the model on the MERLIon dataset only; combined FT is fine-tuning the model on a 1:1 proportion of SEAME and MERLIon data; and gradual FT is fine-tuning the model on more SEAME (out-of-domain) data than MERLIon (in-domain) data, then increasing the ratio of MERLIon data gradually until the fine-tuning set contains only MERLIon data.
Table 4 summarizes the second set of experiments involving the up-sampling with schedule described in Section 3.4. Note that in Experiment #15, only the 1:1 mix of MERLion:SEAME is used as a control for the setting for the MTL system.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline \# & System & epoch/stage & total epochs & LR \\ \hline
11 & & & 3 & 12 & 0.001 \\ \cline{3-5}
12 & \multirow{3}{*}{CRNN} & \multirow{3}{*}{5} & \multirow{3}{*}{20} & 0.0001 \\ \cline{3-5}
13 & & & & 0.00001 \\ \cline{3-5}
14 & & & & 0.00001 \\ \hline \# & System & epoch range & total epochs & LR \\ \hline
15 & & 1-20 & 20 & \\ \cline{3-5}
16 & & 1-5 & 5 & \\ \cline{3-5}
17 & \multirow{3}{*}{MTL} & 5-10 & 10 & 0.00001 \\ \cline{3-5}
18 & & 10-15 & 15 & \\ \cline{3-5}
19 & & 15-20 & 20 & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Up-Sampling Experiments.
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline \hline \# & System & PT Data & FT Data & FT Method \\ \hline
1 & \multirow{6}{*}{CRNN} & \multirow{4}{*}{LibriSpeech + AISHELL} & - & - \\
2 & & & MERLIon & 1-stage \\
3 & & & MERLIon + SEAME & combined \\
4 & & & MERLIon + SEAME & gradual \\ \cline{3-5}
5 & & \multirow{2}{*}{-} & MERLIon & 1-stage \\
6 & & & MERLIon + SEAME & combined \\ \hline
7 & \multirow{2}{*}{MTL} & \multirow{2}{*}{LibriSpeech + AISHELL + NSC} & \multirow{2}{*}{MERLIon + SEAME} & combined \\
8 & & & & 2-stage \\ \hline
9 & Whisper & - & - & - \\
10 & XLSR & - & - & - \\ \hline \hline \end{tabular}
\end{table}
Table 3: Pre-training (PT) data, fine-tuning (FT) data, and FT method for each experiment.
### Training
#### 4.4.1 CRNN Training
We pre-train our CRNN model for 5 epochs on 100 hours of clean speech from LibriSpeech (Panayotov et al., 2015) and 200 hours of a preselected partition from AISHELL (Bu et al., 2017). Each batch contains a balanced amount of English and Mandarin sub-utterance level speech utterances to simulate an artificial speech code-switching dataset. We select the pre-trained model checkpoint with the best performance on the entire MERLIon dataset. Then, the model is fine-tuned on the MERLIon dataset (exp #2) or the MERLIon+SEAME dataset (exp #3&4) for 10 epochs, leaving out 1 hour of MERLIon data (1749 English utterances and 100 Mandarin utterances) for evaluation. During training, we set the max duration of each cut to 120ms; we use the Adam optimizer with a pre-training learning rate of 1e-4 and a fine-tuning learning rate of 1e-5, with a dropout of 0.1. In the gradual fine-tuning experiment, ratios between the out-of-domain and in-domain data are \([3,2,1,0.5,0]\) over 5 epochs.
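A minimal sketch of the gradual fine-tuning schedule just described, assuming utterances are handled as plain Python lists: at each epoch the SEAME (out-of-domain) portion is drawn at the stated ratio relative to the MERLIon (in-domain) portion, shrinking from 3:1 down to 0.

```python
import random

GFT_RATIOS = [3, 2, 1, 0.5, 0]  # out-of-domain : in-domain ratio, one value per epoch

def epoch_mix(merlion_utts, seame_utts, epoch):
    """Build the training list for one epoch of gradual fine-tuning."""
    ratio = GFT_RATIOS[min(epoch, len(GFT_RATIOS) - 1)]
    n_ood = int(ratio * len(merlion_utts))
    ood = random.sample(seame_utts, min(n_ood, len(seame_utts)))
    mix = list(merlion_utts) + ood
    random.shuffle(mix)
    return mix
```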
#### 4.4.2 Multitask Pre-Training
The conformer model is pre-trained with the joint CTC/LID loss for 5 epochs as well on the monolingual data, including LibriSpeech, AISHELL, and NSC. To balance the loss for each task, we interpolate the losses with a hyperparameter \(\lambda\). Formally, the overall loss \(L\) is computed as below:
\[L=(1-\lambda)L_{\text{CTC}}+\lambda L_{\text{LID}}\cdot\alpha \tag{1}\]
where \(L_{\text{CTC}}\) denotes the CTC loss, \(L_{\text{LID}}\) denotes the LID loss, and \(\alpha\) is the scaling factor for the LID loss. We set \(\lambda=0.2\) and \(\alpha=100\). The model is then fine-tuned for 15 epochs on the mixed MERLIon and SEAME datasets. We intentionally balance the total duration of samples drawn from each dataset, which implicitly biases toward the development set as it contains fewer utterances, and our sampler terminates when it finishes an epoch on the smaller corpus.
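A hedged PyTorch sketch of the interpolated objective in Eq. (1) with \(\lambda=0.2\) and \(\alpha=100\); tensor shapes and argument names here are illustrative assumptions rather than our actual training code.

```python
import torch.nn.functional as F

LAMBDA, ALPHA = 0.2, 100.0

def multitask_loss(log_probs, targets, input_lens, target_lens, lid_logits, lid_labels):
    # log_probs: (T, B, vocab) frame-level log-probabilities for the CTC head
    ctc = F.ctc_loss(log_probs, targets, input_lens, target_lens)  # auxiliary ASR task
    lid = F.cross_entropy(lid_logits, lid_labels)                  # utterance-level LID task
    return (1 - LAMBDA) * ctc + LAMBDA * lid * ALPHA
```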
## 5 Results and Analysis
### CRNN Results
Table 5 shows the English, Mandarin, and balanced accuracies of our CRNN model on the held-out part of the MERLIon development set. The best-performing model is the one initialized from the best pre-train checkpoint and gradually fine-tuned on the MERLIon and SEAME datasets (PT+FT). Without gradual fine-tuning, it is more effective to _only_ fine-tune on the MERLIon in-domain dataset, implying that directly combining out-of-domain sources (SEAME) causes additional complexity for the model. The Mandarin accuracies for training on the MERLIon dataset from scratch with (exp #6) or without (exp #5) the SEAME dataset are both poor: 0.0 for the model fine-tuned only on MERLIon and 0.09 for the model fine-tuned on the MERLIon and SEAME datasets.
### Multitask Pre-Training
Two fine-tuning approaches were used for the MTL model. We find that after fine-tuning on the combined MERLIon + SEAME dataset, a second stage fine-tuning on only the MERLIon dataset in fact _hurts_ the performance. This might result from the imbalanced labeling effect, biasing the model toward the English predictions. Therefore, introducing more Mandarin samples from the SEAME corpus balances the labeling and yields better performance on the held-out set.
### Multilingual PLMs
As shown in Table 5, the zero-shot performance of Whisper is not great but reasonable given the massive amount of data it was pre-trained on. However, zero-shot XLSR is extremely ineffective in doing CSLID. These results suggest that multilingual PLMs do not have the out-of-the-box capability to understand the complex phenomenon of code-switching and thus require careful fine-tuning.
\begin{table}
\begin{tabular}{l l c c c} \hline \hline \# & experiment & English & Mandarin & Balanced \\ \hline
1 & PT & 0.649 & 0.650 & 0.650 \\
2 & PT + FT (M) & 0.927 & 0.630 & 0.779 \\
3 & PT + FT (M+S) & 0.965 & 0.370 & 0.667 \\
4 & PT + FT (gradual) & 0.851 & **0.720** & **0.785** \\
5 & FT (M) & 1.0 & 0.0 & 0.5 \\
6 & FT (M+S) & 0.988 & 0.09 & 0.539 \\ \hline
7 & MTL + combined FT & 0.960 & 0.610 & **0.785** \\
8 & MTL + 2-stage FT & 0.957 & 0.46 & 0.708 \\ \hline
9 & Whisper Zero-Shot & 0.821 & 0.502 & 0.662 \\
10 & XLSR Zero-Shot & 0.198 & 0.0 & 0.099 \\ \hline \hline \end{tabular}
\end{table}
Table 5: English, Mandarin, and balanced accuracy of our systems on the held-out development set of MERLIon. Table keys: **PT** = only pre-training, **FT (M)** = fine-tuned on the MERLIon train split, **FT (M+S)** = fine-tuned on the mixed MERLIon train split and SEAME dataset, **MTL** = multitask learning model with pre-training and fine-tuning. (All rows without PT indicate that the model parameters are randomly initialized.)
We report the performance of our CRNN model at task submission time on the MERLion test set (labels unavailable to participants) in Table 6.
### Ablation Studies
#### 5.4.1 Effect of Pre-Training
As shown in exp #5 and #6, removing the pre-training stage results in significant performance drops. The model trained with only the MERLIon dataset classifies all utterances as English because the MERLIon dataset is heavily unbalanced, containing 40287 English utterances and only 9903 Mandarin utterances. This implies that pre-training on monolingual data with balanced labels makes the model robust under heavily unbalanced classes, allowing the model to extract meaningful features for both languages even if data for one language is scarce.
#### 5.4.2 Effect of Code-Switched Fine-Tuning
Directly using the pre-trained model (exp #1) suffers from domain mismatch, suggesting that fine-tuning on gold data is necessary. First, the pre-training data are originally monolingual, so dataset features such as recording quality and volume can be learned instead of linguistic features. Second, the pre-training datasets are from general domains, while the MERLIon dataset contains child-directed speech, which might have a different vocabulary. Nevertheless, due to the class imbalance issue, fine-tuning on MERLIon (exp #2) improves the BAC but lowers the Mandarin accuracy from 0.650 to 0.630.
#### 5.4.3 Effect of Gradual Fine-Tuning
Comparing Experiment #4 with Experiment #3, the model's classification accuracy on Mandarin labels improves significantly with GFT on the combined MERLIon and SEAME data. Despite the class imbalance issue, this approach (exp #4) is shown to be successful, allowing the model to effectively extract enough linguistic information from the higher-resource but out-of-domain dataset (SEAME) to avoid short-cut learning from the imbalanced in-domain dataset.
Given the effectiveness of GFT, we further explore experimental designs with the GFT setup combined with data up-sampling to solve the label imbalance issue in the target MERLIon dataset. We report the model performance of these additional GFT experiments in Table 7. First, for the CRNN model, which has a fairly simple residual convolutional neural network architecture, GFT proves to be extremely helpful when fine-tuning a model pre-trained only on monolingual Mandarin and English data. With a well-designed gradual fine-tuning schedule, the classification accuracy on Mandarin improves steadily while the accuracy on English labels is maintained at a reasonable level, as shown in Experiment #14, making this model achieve the best overall results out of all CRNN model variations.
On the other hand, GFT does not seem to be the contributing factor to the success of the MTL system in predicting the LID of the code-switched utterances. While keeping the MERLion:SEAME data ratio constant, Experiment #15 achieves the best performance across all systems and designs. This could be explained by the ASR portion of the loss function in the MTL framework, which forces the model to extract higher-level linguistic representations. This increases the robustness of the model against out-of-domain data (SEAME in this case) and therefore the smooth domain adaptation provided by the gradual fine-tuning schedule does not contribute as much in this system design. Up-sampling proves to be extremely helpful in this situation, as Experiment #15 outperforms Experiment #7 by 6.4%. The up-sampling provides the model with more opportunities to learn from and accurately classify instances of the underrepresented class, which leads to a high BAC.
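For concreteness, a small sketch of the up-sampling idea, assuming a flat list of (utterance, label) pairs; the actual per-epoch up-sampling ratios used for the MERLIon zh utterances are the parenthesised values in Table 1, whereas the factor below is only a placeholder.

```python
import random

def upsample(samples, minority_label="zh", factor=3):
    """Repeat minority-class utterances `factor` times before shuffling the epoch list."""
    out = []
    for utt, lab in samples:
        out.extend([(utt, lab)] * (factor if lab == minority_label else 1))
    random.shuffle(out)
    return out
```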
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multirow{2}{*}{System} & \multicolumn{2}{c|}{Dev} & \multicolumn{2}{c}{Test} \\ & EER & BAC & EER & BAC \\ \hline Baseline (Chua et al., 2023) & - & - & 0.221 & 0.503 \\ Whisper Zero-Shot & 0.228 & 0.662 & 0.230 & 0.649 \\ CRNN PT+FT & **0.146** & **0.663** & **0.155** & **0.701** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Equal Error Rate (EER) and Balanced Accuracy (BAC) on the MERLion development and test sets for our submitted system and the previous baseline.
\begin{table}
\begin{tabular}{l l|c c c} \hline \hline \# & CRNN Exp. Desc & English & Mandarin & Balanced \\ \hline
11 & 3ep-GFT lr=1e-3 & **0.938** & 0.270 & 0.604 \\
12 & 3ep-GFT lr=1e-5 & 0.823 & 0.410 & 0.616 \\
13 & 5ep-GFT lr=1e-3 & 0.798 & 0.610 & 0.704 \\
14 & 5ep-GFT lr=1e-5 & 0.932 & **0.680** & **0.806** \\ \hline \hline \# & MTL Exp. Desc & \multicolumn{3}{c}{Balanced Accuracy} \\ \hline
15 & non-GFT 20ep & \multicolumn{3}{c}{**0.835**} \\
16 & GFT ep1-5 & \multicolumn{3}{c}{0.800} \\
17 & GFT ep5-10 & \multicolumn{3}{c}{0.806} \\
18 & GFT ep10-15 & \multicolumn{3}{c}{0.817} \\
19 & GFT ep15-20 & \multicolumn{3}{c}{0.805} \\ \hline \hline \end{tabular}
\end{table}
Table 7: Performance of the two systems when fine-tuned with up-sampling and gradual fine-tuning.
## Conclusion
In this work, we propose two simple and efficient systems for the spoken English-Mandarin child-directed code-switching LID task. The CRNN approach uses a simple stack of CNNs and RNNs to capture information from both the spectral and temporal axes. The multitask learning approach utilizes an ASR CTC loss as an auxiliary task to learn higher-level linguistic features for CSLID. Our models significantly outperform previous baselines as well as multilingual PLMs, and we conduct extensive ablation studies to investigate factors that might influence CSLID performance. Future work includes up-sampling the minority label class and fine-tuning PLMs for larger-scale transfer learning to benefit code-switching speech processing.
## Limitations
Some of the limitations of our work include the fact that we are not able to use a large batch size when training the model due to compute limits, which might contribute to slower convergence and noisy model performance. Furthermore, we do not leverage cross-lingual transfer from other languages outside of the two languages that are included in the code-switched data. Incorporating code-switched data in other language pairs or monolingual data in related languages might result in additional positive cross-lingual transfer.
|
2309.16967 | nnSAM: Plug-and-play Segment Anything Model Improves nnUNet Performance | Automatic segmentation of medical images is crucial in modern clinical
workflows. The Segment Anything Model (SAM) has emerged as a versatile tool for
image segmentation without specific domain training, but it requires human
prompts and may have limitations in specific domains. Traditional models like
nnUNet perform automatic segmentation during inference and are effective in
specific domains but need extensive domain-specific training. To combine the
strengths of foundational and domain-specific models, we propose nnSAM,
integrating SAM's robust feature extraction with nnUNet's automatic
configuration to enhance segmentation accuracy on small datasets. Our nnSAM
model optimizes two main approaches: leveraging SAM's feature extraction and
nnUNet's domain-specific adaptation, and incorporating a boundary shape
supervision loss function based on level set functions and curvature
calculations to learn anatomical shape priors from limited data. We evaluated
nnSAM on four segmentation tasks: brain white matter, liver, lung, and heart
segmentation. Our method outperformed others, achieving the highest DICE score
of 82.77% and the lowest ASD of 1.14 mm in brain white matter segmentation with
20 training samples, compared to nnUNet's DICE score of 79.25% and ASD of 1.36
mm. A sample size study highlighted nnSAM's advantage with fewer training
samples. Our results demonstrate significant improvements in segmentation
performance with nnSAM, showcasing its potential for small-sample learning in
medical image segmentation. | Yunxiang Li, Bowen Jing, Zihan Li, Jing Wang, You Zhang | 2023-09-29T04:26:25Z | http://arxiv.org/abs/2309.16967v3 | # nnSAM: Plug-and-play Segment Anything Model Improves nnUNet Performance
###### Abstract
The recent developments of foundation models in computer vision, especially the Segment Anything Model (SAM), allow scalable and domain-agnostic image segmentation to serve as a general-purpose segmentation tool. In parallel, the field of medical image segmentation has benefited significantly from specialized neural networks like the nnUNet, which is trained on domain-specific datasets and can automatically configure the network to tailor to specific segmentation challenges. To combine the advantages of foundation models and domain-specific models, we present nnSAM, which synergistically integrates the SAM model with the nnUNet model to achieve more accurate and robust medical image segmentation. The nnSAM model leverages the powerful and robust feature extraction capabilities of SAM, while harnessing the automatic configuration capabilities of nnUNet to promote dataset-tailored learning. Our comprehensive evaluation of the nnSAM model on different sizes of training samples shows that it allows few-shot learning, which is highly relevant for medical image segmentation where high-quality, annotated data can be scarce and costly to obtain. By combining the strengths of both its predecessors, nnSAM positions itself as a potential new benchmark in medical image segmentation, offering a tool that combines broad applicability with specialized efficiency. The code is available at [https://github.com/Kent0n-Li/Medical-Image-Segmentation](https://github.com/Kent0n-Li/Medical-Image-Segmentation).
## 1 Introduction
Efficient and accurate segmentation of medical images is essential in the modern clinical workflow including disease diagnosis and prognosis, treatment planning and monitoring, and treatment outcome follow-up [1]. Traditionally, medical image segmentation is a very time-consuming and labor-intensive task. The advent of deep learning-based automatic segmentation techniques has significantly reduced the time and effort required from radiologists and radiation oncologists [2]. Among the many deep learning architectures that have been designed for biomedical image segmentation, U-Net stands out for its ability to capture both global and local features effectively and efficiently for better segmentation
results [3]. Based on the U-Net backbone, a large number of studies developed architectures with various modifications for different tasks [4]. For example, TransUNet integrates the advantages of U-Net and Transformers, which defines a new benchmark in medical image segmentation [5]. By utilizing the global contextual understanding of Transformers and the precise localization capability of U-Net, TransUNet can capture long-range dependencies while maintaining the segmentation accuracy of local structures. Another example is UNet++ [6], which is designed to bridge the semantic gap between the encoder and decoder feature maps. It incorporates deeply supervised encoder-decoder networks interlinked with nested, dense skip pathways to enhance the segmentation accuracy. Another network, SwinUNet [7], introduces another Transformer-driven approach to medical image segmentation, leveraging the U-shaped Encoder-Decoder architecture and skip-connections for enhanced local-global semantic feature learning. This model shows superior performance over both traditional convolution-based methods and mixed transformer-convolution techniques. Many of the segmentation works, however, require substantial human effort in architecture modification and hyperparameter tuning to fit different applications or datasets. Acknowledging this challenge, the nnUNet framework [8] was proposed. The nnUNet framework, a "no-new-Net", takes a unique approach by abstaining from proposing new network architectures. Instead, it refocuses efforts on methodological, architectural search, and data preprocessing steps to yield optimal performance. The nnUNet strategy demonstrates that with appropriate preprocessing and postprocessing combinations, even a basic network architecture can achieve state-of-the-art performance across a wide variety of medical segmentation tasks.
Figure 1: The architecture of nnSAM, which integrates nnUNet’s encoder with the pre-trained SAM encoder. The correspondingly concatenated embeddings are input into nnUNet’s decoder to output the final segmentation. A cardiac sub-structure segmentation example is presented. (LV: left ventricle; RV: right ventricle; LA: left atrium; RA: right atrium; Myo: myocardium of LV)
Historically, deep learning models for medical image segmentation, including nnUNet, were tailor-made for specific datasets or applications, making it challenging to generalize a single model's effectiveness to various segmentation tasks. While the emergence of nnUNet signifies a transition to more flexible approaches for medical image segmentation, the quality of segmentation results still relies on ample training data on specific segmentation tasks. Acquiring large volumes of labeled medical images for each specific segmentation task is not only costly but also challenging in data-limited scenarios. For medical image segmentation tasks with a limited amount of training data, 'few-shot' learning solutions, which allow new models to be trained based on a few samples, are important and more practical. The advent of the segment anything model (SAM) [9, 10], a model that was trained on 11 million images and more than a billion segmentation masks (the SA-1B training dataset), has shown a great potential to achieve 'few-shot' and even 'zero-shot' learning across a diverse array of image categories. However, recent studies on the SAM model find its accuracy limited when applied directly to medical images without additional fine-tuning [11, 12]. In addition, the SAM model requires prompts as input in addition to the image itself (bounding box, points, etc.), which hinders its seamless integration in fully automated clinical workflows. This aspect, although a boon for versatility, may pose challenges in high-throughput medical scenarios that demand real-time or uninterrupted procedures. Recently, AutoSAM was developed based on the SAM framework to directly learn prompts from input to-be-segmented images and feed the learned prompts for fully automated segmentation. However, AutoSAM needs to learn a new prompt encoder from the training dataset and can be susceptible to the scarcity of the training data in 'few-shot' scenarios.
Inspired by the advantages and disadvantages of nnUNet and SAM, we introduce nnSAM, a novel plug-and-play solution designed to enhance the segmentation accuracy of medical images. nnSAM synergizes the powerful feature extraction and generalization capabilities of SAM with the data-centered auto-configuration capabilities of nnUNet. By leveraging the image encoder of the SAM and seamlessly integrating it into nnUNet's architecture, nnSAM produces an enriched latent space representation that serves as the foundation for enhanced segmentation accuracy. The fusion of SAM and nnUNet especially benefits scenarios where the training data is scant to achieve high-quality medical image segmentation.
The main contributions of this paper are summarised as follows:
* We introduced nnSAM, a novel fusion of the Segment Anything Model (SAM) and nnUNet. By combining the powerful feature extraction capabilities of SAM with the auto-configurable design of nnUNet, nnSAM ensures enhanced segmentation quality, even under very limited training data.
* Our comprehensive evaluation illuminates the superior performance of nnSAM over existing state-of-the-art techniques, providing a potential new baseline for medical image segmentation.
## 2 Method
### Architecture Overview
The architecture of the proposed nnSAM framework is depicted in Fig. 1. The model is designed to combine the strengths of nnUNet [8] and SAM [9]. Specifically, nnSAM consists of two parallel encoders: the nnUNet encoder and the SAM encoder. The SAM encoder is a pre-trained Vision Transformer (ViT) [13]. The embeddings from both encoders are concatenated and subsequently fed into nnUNet's decoder to output the final segmentation map. Furthermore, the SAM encoder is used as a plug-and-play plugin whose parameters are frozen during training. Correspondingly, only the weightings of the encoder and decoder of the nnUNet are updated during the training.
### Auto-configured nnUNet Architecture
Integrating nnUNet into the nnSAM allows automated network architecture and hyperparameter configuration, making it highly adaptable to the unique and specific features of each medical imaging dataset. This adaptive capability starts from a self-configuration process that automatically adjusts the nnUNet encoder's architecture to suit training dataset characteristics including the dimensions of the medical images, the number of channels, and the number of classes involved in the segmentation task. Additionally, nnUNet uses an automated preprocessing pipeline, which includes normalizing the input data and applying data augmentation techniques such as rotations, scaling, and elastic deformations. These preprocessing and augmentation steps are crucial for improving the robustness and accuracy of the model. Beyond these, nnUNet can automatically select the most effective loss function and adjust optimizer settings based on the dataset's inherent attributes. For example, for detected class imbalance within the dataset, nnUNet can automatically configure a weighted loss function to emphasize the minor classes. This is further supplemented by nnUNet's hyperparameter tuning process that involves a grid search over key hyperparameters including the learning rate and the batch size. Based on each specific training dataset, nnUNet's architecture also self-adjusts, aiming for optimal performance by dynamically modifying parameters such as the layer count and the convolutional kernel size. The comprehensive suite of auto-configurable features allows the nnUNet and correspondingly the nnSAM architecture to optimize its encoder setup for each specific medical imaging task, enhancing both its efficiency and accuracy. Since the number of layers of the nnSAM is determined by the specific dataset, in Fig. 1 we symbolize the number of encoder layers as \(E_{t}\) to \(E_{0}\) and the number of decoder layers as \(D_{1}\) to \(D_{t}\).
### SAM Encoder
The SAM encoder is a Vision Transformer model pre-trained on the extensive SA-1B segmentation dataset. Trained with this extremely large dataset,
the SAM encoder excels at domain-agnostic feature extraction for segmentation tasks. However, its segmentation ability is highly prompt-dependent, making it unable to self-identify the segmentation target and the underlying semantics. Therefore, nnSAM only uses the SAM encoder to incorporate its feature extraction strengths, while leaving the dataset-specific task (identifying the region of interest for segmentation) to nnUNet. For an input image \(x\in\mathbb{R}^{H\times W\times C}\), where \(H\times W\) are the spatial dimensions and \(C\) is the number of channels, the SAM encoder needs the input \(H\times W\) to be of size 1024\(\times\)1024. To meet this requirement, we resize it to 1024\(\times\)1024 using linear interpolation after the pre-processing of nnUNet. The SAM encoder produces an image embedding \(S\) with dimensions 64\(\times\)64. We subsequently resize this embedding \(S\) to match the dimensions of nnSAM's decoder layer \(D_{1}\) for concatenation (Fig. 1). To balance the inference speed of nnSAM with the segmentation accuracy, we use MobileSAM [10, 14], a lightweight SAM version that is less than 1/60 in size of the original SAM, but with comparable performance. MobileSAM is obtained by distillation from the original SAM, by which the knowledge from the original image encoder is transferred into the lightweight counterpart.
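The following PyTorch-style sketch illustrates the fusion described above. It is a conceptual outline rather than the released implementation: the encoder/decoder callables are placeholders, and the concatenation point and SAM embedding shape are stated as assumptions in the comments.

```python
import torch
import torch.nn.functional as F

def nnsam_forward(x, sam_encoder, nnunet_encoder, nnunet_decoder):
    """Conceptual forward pass: frozen SAM encoder + trainable nnUNet encoder/decoder."""
    # x: (B, C, H, W) preprocessed 2D slice; SAM expects 1024 x 1024 input
    sam_in = F.interpolate(x, size=(1024, 1024), mode="bilinear", align_corners=False)
    with torch.no_grad():                        # SAM/MobileSAM weights stay frozen
        sam_emb = sam_encoder(sam_in)            # image embedding, e.g. (B, 256, 64, 64)
    skips = nnunet_encoder(x)                    # multi-scale nnUNet feature maps
    bottleneck = skips[-1]
    # resize the SAM embedding to the resolution of the first decoder stage and fuse
    sam_emb = F.interpolate(sam_emb, size=bottleneck.shape[-2:], mode="bilinear",
                            align_corners=False)
    fused = torch.cat([bottleneck, sam_emb], dim=1)
    return nnunet_decoder(fused, skips[:-1])     # final segmentation logits
```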
## 3 Experimental Setting
We evaluated nnSAM using the MM-WHS dataset [15] for CT-based cardiac sub-structure segmentation. The preprocessed data from CFDnet [16] was utilized, which contains a collection of 212 cardiac CT images with a slice size of 240\(\times\)220. The cardiac sub-structures are segmented in 2D for each slice. To evaluate nnSAM's performance under few-shot training, we partitioned the dataset into 20 training samples, 80 validation samples, and 112 testing samples. To further evaluate nnSAM under different training dataset scarcity scenarios, we used different subsets from the training samples to train different nnSAM versions, with training sample sizes ranging from 4 to 20. This allowed us to study how the
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Metrics} & \multicolumn{4}{c}{Training Sample Size} \\ \cline{3-6} & & **4** & **8** & **12** & **16** & **20** \\ \hline \multirow{2}{*}{**UNet**} & DICE (\%) & \(59.31\pm 22.30\) & \(62.46\pm 20.37\) & \(66.86\pm 21.13\) & \(71.86\pm 11.27\) & \(76.52\pm 9.43\) \\ & ASD (mm) & \(19.54\pm 7.59\) & \(20.82\pm 7.16\) & \(17.86\pm 6.90\) & \(19.79\pm 6.08\) & \(18.15\pm 7.52\) \\ \hline \multirow{2}{*}{**SwinUNet**} & DICE (\%) & 81.24 \(\pm\) 17.58 & 80.82 \(\pm\) 14.06 & 83.15 \(\pm\) 9.21 & 84.37 \(\pm\) 6.91 & 86.88 \(\pm\) 5.13 \\ & ASD (mm) & 4.79 \(\pm\) 3.01 & 5.29 \(\pm\) 4.29 & 3.61 \(\pm\) 2.57 & 4.12 \(\pm\) 2.94 & 4.17 \(\pm\) 4.12 \\ \hline \multirow{2}{*}{**TransUNet**} & DICE (\%) & 81.23 \(\pm\) 6.62 & 82.34 \(\pm\) 5.98 & 84.82 \(\pm\) 4.80 & 87.05 \(\pm\) 4.60 & 87.11 \(\pm\) 3.99 \\ & ASD (mm) & 4.18 \(\pm\) 1.90 & 8.79 \(\pm\) 3.29 & 9.77 \(\pm\) 2.94 & 11.03 \(\pm\) 3.84 & 11.99 \(\pm\) 3.30 \\ \hline \multirow{2}{*}{**AutoSAM**} & DICE (\%) & 65.10 \(\pm\) 23.62 & 65.89 \(\pm\) 20.58 & 67.63 \(\pm\) 21.77 & 77.55 \(\pm\) 7.55 & 78.53 \(\pm\) 8.30 \\ & ASD (mm) & 16.55 \(\pm\) 7.83 & 19.60 \(\pm\) 5.78 & 16.98 \(\pm\) 5.96 & 16.73 \(\pm\) 6.02 & 15.92 \(\pm\) 6.36 \\ \hline \multirow{2}{*}{**nnUNet**} & DICE (\%) & 81.77 \(\pm\) 13.68 & 84.45 \(\pm\) 18.72 & 88.36 \(\pm\) 13.00 & 92.35 \(\pm\) 7.55 & 93.15 \(\pm\) 7.86 \\ & ASD (mm) & 6.97 \(\pm\) 4.87 & 4.90 \(\pm\) 6.08 & 3.15 \(\pm\) 4.74 & 1.56 \(\pm\) 2.26 & 1.40 \(\pm\) 2.20 \\ \hline \multirow{2}{*}{**nnSAM**} & DICE (\%) & **84.67 \(\pm\) 13.52** & **86.36 \(\pm\) 16.19** & **90.74 \(\pm\) 11.89** & **93.20 \(\pm\) 5.53** & **93.75 \(\pm\) 5.35** \\ & ASD (mm) & **3.87 \(\pm\) 5.04** & **3.29 \(\pm\) 5.15** & **2.18 \(\pm\) 3.97** & **1.43 \(\pm\) 1.69** & **1.23 \(\pm\) 1.64** \\ \hline \end{tabular}
\end{table}
Table 1: DICE and ASD of different methods on different training sample sizes.
performance of nnSAM scales with the size of the available labeled data, under a simulated real-world clinical setting where labeled data might be difficult to obtain. The dataset contains labels for five different cardiac anatomy classes for a multi-class segmentation task. Specifically, the classes include the left ventricle (LV), right ventricle (RV), left atrium (LA), right atrium (RA), and myocardium of LV (Myo). Each of these classes presents its own unique challenges for segmentation, making the dataset particularly well-suited for testing the accuracy and robustness of nnSAM.
In addition to nnSAM, we also evaluated SwinUNet [7], TransUNet [5], UNet [3], the original nnUNet [8], and AutoSAM [17] for comparison. For SwinUNet, TransUNet, UNet, and nnUNet, we used publicly available codes for model training. AutoSAM has no official open-source code, so we reproduced it
Figure 2: Example 1 of segmentation visualization results for different methods on different numbers of training samples.
based on the publication descriptions. All methods were trained and tested on an NVIDIA GPU card (A100 with 80 GB memory). For the evaluation metric, we used the Average Symmetric Surface Distance (ASD) and the Dice Similarity Coefficient (DICE) [18]. ASD quantifies the average distance between the surfaces of two segmented objects and DICE evaluates their volumetric overlap.
Figure 3: Example 2 of segmentation visualization results for different methods on different numbers of training samples.
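For reference, a hedged NumPy/SciPy sketch of the two metrics just defined, assuming binary masks and 1 mm isotropic spacing; this is not the evaluation code used in our experiments.

```python
import numpy as np
from scipy import ndimage

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    # volumetric overlap between two binary masks
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum() + 1e-8)

def asd(pred: np.ndarray, gt: np.ndarray) -> float:
    # average symmetric distance between the two mask boundaries
    def surface(mask):
        return mask & ~ndimage.binary_erosion(mask)
    ps, gs = surface(pred.astype(bool)), surface(gt.astype(bool))
    dt_gt = ndimage.distance_transform_edt(~gs)    # distance of each voxel to GT surface
    dt_pred = ndimage.distance_transform_edt(~ps)  # distance to predicted surface
    dists = np.concatenate([dt_gt[ps], dt_pred[gs]])
    return float(dists.mean())
```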
## 4 Results
Table 1 shows the model performance under different numbers of training data samples (4 to 20). The proposed nnSAM outperforms all other segmentation methods in terms of DICE and ASD for all training sample sizes. When trained with 20 labeled images, nnSAM achieves an average DICE score of 93.75% and an average ASD of 1.23 mm. The segmentation accuracy of nnUNet, which is recognized as one of the best segmentation models, is similarly substantially
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Sample Size} & \multicolumn{4}{c}{DICE (\%)} \\ \cline{3-6} & & Myo & LA & LV & RA & RV \\ \hline \multirow{4}{*}{**Unet**} & 4 & 50.65 \(\pm\) 20.23 & 64.91 \(\pm\) 25.10 & 68.33 \(\pm\) 23.92 & 55.53 \(\pm\) 24.40 & 57.10\(\pm\) 30.68 \\ & 8 & 49.51 \(\pm\) 18.57 & 62.49 \(\pm\) 19.24 & 74.21 \(\pm\) 23.22 & 59.35 \(\pm\) 22.65 & 66.75 \(\pm\) 29.56 \\ & 12 & 51.15 \(\pm\) 20.35 & 66.16 \(\pm\) 21.71 & 78.47 \(\pm\) 21.80 & 61.06 \(\pm\) 23.48 & 77.47 \(\pm\) 24.21 \\ & 16 & 61.95 \(\pm\) 10.22 & 70.84 \(\pm\) 13.12 & 80.60\(\pm\) 9.46 & 71.05 \(\pm\) 17.22 & 74.85 \(\pm\) 20.90 \\ & 20 & 63.02 \(\pm\) 13.21 & 73.44 \(\pm\) 11.32 & 84.57 \(\pm\) 9.91 & 75.65 \(\pm\) 13.60 & 85.94 \(\pm\) 8.11 \\ \hline \multirow{4}{*}{**SwinUNet**} & 4 & 67.56 \(\pm\) 21.19 & 83.07 \(\pm\) 20.08 & **90.26 \(\pm\) 9.21** & 79.29 \(\pm\) 22.43 & 86.01 \(\pm\) 17.54 \\ & 8 & 64.56 \(\pm\) 17.34 & 83.43 \(\pm\) 14.34 & 88.02 \(\pm\) 9.57 & 81.18 \(\pm\) 17.75 & 86.92 \(\pm\) 14.92 \\ & 12 & 65.15 \(\pm\) 17.08 & 85.97 \(\pm\) 10.08 & 88.38 \(\pm\) 7.82 & 85.35 \(\pm\) 8.84 & 90.9 \(\pm\) 5.71 \\ & 16 & 73.19 \(\pm\) 10.44 & 84.55 \(\pm\) 7.78 & 90.88 \(\pm\) 4.51 & 83.27 \(\pm\) 9.72 & 89.94 \(\pm\) 6.05 \\ & 20 & 76.14 \(\pm\) 9.55 & 87.23 \(\pm\) 6.87 & 92.26 \(\pm\) 3.81 & 87.07 \(\pm\) 6.79 & 91.69 \(\pm\) 4.0 \\ \hline \multirow{4}{*}{**TransUNet**} & 4 & 67.30 \(\pm\) 9.79 & 86.08 \(\pm\) 11.45 & 89.71 \(\pm\) 4.06 & **81.0 \(\pm\) 9.79** & 82.08 \(\pm\) 6.55 \\ & 8 & 68.16 \(\pm\) 8.19 & 84.53 \(\pm\) 8.04 & 87.56 \(\pm\) 4.59 & 83.52 \(\pm\) 9.80 & 87.94 \(\pm\) 5.85 \\ & 12 & 70.24 \(\pm\) 8.0 & 85.17 \(\pm\) 9.71 & 89.43 \(\pm\) 4.68 & 88.52 \(\pm\) 6.11 & 90.76 \(\pm\) 3.86 \\ & 16 & 76.61 \(\pm\) 5.95 & 87.51 \(\pm\) 9.34 & 91.63 \(\pm\) 3.38 & 88.08 \(\pm\) 6.65 & 91.42 \(\pm\) 3.16 \\ & 20 & 77.83 \(\pm\) 5.55 & 85.32 \(\pm\) 10.48 & 92.41 \(\pm\) 3.57 & 88.74 \(\pm\) 4.56 & 91.23 \(\pm\) 2.33 \\ \hline \multirow{4}{*}{**AutoSAM**} & 4 & 54.17 \(\pm\) 22.87 & 63.56 \(\pm\) 22.45 & 78.28 \(\pm\) 26.70 & 57.39 \(\pm\) 26.15 & 72.09 \(\pm\) 27.96 \\ & 8 & 52.98 \(\pm\) 19.73 & 61.86 \(\pm\) 18.86 & 78.82 \(\pm\) 22.03 & 58.03 \(\pm\) 24.23 & 77.76 \(\pm\) 25.14 \\ & 12 & 53.53 \(\pm\) 20.31 & 64.71 \(\pm\) 21.73 & 78.96 \(\pm\) 22.67 & 63.83 \(\pm\) 26.64 & 77.15 \(\pm\) 25.15 \\ & 16 & 65.35 \(\pm\) 10.58 & 77.33 \(\pm\) 9.33 & 86.75 \(\pm\) 6.73 & 72.44 \(\pm\) 14.53 & 85.88 \(\pm\) 7.07 \\ & 20 & 67.22 \(\pm\) 11.85 & 77.04 \(\pm\) 10.33 & 86.77 \(\pm\) 7.76 & 74.85 \(\pm\) 14.22 & 86.76 \(\pm\) 7.67 \\ \hline \multirow{4}{*}{**nnUNet**} & 4 & 72.24 \(\pm\) 13.01 & 83.69 \(\pm\) 16.91 & 88.43 \(\pm\) 11.24 & 78.10\(\pm\) 18.60 & 86.38 \(\pm\) 15.40 \\ & 8 & 75.31 \(\pm\) 19.25 & 88.12 \(\pm\) 18.91 & 87.9 \(\pm\) 19.63 & 82.37 \(\pm\) 22.17 & 88.57 \(\pm\) 19.31 \\ \cline{1-1} & 12 & 82.78 \(\pm\) 12.02 & 93.37 \(\pm\) 5.10 & 91.23 \(\pm\) 16.82 & 84.41 \(\pm\) 20.33 & 90.01 \(\pm\) 14.40 \\ \cline{1-1} & 16 & 88.66 \(\pm\) 5.01 & 94.95 \(\pm\) 4.05 & 94.0 \(\pm\) 13.62 & 90.79 \(\pm\) 10.28 & 93.35 \(\pm\) 10.04 \\ \cline{1-1} & 20 & 89.88 \(\pm\) 4.74 & 96.03 \(\pm\) 1.66 & 94.53 \(\pm\) 14.29 & 91.69 \(\pm\) 9.85 & 93.62 \(\pm\) 11.34 \\ \hline \multirow{4}{*}{**nnSAM**} & 4 & **77.05 \(\pm\) 14.47** & **88.67 \(\pm\) 10.53** & 89.93 \(\pm\) 11.97 & 80.86 \(\pm\) 21.13 & **86.83 \(\pm\) 14.75** \\ & 8 & **76.45 \(\pm\) 17.03** & **91.48 \(\pm\) 14.61** & **89.68 \(\pm\) 18.05** & **84.29 \(\pm\) 19.73** & **89.9 
\(\pm\) 16.92** \\ \cline{1-1} & 12 & **86.40 \(\pm\) 9.69** & **94.89 \(\pm\) 4.71** & **92.20 \(\pm\) 16.65** & **88.76 \(\pm\) 16.44** & **91.45 \(\pm\) 14.27** \\ \cline{1-1} \cline{1} & 16 & **89.76 \(\pm\) 3.12** & **95.44 \(\pm\) 4.95** & **94.78 \(\pm\) 10.85** & **92.26
higher than the other methods, but slightly lower than that of nnSAM. The other methods, including SwinUNet, TransUNet, and AutoSAM, are of much lower accuracies, with DICEs all below 90% and ASDs all above 4 mm. The worst performance is from the vanilla UNet, which is expected as SwinUNet, TransUNet, and AutoSAM are all based on pre-trained models, while UNet is trained from scratch and is most affected by the lack of training samples. In addition, as the number of training samples gradually decreases, the advantage of nnSAM over the other methods becomes more prominent. In particular, when trained with only 4 labeled images, for DICE nnSAM outperforms the second-place nnUNet by 2.9%, and outperforms the other methods by more. Overall,
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline \multirow{2}{*}{Method} & \multirow{2}{*}{Sample Size} & \multicolumn{6}{c}{ASD (mm)} \\ \cline{3-8} & & Myo & LA & LV & RA & RV \\ \hline \multirow{8}{*}{**UNet**} & 4 & 16.46 \(\pm\) 6.95 & 17.07 \(\pm\) 7.85 & 20.83 \(\pm\) 8.08 & 27.20 \(\pm\) 11.72 & 16.11 \(\pm\) 9.29 \\ & 8 & 19.33 \(\pm\) 6.43 & 23.09 \(\pm\) 9.35 & 17.35 \(\pm\) 7.02 & 28.35 \(\pm\) 10.84 & 16.0 \(\pm\) 10.98 \\ & 12 & 13.92 \(\pm\) 5.43 & 20.47 \(\pm\) 8.74 & 12.81 \(\pm\) 5.18 & 28.9 \(\pm\) 12.17 & 13.20 \(\pm\) 9.9 \\ & 16 & 16.52 \(\pm\) 5.73 & 28.38 \(\pm\) 9.26 & 16.67 \(\pm\) 4.24 & 25.13 \(\pm\) 11.38 & 12.25 \(\pm\) 8.91 \\ & 20 & 16.36 \(\pm\) 5.85 & 22.09 \(\pm\) 9.34 & 13.39 \(\pm\) 5.67 & 26.44 \(\pm\) 12.58 & 12.48 \(\pm\) 11.49 \\ \hline \multirow{8}{*}{**SwinUNet**} & 4 & 4.72 \(\pm\) 2.89 & 5.58 \(\pm\) 3.99 & **2.78 \(\pm\) 2.01** & 6.89 \(\pm\) 5.20 & 4.0 \(\pm\) 4.46 \\ & 8 & 3.41 \(\pm\) 2.21 & 5.47 \(\pm\) 6.88 & 3.45 \(\pm\) 2.77 & 10.07 \(\pm\) 10.25 & 4.04 \(\pm\) 3.65 \\ & 12 & 2.85 \(\pm\) 1.72 & 5.12 \(\pm\) 5.45 & 2.99 \(\pm\) 1.88 & 5.07 \(\pm\) 3.80 & **2.03 \(\pm\) 1.55** \\ & 16 & 2.51 \(\pm\) 1.63 & 5.19 \(\pm\) 6.24 & 2.39 \(\pm\) 1.13 & 8.24 \(\pm\) 6.32 & 2.28 \(\pm\) 1.57 \\ & 20 & 2.28 \(\pm\) 1.54 & 7.72 \(\pm\) 13.93 & 2.49 \(\pm\) 1.16 & 6.17 \(\pm\) 5.34 & 2.18 \(\pm\) 1.93 \\ \hline \multirow{8}{*}{**TransUNet**} & 4 & 5.27 \(\pm\) 1.98 & 4.77 \(\pm\) 5.30 & 2.86 \(\pm\) 1.01 & **3.81 \(\pm\) 2.42** & 4.21 \(\pm\) 1.79 \\ & 8 & 5.43 \(\pm\) 2.66 & 14.43 \(\pm\) 7.15 & 6.83 \(\pm\) 5.17 & 14.54 \(\pm\) 3.11 & 2.73 \(\pm\) 1.89 \\ & 12 & 6.61 \(\pm\) 3.41 & 21.03 \(\pm\) 8.27 & 6.63 \(\pm\) 2.9 & 12.06 \(\pm\) 3.18 & 2.51 \(\pm\) 2.38 \\ & 16 & 8.22 \(\pm\) 5.75 & 19.16 \(\pm\) 7.39 & 10.27 \(\pm\) 5.60 & 15.08 \(\pm\) 2.39 & 2.40 \(\pm\) 3.14 \\ & 20 & 7.62 \(\pm\) 3.78 & 24.38 \(\pm\) 7.94 & 7.84 \(\pm\) 4.73 & 18.17 \(\pm\) 3.81 & 1.91 \(\pm\) 1.01 \\ \hline \multirow{8}{*}{**AutoSAM**} & 4 & 14.18 \(\pm\) 7.62 & 23.89 \(\pm\) 11.01 & 10.33 \(\pm\) 7.67 & 20.66 \(\pm\) 11.12 & 13.68 \(\pm\) 10.81 \\ & 8 & 19.14 \(\pm\) 6.86 & 26.87 \(\pm\) 9.33 & 15.10\(\pm\) 5.75 & 25.18 \(\pm\) 11.23 & 11.72 \(\pm\) 7.97 \\ & 12 & 16.31 \(\pm\) 6.92 & 23.55 \(\pm\) 8.9 & 11.91 \(\pm\) 6.02 & 19.42 \(\pm\) 9.73 & 13.72 \(\pm\) 9.98 \\ & 16 & 14.02 \(\pm\) 5.23 & 26.80 \(\pm\) 9.51 & 11.24 \(\pm\) 5.82 & 22.45 \(\pm\) 9.80 & 9.15 \(\pm\) 7.64 \\ & 20 & 12.24 \(\pm\) 4.69 & 26.81 \(\pm\) 10.99 & 9.11 \(\pm\) 4.43 & 23.80 \(\pm\) 10.81 & 7.62 \(\pm\) 7.49 \\ \hline \multirow{8}{*}{**nnUNet**} & 4 & 8.19 \(\pm\) 3.39 & 4.26 \(\pm\) 4.77 & 3.98 \(\pm\) 4.11 & 15.38 \(\pm\) 12.83 & 3.06 \(\pm\) 4.50 \\ & 8 & 3.06 \(\pm\) 4.41 & 5.41 \(\pm\) 9.40 & 3.97 \(\pm\) 5.63 & 8.84 \(\pm\) 11.64 & 3.23 \(\pm\) 5.70 \\ & 12 & 2.09 \(\pm\) 4.13 & 2.36 \(\pm\) 4.40 & 2.43 \(\pm\) 4.19 & 6.56 \(\pm\) 11.39 & 2.33 \(\pm\) 3.66 \\ & 16 & 1.12 \(\pm\) 1.72 & 1.17 \(\pm\) 2.22 & **1.47 \(\pm\) 2.43** & 2.37 \(\pm\) 3.99 & 1.64 \(\pm\) 2.95 \\ & 20 & 1.01 \(\pm\) 1.60 & **0.78 \(\pm\) 0.27** & 1.50\(\pm\) 3.31 & 2.08 \(\pm\) 3.27 & 1.60\(\pm\) 3.53 \\ \hline \multirow{8}{*}{**nnSAM**} & 4 & **2.73 \(\pm\) 3.37** & **3.43 \(\pm\) 4.46** & 3.47 \(\pm\) 4.22 & 6.61 \(\pm\) 11.05 & **3.13 \(\pm\) 4.48** \\ & 8 & **2.77 \(\pm\) 3.88** & **2.63 \(\pm\) 6.43** & **3.21 \(\pm\) 4.88** & **5.29 \(\pm\) 8.45** & **2.57 \(\pm\) 4.97** \\ \cline{1-1} & 12 & **1.71 \(\pm\) 3.22** & **1.07 \(\pm\) 0.84** & **2.34 \(\pm\) 4.21** & **3.65 \(\pm\) 8.38** & 2.16 \(\pm\) 4.31 \\ \cline{1-1} 
& 16 & **1.03 \(\pm\) 1.17** & **1.10\(\pm\) 2.96** & 1.65 \(\pm\) 2.46 & **1.87 \(\pm\) 2.93** & **1.49 \(\pm\) 2.73** \\ \cline{1-1} & 20 & **0.9 \(\pm\) 1.00** & **0.78 \(\pm\) 0.34** & **1.25 \(\pm\) 2.20** & **1.77 \(\pm\) 2.86** & **
nnSAM provides higher segmentation accuracy compared to the other methods, especially when the amount of training data is limited.
Table 2 and Table 3 show the DICE and ASD performance for each label category. In both tables, nnSAM provides the best results in most cases. As the second-ranked model, nnUNet has the closest performance to our nnSAM; however, in the visualization results of Fig. 3, nnUNet does not achieve a good segmentation for the Myo category, with more false positives when the training sample size is 4. Besides, there are also some outliers with counter-intuitive trends. For instance, the SwinUNet and TransUNet results on the LA label show that the ASD becomes larger when the sample size increases. According to Fig. 2 and Fig. 3, we found that TransUNet and SwinUNet show more false positive segmentations as the sample size increases, and these false positives are far away from the true segmentation position, leading to anomalous results in ASD. In general, UNet and AutoSAM generate poor segmentation results. The myocardium of LV (Myo) is almost unrecognizable for both methods when the training data size is limited. Since AutoSAM relies heavily on custom-trained encoders to provide automatic prompts, the 'few-shot' learning scenario poses challenges in learning accurate prompts for SAM segmentation and results in poor accuracy. These results suggest that nnSAM offers superior accuracy in segmenting challenging targets with only a small number of training samples, which is attributed to the strong generality of the SAM encoder and the adaptive power of nnUNet's auto-configurable framework.
## 5 Discussion
The results demonstrate the superior performance of nnSAM for medical image segmentation, especially in few-shot learning scenarios where labeled training data is limited. By integrating the pretrained SAM encoder into nnUNet's framework, nnSAM can leverage SAM's powerful feature extraction capabilities while simultaneously benefiting from nnUNet's adaptive architecture configuration and hyperparameter optimization. The evaluation using the MM-WHS cardiac CT dataset highlights several key advantages of nnSAM. First, nnSAM consistently achieved the highest accuracy across all sizes of training sets (4 to 20 samples), outperforming state-of-the-art models like nnUNet, SwinUNet, and TransUNet. This ability to produce accurate segmentations from very few examples could make nnSAM valuable for medical applications where acquiring labeled data is difficult and expensive. Models like SwinUNet and TransUNet showed erratic results on some structures, where more training samples yielded worse results, indicating they might be overfitted to the training data distribution. In contrast, nnSAM's segmentation quality improved consistently as more training data was added. Compared with nnSAM, AutoSAM uses a custom encoder to replace the prompt encoder, making it able to automatically generate and feed prompts to SAM. However, AutoSAM is not optimized for semantic segmentation of medical images in the way nnUNet is, and it does not have nnUNet's powerful preprocessing and auto-configuration capabilities either.
Since the emergence of nnUNet, it has become the state-of-the-art in most medical image segmentation tasks, representing a top-of-the-line, end-to-end model for traditional task-specific semantic segmentation. SAM, on the other hand, is a prompt-based segmentation framework and a representative model with strong generalizability. Combining the best models from two different segmentation frameworks has proved effective in further improving medical image segmentation accuracy and sets a potential new benchmark.
Our current study has some limitations that should be addressed in future work. First, we evaluated nnSAM on a single dataset of cardiac CT scans. Future studies testing it on larger and more diverse medical imaging datasets are warranted. Second, the current nnSAM framework still requires a limited amount of training data and labels, and future work is needed to explore the possibility of achieving end-to-end segmentation with only one sample ('one-shot' learning) or without any labeling at all ('zero-shot' learning). In addition, we used 2D slices for training and testing; the extension to 3D volume-based segmentation may further enhance the segmentation accuracy but is currently challenged by technical difficulties in merging 3D SAM embeddings with 3D nnUNet embeddings. Future investigations are warranted to search for potential solutions or alternatives.
## 6 Conclusion
We introduce nnSAM, a novel, few-shot learning solution for medical image segmentation that melds the strengths of the Segment Anything Model (SAM) and nnUNet. Our extensive evaluation across different numbers of 2D training samples sets a potential new benchmark in medical image segmentation, especially in scenarios where training data is scarce. The results also highlight the robustness and superior segmentation performance of nnSAM, making it a promising tool for future research and practical applications in medical imaging.
## 7 Acknowledgement
We thank Xiang Feng and Yongbo He for their helpful discussions and partial attempts at the early stages of the project.
|
2308.16496 | Global Synthesis of CNOT Circuits with Holes | A common approach to quantum circuit transformation is to use the properties
of a specific gate set to create an efficient representation of a given
circuit's unitary, such as a parity matrix or stabiliser tableau, and then
resynthesise an improved circuit, e.g. with fewer gates or respecting
connectivity constraints. Since these methods rely on a restricted gate set,
generalisation to arbitrary circuits usually involves slicing the circuit into
pieces that can be resynthesised and working with these separately. The choices
made about what gates should go into each slice can have a major effect on the
performance of the resynthesis. In this paper we propose an alternative
approach to generalising these resynthesis algorithms to general quantum
circuits. Instead of cutting the circuit into slices, we "cut out" the gates we
can't resynthesise leaving holes in our quantum circuit. The result is a
second-order process called a quantum comb, which can be resynthesised
directly. We apply this idea to the RowCol algorithm, which resynthesises CNOT
circuits for topologically constrained hardware, explaining how we were able to
extend it to work for quantum combs. We then compare the generalisation of
RowCol using our method to the naive "slice and build" method empirically on a
variety of circuit sizes and hardware topologies. Finally, we outline how
quantum combs could be used to help generalise other resynthesis algorithms. | Ewan Murphy, Aleks Kissinger | 2023-08-31T06:58:03Z | http://arxiv.org/abs/2308.16496v1 | # Global Synthesis of CNOT Circuits with Holes
###### Abstract
A common approach to quantum circuit transformation is to use the properties of a specific gate set to create an efficient representation of a given circuit's unitary, such as a parity matrix or stabiliser tableau, and then resynthesise an improved circuit, e.g. with fewer gates or respecting connectivity constraints. Since these methods rely on a restricted gate set, generalisation to arbitrary circuits usually involves slicing the circuit into pieces that can be resynthesised and working with these separately. The choices made about what gates should go into each slice can have a major effect on the performance of the resynthesis. In this paper we propose an alternative approach to generalising these resynthesis algorithms to general quantum circuits. Instead of cutting the circuit into slices, we "cut out" the gates we can't resynthesise, leaving holes in our quantum circuit. The result is a second-order process called a quantum comb, which can be resynthesised directly. We apply this idea to the RowCol algorithm, which resynthesises CNOT circuits for topologically constrained hardware, explaining how we were able to extend it to work for quantum combs. We then compare the generalisation of RowCol using our method to the naive "slice and build" method empirically on a variety of circuit sizes and hardware topologies. Finally, we outline how quantum combs could be used to help generalise other resynthesis algorithms.
## 1 Introduction
Current quantum computers suffer from severe limitations such as high error rates, low numbers of qubits, and connectivity constraints for multi-qubit operations. Furthermore, without error correction, the poor fidelity of current gate implementations compounds over the execution of the circuit, so it is advantageous to find the smallest possible circuit to represent an algorithm.
Using the properties of a specific gate set, an efficient representation of a circuit's unitary can be resynthesised into an improved circuit. For example, the unitary action of CNOT circuits can be fully described by the associated \(\mathbb{F}_{2}\)-linear function over basis vectors on \(n\)-qubit space, seen as vectors in \(\mathbb{F}_{2}^{n}\). In other words, we can represent the action of a CNOT circuit using a matrix over \(\mathbb{F}_{2}\), called its _parity matrix_.
Noting that the parity matrix of a single CNOT gate corresponds to an elementary row operation, it is possible to resrussise a CNOT circuit from its parity matrix by performing Gauss-Jordan elimination to reduce the matrix to identity and introducing one CNOT gate for each corresponding row operation. This method was introduced in [3] and refined in the Patel-Markov-Hayes algorithm [16], which produces asymptotically optimal gate counts for (unconstrained) CNOT synthesis. Similar ideas involving the decomposition of symplectic matrices into basic generators have also been applied for synthesising Clifford circuits from stabiliser tableaux [2, 14, 7, 18].
Some quantum computers, such as some superconducting devices [17], have the additional limitation of restricted connectivity, meaning 2-qubit gates aren't allowed between arbitrary pairs of qubits. By restricting which row operations are possible when reducing a parity matrix to the identity, circuits that obey specific connectivity constraints can be synthesised from ones that don't. A variety of techniques
for introducing these constraints based on Steiner trees [15, 12, 19, 11] and integer programming [4] have been introduced in recent years.
These synthesis methods provide a way to optimise circuits that mitigates some of the current limitations of NISQ devices. However, they suffer from relying on the properties of a specific gate set. The conventional generalisation of these methods to arbitrary quantum circuits is to slice the circuit into pieces that can be synthesised, and treat each of them separately[10]. This isn't as straightforward as it might initially sound. Consider for example the circuit in Figure 1(a) and a CNOT circuit synthesis procedure. The synthesis procedure can't deal with the circuit as it currently is due to the gates U and V, so we need to slice the circuit into sections containing just CNOT gates and synthesise those. As some of the gates in a quantum circuit can be moved past each other, there isn't always a unique way to make a slice. Take the two possible slicing options in Figures 1(b) and 1(c) for example. Since the CNOT slices are treated independently during the synthesis procedure, which sides of the slice certain CNOTs end up on can affect the size of the new circuit by allowing/preventing simplifications or cancellations.
In this paper we present a new approach to generalising circuit synthesis that doesn't rely on slicing the circuit into pieces. Instead we remove the gates that can't be synthesised, leaving holes in the circuit and producing what is known as a quantum comb [5, 9]. This quantum comb can be understood as a new circuit with additional qubits, where these new qubits represent the old qubits at different points in time. We then explain how extending the functionality of the CNOT synthesis algorithm RowCol to work with quantum combs allows one to route general circuits. Our generalisation is then empirically tested against the naive slicing process on a variety of circuit sizes and hardware constraints, finding that our method performs increasingly better than the slicing one as circuit sizes increase. Finally, we outline how quantum combs could be used to generalise other circuit synthesis procedures, as well as ways in which our current algorithm could be further optimised.
This paper has the following structure. Section 2.1 explains how the parity matrix representation of a CNOT circuit works. Section 2.2 introduces the idea behind CNOT synthesis algorithms, with a focus on the circuit routing algorithm RowCol. In Section 2.3 we introduce quantum combs, as well as language specific to this paper that will be helpful in explaining our algorithm. Section 3 explains our extension to RowCol that allows working with quantum combs. Empirical results comparing our quantum combs method to the slicing process are presented in Section 4. Finally, Section 5 concludes the paper and outlines possible future works for using quantum combs to generalise other synthesis procedures.
Figure 1: Possible slicing procedures for a synthesis process that can’t deal with Hadamard gates
## 2 Preliminaries
### CNOT Circuit as a Parity Matrix
In this paper, we will use the phrases "CNOT circuit" or "CNOT comb" to refer to circuits or quantum combs entirely made of CNOTs. CNOT stands for "controlled not" and is a quantum gate that acts on 2 qubits: the control \(c\) and the target \(t\), \(\text{CNOT}(c,\,t)\). It acts in such a way that a NOT gate is applied to the target qubit only if the control qubit is in state \(|1\rangle\), so \(\text{CNOT}|0\rangle|0\rangle=|0\rangle|0\rangle\) and \(\text{CNOT}|1\rangle|0\rangle=|1\rangle|1\rangle\). When we refer to states in this section we mean computational basis states; the behaviour for general superpositions can be inferred from the linearity of quantum operations. An alternative way of thinking about CNOTs is as modulo 2 addition: the target qubit changes state to the sum of the control and target values modulo 2, \(\text{CNOT}|c\rangle|t\rangle=|c\rangle|c\oplus t\rangle\). This idea is represented as a circuit diagram in Figure 2(a). A CNOT gate can therefore be written as a list of which qubits are present in the parity equations of the output states: a representation of this is given by the matrix over \(\mathbb{F}_{2}\) in Figure 2(b). By reasoning about CNOT gates in this way we can write an entire CNOT circuit as a list of parity equations, Figure 3(a), then represent that circuit by an invertible element of \(\mathbb{F}_{2}^{n\times n}\) (i.e. a parity matrix), Figure 3(b). One way to construct this matrix is by traversing the circuit and applying a row operation for each CNOT: \(\text{CNOT}(c,\,t)\) corresponds to \(\text{R}(c,\,t)\), where \(\text{R}(c,\,t)\) means setting row \(t\) to the sum of rows \(c\) and \(t\) modulo 2. By identifying a set of row operations that reduce a parity matrix to the identity, a CNOT circuit can be generated: this is the core principle behind the CNOT circuit synthesis algorithms discussed in the next section.
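The construction just described is short enough to state directly; the sketch below (an illustration, not taken from any particular library) builds the parity matrix of a CNOT list by applying \(\text{R}(c,\,t)\) for each \(\text{CNOT}(c,\,t)\).

```python
import numpy as np

def parity_matrix(n_qubits, cnots):
    """Parity matrix over F2 of a CNOT circuit, given as a list of (control, target) pairs."""
    P = np.eye(n_qubits, dtype=np.uint8)
    for c, t in cnots:
        P[t, :] = (P[t, :] + P[c, :]) % 2   # row operation R(c, t)
    return P

# e.g. CNOT(0,1) followed by CNOT(1,2) on 3 qubits:
print(parity_matrix(3, [(0, 1), (1, 2)]))
# [[1 0 0]
#  [1 1 0]
#  [1 1 1]]
```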
### CNOT Circuit Synthesis Algorithms
Synthesis algorithms provide a means of reducing the size of CNOT circuits[16], or allow the resynthesis of circuits under topological constraints[12, 15, 11]. The ability to perform both of these tasks efficiently is vital for NISQ computing, as it provides a way to best utilise the machines we have available whilst minimising the consequences of their limitations. These CNOT synthesis methods work by converting the circuit into a parity matrix, as described in Section 2.1, then identifying which row operations convert
Figure 3: Parity representation of a CNOT circuit
Figure 2: Parity representations of a CNOT gate
the matrix back to the identity. This sequence of row operations corresponds to the generated CNOT circuit.
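As an unconstrained illustration of this idea (ignoring connectivity, so this is the plain Gauss-Jordan approach from the introduction rather than RowCol itself), the following sketch reduces an invertible parity matrix to the identity and records one row operation, and hence one CNOT, per step.

```python
import numpy as np

def synthesise_cnots(P):
    """Reduce an invertible parity matrix over F2 to the identity with elementary
    row operations; each recorded R(c, t) corresponds to one CNOT(c, t)."""
    P = P.copy() % 2
    n = P.shape[0]
    ops = []
    for col in range(n):
        if P[col, col] == 0:                      # place a 1 on the diagonal
            pivot = next(r for r in range(col + 1, n) if P[r, col] == 1)
            P[col, :] ^= P[pivot, :]
            ops.append((pivot, col))
        for row in range(n):                      # clear the rest of the column
            if row != col and P[row, col] == 1:
                P[row, :] ^= P[col, :]
                ops.append((col, row))
    assert np.array_equal(P, np.eye(n, dtype=P.dtype))
    return ops
```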
For some quantum computers, for example superconducting ones [17], CNOT gates may be restricted to nearest-neighbour qubits only. These computers are said to be topologically constrained, with graphs representing the allowed CNOTs called the topology, and the problem of converting a circuit to one that obeys these constraints known as quantum circuit routing. The systematic introduction of SWAPs into the circuit is one way to overcome this problem [6]. However, in this paper we will focus on the circuit synthesis approaches to finding a solution, specifically the ones that use parity matrices, as there are alternative approaches that synthesise from different representations [4]. By restricting the possible row operations when reducing a parity matrix to the identity, circuit synthesis methods can also be used to produce circuits that obey topological constraints. One of these synthesis algorithms is known as RowCol, and it will be the focus of the rest of this section.
Starting with a CNOT circuit \(\mathcal{C}\), and a graph \(G(V,E)\) representing connectivity of qubits, RowCol synthesises a new circuit as follows:
```
Input : A circuit \(\mathcal{C}\), and topology \(G(V,E)\)
Output : A circuit \(\mathcal{C}^{\prime}\), respecting the topological constraints
1. Generate a new empty circuit \(\mathcal{C}^{\prime}\) with the same number of qubits as \(\mathcal{C}\).
2. Compute the parity matrix \(P\) of \(\mathcal{C}\).
3. Pick a non-cutting vertex \(v\) of \(G\), and get its corresponding qubit \(q\).
4. Apply elementary row operations, restricted to the edges of \(G(V,E)\), to reduce row \(q\) and column \(q\) to a unit vector.
5. Remove vertex \(v\) from \(G\), and row and column \(q\) from \(P\).
6. Go back to Step 3 and repeat until \(G\) has no more vertices.
7. Return \(\mathcal{C}^{\prime}\)
```
**Algorithm 1**RowCol
Note the column of \(q\) is reduced to a unit vector by using row operations to place a 1 on the diagonal, then adding row \(q\) to the other rows to eliminate 1s above and below the diagonal. The row can be made into a unit vector by solving a system of linear equations for other rows to add back on to row \(q\).
In order to respect connectivity constraints, row operations might not be performed directly, but via some intermediate operations computed using Steiner trees. While this is an important aspect of RowCol and related algorithms, we can treat this process essentially as a "black box" for the purposes of our algorithm. We refer readers to the paper that introduced RowCol [19] or other Steiner-tree based algorithms [12, 15, 11] for details.
We are now going to step through the RowCol procedure for the matrix in Figure 3(b). We start with qubit 0, meaning we are going to need to eliminate the 0th column, then the 0th row. To eliminate the column we need to perform \(R(0,1)\), and to eliminate the row we need to perform \(R(3,0)\). This reduces the 0th column and row to unit vectors, so we can ignore them when eliminating the rest of the matrix.
\[\left(\begin{array}{cccc}1&0&0&1\\ 1&1&1&1\\ 0&0&1&1\\ 0&0&0&1\end{array}\right)\xrightarrow{R_{1}:=R_{0}+R_{1}}\left(\begin{array}{cccc}1&0&0&1\\ 0&1&1&0\\ 0&0&1&1\\ 0&0&0&1\end{array}\right)\xrightarrow{R_{0}:=R_{3}+R_{0}}\left(\begin{array}{cccc}1&0&0&0\\ 0&1&1&0\\ 0&0&1&1\\ 0&0&0&1\end{array}\right)\]
The column for the 1st qubit is already a unit vector, meaning we can move directly onto the row. To eliminate this row we need to perform \(R(2,1)\) and \(R(3,1)\): although a specific order is shown in the diagram, either order will work to eliminate the row. As both the column and row are now eliminated, we can ignore these in the subsequent operations.
\[\left(\begin{array}{cccc}1&0&0&0\\ 0&1&1&0\\ 0&0&1&1\\ 0&0&0&1\end{array}\right)\xrightarrow{R_{1}:=R_{2}+R_{1}}\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&1\\ 0&0&1&1\\ 0&0&0&1\end{array}\right)\xrightarrow{R_{1}:=R_{3}+R_{1}}\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&1&1\\ 0&0&0&1\end{array}\right)\]
The column is already a unit vector again here, meaning we move to the row, which can be eliminated by \(R(3,2)\). This reduces the matrix to the identity meaning the process is over.
\[\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&1&1\\ 0&0&0&1\end{array}\right)\xrightarrow{R_{2}:=R_{3}+R_{2}}\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1\end{array}\right)\]
The sequence of row operations that converted the matrix to the identity is: \(R(0,1)\), \(R(3,0)\), \(R(2,1)\), \(R(3,1)\), \(R(3,2)\). This tells us how to construct the resynthesised circuit, which is shown in Figure 4.
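As a quick sanity check (ours, not from the paper), applying these five row operations to the starting parity matrix above does indeed give the identity:

```python
# Verify that the row operations found by RowCol reduce the example parity
# matrix to the identity.
import numpy as np

P = np.array([[1, 0, 0, 1],
              [1, 1, 1, 1],
              [0, 0, 1, 1],
              [0, 0, 0, 1]], dtype=np.uint8)

for c, t in [(0, 1), (3, 0), (2, 1), (3, 1), (3, 2)]:   # R(c, t)
    P[t, :] = (P[t, :] + P[c, :]) % 2

assert np.array_equal(P, np.eye(4, dtype=np.uint8))
```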
### Quantum Combs
Here we will introduce the concept of a quantum comb, as well as the language and notation used in this paper to discuss specific aspects of them. A quantum comb is a generalisation of a quantum channel that can take other quantum channels as input, rather than states. These can be depicted graphically, as in Figure 5, as circuits which not only have open wires at the top and bottom, but certain "holes" in the middle, where other gates can be inserted. The term "comb" comes from the fact that the entire object should no longer be represented as a box, but as an irregular shape resembling a hair comb, where each of the "teeth" corresponds to a distinct time step. See e.g. [5] for the formal definition and many such pictures.
Figure 4: Circuit generated from the step-by-step RowCol procedure
Figure 5: General representation of a quantum comb as quantum circuit with holes
Rather than defining combs in general, we will focus on combs arising from quantum circuits where certain single-qubit gates have been removed. The restriction to single-qubit holes is not essential, but it suffices for our purposes and will make certain aspects of our algorithm simpler. These can be described as normal quantum circuits with some extra information about time ordering of qubits, subject to some constraints. To motivate the definition, we will look at an example circuit, consisting of CNOT gates and several single-qubit gates we wish to remove:
(1)
Note that we have labelled the inputs and outputs by the same indices \(\{0,1,2,3\}\). We call these labels for the qubits in our original circuit the _logical qubits_. If we remove the gates \(U,V,W,H\) from the circuit, we obtain something that looks like this:
(2)
We can break the data represented by the picture above into two parts. First, we can see this as just a normal quantum circuit \(\mathcal{C}\), but now acting on more qubits:
(3)
Notice how this circuit is over more qubits than just the original logical qubits. The indices above refer to a qubit at a particular point in time, and hence have a many-to-one relationship with the logical qubits. We call these the _temporal qubits_.
There is also a binary relation telling us which temporal qubits come directly before others, which we call the _holes_. We can represent the holes indicated in (2) as the set \(\mathcal{H}=\{(1,4),(2,6),(6,7),(4,5)\}\).
We can also define what it means to plug single-qubit gates into each of these holes. That is, for any set \(\mathcal{G}\) of single-qubit gates and a _plugging map_\(p:\mathcal{H}\rightarrow\mathcal{G}\), we can unambiguously reconstruct a circuit. For example, we can reconstruct our original circuit using the plugging map
\[p::\{(1,4)\mapsto V,(4,5)\mapsto H,(2,6)\mapsto U,(6,7)\mapsto W\}.\]
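As a concrete data representation (ours, not the paper's implementation), the comb above could be held as a hole set together with a plugging map, with gates as plain strings; the sketch below only illustrates the bookkeeping of checking that every hole receives a gate before recomposing the circuit.

```python
# Illustrative sketch of the comb data from the example above: the hole set and
# the plugging map used in the text (the CNOT gate list itself is omitted).
holes = {(1, 4), (2, 6), (6, 7), (4, 5)}
plugging = {(1, 4): "V", (4, 5): "H", (2, 6): "U", (6, 7): "W"}

def plug(holes, plugging):
    """Check the plugging covers every hole, then list the gate placements."""
    assert set(plugging) == set(holes)
    # Recomposition inserts the plugged gate between the two temporal qubits of
    # each hole, as formalised by the composition procedure below.
    return [(q_before, gate, q_after)
            for (q_before, q_after), gate in plugging.items()]

print(plug(holes, plugging))
```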
To define what it means to be a valid comb, we will formalise the process of plugging in gates, and require this to result in a well-defined circuit.
```
Input : A pair \((\mathcal{C},\mathcal{H})\) of a circuit \(\mathcal{C}\) on a set of qubits \(Q=\{0,\ldots,n-1\}\) and a binary relation \(\mathcal{H}\subseteq Q\times Q\), as well as a plugging function \(p:\mathcal{H}\rightarrow\mathcal{G}\) Output : A circuit \(\mathcal{C}^{\prime}\) or FAIL
```
**Algorithm 2**Comb composition
**Definition 2.1**.: A pair \((\mathcal{C},\mathcal{H})\) is called a _comb_ if Algorithm 2 succeeds for any plugging map \(p\).
Note that this Algorithm 2 can fail if the pair \((\mathcal{C},\mathcal{H})\) introduces cyclic dependencies between temporal qubits. However, when we obtain such pairs by "cutting out" the non-CNOT gates from a circuit, this algorithm will always succeed. We can formalise this "cutting out" process with the following procedure:
```
Input : A pair \((\mathcal{C},\mathcal{H})\) of a circuit \(\mathcal{C}\) on a set of qubits \(Q=\{0,\ldots,n-1\}\) and a binary relation \(\mathcal{H}\subseteq Q\times Q\), as well as a plugging function \(p:\mathcal{H}\rightarrow\mathcal{G}\) Output : A circuit \(\mathcal{C}^{\prime}\) or FAIL
```
**Algorithm 3** Comb decomposition
## 3 Algorithm: CombSynth
Our main algorithm proceeds by decomposing a circuit into a comb consisting of just CNOT gates and a plugging map for all the additional single-qubit gates. It then resynthesises the comb using a variation of the RowCol algorithm which preserves the comb structure, then plugs the single-qubit gates back in to give a fully routed circuit.
The RowCol algorithm works by eliminating qubits one by one. To apply this idea to quantum combs we need to know what it means to remove one temporal qubit at a time. As temporal qubits represent sections of the "lifetime" of the original logical qubits, not all of them can exist at the same time. This means that, although we may write the comb as one large circuit, it is not possible to perform CNOTs between all the temporal qubits at any given time. This idea needs to be carried forward into the synthesis by restricting row operations to only be between qubits in the same sections of time.
For a comb \((\mathcal{C},\mathcal{H})\), a temporal qubit \(q\) is called _available_ if it does not appear as the first part of a hole. That is, there exists no \(q^{\prime}\) such that \((q,q^{\prime})\in\mathcal{H}\). It is _extractible_ if its row and column can be made into a unit vector using the RowCol algorithm restricted only to row operations between available qubits.
Finally, to track topological constraints, which may be relevant for RowCol, we maintain a connection between logical and available temporal qubits. For each logical qubit \(q_{0}\), let \(t(q_{0})\) be its associated available temporal qubit. That is, let \(t(q_{0})=q_{0}\) if \(q_{0}\) does not appear in any holes, otherwise, let it be \(q_{k}\) for the maximal transitive chain of holes \((q_{0},q_{1}),(q_{1},q_{2}),\ldots,(q_{k-1},q_{k})\). Therefore, the mapping \(t\) is determined by a collection of holes, meaning as we update \(\mathcal{H}\) in the algorithm below we are also updating \(t\).
Our main algorithm, CombSynth, works as follows:
```
Input : A comb \((\mathcal{C},\mathcal{H})\) with temporal qubits \(Q=\{0,\ldots,n-1\}\), topology graph \(G(V,E)\)
Output : A comb \((\mathcal{C}^{\prime},\mathcal{H}^{\prime})\), respecting topological constraints for any plugging \(p\)
1. Create a new comb \((\mathcal{C}^{\prime},\mathcal{H}^{\prime})\) with \(\mathcal{C}^{\prime}\) initially empty and \(\mathcal{H}^{\prime}=\mathcal{H}\).
2. Compute the parity matrix \(P\) of \(\mathcal{C}\).
3. Identify an extractible temporal qubit \(e\) in comb \((\mathcal{C},\mathcal{H})\).
4. Produce a rectangular sub-matrix \(P^{\prime}\) with columns the same as \(P\) and rows labelled by \(t(q)\) for each logical qubit \(q\).
5. Run an iteration of RowCol on row \(t(e)\) and column \(e\) of \(P^{\prime}\), with topology \(G\), updating \(\mathcal{C}^{\prime}\).
6. Update the corresponding rows of \(P\) using \(P^{\prime}\).
7. Remove \(e\) from the qubits of \(\mathcal{C}\) and any hole of the form \((e^{\prime},e)\) for some \(e^{\prime}\) from \(\mathcal{H}\).
8. Repeat from Step 3 until no qubits remain in \(\mathcal{C}\).
9. Return \((\mathcal{C}^{\prime},\mathcal{H}^{\prime})\).
```
**Algorithm 4** CombSynth
Note that if \((\mathcal{C},\mathcal{H})\) arose from a circuit by cutting single-qubit gates out, as in Algorithm 3, there will always be at least one extractible temporal qubit. We simply need to choose one corresponding to
the latest gate that has been cut out of the circuit.
The core of the RowCol algorithm is used to reduce the row and column of the chosen temporal qubit to unit vectors. Applying this procedure iteratively, removing the temporal qubits in an allowed order, and updating the sub-matrix as you go, will reduce the overall parity matrix to the identity whilst ensuring that no row operations happen between qubits that exist at different points in time. Therefore we have a procedure for synthesising quantum combs from their parity matrices. This method eliminates rows and columns of a matrix in the same way as RowCol, and the rows of rectangular sub-matrix \(P^{\prime}\) always correspond to the same logical qubits, even though we are swapping the temporal qubits in and out. Hence, our algorithm can be used to route to constrained topologies in the same way as RowCol, but now it can be done for general unitaries by converting to a quantum comb. Comparisons of our generalisation of RowCol to a simple slicing procedure are presented in the next section.
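The bookkeeping behind Steps 3 to 5 can be made concrete with two small helpers. This is an illustrative sketch of ours (names are not the paper's), using the example hole set from Section 2.3 and assuming each temporal qubit has at most one outgoing hole.

```python
# Which temporal qubits are currently "available", and the map t from each
# logical qubit to its available temporal qubit, obtained by following the
# chain of holes.
def available(temporal_qubits, holes):
    blocked = {q for (q, _) in holes}          # q appears as the first part of a hole
    return [q for q in temporal_qubits if q not in blocked]

def t(logical_qubit, holes):
    q = logical_qubit
    successors = dict(holes)                   # (q, q') means q' directly follows q
    while q in successors:                     # follow the maximal chain of holes
        q = successors[q]
    return q

holes = {(1, 4), (2, 6), (6, 7), (4, 5)}
print(available(range(8), holes))              # temporal qubits with no pending hole
print([t(q0, holes) for q0 in range(4)])       # available temporal qubit per logical qubit
```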
To illustrate how CombSynth works, we'll work through an example, showing each of the steps explicitly, similar to what we did for RowCol. In doing so, we will look at the circuit in (1), which has a quantum comb shown in (3). We will apply this process without topological constraints as these aren't necessary for showing how our generalisation works. The quantum comb is made of 8 temporal qubits, however, our initial circuit is made of 4 logical ones. This means that only 4 of the 8 temporal qubits in our quantum comb can exist at any one time, and a \(4\times 8\) rectangular matrix will then be used for the elimination steps. The sub-matrix will initially take the form in Figure 6. Note that the rows are not in the same order as they were in the larger matrix: this is because they are placed in the position of their corresponding logical qubit. A table of the elimination steps is presented in Figure 7, which highlights the row and column that get reduced to unit vectors and lists the row operations needed to do this. The elimination order of the temporal qubits is \(5,7,6,3,4,0,1\).
To complete the CombSynth algorithm, the comb generated from the row operations in Figure 7 is filled with the gates removed from the original circuit. This circuit, shown in Figure 8, is smaller than the original, though only by one gate; this effect grows as the circuit size increases due to more CNOT cancellations.
## 4 Results
We have conducted a series of computational experiments to compare our generalisation of RowCol to a circuit slicing procedure. For the slicing we have used a naive algorithm that cuts the circuit where the gates are found, similar to that shown in Figure 1(b). 20 random circuits with CNOT counts of
Figure 6: Generation of sub-matrix from full parity matrix
Figure 7: Row operations on sub matrices that reduce the parity matrix to the identity
\(4,8,16,32,64,128,256,512\) and \(1024\) were generated, and a set of non-CNOT gates distributed uniformly throughout them. The number of non-CNOT gates was set to be proportional to the number of CNOTs, with the proportionality factor varying from \(5\%\) to \(50\%\). The percentages of non-CNOTs in the table and graphs below are therefore the proportion of non-CNOTs to CNOTs, not the proportion of non-CNOTs to the total number of gates in the circuit. A range of architectures, popular for conducting similar computational experiments, was then selected to route our circuits onto. These were the 9-qubit square grid, 16-qubit square grid, Rigetti 16-qubit Aspen, 16-qubit IBM QX5 and 20-qubit IBM Tokyo. For each set of experimental parameters we recorded the proportional change in CNOT gates (CNOT overhead) due to the synthesis algorithms.
A set of graphs with different parameters for the computational experiment is shown in Figure 9. These graphs are illustrative of the behaviour of all of the experiments: the overhead is high for small circuits with the slicing procedure sometimes being better, but as the circuit size increases both overheads approach a constant value, with the comb algorithm outperforming the slicing one. This is to be expected as using quantum combs allows for better cancellations than the slicing process, due to each slice being routed independently. The overheads for the largest circuit sizes, 1024 CNOTs, are shown in Table 1, giving a comparison of the asymptotic behaviour of each algorithm. It can be seen that larger proportions of non-CNOT gates reduce the advantage of using the comb algorithm: however it still outperforms the
Figure 8: Circuit generated from the CombSynth procedure in Section 3
Figure 9: Graphs showing the change in CNOT overhead when increasing the size of the circuit. This selection of graphs was chosen because it covers a wide range of the parameters of the computational experiment; however, all the graphs still broadly have the same behaviour. They start off with a larger overhead, fluctuate a bit for the small circuit sizes, then approach some constant value, with this value being smaller for the comb routing process than for the slicing one.
slicing method for all architectures and gate proportions tested.
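For reference, the overhead reported in Table 1 and Figure 9 is simply the proportional change in CNOT count introduced by routing; a minimal sketch of the bookkeeping (ours, with assumed function names) is:

```python
# CNOT overhead: proportional change in CNOT count caused by routing, averaged
# over the random trials for one parameter setting.
def cnot_overhead(original_cnots, routed_cnots):
    return 100.0 * (routed_cnots - original_cnots) / original_cnots

def mean_overhead(pairs):
    """pairs: iterable of (original, routed) CNOT counts, e.g. 20 random circuits."""
    pairs = list(pairs)
    return sum(cnot_overhead(o, r) for o, r in pairs) / len(pairs)
```

A negative value, such as the -43.1% entry for the 9-qubit square grid, means the routed circuit contains fewer CNOTs than the original.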
## 5 Conclusion and Future Work
We have proposed an alternative to slicing the circuit, using quantum combs, for generalising synthesis algorithms to arbitrary circuits. This idea was then concretely outlined for the case of CNOT synthesis by developing a generalisation of RowCol that works for quantum combs, and showing this allows the routing of arbitrary circuits. Finally, through a series of computational experiments, we demonstrated that for large circuits our quantum comb generalisation of RowCol outperforms the slicing procedure on a range of architectures and CNOT/non-CNOT proportions. Work has recently been done on improving the performance of RowCol by allowing the qubits to be permuted by the synthesis algorithm [11]; investigating whether the performance of CombSynth could be improved in a similar way would be an interesting research direction. Currently, comb synthesis is designed to work with a subroutine that only works with one qubit (and hence one row/column) at a time, but it would be worth adapting it to work with synthesis algorithms which operate on more than one row or column at once, like the Patel-Markov-Hayes algorithm. Along a similar vein, it appears that our algorithm could be generalised to deal with multi-qubit holes by introducing some extra requirements that certain sets of temporal qubits should be extracted simultaneously.
Throughout this paper, we compared our quantum comb approach to that of slicing for generalising synthesis algorithms; however, there have been some recent approaches that don't use slicing to perform circuit optimisation. These are lazy synthesis [13] and ZX circuit extraction [8]; our approach differs from these as we focused on the utility of the higher-order structure of quantum combs for quantum compilation, although a comparative analysis between our approach and these would be an interesting topic for further research. We focused on using quantum combs to generalise a CNOT synthesis algorithm in this paper: a natural next step would be to try and apply this idea to other synthesis algorithms such as the synthesis of Clifford circuits from stabiliser tableaux or CNOT+phase circuits from phase polynomials. Finally, quantum combs allow for routing of circuits with "black box" operations, where you may not know what operation will eventually be performed at the time of compilation. This may be useful for compilation in the context of parameterised circuits such as those used in variational algorithms, or for classical-quantum algorithms where you want to route around mid-circuit measurements.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{Architectures} & \multicolumn{2}{c|}{5\% Non-CNOT Gates} & \multicolumn{2}{c|}{15\% Non-CNOT Gates} & \multicolumn{2}{c|}{25\% Non-CNOT Gates} & \multicolumn{2}{c|}{50\% Non-CNOT Gates} \\ \cline{2-9} & Comb & Slice & Comb & Slice & Comb & Slice & Comb & Slice \\ \hline
9q-square & -43.1\% & 80.79\% & 34.12\% & 208.7\% & 91.93\% & 255.9\% & 182.1\% & 306.8\% \\ \hline
16q-square & 13.11\% & 344.5\% & 154.4\% & 511.1\% & 263.9\% & 564.6\% & 437.0\% & 606.9\% \\ \hline
rigetti\_16q\_aspen & 47.31\% & 555.4\% & 231.2\% & 893.1\% & 379.6\% & 1027\% & 614.9\% & 1119\% \\ \hline
ibm\_qx5 & 32.27\% & 461.9\% & 197.2\% & 698.8\% & 322.4\% & 783.6\% & 527.8\% & 837.7\% \\ \hline
ibm\_q20\_tokyo & 33.17\% & 393.1\% & 183.6\% & 481.0\% & 289.3\% & 500.2\% & 440.7\% & 498.2\% \\ \hline \end{tabular}
\end{table}
Table 1: CNOT overhead when routing to different architectures and proportions of non-CNOT gates. The values above are for the largest circuits tested: 1024 CNOTs |
2304.00121 | Decoding the End-to-end Writing Trajectory in Scholarly Manuscripts | Scholarly writing presents a complex space that generally follows a
methodical procedure to plan and produce both rationally sound and creative
compositions. Recent works involving large language models (LLM) demonstrate
considerable success in text generation and revision tasks; however, LLMs still
struggle to provide structural and creative feedback on the document level that
is crucial to academic writing. In this paper, we introduce a novel taxonomy
that categorizes scholarly writing behaviors according to intention, writer
actions, and the information types of the written data. We also provide
ManuScript, an original dataset annotated with a simplified version of our
taxonomy to show writer actions and the intentions behind them. Motivated by
cognitive writing theory, our taxonomy for scientific papers includes three
levels of categorization in order to trace the general writing flow and
identify the distinct writer activities embedded within each higher-level
process. ManuScript intends to provide a complete picture of the scholarly
writing process by capturing the linearity and non-linearity of writing
trajectory, such that writing assistants can provide stronger feedback and
suggestions on an end-to-end level. The collected writing trajectories are
viewed at https://minnesotanlp.github.io/REWARD_demo/ | Ryan Koo, Anna Martin, Linghe Wang, Dongyeop Kang | 2023-03-31T20:33:03Z | http://arxiv.org/abs/2304.00121v1 | # Decoding the End-to-end Writing Trajectory in Scholarly Manuscripts
###### Abstract.
Scholarly writing presents a complex space that generally follows a methodical procedure to plan and produce both rationally sound and creative compositions. Recent works involving large language models (LLM) demonstrate considerable success in text generation and revision tasks; however, LLMs still struggle to provide structural and creative feedback on the document level that is crucial to academic writing. In this paper, we introduce a novel taxonomy that categorizes scholarly writing behaviors according to intention, writer actions, and the information types of the written data. We also provide ManuScript, an original dataset annotated with a simplified version of our taxonomy to show writer actions and the intentions behind them. Motivated by cognitive writing theory, our taxonomy for scientific papers includes three levels of categorization in order to trace the general writing flow and identify the distinct writer activities embedded within each higher-level process. ManuScript intends to provide a complete picture of the scholarly writing process by capturing the linearity and non-linearity of writing trajectory, such that writing assistants can provide stronger feedback and suggestions on an end-to-end level. The collected writing trajectories are viewed at [https://minnesotanlp.github.io/REWARD_demo/](https://minnesotanlp.github.io/REWARD_demo/)1
Footnote 1: The public code for the data collection and Chrome extension is here: [https://github.com/minnesotanlp/reward-system](https://github.com/minnesotanlp/reward-system)
writing assistant, scholarly writing, dataset
Applying this taxonomy to a dataset of academic writing samples will give us insight into the academic writing process and provide us with data to support the generation of suggestions that align with the writer's current activity and intention. In the future, we plan to extend this work by scaling the data collection process over a longer period of time to develop a more nuanced taxonomy of writers' actions and intentions.
## 2. Manuscript: A Dataset of the End-to-End Writing Process
Analyzing a final manuscript alone is intractable for capturing an author's original intentions. We have developed a taxonomy of scholarly writing trajectories illustrated by Figure 2 that can characterize the finer-grained actions an author takes into distinct categories but is also general enough to fully capture the author's trajectory throughout the entire writing process. The highest level of our taxonomy describes the intention informing the writer's actions, and is based on the three main processes described by Flower and Hayes (Flower and Hayes, 2017). The middle layer describes the various writing actions that might take place to carry out the writer's intention. Each intention is associated with its own set of actions. For example, while the author is revising their work, they may be making substantive, formal, or stylistic revisions. The lowest level describes the linguistic or LaTeX unit that they are currently operating on. For example, if the writer is drafting and moving around paragraph topic sentences within a new section of their paper, their spans of keystrokes would alternate between Planning\(\rightarrow\)Generation\(\rightarrow\)Section and Planning\(\rightarrow\)Organization\(\rightarrow\)Section because they are working at the section level and switching between generating new ideas and organizing them.
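As a sketch of how such labels might be stored (our illustration, not the released annotation tool), a single span of keystrokes could carry one label from each of the three levels; the action and unit vocabularies below are a partial, assumed subset of the taxonomy:

```python
# Illustrative representation of one labelled span of the writing trajectory.
from dataclasses import dataclass

INTENTIONS = {"Planning", "Implementation", "Revision"}            # top level
ACTIONS = {"Generation", "Organization", "Lexical chaining",
           "Substantive", "Formal", "Stylistic"}                    # middle level (subset)
UNITS = {"Word", "Sentence", "Paragraph", "Section", "Document"}    # lowest level (subset)

@dataclass
class LabelledSpan:
    start_step: int          # index of the first keystroke in the span
    end_step: int            # index of the last keystroke in the span
    intention: str
    action: str
    unit: str

span = LabelledSpan(120, 164, "Planning", "Organization", "Section")
```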
_Data Collection._ We developed a Chrome extension that reverse-engineers Overleaf's editing history, using user keystrokes to track writing actions in real time (see details in Appendix A). From this, we can generate a playback that shows the chronological progression for each completed writing session. Our initial study involved four participants in a pilot study where they were prompted to describe their current or future research plans by responding to the available prompts or in free form over a thirty-minute writing session.
In total, we collected four writing trajectories, including 46 discontinuous edits with 3290 recorded actions. The detailed statistics are in Appendix C.
_Annotation Schema._ Due to the limited scope of our pilot study, we applied a reduced annotation schema containing two levels of granularity (Table 1). The higher level includes Planning, Implementation, and Revision. These labels are used to denote the general process that the writer is working in. The lower level categorizations include {idea generation, concept organization}, {lexical_chaining}, and {syntactic, lexical, structural} for each of the three processes respectively. Presently, the category of Implementation is limited in that the only sub-category is lexical_chaining. We hope to learn more about the Implementation process during our next study.
## 3. Annotation Results
Three of the authors annotated the samples that were gathered (See Figure 1 for an example). One author annotated sample 1 in the course of developing the annotation guidelines. Figure 3 illustrates the first participant's writing trajectory. Each of the other three samples was annotated by two different authors such that each author annotated two samples, and no two samples had the same pair of annotators. The inter-annotator agreement score (mean F1) across the three samples is 65.26. For all scores, see Appendix B.
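A pairwise agreement score consistent with the mean F1 reported here can be computed by treating one annotator's label sequence as the reference and the other's as the prediction. This is an assumed formulation using scikit-learn; the paper's exact averaging may differ.

```python
# Hedged sketch of pairwise inter-annotator agreement over aligned writing steps.
from sklearn.metrics import f1_score

annotator_a = ["planning", "planning", "implementation", "revision"]
annotator_b = ["planning", "implementation", "implementation", "revision"]

agreement = 100 * f1_score(annotator_a, annotator_b, average="macro")
print(f"pairwise agreement (macro F1): {agreement:.2f}")
```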
\begin{table}
\begin{tabular}{p{42.7pt} p{284.5pt}} \hline \hline
_generation, organization_ & The process of adding ideas to the document. \\ \hline \hline
**Implementation** & The writer’s intention is to produce high-quality and persuasive text that meets their writing goals. \\ \hline
_lexical chaining_ & Writing coherent text where sentences are linked by the semantic relationships between words (Flower and Hayes, 2017). \\ \hline \hline
**Revision** & The writer’s intention is to improve the clarity, consistency, coherence, and style of the written text. \\ \hline
_syntactic, lexical_ & Fixing grammar, spelling, and punctuation. \\ \hline \hline
\end{tabular}
\end{table}
Table 1. Simplified annotation schema applied to our dataset
Figure 3. Annotated writing trajectory of one participant. The x-axis shows the writing steps chronologically. The horizontal bands show the three high-level processes of Planning, Implementation, and Revision.
Figure 2. Hierarchical Taxonomy of Writing Actions
## 4. Future Work
_Extended schema._ The simplified annotation schema we applied to our data is limited in its ability to capture the expressiveness and nuance of scholarly communication. To this end, we are continuing to refine the hierarchical taxonomy of scholarly writing (see Figure 2). For example, while revising their work, a writer might replace a word with another to improve clarity; this would be classified as Revision\(\rightarrow\)Substantive\(\rightarrow\)Lexical.
_Larger data collection._ To validate our taxonomy and gain deeper insight into the scholarly writing process, we will need to collect more writing data over a longer period of time. The current study design is too short (30 minutes), and the prompt is too limiting to gather a comprehensive representation of scholarly writing behaviors. Our future study will be conducted over a period of months and will observe the writing actions of researchers as they write their actual academic works in order to elicit data that accurately represents the scholarly writing process.
_With multiple authors._ Often within the writing process for scholarly papers, multiple authors will work on a manuscript simultaneously. For example, the input of other authors, comments, or suggestions may influence an author's writing trajectory compared to their usual writing habits in an individual setting. Therefore, tracking how the writing trajectory differs between the individual writing space and the collaborative one poses an interesting task to explore.
_Multiple academic disciplines._ The authors of this work have a bias towards writing conventions in computer science research. While we developed our taxonomy to be general enough to be applied to various academic disciplines, there may be nuances in the writing requirements for other disciplines that we are unfamiliar with. Further study is required to ascertain appropriate modifications to our schema for different disciplines. In particular, we believe the writer actions that belong to the Implementation phase might need to be expanded for other disciplines, and additional information units added to the Media/Materials level.
_Writing Assistants._ ManuScript intends to decode the writing process in academic writing by capturing writer actions in an end-to-end manner such that writing assistants can provide more useful feedback at each phase of the process. Through taxonomizing writer actions at each point, the dataset can provide a good representation of the trajectory that authors tend to take within their writing and their intentions, which may give current writing assistants a clearer understanding in predicting the next steps that the writer envisions.
|
2309.08595 | Demodulation demonstration using the LightCube CubeSat | LightCube is a 1U educational CubeSat which had the goal of connecting the
public with space by producing a flash visible to the naked eye on command by a
public user. The spacecraft could be triggered via HAM radio communications by
those with an amateur license. LightCube is commanded with a DTMF sequence, and
reports telemetry using RTTY, an AFSK modulation scheme and is decoded with a
custom GNURadio-companion flowgraph. Several radio applications were written,
including a from-scratch decoder written for educational purposes and one
optimized to be compatible with the SatNOGS environment. Lightcube deployed
from the International Space Station on April 24th 2023 and operated for 24
hours before suffering a battery failure. During this time it was tracked by
many amateurs around the world with observations reported to the SatNOGs
database. Audio observations of the beacons were subsequently decoded by the
student team and by amateurs. Having received many observations from around the
world, the team has been able to reconstruct the sequence of events leading to
loss of communications. | Lindsay M. Berkhout, Christopher McCormick, Daniel C. Jacobs, Jaime Sanchez de la Vega | 2023-09-15T17:51:10Z | http://arxiv.org/abs/2309.08595v1 | # Demodulation demonstration using the LightCube CubeSat
###### Abstract
LightCube is a 1U educational CubeSat which had the goal of connecting the public with space by producing a flash visible to the naked eye on command by a public user. The spacecraft could be triggered via HAM radio communications by those with an amateur license. LightCube is commanded with a DTMF sequence, and reports telemetry using RTTY, an AFSK modulation scheme, which is decoded with a custom GNURadio-companion flowgraph. Several radio applications were written, including a from-scratch decoder written for educational purposes and one optimized to be compatible with the SatNOGS environment. LightCube deployed from the International Space Station on April 24th 2023 and operated for 24 hours before suffering a battery failure. During this time it was tracked by many amateurs around the world with observations reported to the SatNOGS database. Audio observations of the beacons were subsequently decoded by the student team and by amateurs. Having received many observations from around the world, the team has been able to reconstruct the sequence of events leading to loss of communications.
Lindsay M. Berkhout [email protected]
School of Earth and Space Exploration
Arizona State University, Tempe, AZ, USA
Jaime Sanchez de la Vega
Ira A. Fulton Schools of Engineering
Arizona State University, Tempe, AZ, USA
## 1 Introduction
The LightCube 1U CubeSat was designed to connect the public with space operations by offering direct interaction with a Low Earth Orbit (LEO) satellite. Additionally, the design and operation of the satellite was largely a student-based effort. Projects like NASA's CubeSat Launch initiative (CSLI)1 provide opportunities for educational institutions to fly small satellites, lowering barriers to student involvement in space-based missions.
Figure 1: A benchtop photo of the flight-ready LightCube satellite.
The payload consists of a bulb that produces a naked-eye visible flash on user command. In order to command the spacecraft, the only requirement is an amateur HAM radio license. Commands are sent and received through a deployable UHF antenna. The flight-ready CubeSat is shown in Figure 1, and a block diagram of the satellite is included in Figure 2.
During its life the satellite was operated almost exclusively through the SatNOGS amateur satellite network (Nicolas, 2021) and decoded using GNUradio. This paper provides technical detail on flight and ground side radio systems.
Section 2 describes the communications protocols for uplink commands and downlink telemetry. Section 3 describes the groundstation setups used to receive telemetry and the GNURadio-based decoding. Section 4 describes the parsed telemetry received over the approximately 24 hours of operation.
## 2 Spacecraft Communication
The spacecraft transponder operates in the amateur Ultra High Frequency (UHF) band, using a deployable UHF antenna to transmit and receive from groundstations. The antenna is a circularly polarized dual dipole configuration.
The spacecraft accepts commands through a Dual Tone Multi-Frequency (DTMF) sequence transmitted by a ground user. Downlink modes include Radioteletype (RTTY) with Audio Frequency Shift Keying (AFSK) for telemetry and DTMF tone sequences for command confirmation. The transceiver operates at 437.175 MHz in the amateur band under coordination by the International Amateur Radio Union (IARU) with call sign KJ7TZG. The radio itself is a DRA818U-UHF Band Voice Transceiver Module, and a CMX865A modem provides both the DTMF and RTTY (AFSK) encoding and decoding of the output audio.
A summary of the communications protocol information is available in Table 1.
### Uplink Commands
The spacecraft payload is commanded using a sequence of DTMF codes. The Xenon flash tubes have an 8 \(\mu s\) flash length, and flashes are limited to every 30 seconds to allow the payload to reset. The current sequences are unpublished, as the team was unable to test the payload in-situ before communications failure. The payload flash can be enabled or disabled by the communications team to allow amateur users to signal the spacecraft. Should a flash command be issued, it will be logged and publicly available, enabling a historical recording of commands. Table 2 displays the frequency mapping for DTMF tones.
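As an illustration of the encoding in Table 2, a ground-side tool could synthesise the audio for one DTMF symbol as the sum of its row and column tones. This sketch is ours, not the team's uplink software, and it uses an arbitrary symbol since the actual command sequences are unpublished; the duration and sample rate are assumptions.

```python
# Generate the two-tone audio for a single DTMF symbol from Table 2.
import numpy as np

ROW = {"1": 697, "2": 697, "3": 697, "A": 697,
       "4": 770, "5": 770, "6": 770, "B": 770,
       "7": 852, "8": 852, "9": 852, "C": 852,
       "*": 941, "0": 941, "#": 941, "D": 941}
COL = {"1": 1209, "4": 1209, "7": 1209, "*": 1209,
       "2": 1336, "5": 1336, "8": 1336, "0": 1336,
       "3": 1477, "6": 1477, "9": 1477, "#": 1477,
       "A": 1633, "B": 1633, "C": 1633, "D": 1633}

def dtmf_tone(symbol, duration=0.2, rate=48_000):
    t = np.arange(int(duration * rate)) / rate
    return 0.5 * (np.sin(2 * np.pi * ROW[symbol] * t) +
                  np.sin(2 * np.pi * COL[symbol] * t))

audio = dtmf_tone("5")   # 770 Hz + 1336 Hz, matching Table 2
```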
### Downlink Telemetry
A single downlink transmission can be one of three types: a full telemetry packet, an orientation packet, and a 'heartbeat' packet. During operation, full telemetry was sent every one minute, and the heartbeat and orientation packets were sent every 30 seconds. Telemetry is encoded with RTTY (AFSK) at 300 Baud. Full packet definitions are available online2, and include satellite health information such as solar panel metrics and flash bulb status.
Footnote 2: [https://interplanetarylab.github.io/missions/Lightcube](https://interplanetarylab.github.io/missions/Lightcube)
## 3 Groundstation Communications
### SatNOGS
Support from the Libre Space Foundation Satellite Networked Open Ground Stations (SatNOGS) project was critical to operations. SatNOGS provides a network of ama
\begin{table}
\begin{tabular}{|c|c|c|} \hline & Scheme & Carrier \\ \hline Downlink & RTTY(AFSK) & 437.175 MHz \\ Uplink & DTMF & 437.175 MHz \\ \hline \end{tabular}
\end{table}
Table 1: Scheme, protocol, and carrier information for Lightcube.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Tones & 1209 Hz & 1336 Hz & 1477 Hz & 1633 Hz \\ \hline
697 Hz & 1 & 2 & 3 & A \\
770 Hz & 4 & 5 & 6 & B \\
852 Hz & 7 & 8 & 9 & C \\
941 Hz & * & 0 & \# & D \\ \hline \end{tabular}
\end{table}
Table 2: Tone encoding used for Dual Tone Multi-Frequency (DTMF) signaling
Figure 2: Block diagram of the satellite. The payload consists of flash bulbs that are naked-eye visible from the ground. Communications between ground station and satellite use UHF frequencies over a deployable antenna.
teur groundstations around the world. SatNOGS provides a guide for amateur radio operators to set up receiver stations. These stations are based on software defined radios which connect through a standardized program back to a central server. On this server remote users can schedule observations of satellites and inspect the results.
SatNOGS supports many amateur satellites with observations and decoding. Satellite teams can integrate their instrument into the SatNOGS architecture by adding relevant information (modulation scheme, carrier frequency, etc.) to the database of supported satellites and providing a GNURadio-based decoder to parse the received packets.
Figure 3 shows the locations of all registered SatNOGS stations. Stations that received successfully demodulated LightCube beacons are highlighted.
Figure 4 shows an example high-quality LightCube observation from a SatNOGS station. SatNOGS offers an FM demodulated audio file as a station product, as well as a waterfall plot showing the FM demodulated signal.
The SatNOGS network also offers direct demodulation of received signals for satellites that provide accompanying GNURadio flowgraphs. The LightCube demodulator itself is not currently integrated into the network, and telemetry outputs are processed by hand from the provided audio files.
### ASU Groundstation
The local ASU Groundstation is connected to the SatNOGS network, and additionally uses GQRX3 for some applications. The GPredict software4 is used for tracking and Doppler correction.
Footnote 3: [https://gqrx.dk/](https://gqrx.dk/)
Footnote 4: [http://gpredict.oz9ec.net/](http://gpredict.oz9ec.net/)
The hardware consists of two 70cm Yagi Antennas, with a Yaesu G5500 Rotator and Kuhne LNA. A few different radios are available, including software defined radios such as the RTL-SDR and HackRF-One, and a traditional ICOM-9100 radio.
### Decoding with GNURadio
Decoding of LightCube beacons is enabled by a GNURadio-companion flowgraph. An annotated version of the flowgraph is included in Figure 5. The version in this figure expects a raw voltage stream from a file, but other versions are available to allow for decodes from live hardware streaming or SatNOGS audio files.
Received data must first be FM demodulated to extract the audio signal from the FM modulated carrier at 437.175
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Bell 103 FSK Mode & Min & Typ. & Max & Units \\ \hline Baud rate & & 300 & & \\ Mark (logical 1) & 2222 & 2225 & 2228 & Hz \\ Space (logical 0) & 2022 & 2025 & 2028 & Hz \\ \hline \end{tabular}
\end{table}
Table 3: Baud rate and mark/space frequency information for the Bell 103 protocol used for downlink telemetry
Figure 3: Locations of all SatNOGS registered stations, indicated with black ’x’ markers. Square red markers show stations with received LightCube packets.
MHz. This step is performed within the flowgraph in the case of a hardware input or voltage stream file by the NBFM block, which then outputs the baseband audio data. However, the SatNOGS network outputs FM demodulated audio data already, so this step is bypassed for the SatNOGS flowgraph version.
A throttle block can be enabled or bypassed depending on the input source. Hardware inputs do not need the throttle, while it must be included when using a file input as issues with sample rates will occur otherwise.
Following the throttle block, soft symbols are extracted from the audio data. This step also offers options depending on user-choice. While the stream is now demodulated from the carrier wave, the audio signal itself must be translated to baseband and low pass filtered. Then the frequency shift keyed signal is quadrature demodulated and clock recovery is performed. These steps are performed using individual GNURadio blocks in one version of the flowgraph for educational purposes, as the user can follow the demodulation process step-by-step. For a simplified flowgraph, there is an option to use a pre-built demodulator block.
The annotated version included in Figure 5 shows the use of this pre-built AFSK demodulator block from the gr-satellites package5 to perform the soft symbol extraction steps. Gr-satellites is an out-of-tree GNURadio module providing support blocks for satellite communications. The project supports many amateur satellites and modulation schemes.
Footnote 5: [https://github.com/daniestevez/gr-satellites](https://github.com/daniestevez/gr-satellites)
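As a from-scratch alternative to the flowgraph blocks, in the spirit of the educational decoder mentioned in the abstract, the Bell-103-style mark and space tones of Table 3 can also be detected offline in Python. This is a hedged sketch of ours, not the project's code: the audio array and sample rate are assumed inputs (for example a SatNOGS recording loaded with a sound-file library), and the fixed symbol clock is a deliberate simplification of proper clock recovery.

```python
# Non-coherent FSK soft-symbol extraction for 300-baud audio with mark = 2225 Hz
# and space = 2025 Hz (Table 3): compare the energy near each tone over one
# symbol period, then sample at naive fixed symbol centres.
import numpy as np

def afsk_soft_symbols(audio, rate, baud=300, mark=2225.0, space=2025.0):
    n = int(round(rate / baud))                  # samples per symbol
    t = np.arange(len(audio)) / rate
    # mix the audio against each tone and integrate over one symbol period
    mark_mix = audio * np.exp(-2j * np.pi * mark * t)
    space_mix = audio * np.exp(-2j * np.pi * space * t)
    kernel = np.ones(n)
    mark_energy = np.abs(np.convolve(mark_mix, kernel, mode="same"))
    space_energy = np.abs(np.convolve(space_mix, kernel, mode="same"))
    soft = mark_energy - space_energy            # > 0 leans "mark" (logical 1)
    centres = np.arange(n // 2, len(audio), n)   # naive clock: fixed symbol centres
    return soft[centres]

# hard decisions, analogous to the binary slicer block in the flowgraph:
# bits = (afsk_soft_symbols(audio, rate) > 0).astype(int)
```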
The soft symbols are converted to hard symbols by a binary slicer, which is fed to the 'sync and create packed PDU' block provided by the gr-satellites package. This block searches for a defined sequence to denote the start of a packet. For LightCube, the sync searches for the amateur radio call-sign included in the packet header. The PDU is converted back to a tagged stream to perform some data manipulation. The modem produces two 'idle' bits, as well as a stop and parity bit. To facilitate easy parsing, these bits are removed and the bytes reversed to reflect the modem ordering.
The bytes now represent a complete and parse-able telemetry packet. The stream is dumped to debug and hexdump blocks within GNURadio-companion for a quick view, and simultaneously written to a file and sent over UDP to the local user terminal. The output packet is parsed to assign specific telemetry parameters to their corresponding bytes.
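The final parsing step can be pictured as fixed-offset unpacking of the packed bytes. The layout below is purely hypothetical: the real field definitions are in the packet documentation linked earlier, so the names, offsets, and scalings here are placeholders only.

```python
# Hypothetical sketch of assigning telemetry parameters to their bytes.
import struct

def parse_packet(packet: bytes) -> dict:
    callsign = packet[:6].decode("ascii", errors="replace")    # call-sign header
    battery_temp_raw, bus_voltage_raw = struct.unpack_from("<hH", packet, 6)
    return {
        "callsign": callsign,
        "battery_temp_c": battery_temp_raw / 100.0,            # placeholder scaling
        "bus_voltage_v": bus_voltage_raw / 1000.0,             # placeholder scaling
    }
```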
## 4 Received Telemetry
The SatNOGS network received the first confirmed beacon from LightCube at 13:31:26 UTC on April 24th, 2023. Beacons were consistently recorded until 08:55:22 UTC on April 25th, 2023. The received beacons appear to cover around 9 orbits. Using decoded packets from SatNOGS and the ASU ground station, the communications team was able to reconstruct the likely series of events leading to loss of communication.
Reported values for the satellite battery temperature sug
Figure 4: Observation by Fredy Damkalis on station owned by Mike Rupprecht. The SatNOGS network demodulates the FM carrier and outputs an audio file as well as a figure showing the baseband signal, included here. For LightCube, this audio output needs to be further demodulated for the AFSK scheme.
gested unexpected variability. The variability was tracked to a software mistake which left the battery heater turned off in a pre-launch testing configuration.
This left battery temperature at the mercy of natural solar heating, and the temperature therefore swings with sun/shade exposure. Often, the batteries fell below the recommended operating temperature of 0\({}^{\circ}\)C. Battery temperature as a function of local Solar angle, shown in Figure 6 gives some idea of how solar input might have driven battery heating. Beacons are color/shape coded by an 'in-sun' or 'in-shade' determination from the Skyfield (Rhodes, 2019) package. Trends largely suggest that the battery temperature experienced peaks and troughs correlated with Solar irradiance.
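The 'in-sun'/'in-shade' flag and the solar altitude used for Figure 6 can be computed with Skyfield roughly as follows. This sketch is ours rather than the team's analysis script; the TLE lines are omitted, and the exact method used for the figure may differ.

```python
# Sunlight state and Sun altitude at the sub-satellite point for a beacon time.
from skyfield.api import load, EarthSatellite, wgs84

TLE_LINE1 = "..."   # LightCube TLE line 1 (supply the real element set here)
TLE_LINE2 = "..."   # LightCube TLE line 2

ts = load.timescale()
eph = load("de421.bsp")                        # solar system ephemeris
sat = EarthSatellite(TLE_LINE1, TLE_LINE2, "LIGHTCUBE", ts)

def sun_state(t):
    sunlit = sat.at(t).is_sunlit(eph)          # True -> 'in-sun', False -> 'in-shade'
    subpoint = wgs84.subpoint(sat.at(t))
    alt, _, _ = (eph["earth"] + subpoint).at(t).observe(eph["sun"]).apparent().altaz()
    return bool(sunlit), alt.degrees

t = ts.utc(2023, 4, 24, 13, 31, 26)            # first confirmed beacon time
print(sun_state(t))
```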
Based on the temperatures reported well below the recommended battery operating minimum of 0\({}^{\circ}\)C, it is not likely we will receive more beacons from the satellite. However, a potential remains for communication to be restored during times when the satellite has been warmed by the sun for extended periods as the orbit precesses and solar angles increase.
## 5 Conclusion
Using the large coverage and reliable stations of the SatNOGS network and GNURadio based software tools,
Figure 5: Annotated GNURadio Companion flowgraph for demodulating LightCube telemetry packets. This version assumes a voltage stream file, but other versions of the flow graph allow for live hardware decoding or post-hoc input of a SatNOGS audio file. To parse (assign bytes to telemetry values), a Python program captured packets from the terminal or output file and printed the values.
Figure 6: Solar altitude at satellite location vs. battery temperature reported from satellite telemetry. The data covers telemetry from 9 orbits. Purple circles show data where the satellite was determined to be ‘in-shade’ by the Skyfield (Rhodes, 2019) package, and orange squares show data where it was determined to be ‘in-sun’. Battery temperature trends appear to largely follow Solar altitude and ‘in-sun’ determinations.
telemetry was obtained and demodulated from the LightCube CubeSat during a very short window when the satellite operated. Battery temperature data from the telemetry suggests that the hardware has fallen below allowed temperature ranges, and the CubeSat ceased reporting telemetry shortly under 24 hours after launch. Some potential exists for beacons to resume due to natural heating from high Solar intensity orbits, but based on the long lapse in communications it is not likely.
Much was learned from the process of launching the satellite, and lessons learned will be applied to future educational satellite projects. LightCube was developed and operated largely by students, offering early involvement in real space projects.
## Software
This work was enabled by a number of software packages, including the geopandas package6 for station plotting, the Astropy (Astropy Collaboration, 2022) and Skyfield packages for satellite orbit information, and the GNURadio project. We would also like to acknowledge the use of the out-of-tree GNURadio-companion modules provided by SatNOGS and gr-satellites.
Footnote 6: [https://github.com/geopandas/geopandas](https://github.com/geopandas/geopandas)
The GNURadio based decoders for this project are available at [https://github.com/ASU-cubesat/lightcube_telemetry](https://github.com/ASU-cubesat/lightcube_telemetry).
## Acknowledgements
LightCube is supported by funding from the Interplanetary Lab at Arizona State University (ASU), Vega Space Systems, and ASU Alumni.
We thank the amateur community involved in the SatNOGS project for enabling high duty cycle observations of the LightCube satellite.
LMB acknowledges that this material is based upon work supported by a National Science Foundation Graduate Research Fellowship under Grant No. 2233001.
|
2304.00135 | Optics design and correction challenges for the high energy booster of
FCC-ee | One of the major upcoming challenges in particle physics is achieving precise
measurements of the Z, W, and H bosons, as well as the top quark. To meet these
targets, the next e\textsuperscript{+}e\textsuperscript{-} collider complex,
FCC-ee, will need to achieve unprecedented luminosities. The FCC-IS European
Study is investigating the feasibility of these challenges, with a cornerstone
of the study being the design and optimization of the high-energy booster
(HEB). This paper provides an update on the status of the HEB of FCC-ee in
light of recent developments in the injector and collider survey, as well as an
overview of ongoing work on longitudinal stability and design robustness in
relation to field, alignment, and diagnostics errors. Constraints and effects
related to the design frequency of the accelerating cavities, as well as
collective effects, are also highlighted. Lastly, the paper presents an
investigation into an alternative arcs cell design. | Antoine Chance, Barbara Dalena, Tatiana Da Silva, Ahmad Mashal, Mauro Migliorati, Adnan Ghribi, Ali Rajabi, Frank Zimmermann | 2023-03-31T21:20:07Z | http://arxiv.org/abs/2304.00135v2 | # Optics design and correction challenges for the high energy booster of FCC-ee
###### Abstract
One of the major upcoming challenges in particle physics is achieving precise measurements of the Z, W, and H bosons, as well as the top quark. To meet these targets, the next e\({}^{+}\)e\({}^{-}\) collider complex, FCC-ee, will need to achieve unprecedented luminosities. The FCC-IS European Study is investigating the feasibility of these challenges, with a cornerstone of the study being the design and optimization of the high-energy booster (HEB). This paper provides an update on the status of the HEB of FCC-ee in light of recent developments in the injector and collider survey, as well as an overview of ongoing work on longitudinal stability and design robustness in relation to field, alignment, and diagnostics errors. Constraints and effects related to the design frequency of the accelerating cavities, as well as collective effects, are also highlighted. Lastly, the paper presents an investigation into an alternative arcs cell design.
## 1 Introduction
FCC-ee is a double-ring lepton collider and the first operational stage of the integrated long-term FCC project. It is expected to serve as an electron-positron Higgs and electroweak factory, achieving the highest luminosities within 15 years. There are four operational modes defined for FCC-ee, which are referred to as \(Z\), \(W\), \(H\), and \(t\bar{t}\). The beam properties, including energy, current, and emittance, vary for each mode. However, the short beam lifetime and the requirements for top-up injection are common features among them.
The High Energy Booster (HEB) is the final part of the injector complex, where particles with an energy of 20 GeV are injected into the HEB ring. The main criteria for the HEB lattice and its ramping cycle design are accelerating the particles up to the collision energy, adjusting beam properties for efficient top-up injection, and meeting filling time considerations.
Previous studies Benedikt et al. (2018) have shown that a single design cannot satisfy the mentioned requirements at all energies. Therefore, two distinct lattices have been designed, one for the \(Z\) and \(W\) modes and another for the \(H\) and \(t\bar{t}\) modes. Preliminary results confirmed the lattices' performance in the ideal case. However, in the realistic case, the HEB performance is affected by inevitable errors such as magnetic field imperfections and misalignment, instabilities arising from collective effects, and the behavior of particles during the ramping process. Hence, it is necessary to ensure that these detrimental effects can be corrected before finalizing the design.
The effects of magnetic multipole errors on the lattice momentum acceptance and a first analytical estimation of emittance growth in the presence of intra-beam scattering have been previously reported in Dalena et al. (2022). This paper is a status report on the aforementioned developments for the HEB design. The second section reports on the study of longitudinal phase space stability, while the third section details closed orbit correction strategies in the presence of misalignment errors. The fourth section shows the beginning of a joint effort for precise collective effects studies. The fifth section provides an update on the change in the RF cavity frequency with respect to the Conceptual Design Report and the corresponding RF voltage budget. Finally, an alternative optics design under study is presented in the sixth section, before the conclusion.
## 2 Optics stability
One of the ongoing tasks is to achieve a sufficiently large Dynamic Aperture (DA) and Momentum Aperture (MA) for the \(90^{\circ}\) phase advance lattice designed for the \(H\) and \(t\bar{t}\) modes. By using a non-interleaved sextupole arrangement with 32 pairs of focusing sextupoles and 33 pairs of defocusing sextupoles in each arc, a considerably large dynamic aperture (in the absence of radiation and RF cavities) is obtained. Increasing the number of sextupole families to four and optimizing the resonance driving terms enhances the DA, especially for off-momentum particles. The size of both the DA and MA is reduced when a 6D tracking procedure including radiation effects and energy compensation at the RF cavities is performed. As a first step in resolving this issue, we investigate the particles' stability in the longitudinal phase space. The kilometer-scale bending radius of the dipole magnets, as well as the low value of the dispersion function along them, makes the lattice's momentum compaction factor very small. In this case, even a slight perturbation of the momentum compaction could lead to instability in the longitudinal phase space. The variation of the path length with momentum at higher order can be used to compute the perturbation terms Wiedemann (2015), as Eq. (1) shows:
\[\frac{\Delta L}{L_{0}}=\alpha_{c}\delta+\alpha_{1}\delta^{2}+\xi+\mathcal{O}(3) \tag{1}\]
where \(L_{0}\) is the nominal path length, \(\alpha_{c}\) the momentum compaction factor, \(\alpha_{1}\) the second-order momentum compaction, and \(\xi\) the momentum-independent term (see Wiedemann (2015) for more details). The extended form of the momentum compaction in the Hamiltonian reveals the existence of secondary RF buckets in the longitudinal phase space. For certain conditions, which depend on the perturbation terms \(\alpha_{1}\) and \(\xi\), these buckets can approach and interfere with each other. The buckets' interference disturbs the longitudinal phase stability condition and reduces the momentum acceptance of the lattice. Moreover, the width of the RF buckets is determined by the desired momentum acceptance and the required RF voltage. Hence, the stability criterion for the perturbation term \(\alpha_{1}\) as a function of the desired momentum acceptance can be defined Wiedemann (2015) as in Eq. (2):
\[\alpha_{1}\leq\frac{|\eta_{c}|(1-\Gamma)^{3/4}}{\sqrt{3}(\Delta p/p_{0})_{ \text{desired}}}\qquad\qquad\qquad\text{Where }\Gamma=\frac{4\xi\alpha_{1}}{\eta_{c}^{2}} \tag{2}\]
Tracking the on-axis particle within the momentum acceptance range of the ideal lattice gives \(\alpha_{c}=7.33\times 10^{-6}\), \(\alpha_{1}=3.52\times 10^{-6}\), and \(\xi=3.07\times 10^{-11}\); the right-hand side of Eq. (2) for the desired \(\pm 5\%\) momentum acceptance is therefore equal to \(8.46\times 10^{-5}\), and the lattice meets this stability condition. The location of the primary and secondary RF buckets in the longitudinal phase space is shown in figure 1. It should be noted that the secondary RF buckets are situated at a considerable distance from the main buckets, and their influence on the particle dynamics can be disregarded.
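The criterion can be checked numerically from the quoted values; the short script below is only an illustrative cross-check (not part of the study's toolchain), taking \(|\eta_{c}|\simeq\alpha_{c}\) for the ultra-relativistic beam.

```python
# Numerical check of the longitudinal stability criterion of Eq. (2).
import math

alpha_c = 7.33e-6    # momentum compaction factor
alpha_1 = 3.52e-6    # second-order momentum compaction
xi      = 3.07e-11   # momentum-independent term
dp_des  = 0.05       # desired momentum acceptance (+/- 5%)

eta_c = alpha_c                        # |slippage factor| ~ alpha_c here
Gamma = 4.0 * xi * alpha_1 / eta_c**2
rhs = abs(eta_c) * (1.0 - Gamma) ** 0.75 / (math.sqrt(3.0) * dp_des)

print(f"Gamma = {Gamma:.2e}")
print(f"alpha_1 = {alpha_1:.2e} <= {rhs:.2e} ? {alpha_1 <= rhs}")  # rhs ~ 8.46e-5, criterion met
```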
The initial amplitude of the particles can alter the value of \(\alpha_{1}\); accordingly, the stability criterion for the variation of \(\alpha_{1}\) has been defined in Wiedemann (2015) as \(\Delta\alpha_{1}<\frac{\eta_{c}^{2}}{4\xi}=0.226\). To ensure the dynamic stability of the beam, the variation of \(\alpha_{1}\) within the range \((x,y)\in[-20,20]\times[-10,10]\) [mm\({}^{2}\)] has been computed, and the result is shown in figure 2. We conclude that the variation of \(\alpha_{1}\) over this range is far below the stability threshold.
## 3 Orbit correction
The objective of the orbit correction study is twofold: firstly, to establish the permissible tolerances for the misalignment of the booster's components, and secondly, to assess the necessary strength of the correctors to rectify the closed orbit perturbation resulting from machine errors. Various types of errors have been taken into account, including random dipole field and roll errors, quadrupole alignment errors, BPM alignment and reading errors, and sextupole alignment errors.
The correction strategy described in Ref. Dalena et al. (2023) was employed to evaluate the tolerances for misalignment and field errors on the elements. To do so, we conducted several tests by gradually introducing different error types,
Figure 1: The location of primary and secondary RF buckets in longitudinal phase space.
Figure 2: The variation of \(\alpha_{1}\) for the different amplitudes.
each on 100 seeds. The initial test configuration consisted of the combination of quadrupole offsets (MQ), dipole relative field error (MB), and main dipole roll error. Statistical analysis of this initial configuration revealed that all seeds converged until an MQ offset of \(150\,\mathrm{\SIUnitSymbolMicro m}\) was reached. Notably, all errors applied to the elements were randomly Gaussian distributed within \(\pm\) 3 RMS (Root Mean Square).
Figure 3 shows the orbit and corrector strength values and distributions, with their respective analytical RMS values, for the 99 successful seeds at the end of the correction procedure. The dashed red lines on the distributions represent the analytically calculated RMS.
In both the orbit and the corrector strength analyses, we observe that our analytical predictions align well with the simulation data. Specifically, the estimation accurately captures all the data for the vertical plane, while the horizontal plane has a few outliers that exceed the analytical limit. These behaviors can be explained by the combined effect of different errors and the \(\beta\)-function, which renders the procedure less effective. Nonetheless, the orbit distribution corresponds to our expectations, with the amplitude in both planes being of the order of magnitude of the MQ offsets (the dominant errors). Moreover, the pattern of the succession of the arcs and insertions is apparent, as we only applied the errors on the arcs, and the residual orbit after correction in the insertions should be almost zero. The RMS values for each successful seed are distributed around the dashed red line representing one analytical RMS estimate. The blue dots correspond to the RMS after turning on the sextupoles and correcting the orbit once, always using a Singular Value Decomposition (SVD) algorithm. A few outlier seeds exist that do not improve with iteration of the correction procedure.
Regarding the correctors' strength distribution (see figure 3), we observe that the strengths are contained within 3 times the mean analytical RMS limit. Therefore, the first specifications for the misalignment of the main magnets in the High Energy Booster arc cells are set to 150 \(\mu\)m, with a maximum corrector strength of about 20 mT\(\cdot\)m, as far as orbit correction is concerned. These values will be confirmed or potentially reduced following the full emittance tuning performance study.
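For illustration, the core of the SVD correction step referred to above can be written in a few lines; the response matrix and orbit below are random stand-ins rather than the HEB lattice, so the sketch only shows the algorithmic structure.

```python
# Generic SVD-based orbit correction step: kicks = -pinv(R) @ orbit.
import numpy as np

rng = np.random.default_rng(0)
n_bpm, n_corr = 200, 100
R = rng.normal(size=(n_bpm, n_corr))        # orbit response matrix [m/rad]
orbit = rng.normal(scale=1e-3, size=n_bpm)  # measured closed orbit [m]

# Truncated SVD pseudo-inverse: drop the weakest singular values so that noise
# is not amplified into unrealistically strong corrector settings.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
keep = s > 1e-3 * s[0]
R_pinv = (Vt[keep].T / s[keep]) @ U[:, keep].T
kicks = -R_pinv @ orbit                     # corrector kicks [rad]

residual = orbit + R @ kicks
print(f"rms orbit before/after: {orbit.std():.2e} / {residual.std():.2e} m")
```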
Figure 3: Superposition of the orbits and the corrector strengths of 99 seeds (dots) and the global envelope for all considered machine configurations (solid line) in the X and Y transverse planes. Density distributions (one for each seed) are also superposed. The red dashed line is 3 times the mean of the analytical RMS values.
## 4 Collective effects and injection parameters
The high energy booster's smaller pipe radius of 25 mm, compared to the collider's 35 mm, and almost 100 km circumference, along with the required injection energy and bunch population for the desired emittance at different operating modes (shown in table 1), make collective effects important. Two of these effects being investigated are the resistive wall and intrabeam scattering (IBS).
As for the resistive wall, because of the reduced pipe diameter, the impedance and wakefield contributions are expected to be higher than those of the main rings Migliorati et al. (2018); Migliorati et al. (2021) in both the longitudinal and transverse planes2.
Footnote 2: these two effects scale as \(r^{-1}\) in the longitudinal plane and as \(r^{-3}\) in the transverse one.
Another important point related to the resistive wall is the possible presence of eddy currents generated by the ramp of the magnetic field during acceleration. In order to avoid them, an initial proposal of a stainless steel vacuum chamber was discussed; this material, however, would have increased the resistive wall contribution by too large an amount. Due to the low magnetic field in the booster 3, the eddy currents are not expected to be a problem, so it is possible to use a stainless steel vacuum chamber with a copper coating of 1 mm: from about 2 kHz on, the skin depth is such that the beam sees only the copper. However, one simple way to fabricate such a vacuum chamber would leave, as a final result, a copper pipe with a stainless steel strip about 1 mm wide4 seen by the beam. This strip would produce an azimuthal asymmetry in the impedance, and this contribution, also taking into account the detuning (quadrupolar) terms, has to be studied in detail. In this section, we present only preliminary results for a circular beam pipe made of copper. The longitudinal and transverse wake potentials of a 0.2 mm Gaussian bunch, used as a pseudo-Green function, for copper and stainless steel are shown in figure 4.
Footnote 3: according to the CDR Benedikt et al. (2018) from 0.005 T at 20 GeV to a maximum peak of 0.046 T at 182.5 GeV, with a rate of field change below 0.03 T/s.
Footnote 4: Private communication.
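Returning to the copper-coated chamber discussed above, the skin-depth argument behind the 1 mm coating can be checked with a short order-of-magnitude estimate (illustrative only, assuming the room-temperature resistivity of copper); the skin depth falls to the millimetre scale at a few kHz, consistent with the statement above.

```python
# Classical skin depth in copper versus frequency.
import math

rho_cu = 1.68e-8          # copper resistivity [Ohm m] at room temperature (assumed)
mu0 = 4e-7 * math.pi      # vacuum permeability [H/m]

def skin_depth(f_hz: float) -> float:
    """Skin depth in metres for a good conductor."""
    return math.sqrt(rho_cu / (math.pi * f_hz * mu0))

for f in (1e3, 2e3, 5e3, 1e4):
    print(f"f = {f:7.0f} Hz  ->  delta = {skin_depth(f) * 1e3:.2f} mm")
```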
With such contributions, the effects on beam dynamics have been studied with the PyHEADTAIL tracking code Li et al. (2016). As initial conditions for the bunch distribution we have assumed a transverse emittance of \(3\times 10^{-4}\) mm\(\cdot\)mrad in the horizontal plane and 1.4 \(\mu\)m\(\cdot\mu\)rad in the vertical plane, with an equilibrium relative energy spread of \(3\times 10^{-4}\) and a zero-current bunch length of about 2.2 mm.
The effect of the longitudinal wakefield is shown in figure 5. We can see that, at the nominal bunch population of \(2.4\times 10^{10}\), there is an effect of potential well distortion, but also a slight microwave instability, which increases the energy spread by a few percent with respect to its nominal value.
In the transverse plane, the wakefield can produce Transverse Mode Coupling Instability (TMCI), which can be more dangerous than microwave instability because it can cause beam loss. The instability occurs when two coherent oscillations modes couple together. As shown in figure 6 Metral and Migliorati (2020), the threshold is higher than the nominal current, but the safety margin is not so large.
Figure 4: Illustration of longitudinal (left) and transverse dipolar (right) wake potential of a 0.2 mm Gaussian bunch for a copper and stainless steel vacuum chamber.
It is important to note that the presented results provide only a rough evaluation of the collective effects on the beam dynamics of the booster. Self-consistent simulations that take into account the intrabeam scattering and wakefields, both of which influence the bunch length, must be considered. Additionally, the possible difference in the resistive wall impedance due to the presence of the strip, which also produces an asymmetry with contributions to the detuning (quadrupolar) impedance, must be carefully evaluated. Finally, the machine impedance model has to be built. For the IBS, an updated analytical study performed with MadX, with respect to the one published in Dalena et al. (2022), is presented in figure 5. It shows an important contribution of the IBS to the energy spread after the energy ramp, making it higher than the collider threshold. This result, however, remains to be confirmed with tracking simulations that simultaneously take into account the wakefields and the IBS.
Figure 5: Left: Effect of the IBS on the evolution of the energy spread during a linear ramp for an injected beam energy spread and bunch length respectively set to 0.15% and 1 mm. Right: RMS bunch length and energy spread versus bunch population.
Figure 6: Real part of the tune shift of the first azimuthal transverse coherent oscillation modes normalised by the synchrotron tune \(Q_{s0}\) as a function of bunch population.
## 5 RF parameters and frequency choice
As the HEB is central in the accelerator complex design, it provides inputs to the different working group studies. One of these inputs is the radiofrequency (RF) total voltage budget, which depends, among other things, on the bunch length needed at extraction for the different energy modes and on the RF frequency of the accelerating cavities. Recently, the base design of the RF frequency has changed from 400 MHz to 800 MHz, requiring a revision of the total voltage budget. Assuming no energy gain (\(E_{gain}=0\)), one can calculate the resulting RF voltage using Eq. (3):
\[V_{RF}=\sqrt{\left(\frac{C^{2}\sigma_{e}^{2}E_{t}\eta}{2\pi\nu_{RF}\sigma_{z}^ {2}\beta^{3}}\right)^{2}+\left(E_{gain}+U_{0}\right)^{2}} \tag{3}\]
In this equation, C represents the booster circumference, \(\sigma_{z}\) is the bunch length, \(E_{t}\) is the total energy, \(\eta\simeq-\alpha_{c}\) is the slippage factor, \(\beta\) is the normalized velocity, \(\nu_{RF}\) is the RF cavities frequency, \(U_{0}\) is the synchrotron energy loss per turn, and \(\sigma_{e}\) is the energy spread.
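For convenience, Eq. (3) can be wrapped in a small helper; the function below is a direct transcription of the formula as printed (not the collaboration's code), and all quantities must be supplied in the consistent unit system defined above.

```python
# Transcription of Eq. (3); supply all arguments in consistent units.
import math

def rf_voltage(C, sigma_e, E_t, eta, nu_rf, sigma_z, beta, E_gain, U0):
    """Total RF voltage from Eq. (3)."""
    bunch_term = (C**2 * sigma_e**2 * E_t * eta) / (
        2.0 * math.pi * nu_rf * sigma_z**2 * beta**3
    )
    return math.sqrt(bunch_term**2 + (E_gain + U0) ** 2)
```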
At injection energy, the momentum acceptance is taken as a criterion for the calculation of the cavities RF voltage budget by solving Eq. (4):
\[\delta_{p}=\frac{2Q_{s}(V_{RF})c}{C\nu_{RF}\alpha_{c}}\sqrt{\tan\phi_{s}(V_{RF})\cdot(1+\phi_{s}(V_{RF})-\pi/2)} \tag{4}\]
with \(Q_{s}(V_{RF})\) the synchrotron tune and \(\phi_{s}(V_{RF})\) the synchronous phase.
As can be seen in table 1, the frequency change from 400 MHz to 800 MHz reduces the total RF voltage budget required at extraction. However, taking the momentum acceptance as the requirement at injection almost doubles the injection budget when the RF frequency is doubled.
## 6 Alternative Optics
In the baseline, the arcs are made of FODO cells with a phase advance of \(60^{\circ}\)/\(90^{\circ}\), respectively, for the \(Z\)/\(W\) and \(H\)/\(t\bar{t}\) modes. The main reason is the larger momentum compaction at injection for the \(Z\)/\(W\) modes, needed to manage the stronger collective effects due to the larger current. However, to enlarge the dynamic aperture, the sextupole scheme is based on a non-interleaved scheme with a phase advance of \(180^{\circ}\) between the two sextupoles of a pair. Maintaining the possibility of having \(60^{\circ}\)/\(90^{\circ}\) implies different cabling for the different operation modes. Moreover, the number of sextupoles is roughly doubled.
We propose an alternative scheme which makes it possible to tune the momentum compaction while keeping the same non-interleaved scheme for all operation modes. The principle is to create a dispersion and betatron wave at one quadrupole near a sextupole. We assume, for instance, that the integrated quadrupole strength is modified by \(\Delta k\); the quadrupole near the other sextupole of the pair is modified by \(-\Delta k\) (see figure 7). The phase advance between the centers of the two quadrupoles
\begin{table}
\begin{tabular}{l c c c c c c} \hline Modes & Injection \(60^{\circ a}\)/\(90^{\circ b}\) & Z & W & H & \(t\bar{t}\) & [Units] \\ \hline Energy & 20 & 45.6 & 80 & 120 & 182.5 & [GeV] \\ \hline \(\sigma_{z}\)\({}^{c}\) & 4 & 4.38 & 3.55 & 3.34 & 1.94 & [mm] \\ \(\delta_{p}\)\({}^{d}\) & 3 & 4.38 & 3.55 & 3.34 & 1.94 & [mm] \\ \(\alpha_{c}\)\({}^{e}\) & 14.9 / 7.34 & 14.9 & & 7.34 & & [10\({}^{-6}\)] \\ \hline \(V_{RF,400}\)\({}^{f}\) & 53.6 / 27.6 & 124.6 & 1023.2 & 2185.6 & 14205.4 & [MV] \\ \(V_{RF,800}\)\({}^{g}\) & 104.8 / 52.8 & 83.9 & 623.6 & 2038.3 & 11554.9 & [MV] \\ \hline \end{tabular} \({}^{a}\)\(60^{\circ}\) phase advance lattice of the Z and W modes
\({}^{b}\)\(90^{\circ}\) phase advance lattice of the H and \(t\bar{t}\) modes
\({}^{c}\)Bunch length
\({}^{d}\)Momentum acceptance
\({}^{e}\)Momentum compaction factor
\({}^{f}\)RF voltage for \(\nu_{RF}=400\)\(MHz\)
\({}^{g}\)RF voltage for \(\nu_{RF}=800\)\(MHz\)
\end{table}
Table 1: Total RF voltage budget for the different energy modes of the FCC-ee HEB for two RF cavities frequencies.
is \(180^{\circ}\) in both planes. In this way, the betatron wave is cancelled at the second quadrupole, contrary to the dispersion wave (because the frequency of the betatron wave is twice that of the dispersion wave). Another advantage is that the tune of the cell is unchanged. In the thin-lens approximation, to obtain a relative change of \(x\) in the momentum compaction, we need \(\Delta k\approx\frac{\sqrt{x}}{2\sqrt{3}}\).
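The thin-lens estimate translates into modest quadrupole trims even for sizeable changes of the momentum compaction, as the short numerical illustration below shows (illustrative only, evaluating the expression above for a few values of \(x\)).

```python
# Thin-lens estimate of the quadrupole trim for a relative change x of alpha_c.
import math

for x in (0.01, 0.05, 0.10, 0.25):
    dk = math.sqrt(x) / (2.0 * math.sqrt(3.0))
    print(f"x = {x:4.2f}  ->  Delta k ~ {dk:.3f}")
```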
The advantages of this scheme are that it simplifies the cabling and distribution of the sextupoles in the arcs; that it makes it possible to tune the momentum compaction during the ramp, and thus to obtain a smaller equilibrium emittance for the \(Z/W\) modes; and that it is compatible with any arc pattern as long as the phase advance between the sextupoles remains \(180^{\circ}\). The drawbacks are an additional power supply for the quadrupoles (which should nevertheless be less expensive than increasing the number of sextupoles if the non-interleaved scheme has to be kept for the \(60^{\circ}\!/90^{\circ}\) cells), a larger equilibrium emittance in comparison with a FODO cell giving the same momentum compaction, and a possible reduction of the dynamic aperture and momentum acceptance.
## 7 Conclusion
This paper reports on the status of several ongoing parallel developments for the FCC-ee High Energy Booster. One of the ongoing studies focuses on particle stability in longitudinal phase space, particularly with regard to the control of synchro-betatron excitations. Other investigations are aimed at evaluating the performance of the current HEB optics design, in particular with respect to stability against lattice element imperfections, the main sextupole settings, and the interplay of collective effects with a realistic lattice model. By integrating wakefields and IBS, together with synchrotron radiation, we aim to gain important insights into the injection parameter constraints. These simulations will also allow us to compare the effects of different energy ramping strategies.
All these studies will provide input for alternative optics design and RF parameter choices for the different operating modes, while maintaining compatibility with both pre-injectors and collider requirements. However, it is important to note that these studies are highly dependent on various inputs such as accelerating cavities RF frequency, bunch population, vacuum pipe geometry, material and coating, and collider length, among other parameters. Therefore, they must be conducted in parallel and in dynamic interaction with other ongoing studies such as pre-injectors, collider optics, radio-frequency, or arcs optics patterns to ensure that all the necessary inputs are taken into consideration, and that the most optimal solutions are achieved.
## Acknowledgements
This work was supported by the European Union's Horizon 2020 Research and Innovation programme under grant no. 951754 (FCCIS).
|
2307.16561 | Correlation between the optical veiling and accretion properties: A case
study of the classical T Tauri star DK Tau | Classical T Tauri stars (cTTs) accrete from their circumstellar disk. The
material falls onto the stellar surface, producing an accretion shock, which
generates veiling in a star's spectra. In addition, the shock causes a
localized accretion spot at the level of the chromosphere. Our goal is to
investigate the accretion, particularly the mass accretion rates (Macc), for
the cTTs DK Tau, over two periods of 17 and 29 days, using two different
procedures for comparison purposes. The first method relies on the derivation
of the accretion luminosity via accretion-powered emission lines. The second
compares the variability of the optical veiling with accretion shock models to
determine mass accretion rates. We used observations taken in 2010 and 2012
with the ESPaDOnS spectropolarimeter at the CFHT. We find peak values of the
veiling (at 550 nm) ranging from 0.2 to 1.3, with a steeper trend across the
wavelength range for higher peak values. When using the accretion-powered
emission lines, we find mass accretion rate values ranging from
log(Macc[Msol/yr]) = -8.20 to log(Macc[Msol/yr]) = -7.40. This agrees with the
values found in the literature, as well as the values calculated using the
accretion shock models and the veiling. In addition, we identify a power-law
correlation between the values of the accretion luminosity and the optical
veiling. For the 2010 observations, using the values of the filling factors
(which represent the area of the star covered by an accretion spot) derived
from the shock models, we infer that the accretion spot was located between +45
degrees and +75 degrees in latitude. We show that both methods of determining
the mass accretion rate yield similar results. We also present a helpful means
of confirming the accretion luminosity values by measuring the veiling at a
single wavelength in the optical. | M. Nelissen, A. Natta, P. McGinnis, C. Pittman, C. Delvaux, T. Ray | 2023-07-31T10:41:57Z | http://arxiv.org/abs/2307.16561v1 | # Correlation between the optical veiling and accretion properties
###### Abstract
Context:Classical T Tauri stars (cTTs) accrete from their circumstellar disk. The material falls onto the stellar surface, producing an accretion shock, which generates veiling in a star's spectra. In addition, the shock causes a localized accretion spot at the level of the chromosphere.
Aims:Our goal is to investigate the accretion, particularly the mass accretion rates (\(\dot{M}_{\rm acc}\)), for the cTTs DK Tau, over two periods of 17 and 29 days, using two different procedures for comparison purposes.
Methods:The first method relies on the derivation of the accretion luminosity via accretion-powered emission lines. The second compares the variability of the optical veiling with accretion shock models to determine mass accretion rates. We used observations taken in 2010 and 2012 with the ESPaDOnS (Echelle SpectroPolarimetric Device for the Observation of Stars) spectropolarimeter at the CFHT (Canada-France-Hawaii Telescope).
Results:We find peak values of the veiling (at \(\sim\)550 nm) ranging from 0.2 to 1.3, with a steeper trend across the wavelength range for higher peak values. When using the accretion-powered emission lines, we find mass accretion rate values ranging from \(\log\left(\dot{M}_{\rm acc}[M_{\odot}\,{\rm yr}^{-1}]\right)=-8.20\) to \(\log\left(\dot{M}_{\rm acc}[M_{\odot}\,{\rm yr}^{-1}]\right)=-7.40\). This agrees with the values found in the literature, as well as the values calculated using the accretion shock models and the veiling. In addition, we identify a power-law correlation between the values of the accretion luminosity and the optical veiling. For the 2010 observations, using the values of the filling factors (which represent the area of the star covered by an accretion spot) derived from the shock models, we infer that the accretion spot was located between \(+45^{\circ}\) and \(+75^{\circ}\) in latitude.
Conclusions:We show that both methods of determining the mass accretion rate yield similar results. We also present a helpful means of confirming the accretion luminosity values by measuring the veiling at a single wavelength in the optical.
Conclusions:
## 1 Introduction
Classical T Tauri stars (cTTs) are low-mass (\(M_{\star}\lesssim 2\,M_{\odot}\)), optically visible, pre-main sequence stars that are magnetically active and are accreting from their surrounding disks. These circumstellar disks are the birth place of exoplanets, with the era of planet formation overlapping that of star formation. Accretion shapes the lifetime of the disks, which is on the order of a few million years (see Hartmann et al. 2016). Accretion is also thought to have a significant impact on the development of young stars by transferring mass, by impacting angular momentum evolution, and by being involved in jet launching (see Frank et al. 2014). Understanding the accretion process is thus an important issue for the establishment of plausible stellar and planet formation theories. cTTs present an opportunity to probe the interactions between the stars, their magnetic fields, and their disks through the accretion of circumstellar material.
According to the current magnetospheric accretion paradigm (see e.g., Shu et al. 1994; Romanova et al. 2002; Bessolaz et al. 2008; Hartmann et al. 2016), the magnetic field of a cTTs truncates the inner circumstellar disk at a few stellar radii and forces the migrating circumstellar material into accretion columns. The matter subsequently falls onto the stellar surface at near free-fall velocities, producing an accretion shock. These shocks generate emission lines and an excess continuum that is superimposed on the photospheric spectrum, which causes veiling of the photospheric absorption lines (see Calvet & Gullbring 1998). This happens mostly at ultraviolet (UV) and optical wavelengths, with the intensity of the veiling decreasing at longer wavelengths, and this causes the absorption lines to appear shallower. High-resolution spectroscopy allows us to resolve the absorption lines and measure their decrease in depth due to veiling. The values of veiling experience temporal variations as well: as the star rotates, the accretion shock is seen at different angles with respect to the line of sight of the observer. With these ideas in mind, veiling can thus be used as a tracer of accretion activity and variability.
The shocks also cause the emergence of localized accretion spots (also called bright or hot spots) at the chromospheric level, which are the footprints of the accretion columns on the surface of the star. It has been found that these accretion spots present a radial density gradient: a denser core surrounded by a low-density region (see Espaillat et al. 2021). However, details of the
accretion process are still poorly understood (see e.g., Robinson & Espaillat, 2019; Espaillat et al., 2021, 2022; Gangi et al., 2022; Pittman et al., 2022).
The mass accretion rate onto a star (\(\dot{M}_{\rm acc}\)) is an important parameter when studying the evolution of the pre-main sequence stars and their disks. One method of determining this quantity is by using the accretion luminosity (\(L_{\rm acc}\)), knowing the stellar mass and radius (see Gullbring et al., 1998). By extrapolation, \(L_{\rm acc}\) can be derived from several accretion-powered emission lines, using empirical relations between the accretion luminosity and the line luminosities (see Alcala et al., 2017).
A second method of extracting \(\dot{M}_{\rm acc}\) is by comparing the values of veiling to accretion shock models. These models were first introduced by Calvet & Gullbring (1998). They modeled the base of the accretion column at the stellar surface using a geometry that is one-dimensional and plane-parallel. The accreting material is assumed to be at the free-fall velocity, which depends on the stellar mass and radius. Historically, the models were characterized by a single pair of energy flux and filling factor values, with the energy flux being the flux of energy carried by the accretion column and the filling factor being the fraction of the surface of the star covered by the accretion spot. Ingleby et al. (2013) updated the models by using multiple energy fluxes, each one with its corresponding filling factor, allowing for multiple accretion columns. Further improvements of the models have been made more recently (see e.g., Robinson & Espaillat, 2019; Espaillat et al., 2022; Pittman et al., 2022).
In addition to its use with accretion shock models, veiling in the optical band can also be used to infer accretion luminosity, as Stock et al. (2022) have found in the case of the cTTs RU Lup. Furthermore, the evolution of various accretion-related quantities with stellar rotation can offer more insight into the accretion process. In this context, this work is a study of the accretion properties and variability in a particular star, DK Tau.
DK Tau is a low-mass cTTs located in the Taurus molecular cloud at a distance of 132.6 pc (see Gaia Collaboration et al., 2016, 2018, 2022) and accreting from its circumstellar disk. It shows considerable veiling (see e.g., Hartigan et al., 1995; Fischer et al., 2011). It is in a wide binary system (i.e., separated by 2.38\({}^{\prime\prime}\) - see e.g., Manara et al., 2019). This separation allows DK Tau A (called "DK Tau" for short hereafter) to be spatially resolved by the ESPaDOnS (Echelle SpectroPolarimetric Device for the Observation of Stars) spectropolarimeter and investigated on its own. DK Tau has a K7 spectral type (see e.g., Johns-Krull, 2007; Fischer et al., 2011), a 4000 K effective temperature, an 8.2 day rotation period, a 2.48 \(R_{\odot}\) radius, and its rotation axis is inclined by 58\({}^{\circ}\) (see Nelissen et al., 2023). Its outer disk is inclined by 21\({}^{\circ}\) (see Rota et al., 2022), and is misaligned compared to the inner disk and rotation axis (see Nelissen et al., 2023). Johns-Krull (2007) derived a mass of 0.7 \(M_{\odot}\) for DK Tau. In addition to accretion, the star also shows evidence of ejection (see e.g., Hartigan et al., 1995), in particular inner disk winds and a jet, which was revealed by the detection of both low and high velocity forbidden [OI] emission by Banzatti et al. (2019). Grankin et al. (2007) reported a long-term photometric variability of \(\overline{\Delta V}\) = 1.86 mag.
We present a study of the active accretor DK Tau, specifically of the variability of its veiling from night to night and across the visible wavelength range. We describe our observations and data analysis in Sect. 2. We detail our results regarding the accretion luminosity, the use of accretion shock models, and the calculations of the mass accretion rate in Sect. 3. In Sect. 4, we discuss a correlation between the veiling and the accretion properties, as well as variability in relation to stellar rotation and accretion spots. Finally, we present our main conclusions in Sect. 5.
## 2 Observations and data analysis
### ESPaDOnS Data
We used observations of DK Tau taken with ESPaDOnS, an echelle spectropolarimeter covering the optical domain, from 370 to 1 050 nm, in a single exposure. ESPaDOnS is mounted at the CFHT (Canada-France-Hawaii Telescope), a 3.6 meter telescope in Hawaii. It has a fiber aperture of 1.66\({}^{\prime\prime}\)and has a resolving power of 65 000 (see Donati et al., 2006).
Two sets of nine circularly polarized spectra were collected in December 2010, and from the end of November to the end of December 2012. In 2010 the observations were taken over 17 days, and in 2012 the observations were taken over 29 days. This was done in order to capture at least one full rotation of DK Tau for each set. These observations were obtained as part of the MaPP (Magnetic Protostars and Planets) large program at the CFHT, under proposals 10BP12 and 12BP12, with J.-F. Donati as P.I. and are public. We downloaded the observations from the archive of the PolarBase website1(see e.g., Petit et al., 2014), as well as the corresponding image files from the Canadian Astronomy Data Centre (CADC) website2.
Footnote 1: [http://polarbase.irap.omp.eu](http://polarbase.irap.omp.eu)
Footnote 2: [http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en](http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en)
The data had been previously reduced at the CFHT with the Libre-ESpRIT (for "Echelle Spectra Reduction: an Interactive Tool") reduction package specifically built for extracting polarization echelle spectra from raw data. This involved subtracting the bias and the dark frames, and correcting for the variations in sensitivity using flat field frames (see Donati et al., 1997). In addition to the Libre-ESpRIT automatic continuum normalization, we continuum normalized the spectra. The automatic procedure did not properly adjust the continuum, as it is not tailored for stars that display emission lines.
The Julian dates corresponding to the middle of each exposure are listed in Table 1. The total exposure time is 4 996.0 s for each observation. It is important to note that on 19 December 2010 the full Moon was close to DK Tau (see Nelissen et al., 2023) and, because of the resultant scattered light, we ignore this night in our analysis.
### Optical veiling
Because accretion shocks are at a higher temperature than the photosphere, they add extra flux to the stellar continuum. This artificially decreases the depth of photospheric absorption lines, an effect known as veiling. The veiling (\(R\)) is defined as the ratio between the accretion shock flux and the photospheric flux. The values of veiling vary as a function of wavelength for a single night, and also vary from night to night.
In order to determine the values of the veiling in our spectra, we adopted a technique based on the fitting of a rotationally broadened and artificially veiled weak-lined T Tauri star (wTTs) template to the spectrum of DK Tau. We used an ESPaDOnS spectrum because of its high resolution and chose a wTTs with the same spectral type as our cTTs and coming from the same star forming region. This allows us to assume that both T Tauri stars (TTs) would have the same chemical composition, very similar age and \(\log g\), and that the microturbulence and macroturbulence velocities should be very similar. Consequently, this
wTTs can be seen as the purely photospheric version of DK Tau because it experiences no accretion. In addition, the wTTs also needs to have a \(v\sin i\), the line of sight projected equatorial rotational velocity, lower than the \(v\sin i\) of DK Tau, which is 13.0 km s\({}^{-1}\) (see Nelissen et al. 2023). Indeed, the wTTs spectrum will be artificially broadened to match DK Tau in the fitting process, and this cannot be done if the wTTs lines are already broader because of rotational broadening. The rotational broadening is performed by convolving the spectrum with a Doppler rotational profile as described in Gray (2008). After exploring several ESPaDOnS spectra of wTTs with a K7 spectral type in order to find the one that provided the best fit, we ultimately used TAP45 (which has a \(v\sin i\) of 11.5 km s\({}^{-1}\) - see Feigelson et al. 1987; Bouvier et al. 1993).
We looked at windows of \(\sim\)20 nm throughout the spectra (ranging from 550 nm to 900 nm). We then obtained a value of the veiling for each of the different windows. The window at 617.50 nm generally showed the best fit between the observation and the template; it is therefore used throughout this work to summarize the veiling for each night (see e.g., Fig. 3). Once we had values of the veiling as a function of the wavelength, and working in the logarithmic plane, we fit a linear relation through the points (using a least squares polynomial fit). Figures 1 and 2 show plots of the veiling values (in black) as a function of wavelength for the 2010 and 2012 observations, as well as the best fit (\(y=ax+b\) in blue, with \(a\) being the slope). The standard deviation is used as the uncertainty of the fit (light blue shaded region). A list of the coefficients of the fits can be found in Appendix A.
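For clarity, the per-window fit can be summarized by the following schematic code (not the code actually used in this work); `obs` and `template_broadened` stand for the continuum-normalized fluxes of DK Tau and of the rotationally broadened wTTs template on a common wavelength grid.

```python
# Schematic per-window veiling fit: veiled template = (template + r) / (1 + r).
import numpy as np
from scipy.optimize import minimize_scalar

def fit_veiling(obs: np.ndarray, template_broadened: np.ndarray) -> float:
    """Best-fit veiling r in one window, for continuum-normalized spectra."""
    def chi2(r: float) -> float:
        veiled = (template_broadened + r) / (1.0 + r)  # constant excess continuum
        return float(np.sum((obs - veiled) ** 2))
    return minimize_scalar(chi2, bounds=(0.0, 5.0), method="bounded").x

# The nightly slope a then follows from a linear fit in the logarithmic plane, e.g.:
# a, b = np.polyfit(np.log10(window_centres_nm), np.log10(r_values), 1)
```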
We thus studied the variability of veiling for DK Tau, not only from night to night, but across our spectra, which cover the optical domain. The peak values of veiling (at \(\sim\)550 nm) for each night in 2010 range from 0.2 to 0.9. One observation out of eight has a nearly constant veiling value across its spectrum (i.e., with \(a<\) -0.90). For the 2012 observations, the peak values of veiling range from 0.2 to 1.3 (higher than the values from two years before). Three nights out of nine have nearly constant veiling values across their spectrum. In general, we can see that when the peak values of veiling are higher, the slope of the fit is steeper (see Fig. 1 and 2). Fig. 3 shows the veiling at 617.50 nm folded in phase with the stellar rotation, for the 2010 and 2012 observations. The error bars are the uncertainty of the fits. We find that the veiling folds well in phase, albeit with more scatter for the 2012 observations. This is likely due to the veiling being modulated by higher accretion activity and not only by the stellar rotation.
### Derivation of DK Tau's photospheric continuum
Because ESPaDOnS spectra are not flux-calibrated, we derived DK Tau's photospheric continuum from a high-resolution spectrum taken at the Telescopio Nazionale Galileo (TNG) using HARPS-N in 2018 by Gangi et al. (2022). The spectrum was corrected for extinction as in Gangi et al. (2022); we measured the veiling as in the previous section and found a small value with a flat dependence on wavelength in the optical (i.e., R\(\sim\) 0.2 with \(a<\) -0.90). Fig. 4 shows the deveiled HARPS-N spectrum. Because it does not cover the entirety of our wavelength range, we also plot a lower resolution spectrum obtained at the Asiago telescope almost simultaneously to the HARPS-N data and used by Gangi et al. (2022) for flux-calibration, which extends to slightly longer wavelengths. Finally, to cover the whole region of interest, we used the photospheric continuum of SO879, a wTTs of K7 spectral type observed with VLT/X-shooter (described in Stelzer et al. 2013). The X-shooter spectrum was corrected for extinction, then scaled to the distance and luminosity of DK Tau, that is \(d=132.6\) pc and \(L=0.65\,L_{\odot}\) (with the luminosity value being derived from the flux-calibrated HARPS-N spectrum). As the SO879 continuum proved to be a good description of the DK Tau continuum in the spectral range where they overlap, we adopt it as a good representation of the photospheric continuum over the whole range of the ESPaDOnS spectra discussed in the following.
In summary, to determine the values of the veiling (see Sect. 2.2), we used an ESPaDOnS spectrum of the wTTs TAP45 because of its high resolution. As ESPaDOnS spectra are not flux-calibrated, we could not use that spectrum to extract DK Tau's photospheric continuum. Conversely, the X-shooter spectrum of SO879 could not be used to derive the veiling because of its lower resolution. In the absence of a single wTTs spectrum appropriate for use as a template in both circumstances, we worked with both TAP45 and SO879.
## 3 Results
### Accretion luminosity from emission lines
In order to calculate the accretion luminosities for each night, we first measured the equivalent width (EW) of several accretion-powered emission lines, namely H\(\alpha\), H\(\beta\), H\(\gamma\), the He I lines at 447.1 nm, 501.6 nm, 587.6 nm, 667.8 nm and 706.5 nm, as well as the Ca II infrared triplet at 849.8 nm, 854.2 nm and 866.2 nm. We corrected the EWs for veiling, then we flux-calibrated them using our template of DK Tau's photospheric continuum to obtain line fluxes, before converting the line fluxes into line luminosities. Finally, using the empirical relations in Table 2 of Alcala et al. (2017), we obtained the values of the accretion luminosities for each night and for each line. Plots of the accretion luminosities on a nightly basis can be found in Appendix B. Figure 5 shows accretion luminosity values averaged over the different lines and folded in phase. We calculated the weighted average, and used the weighted standard deviation of the spread in the values found from the different lines as the error bars. The values for each night are listed in Table 2. We can see that the
\begin{table}
\begin{tabular}{c c c} \hline \hline Date & Julian date & Rotation phase \\ (yyyy-mm-dd) & & (8.2 day period) \\ \hline
[MISSING_PAGE_POST]
\hline \end{tabular}
\end{table}
Table 1: Dates for the 2010 and 2012 datasets.
accretion luminosity changes with time by a factor of up to approximately 6. It changes in phase with the stellar rotation in the 2010 epoch data, but less clearly in the 2012 data. In Fig. 6, we can see that the accretion luminosity correlates with the veiling in an almost linear fashion (see Sect. 4.1 for a discussion of the correlation).
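The chain from equivalent widths to accretion luminosities described above can be outlined schematically as follows (this is not the pipeline actually used here); the coefficients `a_line` and `b_line` are placeholders for the empirical relations of Alcala et al. (2017) and are not reproduced.

```python
# Schematic EW -> line flux -> line luminosity -> accretion luminosity chain.
import numpy as np

L_SUN = 3.828e33          # erg/s
PC_CM = 3.0857e18         # cm
d_cm = 132.6 * PC_CM      # distance to DK Tau

def accretion_luminosity(ew_obs_nm, r_veiling, f_cont, a_line, b_line):
    """L_acc in L_sun; f_cont is the photospheric continuum flux in erg/s/cm^2/nm."""
    ew_corr = ew_obs_nm * (1.0 + r_veiling)      # undo the veiling dilution of the EW
    f_line = ew_corr * f_cont                    # line flux [erg/s/cm^2]
    L_line = 4.0 * np.pi * d_cm**2 * f_line      # line luminosity [erg/s]
    log_L_acc = a_line * np.log10(L_line / L_SUN) + b_line
    return 10.0**log_L_acc
```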
Next, we calculated the mass accretion rate onto the star for each night using the values of the accretion luminosities. The conversion was made using equation 8 from Gullbring et al. (1998):
\[L_{\rm acc}\simeq\frac{GM_{\star}\dot{M}_{\rm acc}}{R_{\star}}\left(1-\frac{R_{\star}}{R_{\rm in}}\right) \tag{1}\]
with \(R_{\star}=2.48\,R_{\odot}\), \(M_{\star}=0.7\,M_{\odot}\) and \(R_{\rm in}=5\,R_{\star}\) (as is typically used). Figure 7 shows the mass accretion rate obtained via \(L_{\rm acc}\) and folded in phase. As the mass accretion rate is proportional to the accretion luminosity, we take the error bars on \(\dot{M}_{\rm acc}\) as being the same as the ones on \(L_{\rm acc}\), which are the dominating components, ignoring the error bars on \(M_{\star}\) and \(R_{\star}\). Furthermore, we are focusing on the variability, and the latter do not change from night to night.
For the accretion luminosity, we find values ranging from \(\log\left(L_{\rm acc}[L_{\odot}]\right)\) = -1.36 to \(\log\left(L_{\rm acc}[L_{\odot}]\right)\) = -0.57. This leads to mass accretion rate values ranging from \(\log\left(\dot{M}_{\rm acc}[M_{\odot}\,{\rm yr}^{-1}]\right)\) = -8.20 to \(\log\left(\dot{M}_{\rm acc}[M_{\odot}\,{\rm yr}^{-1}]\right)\) = -7.40. The values of the accretion luminosities and mass accretion rates that we find for each night are listed in Table 2. In comparison, Fischer et al. (2011) find \(\log\left(\dot{M}_{\rm acc}[M_{\odot}\,{\rm yr}^{-1}]\right)\) = -7.40, a value that is in agreement with the higher end of our range. Gullbring et al. (1998) and Ingleby et al. (2013) quote similar values of \(\log\left(\dot{M}_{\rm acc}[M_{\odot}\,{\rm yr}^{-1}]\right)\) = -7.42 and -7.47. Our results are also in agreement with the value quoted by Fang et al. (2018) of \(\log\left(\dot{M}_{\rm acc}[M_{\odot}\,{\rm yr}^{-1}]\right)\) = -7.86, a value approximately in the middle of our range. Gangi et al. (2022) find \(\log\left(\dot{M}_{\rm acc}[M_{\odot}\,{\rm yr}^{-1}]\right)\) = -8.33. This agrees within the error bars of the lower end of our range.
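As a cross-check of Eq. (1), the conversion can be evaluated directly with astropy constants and the adopted stellar parameters; applying it to the extremes quoted above, \(\log\left(L_{\rm acc}[L_{\odot}]\right)=-1.36\) and \(-0.57\), returns \(\log\left(\dot{M}_{\rm acc}[M_{\odot}\,{\rm yr}^{-1}]\right)\approx-8.2\) and \(-7.4\), matching the quoted range to within rounding.

```python
# Mass accretion rate from the accretion luminosity via Eq. (1).
import math
import astropy.units as u
from astropy.constants import G, L_sun, M_sun, R_sun

M_star = 0.7 * M_sun
R_star = 2.48 * R_sun
R_in = 5.0 * R_star

def log_macc_from_log_lacc(log_lacc: float) -> float:
    """log10(Macc [Msun/yr]) from log10(Lacc [Lsun])."""
    L_acc = 10.0**log_lacc * L_sun
    mdot = L_acc * R_star / (G * M_star * (1.0 - R_star / R_in))
    return math.log10(mdot.to(u.M_sun / u.yr).value)

for log_lacc in (-1.36, -0.57):
    print(log_lacc, "->", round(log_macc_from_log_lacc(log_lacc), 2))
```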
### Accretion shock models
The emission of accretion shocks has been modeled starting with Calvet & Gullbring (1998). What is observed from the accretion shock is determined by the energy of the shock and by the size of the accretion spot on the stellar surface. These accretion shock models therefore depend on the accretion energy flux \(\cal F\), the fractional surface coverage of the accretion spots \(f\) and on the stellar properties. The models adopt a one-dimensional plane
Figure 1: Veiling (black dots), the best linear fit (blue line) and the standard deviation (light blue shaded region) as a function of wavelength for the 2010 observations.
parallel geometry for the accretion column and do not distinguish between a single accretion spot or a multitude of spots.
The accretion energy flux is the flux of energy that the accretion column carries into the shock, and it is defined by equation 10 from Calvet & Gullbring (1998):
\[{\cal F}=\frac{1}{2}\rho\,v_{s}^{3} \tag{2}\]
where \(\rho\) is the density of the material in the accretion column and \(v_{s}\) the free-fall velocity of that material. It is assumed that the velocity is constant, as it depends on the mass and radius of the star via equation 1 from Calvet & Gullbring (1998):
\[v_{s}=\left(\frac{2GM_{\star}}{R_{\star}}\right)^{1/2}\,\left(1-\frac{R_{\star}}{R_{\rm i}}\right)^{1/2} \tag{3}\]
where \(M_{\star}\) is the stellar mass, \(R_{\star}\) is the stellar radius and \(R_{\rm i}\) is the radius where the magnetosphere truncates the circumstellar disk and is assumed to be \(5\,R_{\star}\). It follows that low or high values of \({\cal F}\) represent low or high-density columns.
Each energy flux is scaled by a filling factor \(f\), representing the fraction of the stellar surface covered by accretion spots. Its value varies between 0 and 1: when \(f=0\), the spot is non-existent; when \(f=1\), the spot has the size of the star; when, for example, \(f=0.1\), this indicates that 10 % of the stellar surface is covered by accretion spots (regardless of it being a single large accretion spot or several smaller spots). A decrease in the filling factor will downscale the emission of the accretion column independently of wavelength, while a change in the energy flux will shift the peak of the emission in wavelength.
The shock models that we worked with were made using DK Tau's parameters: a distance \(d=132.6\) pc, a stellar luminosity \(L=0.65\,L_{\odot}\), a radius \(R_{\star}=2.48\,R_{\odot}\), an effective temperature \(T_{\rm eff}=4000\) K and a mass \(M_{\star}=0.7\,M_{\odot}\). They were computed for different energy fluxes, namely \({\cal F}=1\ \times\ 10^{9}\,{\rm erg\,s^{-1}\,cm^{-2}}\), \({\cal F}=3\ \times\ 10^{9}\,{\rm erg\,s^{-1}\,cm^{-2}}\), \({\cal F}=1\ \times\ 10^{10}\,{\rm erg\,s^{-1}\,cm^{-2}}\), \({\cal F}=3\ \times\ 10^{10}\,{\rm erg\,s^{-1}\,cm^{-2}}\), \({\cal F}=1\ \times\ 10^{11}\,{\rm erg\,s^{-1}\,cm^{-2}}\) and \({\cal F}=3\ \times\ 10^{11}\,{\rm erg\,s^{-1}\,cm^{-2}}\).
The top panel of Fig. 8 shows the shock models for different energy fluxes, as well as DK Tau's photospheric continuum. We calculated the ratio between the different modeled accretion shock fluxes and the photospheric continuum flux, in order to obtain different values of model-predicted veiling as a function of wavelength (see bottom panel of Fig. 8). We can see that, in the optical range, the slope of the model-predicted veiling increases monotonically with the energy flux \({\cal F}\). Therefore, the slope of the modeled veiling seems to univocally characterize the accre
Figure 2: Same as Fig. 1, for the 2012 observations.
tion energy flux. This allows us to assign a specific value of \(\mathcal{F}\) to a specific slope in the observed veiling.
Working in the logarithmic plane, we computed the slopes of the observed and modeled veiling values by fitting the points with a linear relation: \(y=ax+b\). We then plotted the slopes (i.e., \(a\)) as a function of the value of the veiling at 617.50 nm in Fig. 9. We linearly interpolated the values of the slopes, of the veiling at 617.50 nm, and of the energy fluxes between the six models that we used (but only show these six models in Fig. 9 for clarity).
The observed veiling values follow the aforementioned trend of a higher slope with higher veiling values (highlighted by the gray dotted line). Regarding the model-predicted veiling values, they are scaled by the filling factors which are free parameters. Because, in the optical range, the slopes of the modeled veiling are only dependent on the value of the energy flux and independent of the filling factors, we first derive the best value of the energy flux for each observation, by matching the slope of the modeled veiling with the slope of the observational veiling (i.e., matching along the y-axis). Next, adjusting the values of the filling factors shifts the modeled veiling values along the x-axis only. This allows us to align the models with the observations by varying the filling factors. We thus derive the energy flux and the appropriate filling factor for that energy flux for each night. Two examples of the overlap of the observed and modeled veiling can be found in Appendix C.
In the literature (see e.g., Ingleby et al., 2013; Robinson and Espaillat, 2019; Espaillat et al., 2022; Pittman et al., 2022), it is usually a combination of several energy fluxes, each with its own filling factor, that are used to fit a single observation, with each energy flux better constrained by different regions of the spectrum. In general, the higher energy fluxes (corresponding to higher densities of the material in the accretion column) peak in the UV range and the lower energy fluxes peak in the optical. In this work, however, as we have a narrower wavelength range (i.e., 550 nm - 900 nm) that excludes the UV, we are using a single energy flux with its assigned filling factor. Nevertheless, we are able to constrain the models to our data adequately. We are thus making the approximation that, for a given night, one homogeneous accretion spot, characterized by a single value of \(\mathcal{F}\) and of \(f\), is dominating the optical emission.
We find energy fluxes ranging from 1.00\(\,\times\,10^{9}\) to 2.15\(\,\times\,10^{11}\) erg s\({}^{-1}\) cm\({}^{-2}\). For the different energy fluxes, we find filling factors ranging from 0.026 to 0.117, implying that the accretion columns cover from 2.6% to 11.7% of the stellar surface. These numbers are comparable to the ones listed in the literature for other cTTs, though it should be noted that they use a multicolumn approach, whereas we assume a single column and spot. For example, using three accretion columns with \(\mathcal{F}\) = 1 \(\times\) 10\({}^{10}\) erg s\({}^{-1}\) cm\({}^{-2}\), \(\mathcal{F}\) = 1 \(\times\) 10\({}^{11}\) erg s\({}^{-1}\) cm\({}^{-2}\), \(\mathcal{F}\) = 1 \(\times\) 10\({}^{12}\) erg s\({}^{-1}\) cm\({}^{-2}\), Robinson and Espaillat (2019) mention filling factors ranging from 5.00\(\,\times\,10^{-5}\) to 0.39 for various low-mass cTTs (namely DM Tau, GM Aur, SZ 45, TW Hya and VW Cha). For the same energy fluxes, Espaillat et al. (2021) find filling factors ranging from 5.30\(\,\times\,10^{-5}\) to 0.18 for the cTTs GM Aur. For the same energy fluxes once more, Pittman et al. (2022) mention filling factors ranging from 4.85\(\,\times\,10^{-5}\) to
Figure 4: DK Tau’s photospheric continuum as a function of wavelength (thick continuous black line). The thin lines show the flux-calibrated spectra of DK Tau taken with HARPS-N (continuous red line) and the Asiago Telescope (maroon dashed line), both corrected for extinction and deveiled (see Gangi et al., 2022).
Figure 3: Veiling over time, shown folded in phase with an 8.2 day period, for the 2010 (top panel) and 2012 (bottom panel) data. Cycle 0 for the 2010 epoch starts on December 17, while for the 2012 epoch it starts on the first observation of 2012. This was done in order to display the minimum at the same phase for both epochs. Different colors and symbols represent different rotation cycles.
0.23 for several cTTs (namely CVSO 58, CVSO 90, CVSO 104, CVSO 107, CVSO 109A, CVSO 146, CVSO 165A, CVSO 165B and CVSO 176).
The values of the energy fluxes and filling factors that we find for each night are listed in Table 2. We can see that they are anticorrelated: higher energy fluxes are paired with smaller filling factors. Conversely, filling factors increase when the energy fluxes decrease, in order to match the observed veiling. This is to be expected for a constant accretion rate, as the product of the energy flux by the filling factor is proportional to the mass accretion rate (see equation 11 from Calvet & Gullbring 1998). In other words, in order to generate the same amount of flux, lower energy fluxes need to be paired with larger filling factors (see e.g., Calvet & Gullbring 1998; Ingleby et al. 2013).
Figure 10 shows the filling factors folded in phase, while Fig. 11 shows the energy fluxes folded in phase. We can see that both quantities vary in phase with the stellar rotation, notably in the 2010 epoch data.
Next, using the values of energy fluxes and filling factors that we obtained, and using equation 11 from Calvet & Gullbring (1998):
\[\begin{split}\mathcal{F}=9.8\times 10^{10}\,\mathrm{ergs\,s^{-1}\, cm^{-2}}\,\left(\frac{\dot{M}_{\mathrm{acc}}}{10^{-8}M_{\odot}\,\mathrm{yr^{-1}}} \right)\\ \times\left(\frac{M_{\star}}{0.5M_{\odot}}\right)\left(\frac{R_{ \star}}{2R_{\odot}}\right)^{-3}\left(\frac{f}{0.01}\right)^{-1}\end{split} \tag{4}\]
with \(R_{\star}=2.48\,R_{\odot}\) and \(M_{\star}=0.7\,M_{\odot}\), we calculate the corresponding mass accretion rates for each observation. They range from log (\(\dot{M}_{\mathrm{acc}}[M_{\odot}\,\mathrm{yr^{-1}}]\)) = -8.78 to log (\(\dot{M}_{\mathrm{acc}}[M_{\odot}\,\mathrm{yr^{-1}}]\)) = -7.04 (see Table 2).
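Rearranging equation 11 of Calvet & Gullbring (1998) for \(\dot{M}_{\rm acc}\) gives a one-line conversion from each night's \(\mathcal{F}\) and \(f\); the sketch below is illustrative only and uses placeholder values of \(\mathcal{F}\) and \(f\) rather than a specific night from Table 2.

```python
# Mass accretion rate from the shock energy flux F [erg/s/cm^2] and filling factor f,
# obtained by inverting Eq. (4).
import math

M_STAR = 0.7    # [Msun]
R_STAR = 2.48   # [Rsun]

def log_macc_from_shock(F_cgs: float, f_fill: float) -> float:
    """log10(Macc [Msun/yr]) from Eq. (4)."""
    mdot_over_1e8 = (F_cgs / 9.8e10) * (0.5 / M_STAR) * (R_STAR / 2.0) ** 3 * (f_fill / 0.01)
    return math.log10(mdot_over_1e8 * 1e-8)

print(round(log_macc_from_shock(1e10, 0.05), 2))  # placeholder F and f
```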
Figure 12 shows the mass accretion rate obtained via \(\mathcal{F}\) and \(f\) folded in phase. We estimated the uncertainty on the values of the mass accretion rate by comparing different models with the distribution of points for the observed veiling. We find uncertainties on the product of the energy flux \(\mathcal{F}\) and the filling factor \(f\), which translate into uncertainties on the mass accretion rate ranging from a factor of 2 (or lower for some nights) to a factor of 5 (on the nights where the scatter in the observed veiling is the largest).
Over the range of observations, we see quite a significant range of values for \(\mathcal{F}\), the energy flux of the column that im
Figure 5: Accretion luminosity over time, shown folded in phase with an 8.2 day period, for the 2010 (top panel) and 2012 (bottom panel) data. Cycle 0 for the 2010 epoch starts on December 17 2010, while for the 2012 epoch it starts on the first observation of 2012, as in previous figures. Different colors and symbols represent different rotation cycles.
Figure 6: Accretion luminosity as a function of the veiling at 617.50 nm, for the 2010 (top panel) and 2012 (bottom panel) data.
pinges on the star, as it varies by two orders of magnitude, because of the range of slopes displayed by the observational veiling. The product of \(\mathcal{F}\) and \(f\) varies over a smaller range, because the mass accretion rate does not change as much as the energy flux.
## 4 Discussion
### Accretion luminosity and optical veiling
Figure 13 shows the accretion luminosities measured from accretion-powered emission lines as a function of the veiling at 617.50 nm, for both 2010 and 2012 observations. The correlation is quite good, showing that the optical veiling traces the accretion luminosity very well. This is notably interesting and useful, as the two quantities are measured in two considerably different ways: the veiling is measured at a single wavelength, while the accretion luminosity is a global quantity in terms of wavelength. Consequently, within the uncertainties, by measuring the veiling (i.e., by simply comparing the depth of observed absorption lines to a photospheric template) at one wavelength (in the optical in this case), one can infer the accretion luminosity which is mostly emitted in the UV range and not at optical wavelengths.
We find a similar correlation between the accretion luminosity and the veiling as Stock et al. (2022) have found for RU Lup, a more luminous (\(L=1.31\,L_{\odot}\)) and more strongly accreting cTTs (\(\log\left(L_{\rm acc}[L_{\odot}]\right)\sim 0.3\)) than DK Tau. In their Fig. 8, they show a power-law relation between the accretion luminosity and the veiling from the Li line, with a slope of 1.22 (\(\pm\) 0.26) over a range of \(\log R\) going from 0.40 to 0.65. In comparison, for DK Tau and using the optical veiling from the photospheric absorption lines, we find a slope of 0.91 (\(\pm\) 0.11) over a range of \(\log R\) going from -0.70 to 0.10. The slopes agree within their error bars. Combined, they cover a large range of accretion luminosities, going from \(\log\left(L_{\rm acc}[L_{\odot}]\right)\) = -0.10 to \(\log\left(L_{\rm acc}[L_{\odot}]\right)\) = -1.40. The vertical intercept varies between the two stars, with b = -0.93 (\(\pm\) 0.13) for RU Lup and b = -0.68 (\(\pm\) 0.04) for DK Tau.
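For reference, such a power-law relation corresponds to a straight-line fit in log–log space; the sketch below (not the fitting code used here) illustrates the procedure with NumPy on placeholder arrays standing in for the veiling and accretion luminosity measurements:

```python
import numpy as np

# Fit log10(L_acc) = a * log10(R) + b, i.e. a power law L_acc ~ R^a.
# The arrays below are hypothetical placeholders, not the DK Tau data.
log_R = np.array([-0.60, -0.45, -0.30, -0.15, 0.00, 0.10])
log_Lacc = np.array([-1.35, -1.20, -1.05, -0.95, -0.75, -0.70])

(a, b), cov = np.polyfit(log_R, log_Lacc, deg=1, cov=True)
a_err, b_err = np.sqrt(np.diag(cov))
print(f"slope a = {a:.2f} +/- {a_err:.2f}, intercept b = {b:.2f} +/- {b_err:.2f}")
```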
Figure 8: Models of the excess flux from accretion shocks (in rainbow colors) and photospheric continuum (in black) as a function of wavelength (top panel). Modeled veiling (in rainbow colors) computed from the different shock models and the photospheric continuum (bottom panel).
Figure 7: Mass accretion rate based on the accretion luminosity over time, shown folded in phase with an 8.2 day period, for the 2010 (top panel) and 2012 (bottom panel) data. Cycle 0 for the 2010 epoch starts on December 17, while for the 2012 epoch it starts on the first observation of 2012, as in previous figures. Different colors and symbols represent different rotation cycles.
The discrepancy between the correlations could stem from the comparison of the veiling from the photospheric absorption lines (for DK Tau) with the veiling derived from a single line, the Li line, which suffers from differences in the abundance between RU Lup and the adopted template. The authors have applied a correction, but note that it has large uncertainties, which could cause a systematic shift in their numbers.
It would be very interesting to investigate the relation between the accretion luminosity and the veiling in other stars. It would also be interesting to study the relation for a larger range of veiling values. If a similar relation is found for different stars, this could be a great tool when working on deriving quantities from limited data.
The correlation shown in Fig. 13 between the optical veiling and \(L_{\rm acc}\) is also potentially very interesting when applied to objects with high and uncertain values of the extinction, as veiling does not depend on it, while any method to measure \(L_{\rm acc}\) does (see also Stock et al. 2022).
### Mass accretion rates
We extracted the mass accretion rate for each night via two different procedures: using the accretion luminosity based on accretion-powered emission lines (see Sect. 3.1), and comparing the observed veiling in the optical with accretion shock models to derive values of the energy flux and filling factor (see Sect. 3.2). The values of \(\dot{M}_{\rm acc}\) that we obtain are listed in Table 2. They agree within a factor of 2 for most nights, which is within the error bars. Figure 14 plots the two sets of mass accretion rates for both epochs. We can see a trend close to a one-to-one relationship, showing that both methods of estimating \(\dot{M}_{\rm acc}\) are comparable. The discrepancy is probably systematic. Indeed, the mass accretion rate derived using the accretion shock model is almost systematically higher than the one calculated using the accretion luminosity. Similarly, Pittman et al. (2022) have found systematically higher mass accretion rates from accretion shock models than from the H\(\alpha\) luminosity. They mention systematic effects in the modeling methods as the probable cause.
Shock models have been used to derive accurate values of the mass accretion rates by fitting the observed excess from the UV to the visual as a combination of shocks of different \(\mathcal{F}\) and \(f\) (see e.g., Ingleby et al. 2013; Espaillat et al. 2021). In the case of DK Tau, we find that a single shock model, with \(\mathcal{F}\) and \(f\) values constrained only by the observed excess emission at optical wavelengths, reproduces quite well the values of \(\dot{M}_{\rm acc}\) obtained from the intensity of the emission lines. This is possible only when a single shock dominates the observed excess emission at
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Date & log \(L_{\rm acc}\) (err) & log \(\dot{M}_{\rm acc}\) (err) & \(\mathcal{F}\) & \(f\) & log \(\dot{M}_{\rm acc}\) \\ (yyyy-mm-dd) & (\(L_{\odot}\)) & (\(M_{\odot}\) yr\({}^{-1}\)) & (erg s\({}^{-1}\) cm\({}^{-2}\)) & & (\(M_{\odot}\) yr\({}^{-1}\)) \\ & & based on \(L_{\rm acc}\) & & & based on \(\mathcal{F}\) and \(f\) \\ \hline
[MISSING_PAGE_POST]
\hline \end{tabular}
\end{table}
Table 2: Accretion luminosities, mass accretion rates based on \(L_{\rm acc}\), energy fluxes, filling factors and mass accretion rates based on \(\mathcal{F}\) and \(f\) for the 2010 and 2012 epochs.
Figure 9: Slope of the veiling for the different models (diamonds in rainbow colors), and for the 2010 and 2012 observations (black dots) as a function of the veiling value at 617.50 nm. The vertical error bars are the errors on \(a\), the horizontal ones are the uncertainty of the fits. The gray dotted line highlights the trend followed by the observations.
all wavelengths. This is indeed the case not only in DK Tau but in 15 of the 21 TTS modeled by Ingleby et al. (2013). It is important to note that, as discussed in the following, this result does not rule out models where a single spot has a complex structure (see e.g., Robinson & Espaillat, 2019; Espaillat et al., 2021), which can be detected in time-sequence observations.
### Variability properties
We investigate the evolution of various quantities with the stellar rotation, though it should be noted that eight and nine points (for the 2010 and 2012 epochs, respectively) are low numbers from which to detect a trend. The modulation of several of them, such as the accretion luminosity (see Fig. 5), the mass accretion rate derived from it (see Fig. 7), the value of the veiling at 617.50 nm (see Fig. 3), and the mass accretion rate derived from the shock models (see Fig. 12), seems to be dominated by the stellar rotation for the 2010 epoch. This is less clear for the 2012 epoch, which experiences some perturbation. Nelissen et al. (2023) speculate that DK Tau may be in the stable accretion regime (see e.g., Romanova et al., 2008; Kurosawa & Romanova, 2013) in 2010. This difference between both epochs might be linked to the accretion being less stable in 2012, when the mass accretion rate is higher.
We observe that the accretion luminosity and energy flux seem to be changing in phase for the 2010 epoch. Consequently, the filling factor has to adjust to the energy flux and the mass accretion rate, as we have demonstrated that both methods of calculating the mass accretion rate (via the accretion luminosity or via the shock models) yield similar results that are changing in phase. The apparent variability of the filling factor with the phase can be used to estimate the location of the accretion spot (see Sect. 4.4).
Figure 11: Energy fluxes over time, shown folded in phase with an 8.2 day period, for the 2010 (top panel) and 2012 (bottom panel) data. Cycle 0 for the 2010 epoch starts on December 17, while for the 2012 epoch it starts on the first observation of 2012, as in previous figures. Different colors and symbols represent different rotation cycles.
Figure 10: Filling factors over time, shown folded in phase with an 8.2 day period, for the 2010 (top panel) and 2012 (bottom panel) data. Cycle 0 for the 2010 epoch starts on December 17, while for the 2012 epoch it starts on the first observation of 2012, as in previous figures. Different colors and symbols represent different rotation cycles.
### Accretion spot
Under the assumption of a single, stable accretion spot, Fig. 15 shows the rotation of a star seen edge on (i.e., with the observer looking at the page, while the stellar rotation axis is vertical and in the plane of the page), with the spot at the equator (for illustrative purposes). We can see that, as the star rotates, the spot is first invisible, then a portion of it becomes visible, next the whole spot is in view, until it gradually disappears again behind the star.
Figure 16 shows an accretion spot, at a random location, for different inclinations of the star, with the inclination being the angle of the stellar rotation axis with respect to the line of sight of the observer. The second case is closer to the one corresponding to DK Tau's inclination of \(58^{\circ}\). We can see that the inclination of the star influences not only the portion of the accretion spot that might be visible, but also affects the perceived area of the spot. This is a geometrical effect, with the true area of the spot remaining unchanged.
In the case of DK Tau, the effects depicted on Fig. 15 and Fig. 16 are combined. This means that there are times during the stellar rotation when we do not see the spot, and times when we start seeing it, with a perceived area that is changing due to the inclination of the stellar rotation axis and the rotational phase.
Assuming a point-like accretion spot, Fig. 17 shows the projected surface of the spot (i.e., the perceived area of the spot \(A\) divided by the true area of the spot \(A_{\rm tot}\)) at different latitudes, for a star with an inclination of \(58^{\circ}\), as a function of the stellar rotation phase. When \(A/A_{\rm tot}=1\), we are seeing the spot in full. Approximating the accretion spot as an ideal Lambertian surface, the light it emits in the direction of the line of sight obeys Lam
Figure 14: Mass accretion rate based on the accretion luminosity as a function of the mass accretion rate based on \(\mathcal{F}\) and \(f\), for the 2010 and 2012 data. The gray dotted line shows a one-to-one relationship. The uncertainty on the x-axis is shown on the lower left (see text for details). We note that the accretion shock model method finds almost systematically higher accretion rates than does the accretion luminosity method.
Figure 12: Mass accretion rate based on \(\mathcal{F}\) and \(f\) over time, shown folded in phase with an 8.2 day period, for the 2010 (top panel) and 2012 (bottom panel) data. Cycle 0 for the 2010 epoch starts on December 17, while for the 2012 epoch it starts on the first observation of 2012, as in previous figures. Different colors and symbols represent different rotation cycles. The uncertainty on the y-axis is shown on the lower left (see text for details).
Figure 13: Accretion luminosity as a function of the veiling. The black dots show the range of values for DK Tau, the gray dotted line is the fit. The maroon dashed line shows the trend found for RU Lup by Stock et al. (2022), with the continuous maroon line highlighting their range.
bert's law:
\[I=I_{\rm tot}\,\frac{A}{A_{\rm tot}}=I_{\rm tot}\cos\alpha \tag{5}\]
where \(I\) and \(I_{\rm tot}\) are respectively the perceived and the true luminous intensity of the bright accretion spot, and \(\alpha\) is the angle between the center of the visible stellar disk and the accretion spot. It depends on the inclination \(i\) of the star, on the latitude \(l\) of the spot and on the stellar rotation phase \(p\) via
\[\sin\alpha=\sin i\sin l+\cos i\cos l\cos p. \tag{6}\]
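As a rough illustration of the geometry behind Fig. 17 (a sketch, not the calculation performed in this work), the expression of Eq. (6) can be evaluated as a function of rotation phase for the inclination of \(58^{\circ}\) and the spot latitudes discussed below; treating this expression, clipped at zero when the spot is behind the limb, as the projection factor \(A/A_{\rm tot}\) of Eq. (5) is an assumption of the sketch:

```python
import numpy as np

# Projected visibility of a point-like accretion spot vs. rotation phase,
# for a star inclined by 58 deg and spot latitudes of 45, 60 and 75 deg.
# Assumption: the right-hand side of Eq. (6), clipped at zero, is used as
# the foreshortening factor A/A_tot entering Lambert's law, Eq. (5).
i = np.radians(58.0)
phase = np.linspace(0.0, 1.0, 200)
p = 2.0 * np.pi * phase

for lat_deg in (45.0, 60.0, 75.0):
    l = np.radians(lat_deg)
    mu = np.sin(i) * np.sin(l) + np.cos(i) * np.cos(l) * np.cos(p)  # Eq. (6)
    visibility = np.clip(mu, 0.0, None)   # zero when the spot is hidden
    print(f"latitude {lat_deg:4.1f} deg: A/A_tot in [{visibility.min():.2f}, {visibility.max():.2f}]")
```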
This range of latitudes (derived below) holds under the assumption of a point-like spot; an extended spot would have the effect of broadening the wings of the curves, especially the sharp ones.
By comparing by eye the curves in Fig. 17 with the shape of the curve for the filling factors (i.e., the fraction of the stellar surface covered by accretion spots) folded in phase (see Fig. 10), and assuming that the observed variation of the filling factors are caused by the stellar rotation, we can estimate the location of the accretion spot. For this, we are assuming that we are observing a single, stable accretion spot. In addition, the assumption is that the variation in filling factors depends only on the perceived change of area due to rotation (i.e., that it is only due to a geometrical effect, and is not due to the area of the spot truly changing). We infer that the accretion spot would be located between \(+45^{\circ}\) and \(+75^{\circ}\) in latitude for the 2010 epoch. While the hypotheses made are a simplification of a likely more complex configuration, the curves in Fig. 17 do seem to reproduce the curve in Fig. 10. For the 2012 epoch, however, the perturbation to the rotational modulation prevents a comparison between Fig. 17 and Fig. 10.
Given our current understanding of magnetospheric accretion, because the magnetic field lines channel the circumstellar material into accretion columns, the magnetic pole is thought to be found at a similar location as the accretion spot, as this is where the matter would accrete. For instance, in the magnetohydrodynamic simulations done by Espaillat et al. (2021) for GM Aur, the accretion spot is a few degrees away from the magnetic pole. If we assume that the accretion spot is close to the magnetic pole, we can infer the approximate location of the magnetic pole. For the 2010 epoch, it would be between \(+45^{\circ}\) and \(+75^{\circ}\) in latitude. This translates into a magnetic obliquity (i.e., the orientation of the magnetic field axis compared to the stellar rotation axis) ranging from \(15^{\circ}\) to \(45^{\circ}\). Although this estimation rests on several hypotheses, including the assumption that the observed variation of the filling factors is caused by the stellar rotation, it can be noticed that this estimated range is consistent with the one of \(16^{\circ}\) to \(26^{\circ}\) estimated by Nelissen et al. (2023) for the same epoch and based on measurements of the average line of sight magnetic field. It is also consistent with the magnetic obliquity of \(18^{\circ}\) for 2011 measured by McGinnis et al. (2020) and based on the radial velocity variability of the He i line. In comparison, typical obliquities ranging from \(5^{\circ}\) to \(60^{\circ}\) have been measured for other TTs (see e.g., Johnstone et al., 2014; Alencar
Figure 16: Sketch (not to scale) showing a star with an accretion spot, seen at different inclinations (namely \(90^{\circ}\) or edge-on, \(60^{\circ}\), \(30^{\circ}\), and \(0^{\circ}\) or pole-on). The arrow represents the rotation axis. The case corresponding to DK Tau (i.e., \(58^{\circ}\)) is closer to the second one.
Figure 17: Projected surface of an accretion spot as a function of the rotation phase for a star inclined by \(58^{\circ}\). Different colors correspond to different locations in latitude for the spot.
Figure 15: Sketch (not to scale) showing an accretion spot at the equator of a star (seen edge on) that is rotating.
et al., 2018; McGinnis et al., 2020; Pouilly et al., 2020; Donati et al., 2020; Finociety et al., 2023).
Regarding the structure of the accretion spots, the current consensus is that a lower-density area encircles a denser center (see Espaillat et al., 2021). However, a combination of at least two energy fluxes and their filling factors, for a single observation, is required in order to trace two density areas. Since we are fitting each observed veiling with a single shock model defined by one value of the energy flux and filling factor (see Sect. 3.2), we are approximating the accretion spot structure solely by a single-density region. Extrapolating from previous studies, where the high-density region dominates in the UV range (see e.g., Ingleby et al., 2013; Espaillat et al., 2021), we can speculate that, with our optical range, we are tracing the low-density part of the spot. This is supported by the fact that the highest energy flux that we find, that is \(\mathcal{F}=2.15\times 10^{11}\) erg s\({}^{-1}\) cm\({}^{-2}\), is lower than the value of \(1.00\times 10^{12}\) erg s\({}^{-1}\) cm\({}^{-2}\) used in the literature for the high-density region (see e.g., Espaillat et al., 2021; Pittman et al., 2022). We may therefore be underestimating the total energy flux by not taking the UV emission into account.
## 5 Conclusions
In this paper, we investigated the accretion onto DK Tau, a low-mass classical T Tauri star (cTTs), using spectra collected in 2010 and 2012. We studied the veiling (defined as the ratio between the flux coming from the accretion shock and the photospheric flux) across the optical range and found that the peak values (at \(\sim\)550 nm) range from 0.2 to 0.9 in 2010, and from 0.2 to 1.3 in 2012. On the nights when the peak values are higher, the slope of the values across the wavelength range is steeper.
We first calculated the mass accretion rates (\(\dot{M}_{\rm acc}\)) for each observation using accretion-powered emission lines and computing the accretion luminosities. We find values ranging from log (\(\dot{M}_{\rm acc}\)[\(M_{\odot}\) yr\({}^{-1}\)]) = -8.20 to log (\(\dot{M}_{\rm acc}\)[\(M_{\odot}\) yr\({}^{-1}\)]) = -7.62 in 2010, and from log (\(\dot{M}_{\rm acc}\)[\(M_{\odot}\) yr\({}^{-1}\)]) = -8.15 to log (\(\dot{M}_{\rm acc}\)[\(M_{\odot}\) yr\({}^{-1}\)]) = -7.40 in 2012. These values agree with those previously observed (see e.g., Gullbring et al., 1998; Fischer et al., 2011; Ingleby et al., 2013; Fang et al., 2018; Gangi et al., 2022).
Additionally, we find, as Stock et al. (2022) have found for the cTTs RU Lup, a power-law correlation between the accretion luminosity and the optical veiling. This means that, for DK Tau and within the uncertainties, by measuring the veiling at a single wavelength, one can infer the accretion luminosity, which is a global quantity in terms of wavelength. If a similar correlation is found for additional stars, it could be a helpful means of verifying the estimation of the accretion luminosity. Because the veiling does not depend on extinction, it could be particularly useful for high extinction regions.
We also derived the values of the mass accretion rates by fitting the optical veiling with accretion shock models. These models are characterized by an energy flux (\(\mathcal{F}\)) which is carried by the accretion column into the shock and is correlated with the density of the material in the column, and a filling factor (\(f\)) that represents the area of the star covered by the accretion spot. We find that \(\mathcal{F}\) ranges from \(1.30\times 10^{10}\) erg s\({}^{-1}\) cm\({}^{-2}\) to \(9.45\times 10^{10}\) erg s\({}^{-1}\) cm\({}^{-2}\) in 2010, and from \(1.00\times 10^{10}\) erg s\({}^{-1}\) cm\({}^{-2}\) to \(2.15\times 10^{11}\) erg s\({}^{-1}\) cm\({}^{-2}\) in 2012; while \(f\) ranges accordingly from 4.8 % to 2.6 % in 2010, and from 11.7 % to 3.0 % in 2012. These values are comparable to the ones mentioned in the literature (see e.g., Robinson and Espaillat, 2019; Espaillat et al., 2021; Pittman et al., 2022). For each night, we are using a single energy flux with its assigned filling factor, making the approximation that one homogeneous accretion spot is dominating the optical emission. The values of \(\dot{M}_{\rm acc}\) extracted from both methods agree within a factor of 2 for most nights.
Several quantities, including the values of veiling at 617.50 nm, the accretion luminosity, and the mass accretion rate seem to be rotationally modulated. This is more apparent for the 2010 dataset. The accretion might be more intrinsically variable in the 2012 epoch. For the 2010 epoch, we compared the values of the filling factors folded in phase with curves of the projected surface of a theoretical single, stable accretion spot for a star with DK Tau's inclination. We thus estimate that the spot could be located between \(+45^{\circ}\) and \(+75^{\circ}\) in latitude, under the assumption that the observed variation of the filling factors is caused by the stellar rotation. Assuming that the accretion spot is found at a similar location as the magnetic pole, we infer a magnetic obliquity (i.e., the angle between the magnetic field axis and the rotation axis of the star) ranging from \(15^{\circ}\) to \(45^{\circ}\). This is consistent with the ones estimated by Nelissen et al. (2023) and McGinnis et al. (2020).
###### Acknowledgements.
The authors thank N. Calvet for helpful discussions and E. Gangi for providing the flux-calibrated spectra of DK Tau. Based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 743029 (EASY: Ejection Accretion Structures in YSOs).
|
2309.14631 | Near-threshold hadron scattering with effective field theory | When an exotic hadron locates near the threshold with the channel couplings,
the internal structure of the exotic hadron is related to the scattering
length. To incorporate the threshold effect, the Flatt\'{e} amplitude has been
often used to determine the scattering length. It is however known that an
additional constraint is imposed on the Flatte amplitude near the threshold. We
discuss this problem by using the effective field theory for the
coupled-channel scattering. | Katsuyoshi Sone, Tetsuo Hyodo | 2023-09-26T03:10:57Z | http://arxiv.org/abs/2309.14631v1 | # Near-threshold hadron scattering with effective field theory
###### Abstract
When an exotic hadron locates near the threshold with the channel couplings, the internal structure of the exotic hadron is related to the scattering length. To incorporate the threshold effect, the Flatte amplitude has been often used to determine the scattering length. It is however known that an additional constraint is imposed on the Flatte amplitude near the threshold. We discuss this problem by using the effective field theory for the coupled-channel scattering.
## 1 Introduction
Exotic hadrons, such as \(T_{cc}\), \(X(3872)\), and \(f_{0}(980)\), are currently attracting attention [1; 2]. Many exotic hadrons are known to appear near the threshold of a two-hadron scattering channel. In such cases, the internal structure of exotic hadrons is strongly related to the scattering length. When there are decay channels in addition to the scattering channel, it is also necessary to consider the effect of the channel couplings. For the analysis of near-threshold exotic hadrons, the Flatte amplitude [3], which includes the threshold effect, is now often used. Since each component of the Flatte amplitude can be written in the form of the effective range expansion, the scattering length \(a_{\rm F}\) can be determined from the Flatte amplitude.
However, the Flatte amplitude has the following problem; in the case of the two-channel scattering, the Flatte amplitude has three parameters, but the number of parameters is reduced to two near the threshold [4]. Thus, the Flatte amplitude is not a general amplitude in the threshold region, and the Flatte scattering length \(a_{\rm F}\) may not be general. In more general frameworks, how would the scattering length be described?
## 2 Comparison of Flatte and EFT
We take the case of the two-channel scattering as an example and compare the Flatte amplitude with the general form of the scattering amplitude determined from the optical theorem. We consider the case that the threshold of channel 2 is higher than that of channel 1. The Flatte amplitude at energy \(E\) is written by
\[f^{\rm F}=\frac{1}{-2E+2E_{\rm BW}-ig_{1}^{2}p-ig_{2}^{2}k}\begin{pmatrix}g_{1}^{2}&g_{1}g_{2}\\ g_{1}g_{2}&g_{2}^{2}\end{pmatrix}, \tag{1}\]
where \(g_{1}\) and \(g_{2}\) represent the coupling constants and \(E_{\rm BW}\) is the bare energy. \(p(E)\) and \(k(E)\) denote the momenta in channels 1 and 2, respectively. It is known that the Flatte amplitude
satisfies the optical theorem. However, when \(f^{\rm F}\) is expanded up to the first order in \(k\) near the channel 2 threshold, it is known that the amplitude depends only on \(R=g_{2}^{2}/g_{1}^{2}\) and \(\alpha=2E_{\rm BW}/(g_{1}^{2}p_{0})\), reducing the number of independent parameters to two [4], where \(p_{0}=p(E=0)\) is the momentum of channel 1 at the threshold of channel 2.
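As a quick numerical illustration (a sketch, not part of the original analysis), the amplitude of Eq. (1) can be evaluated for hypothetical couplings and momenta and checked against the two-channel optical theorem \(\mathrm{Im}\,f_{11}=|f_{11}|^{2}p+|f_{12}|^{2}k\), valid when both channels are open:

```python
import numpy as np

# Flatte amplitude of Eq. (1) for hypothetical couplings and momenta
# (arbitrary units); both channels are taken to be open (p, k real).
g1, g2, E_BW = 1.3, 0.9, 0.05
E, p, k = 0.02, 0.35, 0.10

D = -2 * E + 2 * E_BW - 1j * g1**2 * p - 1j * g2**2 * k
f = np.array([[g1**2, g1 * g2], [g1 * g2, g2**2]]) / D

lhs = f[0, 0].imag
rhs = abs(f[0, 0])**2 * p + abs(f[0, 1])**2 * k
print(lhs, rhs)   # identical up to rounding: the optical theorem holds
```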
On the other hand, K-matrix, M-matrix, etc. are known as general scattering amplitudes satisfying the optical theorem [5]. In this study, we use the EFT amplitude [6] derived from the non-relativistic effective field theory (EFT) with contact interactions:
\[f^{\rm EFT}=\frac{1}{\frac{1}{a_{12}^{2}}-\left(\frac{1}{a_{11}}+ip\right)\left( \frac{1}{a_{22}}+ik\right)}\left(\begin{matrix}\frac{1}{a_{22}}+ik&\frac{1}{a_ {12}}\\ \frac{1}{a_{12}}&\frac{1}{a_{11}}+ip\end{matrix}\right). \tag{2}\]
Here \(a_{11},a_{12},a_{22}\) are the parameters of the EFT amplitude in units of length. The EFT amplitude, like the Flatte amplitude, contains three parameters and satisfies the optical theorem. When the EFT amplitude is expanded up to first order in \(k\), the number of parameters does not decrease; it remains three even near the threshold. Therefore, the use of the EFT amplitude solves the problem of the Flatte amplitude.
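The same kind of check can be made for the EFT amplitude; for real parameters and open channels, the optical theorem is equivalent to \(\mathrm{Im}\,(f^{\rm EFT})^{-1}=-\mathrm{diag}(p,k)\). A minimal sketch with hypothetical parameter values:

```python
import numpy as np

# EFT amplitude of Eq. (2) for hypothetical parameters a_ij (units of length)
# and channel momenta p, k (inverse length).
a11, a12, a22 = 2.0, 1.5, -3.0
p, k = 0.40, 0.15

D = 1 / a12**2 - (1 / a11 + 1j * p) * (1 / a22 + 1j * k)
f = np.array([[1 / a22 + 1j * k, 1 / a12],
              [1 / a12, 1 / a11 + 1j * p]]) / D

print(np.linalg.inv(f).imag)   # ~ [[-p, 0], [0, -k]], i.e. unitarity holds
```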
Although the EFT amplitude is found to be more general than the Flatte amplitude, the relationship between the EFT amplitude and the Flatte amplitude is not clear. This is because the EFT amplitude has an inverse matrix, while the Flatte amplitude does not, and thus the EFT amplitude and the Flatte amplitude cannot be directly mapped to each other. In order to clarify the relationship between the two, we construct a scattering amplitude that can represent both the EFT amplitude and the Flatte amplitude.
## 3 General amplitude
We introduce the general amplitude \(f^{\rm G}\) with a new parametrization based on the EFT amplitude. \(f^{\rm G}\) is represented by dimensionless constants \(\gamma\) and \(\epsilon\) and a parameter \(A_{22}\) in units of the length:
\[f^{\rm G}=\frac{1}{-\frac{1}{A_{22}^{2}}-i\frac{1}{A_{22}}\epsilon p-i\frac{1}{A_{22}}k+\gamma pk}\left(\begin{matrix}\frac{1}{A_{22}}\epsilon+i\gamma k&\frac{1}{A_{22}}\sqrt{\epsilon-\gamma}\\ \frac{1}{A_{22}}\sqrt{\epsilon-\gamma}&\frac{1}{A_{22}}+i\gamma p\end{matrix}\right). \tag{3}\]
When \(\gamma=\epsilon\), the channel couplings vanish and \(f^{\rm G}\) reduces to the diagonal form \(f^{\rm G}=\mathrm{diag}\left(1/(-1/(A_{22}\epsilon)-ip),\,1/(-1/(A_{22})-ik)\right)\). From this, \(A_{22}\) represents the scattering length of channel 2 in the absence of the channel couplings.
Next, we discuss the relation between the general amplitude, the EFT amplitude, and the Flatte amplitude. When \(\gamma=0\), \(f^{\rm G}\) is given as follows:
\[f^{\rm G}=\frac{1}{-\frac{1}{A_{22}}-i\epsilon p_{0}-ik}\left(\begin{matrix} \epsilon&\sqrt{\epsilon}\\ \sqrt{\epsilon}&1\end{matrix}\right). \tag{4}\]
This amplitude is equivalent to the Flatte amplitude up to first order in \(k\). In other words, the general amplitude with \(\gamma=0\) reproduces the Flatte amplitude. Furthermore, the decrease of the Flatte amplitude parameters near the threshold can be understood from the condition \(\gamma=0\) in the general amplitude. In this case, the determinant of the general amplitude behaves as \(\lim_{\gamma\to 0}\det\left(f^{\rm G}\right)=0\). From this feature, the inverse of \(f^{\rm G}\) does not exist when \(\gamma=0\). Since the general amplitude is obtained by a different parametrization of the EFT amplitude, for \(\gamma\neq 0\), the general amplitude corresponds to the EFT amplitude. In other words, both the Flatte amplitude and the EFT amplitude can be obtained from the general amplitude by choosing the parameter \(\gamma\).
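The vanishing of the determinant in the \(\gamma\to 0\) limit is easy to see numerically; the following sketch evaluates \(\det f^{\rm G}\) from Eq. (3) for decreasing \(\gamma\), with hypothetical values of the remaining parameters:

```python
import numpy as np

# Determinant of the general amplitude of Eq. (3) for decreasing gamma
# (hypothetical A_22, epsilon and momenta; epsilon > gamma is required).
A22, eps, p, k = -3.0, 0.8, 0.40, 0.15

def f_general(gamma):
    D = -1 / A22**2 - 1j * eps * p / A22 - 1j * k / A22 + gamma * p * k
    N = np.array([[eps / A22 + 1j * gamma * k, np.sqrt(eps - gamma) / A22],
                  [np.sqrt(eps - gamma) / A22, 1 / A22 + 1j * gamma * p]])
    return N / D

for gamma in (0.5, 0.1, 0.01, 0.0):
    print(gamma, abs(np.linalg.det(f_general(gamma))))   # tends to zero
```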
Next, we perform the effective range expansion for \(f^{\rm G}\) in terms of \(k\), and determine the scattering length. We expand the denominator of \(f^{\rm G}_{22}\) in powers of the momentum \(k\):
\[f^{\rm G}_{22}=\frac{1}{-\frac{1}{A_{22}}\left(\frac{\frac{1}{A_{22}}+i\epsilon p_{0}}{\frac{1}{A_{22}}+i\gamma p_{0}}\right)-\frac{i(\epsilon-\gamma)}{2(1+iA_{22}\gamma p_{0})p_{0}^{2}}k^{2}-ik+O(k^{4})}. \tag{5}\]
This shows that \(f^{\rm G}_{22}\) can be written as an effective range expansion in \(k\), and we can define the scattering length \(a_{\rm G}\) of the general amplitude as follows:
\[a_{\rm G}=A_{22}\left(\frac{\frac{1}{A_{22}}+i\gamma p_{0}}{\frac{1}{A_{22}}+i \epsilon p_{0}}\right). \tag{6}\]
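As a consistency check (a sketch with hypothetical parameters, holding \(p\) fixed at \(p_{0}\)), the closed form of Eq. (6) can be compared with the scattering length extracted from \(f^{\rm G}_{22}\) at small \(k\) via \(1/f^{\rm G}_{22}=-1/a_{\rm G}-ik+\ldots\):

```python
import numpy as np

# Compare Eq. (6) with the scattering length extracted from f^G_22 at small k.
# Hypothetical parameters; p is held fixed at p_0 for this check.
A22, eps, gamma, p0 = -3.0, 0.8, 0.3, 0.40

a_G = A22 * (1 / A22 + 1j * gamma * p0) / (1 / A22 + 1j * eps * p0)   # Eq. (6)

k = 1e-4
D = -1 / A22**2 - 1j * eps * p0 / A22 - 1j * k / A22 + gamma * p0 * k
f22 = (1 / A22 + 1j * gamma * p0) / D
a_extracted = -1.0 / (1.0 / f22 + 1j * k)   # from 1/f22 = -1/a_G - ik + ...
print(a_G, a_extracted)                     # the two values agree
```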
In the same way, we expand \(f^{\rm G}_{11}\) in \(k\):
\[f^{\rm G}_{11}=\frac{\frac{\epsilon^{2}}{\epsilon-\gamma}}{-\frac{1}{A_{22}} \frac{\epsilon}{\epsilon-\gamma}-i\frac{\epsilon^{2}}{\epsilon-\gamma}p_{0}- \left(A_{22}\frac{\gamma}{\epsilon}+i\frac{\epsilon^{2}}{2(\epsilon-\gamma)p _{0}}\right)-ik+O(k^{3})}. \tag{7}\]
Because the power series in Eq. (7) contains terms such as \(k^{3}\), \(f^{\rm G}_{11}\) cannot be written in the form of the effective range expansion. Also, the coefficients of each term in the denominator of \(f^{\rm G}_{11}\) are different from those of \(f^{\rm G}_{22}\) in Eq. (5). In particular, the constant term in the denominator of \(f^{\rm G}_{11}\) is different from the scattering length in Eq. (6). On the other hand, as can be seen from Eq. (1), the coefficients of the power series of the Flatte amplitude are common for all the components, and the constant terms of the denominator of the scattering amplitudes are entirely given by the Flatte scattering length.
To summarize, from Eqs. (5) and (7), in general the \(f_{22}\) component can be written as an effective range expansion near the threshold of channel 2, but the \(f_{11}\) component cannot, and the scattering length cannot be defined from it. On the other hand, when \(\gamma=0\), \(f^{\rm G}\) reduces to the Flatte amplitude, and the scattering length \(a_{\rm G}\) becomes the Flatte scattering length \(a_{\rm F}\) as follows:
\[a_{\rm G}=A_{22}\left(\frac{\frac{1}{A_{22}}+i\gamma p_{0}}{\frac{1}{A_{22}}+ i\epsilon p_{0}}\right)\xrightarrow{\gamma=0}\frac{1}{\frac{1}{A_{22}}+i \epsilon p_{0}}=a_{\rm F}. \tag{8}\]
Similarly, the constant term of the denominator of \(f^{\rm G}_{11}\) in Eq. (7) becomes \(a_{\rm F}\):
\[\frac{1}{\frac{1}{A_{22}}\frac{\epsilon}{\epsilon-\gamma}+i\frac{\epsilon^{2 }}{\epsilon-\gamma}p_{0}}\xrightarrow{\gamma=0}\frac{1}{\frac{1}{A_{22}}+i \epsilon p_{0}}=a_{\rm F}, \tag{9}\]
In general, if \(\gamma\) is nonzero, the constant term in the denominator of \(f^{\rm G}_{11}\) is different from the correct scattering length \(a_{\rm G}\), so an analysis using the Flatte amplitude where the scattering length appears in \(f_{11}\) may not give the correct scattering length.
## 4 Application
In order to verify the effect of the value of \(\gamma\) on the scattering length \(a_{\rm G}\), we fix the constant term of the denominator of \(f^{\rm G}_{11}\) and vary \(\gamma\). In this study, we consider the \(\pi\pi\)-\(K\bar{K}\) system with \(f_{0}(980)\), which has already been analyzed by the Flatte amplitude. In Ref. [7], the constant term in the denominator of \(f_{\pi\pi}\) corresponding to \(f^{\rm G}_{11}\) is determined to be \(-1.0-1.0i\) GeV in the
analysis using the Flatte amplitude. In this case, two conditions are imposed on the parameters \(A_{22},\gamma,\epsilon\). In order to compare the scattering length \(a_{\rm G}\) with the scattering length \(a_{\rm F}\) of the Flatte amplitude, we calculate \(a_{\rm G}\) for different values of \(\gamma\). The change of the scattering length \(a_{\rm G}\) when \(\gamma\) is varied from \(-0.04\) to \(+0.04\) is shown in Fig. 1.
In Fig. 1, the point represented by the cross (\(\gamma=0\)) corresponds to the scattering length \(a_{\rm F}\) of the Flatte amplitude, and the general scattering length \(a_{\rm G}\) deviates from \(a_{\rm F}\) by \(\sim 0.5\) fm when \(\gamma\) is changed from \(-0.04\) to \(+0.04\). In the present case, the imaginary part of \(a_{\rm G}\) does not depend on \(\gamma\) as seen in Fig. 1. This property can be analytically shown by the imaginary part of Eq. (6). We find that the scattering length \(a_{\rm G}\) varies quantitatively for different \(\gamma\). Therefore, the scattering length determined from the Flatte amplitude \(a_{\rm F}\) with \(\gamma=0\) may deviate from the correct scattering length \(a_{\rm G}\) with \(\gamma\neq 0\) numerically.
## 5 Summary
In this study, we discuss the properties of general scattering amplitudes with the channel couplings. First, the EFT amplitude and the Flatte amplitude are compared, showing that the EFT amplitude does not reduce to the Flatte amplitude directly. Next, we solve this problem by introducing a new parametrization of the EFT amplitude to construct the general amplitude that includes both the EFT amplitude and the Flatte amplitude. Finally, by applying the general amplitude to the \(\pi\pi\)-\(K\bar{K}\) system and quantitatively comparing the correct scattering length with the one determined from the Flatte amplitude, we show that the scattering length of the Flatte amplitude may deviate from the correct scattering length by about 0.5 fm.
|
2309.06235 | AugerPrime Surface Detector Electronics | Operating since 2004, the Pierre Auger Observatory has led to major advances
in our understanding of the ultra-high-energy cosmic rays. The latest findings
have revealed new insights that led to the upgrade of the Observatory, with the
primary goal of obtaining information on the primary mass of the most energetic
cosmic rays on a shower-by-shower basis. In the framework of the upgrade,
called AugerPrime, the 1660 water-Cherenkov detectors of the surface array are
equipped with plastic scintillators and radio antennas, allowing us to enhance
the composition sensitivity. To accommodate new detectors and to increase
experimental capabilities, the electronics is also upgraded. This includes
better timing with up-to-date GPS receivers, higher sampling frequency,
increased dynamic range, and more powerful local processing of the data. In
this paper, the design characteristics of the new electronics and the enhanced
dynamic range will be described. The manufacturing and test processes will be
outlined and the test results will be discussed. The calibration of the SD
detector and various performance parameters obtained from the analysis of the
first commissioning data will also be presented. | The Pierre Auger Collaboration, A. Abdul Halim, P. Abreu, M. Aglietta, I. Allekotte, K. Almeida Cheminant, A. Almela, R. Aloisio, J. Alvarez-Muñiz, J. Ammerman Yebra, G. A. Anastasi, L. Anchordoqui, B. Andrada, S. Andringa, Anukriti, C. Aramo, P. R. Araújo Ferreira, E. Arnone, J. C. Arteaga Velázquez, P. Assis, G. Avila, E. Avocone, A. M. Badescu, A. Bakalova, F. Barbato, A. Bartz Mocellin, J. A. Bellido, C. Berat, M. E. Bertaina, G. Bhatta, M. Bianciotto, P. L. Biermann, V. Binet, K. Bismark, T. Bister, J. Biteau, J. Blazek, C. Bleve, J. Blümer, M. Boháčová, D. Boncioli, C. Bonifazi, L. Bonneau Arbeletche, N. Borodai, J. Brack, P. G. Brichetto Orchera, F. L. Briechle, A. Bueno, S. Buitink, M. Buscemi, A. Bwembya, M. Büsken, K. S. Caballero-Mora, S. Cabana-Freire, L. Caccianiga, R. Caruso, A. Castellina, F. Catalani, G. Cataldi, L. Cazon, M. Cerda, A. Cermenati, J. A. Chinellato, J. Chudoba, L. Chytka, R. W. Clay, A. C. Cobos Cerutti, R. Colalillo, A. Coleman, M. R. Coluccia, R. Conceição, A. Condorelli, G. Consolati, M. Conte, F. Convenga, D. Correia dos Santos, P. J. Costa, C. E. Covault, M. Cristinziani, C. S. Cruz Sanchez, S. Dasso, K. Daumiller, B. R. Dawson, R. M. de Almeida, J. de Jesús, S. J. de Jong, J. R. T. de Mello Neto, I. De Mitri, J. de Oliveira, D. de Oliveira Franco, F. de Palma, V. de Souza, B. P. de Souza de Errico, E. De Vito, A. Del Popolo, O. Deligny, N. Denner, L. Deval, A. di Matteo, M. Dobre, C. Dobrigkeit, J. C. D'Olivo, L. M. Domingues Mendes, J. C. dos Anjos, R. C. dos Anjos, J. Ebr, F. Ellwanger, M. Emam, R. Engel, I. Epicoco, M. Erdmann, A. Etchegoyen, C. Evoli, H. Falcke, J. Farmer, G. Farrar, A. C. Fauth, N. Fazzini, F. Feldbusch, F. Fenu, A. Fernandes, B. Fick, J. M. Figueira, A. Filipčič, T. Fitoussi, B. Flaggs, T. Fodran, T. Fujii, A. Fuster, C. Galea, C. Galelli, B. García, C. Gaudu, H. Gemmeke, F. Gesualdi, A. Gherghel-Lascu, P. L. Ghia, U. Giaccari, J. Glombitza, F. Gobbi, F. Gollan, G. Golup, J. P. Gongora, J. M. González, N. González, I. Goos, A. Gorgi, M. Gottowik, T. D. Grubb, F. Guarino, G. P. Guedes, E. Guido, M. Gómez Berisso, P. F. Gómez Vitale, D. Góra, S. Hahn, P. Hamal, M. R. Hampel, P. Hansen, D. Harari, V. M. Harvey, A. Haungs, T. Hebbeker, C. Hojvat, P. Horvath, M. Hrabovský, T. Huege, J. R. Hörandel, A. Insolia, P. G. Isar, P. Janecek, J. A. Johnsen, J. Jurysek, K. H. Kampert, B. Keilhauer, A. Khakurdikar, V. V. Kizakke Covilakam, H. O. Klages, M. Kleifges, F. Knapp, N. Kunka, B. L. Lago, N. Langner, M. A. Leigui de Oliveira, Y. Lema-Capeans, A. Letessier-Selvon, I. Lhenry-Yvon, L. Lopes, L. Lu, Q. Luce, J. P. Lundquist, A. Machado Payeras, M. Majercakova, D. Mandat, B. C. Manning, P. Mantsch, S. Marafico, F. M. Mariani, A. G. Mariazzi, I. C. Mariş, G. Marsella, D. Martello, S. Martinelli, M. A. Martins, O. Martínez Bravo, H. J. Mathes, J. Matthews, G. Matthiae, E. Mayotte, S. Mayotte, P. O. Mazur, G. Medina-Tanco, J. Meinert, D. Melo, A. Menshikov, C. Merx, S. Michal, M. I. Micheletti, L. Miramonti, S. Mollerach, F. Montanet, L. Morejon, C. Morello, K. Mulrey, R. Mussa, W. M. Namasaka, S. Negi, L. Nellen, K. Nguyen, G. Nicora, M. Niechciol, D. Nitz, D. Nosek, V. Novotny, L. Nožka, A. Nucita, L. A. Núñez, C. Oliveira, M. Palatka, J. Pallotta, S. Panja, G. Parente, T. Paulsen, J. Pawlowsky, M. Pech, R. Pelayo, L. A. S. Pereira, E. E. Pereira Martins, J. Perez Armand, L. Perrone, S. Petrera, C. Petrucci, T. Pierog, M. Pimenta, M. Platino, B. Pont, M. Pothast, M. Pourmohammad Shahvar, P. 
Privitera, M. Prouza, A. Puyleart, C. Pérez Bertolli, J. Pękala, S. Querchfeld, J. Rautenberg, D. Ravignani, J. V. Reginatto Akim, M. Reininghaus, J. Ridky, F. Riehn, M. Risse, V. Rizi, W. Rodrigues de Carvalho, E. Rodriguez, J. Rodriguez Rojo, M. J. Roncoroni, S. Rossoni, M. Roth, E. Roulet, A. C. Rovero, P. Ruehl, A. Saftoiu, M. Saharan, F. Salamida, H. Salazar, G. Salina, J. D. Sanabria Gomez, E. M. Santos, E. Santos, F. Sarazin, R. Sarmento, R. Sato, P. Savina, V. Scherini, H. Schieler, M. Schimassek, M. Schimp, D. Schmidt, O. Scholten, H. Schoorlemmer, P. Schovánek, F. G. Schröder, J. Schulte, T. Schulz, C. M. Schäfer, S. J. Sciutto, M. Scornavacche, A. Segreto, S. Sehgal, S. U. Shivashankara, G. Sigl, G. Silli, O. Sima, F. Simon, R. Smau, P. Sommers, J. F. Soriano, R. Squartini, M. Stadelmaier, S. Stanič, J. Stasielak, P. Stassi, M. Straub, S. Strähnz, T. Suomijärvi, A. D. Supanitsky, Z. Svozilikova, Z. Szadkowski, F. Sánchez, F. Tairli, A. Tapia, C. Taricco, C. Timmermans, O. Tkachenko, P. Tobiska, C. J. Todero Peixoto, B. Tomé, Z. Torrès, A. Travaini, P. Travnicek, C. Trimarelli, M. Tueros, M. Unger, L. Vaclavek, M. Vacula, J. F. Valdés Galicia, L. Valore, E. Varela, D. Veberič, C. Ventura, I. D. Vergara Quispe, V. Verzi, J. Vicha, J. Vink, J. Vlastimil, S. Vorobiov, A. Vásquez-Ramírez, C. Watanabe, A. A. Watson, A. Weindl, L. Wiencke, H. Wilczyński, D. Wittkowski, B. Wundheiler, B. Yue, A. Yushkov, O. Zapparrata, E. Zas, D. Zavrtanik, M. Zavrtanik, R. Šmída | 2023-09-12T13:51:14Z | http://arxiv.org/abs/2309.06235v3 | # AugerPrime Surface Detector Electronics
###### Abstract
Operating since 2004, the Pierre Auger Observatory has led to major advances in our understanding of the ultra-high-energy cosmic rays. The latest findings have revealed new insights that led to the upgrade of the Observatory, with the primary goal of obtaining information on the primary mass of the most energetic cosmic rays on a shower-by-shower basis. In the framework of the upgrade, called AugerPrime, the 1660 water-Cherenkov detectors of the surface array are equipped with plastic scintillators and radio antennas, allowing us to enhance the composition sensitivity. To accommodate new detectors and to increase experimental capabilities, the electronics is also upgraded. This includes better timing with up-to-date GPS receivers, higher sampling frequency, increased dynamic range, and more powerful local processing of the data. In this paper, the design characteristics of the new electronics and the enhanced dynamic range will be described. The manufacturing and test processes will be outlined and the test results will be discussed. The calibration of the SD detector and various performance parameters obtained from the analysis of the first commissioning data will also be presented.
Keywords:Large detector systems for particle and astroparticle physics, Detector readout concepts, electronics, trigger and data acquisition methods
* 1 Introduction
* 2 AugerPrime components
* 3 Requirements and general implementation
* 3.1 Functional requirements
* 3.2 Configuration requirements
* 3.3 Electronics implementation
* 4 Design characteristics
* 4.1 The Surface Detector dynamic range
* 4.2 Front-end electronics
* 4.3 Timing
* 4.4 Control and monitoring
* 4.5 Firmware and trigger implementation
* 4.6 Local processing software
* 4.7 Implementation and interfaces
* 5 Production, tests, and installation
* 5.1 Production strategy
* 5.2 Tests and verification strategy
* 5.3 Manufacturing tests
* 5.4 Environmental stress screening
* 5.5 Assembly, final verification, and deployment
* 6 Calibration
* 7 Obtained performances
* 7.1 Noise performance
* 7.2 Dynamic range
* 7.3 Uniformity and long-term performance
* 8 Conclusions
* A Diagram of the UUB architecture
## 1 Introduction
The Pierre Auger Observatory is located near Malargüe, Mendoza, Argentina. The surface detector (SD) array of the observatory consists of 1600 water-Cherenkov detectors (WCD) on a 1500 m triangular grid covering 3000 km\({}^{2}\). Another 60 WCDs, with a 750 m spacing, form a 27 km\({}^{2}\) infill region allowing extension to lower energies. The array is overlooked by four fluorescence detector (FD) sites, each hosting 6 telescopes viewing a 180\({}^{\circ}\) azimuth by 30\({}^{\circ}\) elevation field of view. Three additional telescopes at one of the sites can be tilted 30\({}^{\circ}\) higher to view lower energy showers and overlook the infilled surface array.
Secondary particles of extensive air showers (EAS) induced by ultra-high-energy cosmic rays (UHECRs) are sampled at ground level by the SD. The FD measures EAS development by detecting the nitrogen UV light produced by the shower particles along their passage through the atmosphere. Additional instrumentation for R&D on muon (UMD) and radio-based (AERA) detection is also located on the site. A description of the current observatory can be found in Ref. [1].
In almost 20 years of operation, the Pierre Auger Observatory has provided, with unprecedented statistics and precision, major breakthroughs in the field of UHECRs. The steepening of their flux is now confirmed beyond any doubt as a succession of different power laws [2]. The primary mass composition is found to get heavier with increasing energy [3]. A large-scale anisotropy has been discovered above 8\(\times\)10\({}^{18}\) eV, proving that these UHECRs are of extragalactic origin [4], while anisotropies that mirror the distribution of nearby extragalactic matter have been evidenced at intermediate angular scales above \(\simeq\)4\(\times\)10\({}^{19}\) eV [5]. Furthermore, important results have been obtained also for neutrinos and photons.
To make further progress, the Auger Collaboration decided to improve the SD sensitivity to the cosmic ray composition. The Observatory is therefore undergoing a significant upgrade of its experimental capabilities called AugerPrime, with the main aim of disentangling the muonic and electromagnetic components of extensive air showers, thereby enhancing the ability to study UHECR composition. This will allow us to understand the origin of the flux suppression, providing fundamental constraints on the sources and their properties, to perform composition-assisted anisotropies, and to add information about hadronic interaction effects at the highest energies. Enhanced trigger capabilities will furthermore provide higher sensitivity to neutrinos and photons.
After a brief description of the different components of AugerPrime, the design of the Surface Detector upgraded electronics will be described in the following. The test processes and the various test results will be presented. The calibration of SD stations will be outlined. Finally, the detector performance inferred from the analysis of the first data will be discussed.
## 2 AugerPrime components
A WCD consists of a 3.6 m diameter tank containing a sealed liner with a reflective inner surface. The liner contains 12 000 liters of ultra-pure water. Three 9-inch diameter Photonis XP1805/D1 photomultiplier tubes (PMTs) are symmetrically distributed on the surface of the liner at a distance of 1.20 m from the tank center axis and look downward through windows of clear polyethylene into the water. They record the Cherenkov light produced by the passage of relativistic charged particles through the water. The tank height of 1.2 m makes it also sensitive to high energy photons,
which convert to electron-positron pairs in the water volume. Each surface detector station is self-contained. A solar power system currently provides an average of 10 W for the PMTs and the electronics package, consisting of a processor, Global Positioning System (GPS) receiver, radio transceiver, and power controller.
To increase the dynamic range of the WCD signal measurement, a small PMT (SPMT), a 1-inch Hamamatsu R8619 PMT, dedicated to the unsaturated measurement of large signals, is added to one of the WCD liner ports. An already existing LED flasher is mounted to another port of the water tank liner. The LED flasher incorporates two LEDs, which can be pulsed independently or simultaneously and with variable amplitude. This allows the linearity of the photomultipliers to be tested remotely.
A scintillator-based surface detector (SSD) consists of an aluminum box of \(3.8\,\mathrm{m}\times 1.3\,\mathrm{m}\), containing two scintillator panels, each composed of extruded polystyrene scintillator bars of \(1.6\,\mathrm{m}\) length, \(5\,\mathrm{cm}\) width, and \(1\,\mathrm{cm}\) thickness. The scintillator light is read out with wavelength-shifting fibers inserted into straight extruded holes in the scintillator bars. The 1-mm diameter fibers are bundled in a PMMA (poly(methyl methacrylate)) cylinder which is connected to a single PMT. The PMT is a 1.5-inch diameter bi-alkali Hamamatsu R9420. The power supply of the PMT is based on a custom design manufactured by the ISEG company. The charge value for a Minimum Ionizing Particle (MIP) determined by using a hodoscope trigger is more than 30 photo-electrons (p.e.).
The Radio Detector (RD) is a short aperiodic loaded loop antenna of 122 cm diameter, measuring radio signals from extensive air showers in the 30 to 80 MHz band. It features a simple mechanical design, minimizing cost and easing handling and maintenance. The antenna features a \(392\,\Omega\) resistor at the bottom, which shapes the antenna main lobe towards the zenith and suppresses the dependence on structures below it, in particular the SSD, the WCD and potentially variable ground conditions. The SSD and RD are mounted atop each WCD detector except for the detector stations on the border of the array, where the shower core measurement is no longer necessary since a high-level trigger requires a ring of stations around the shower core.
In addition, an Underground Muon Detector (UMD), consisting of buried muon counters deployed in the infill area, gives a direct measurement of the muon content of the showers and of its time structure. The UMD basic unit consists of \(3\times 10\,\mathrm{m}^{2}\) modules, each segmented into 64 plastic scintillator strips, buried \(2.3\,\mathrm{m}\) alongside a WCD at a distance of at least \(7\,\mathrm{m}\).
The upgrade of the SD electronics (SDEU) allows us to process signals from the SSD and the SPMT, in addition to those of the WCD large PMTs, to obtain an absolute time indication, and to provide a digital interface for the RD and UMD detectors. Furthermore, the new electronics is designed to improve both resolutions and data processing capabilities. In the main array, the existing communication infrastructure of the stations is used, and therefore, no upgrade of the main communication system is required. The station power system remains unchanged except for new solar panels to accommodate the increased power consumption due to the RD.
A general description of AugerPrime and its physics motivations can be found in the Preliminary Design Report [6]. An AugerPrime detector station with the SSD scintillator and the RD antenna atop the WCD detector is shown in Fig. 1.
An AugerPrime engineering array (EA) of 12 stations has been operating at the Auger Observatory site since October, 2016. The EA allowed us to validate the design and to test the integration of the AugerPrime stations into the standard Observatory operation and the Central Data Acquisition
System (CDAS) through the Auger communication network. The description of the preliminary design and the results obtained from the EA can be found in Refs. [7; 8; 9].
The deployment of the pre-production and production electronics, together with the SPMTs, started in mid-2020. All the PMTs have been procured and tested, and the production of the electronics boards is completed. The on-site deployment was completed in early July 2023. The commissioning studies have been in progress since December 2020 and various performance parameters are being monitored. The RD detector prototypes have been tested on the SD array; their installation started in mid-2023 and is planned to be completed by early 2024. The installation of the UMD is also well advanced and is foreseen to be completed by early 2025.
Figure 1: AugerPrime detector with the SSD and RD atop the WCD. The UUB is hidden underneath the dome visible on top of the WCD.
## 3 Requirements and general implementation
The global design objectives of the electronics upgrade are to increase the data quality: faster sampling for Analog-Digital-Converter (ADC) traces, better timing accuracy, increased dynamic range, enhanced local trigger and processing capabilities, more powerful local station processor with a Field Programming Gate Array (FPGA), and improved calibration and monitoring capabilities. Backwards-compatibility with the current dataset is maintained by retaining the current timespan of the PMT-traces and providing for digital filtering and downsampling of the traces to emulate the current triggers in addition to any new triggers. The design objectives also aim for higher reliability and easy maintenance. The most important functional and configuration requirements are listed below followed by a description of the general implementation.
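As an illustration of the downsampling idea (a sketch only, not the UUB firmware), a 120 MS/s trace can be low-pass filtered and decimated by a factor of 3 to recover the legacy 40 MS/s sampling:

```python
import numpy as np
from scipy import signal

# Emulate the legacy 40 MS/s sampling from a 120 MS/s trace by low-pass
# filtering and downsampling by a factor of 3 (placeholder trace, FIR filter).
rng = np.random.default_rng(0)
trace_120 = rng.normal(size=2048)                       # hypothetical FADC trace
trace_40 = signal.decimate(trace_120, 3, ftype='fir', zero_phase=True)
print(len(trace_120), len(trace_40))                    # 2048 -> 683 samples
```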
### Functional requirements
* 10 ADC analog inputs to handle the two gains for each of the three existing PMTs, the added PMT of the SSD detector and the SPMT (plus a spare channel).
* The total RMS integrated noise at the ADC input should not exceed 0.5 LSB (Least Significant Bit) for the low-gain channel and 2 LSB for the high-gain channel.
* Digitization of the PMTs anode signals at a sampling frequency of 120 MS/s with a resolution of 12 bit minimum.
* Existing and additional trigger configurations implemented in the FPGA firmware.
* Event time tagging with a resolution of 5 ns and a stability better than 5% under temperature variations.
* Independent programmable Slow-Control unit to monitor voltage and environmental sensors, and control the PMT high voltages and the FPGA low voltages.
* Calibration system based on two LEDs, controlled in time and amplitude.
* Ethernet and USB (Universal Serial Bus) communication capabilities.
### Configuration requirements
* All functions contained on a single board (except for the GPS receiver).
* Use of up to date commercial GPS receivers.
* Embedded diagnostics.
* Digital ports allowing communication with additional detector systems.
* Power-supply unit including safety features and an efficiency better than 80% for a total consumption between 10 and 11 W.
### Electronics implementation
The major portion of the AugerPrime electronics upgrade replaces the original Unified Board (UB) with the Upgraded Unified Board (UUB). In the UUB, various functions (front-end, calibration, time tagging, trigger, monitoring) are implemented on a single board. It is designed to fit the existing RF-enclosure, and to accept the SSD PMT and SPMT cables together with the existing PMTs, GPS antenna, and communications cables. In addition, the UUB provides a digital interface for the RD and UMD detectors, giving them access to the communication system. The new electronics also employs faster ADCs (120 MHz instead of 40 MHz) with larger dynamic range (12 bit each instead of 10 bit).
The UUB architecture is designed with a Xilinx Zynq FPGA containing two embedded ARM Cortex A9 333 MHz microprocessors. The FPGA is connected to a 4 Gbit LP-DDR2 memory and a 2 Gbit Flash memory. The FPGA implements all basic digital functions such as the read-out of the ADCs, the generation of triggers, the interface to LED flasher, GPS receiver, clock generator, and memories. High-level functions like the data handling and communications with the radio transceiver are implemented under Linux. A simplified functional diagram of the UUB architecture is shown in Fig. 2. A more detailed diagram is shown in Fig. 16 in the Appendix.
## 4 Design characteristics
The station electronics was designed to use more advanced and less power consuming electronics components. It takes advantage of the existing mechanical interfaces, and the existing communication and power systems. Furthermore, the new firmware/software was adapted from the previous one ensuring compatibility with the Central Data Acquisition System (CDAS). In the following, the design characteristics for the different components of the AugerPrime Surface Detector electronics together with the added SPMT, are described.
### The Surface Detector dynamic range
The dynamic range of the SD measurements extends from a few photoelectrons in stations far from the shower core and for the low energy muons used for calibration, to hundreds of thousands in stations near the impact point of the shower core at the ground where the particle density dramatically increases. To improve the SD data quality, an extension of the acquisition dynamic range is implemented in both the WCD and the SSD, allowing us to measure non-saturated signals at distances as close as 250 m from the shower core, in particular for the highest energy events, which are of extreme importance for the physics goals.
To achieve this aim, the WCD is equipped with an additional small PMT (SPMT), a 1-inch Hamamatsu R8619 photomultiplier, assembled with a pure passive 66.5 M\(\Omega\) tapered ratio HV divider for high linearity and low power consumption. The SPMT is installed in a hitherto unused and easily accessible 30 mm window on the Tyvek bag containing the ultra-pure water, located close to one of the large PMTs (LPMT1). The SPMT features the same bialkali photocathode as the XP1805 LPMTs, but with an active area of about 1/80, thus potentially allowing for an equivalent dynamic range extension. The required range up to 20 000 VEM (Vertical Equivalent Muon, see Section 6) can be obtained by adjusting the gain in such a way that the ratio of the large to the small PMT signals is 32.
Figure 2: Functional diagram of the Upgraded Unified Board.
The SPMT output is required to be linear within 5% for a peak current up to 50 mA at a gain of 7\(\times\)10\({}^{5}\). All the small photomultipliers have been validated in a test facility by measuring their gain and linearity [10]. To minimize the number of failures and to ease the maintenance, the SPMT was designed with a passive base, moving the power supply into a separate high voltage power supply (HVPS) module, a custom-made CAEN A7501 HV DC-DC converter, which also provides a measurement of the current flowing through the divider. All the HVPS modules have undergone specific tests to verify their reliability in the challenging environmental conditions and high thermal excursions of the Argentinian Pampa [11].
For consistency with the associated WCD, the dynamic range in the SSD spans from the signal of a single particle, needed for calibration, to large signals, up to \(\sim\)2\(\times\)10\({}^{4}\) MIP. The SSD PMT has been chosen accordingly, being linear within 5% for peak currents up to 160 mA (for a gain of 8\(\times\)10\({}^{4}\)).
### Front-end electronics
The analog Front-End (FE) has three different configurations, depending on the type of PMT that is connected. For most channels, the amplification of the signal is differential with two amplifier stages. A 7th-order Bessel low-pass filter with 60 MHz cutoff frequency, designed to preserve the leading-edge timing with minimal distortion of the signal shape, is situated between the two stages.
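As an illustration of the band-limiting stage described above, the short Python sketch below computes the frequency response of a 7th-order analog Bessel low-pass filter with a 60 MHz cutoff. It models only the nominal transfer function; the circuit topology, component values, and gain staging of the actual UUB front end are not reproduced here.

```python
# Illustrative sketch (not the AugerPrime circuit): response of a 7th-order analog
# Bessel low-pass with 60 MHz cutoff, as used to limit bandwidth before 120 MS/s digitization.
import numpy as np
from scipy import signal

f_cut = 60e6                                        # cutoff frequency [Hz]
b, a = signal.bessel(7, 2 * np.pi * f_cut,          # analog prototype, magnitude-normalized
                     btype="low", analog=True, norm="mag")
f = np.logspace(6, 9, 500)                          # 1 MHz to 1 GHz
w, h = signal.freqs(b, a, worN=2 * np.pi * f)
gain_db = 20 * np.log10(np.abs(h))
group_delay = -np.gradient(np.unwrap(np.angle(h)), w)   # nearly flat -> preserves leading-edge timing

print(f"gain at 60 MHz: {gain_db[np.argmin(np.abs(f - f_cut))]:.1f} dB")
print(f"group-delay spread below cutoff: {1e9 * (group_delay[f < f_cut].ptp()):.2f} ns")
```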
The signals are digitized by commercial 12-bit 120 MHz dual channel FADCs (Analog Devices AD9628), which achieve this performance with high precision, low noise and minimal power consumption, an important consideration due to the station's small power budget of 10 W.
The anode channel inputs for the 3 large XP1805 PMTs are split in two and amplified to have a gain ratio of one on the first channel (low gain), and 32 on the second one (high gain). The anode-channel input for SPMT has a single unitary gain. The anode channel of the SSD PMT is split in two and amplified to have a gain ratio of 0.25 on the first channel (low gain), and 32 on the second one (high gain). This yields a total gain ratio of 128. The signals are filtered and digitized similarly to the WCD LPMT signals. The SPMT anode signal is also digitized with 12 bit at 120 MHz in a separate channel. The overlap in the dynamic range of LPMT and SPMT is \(\sim\)7 bit which is sufficient to obtain the cross-calibration for SPMT (see Section 6).
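The following minimal sketch illustrates how such a two-gain readout can be merged offline into a single trace using the nominal gain ratios quoted above (32 for the WCD LPMTs, 128 for the SSD). The saturation threshold and function names are illustrative assumptions, not part of the Auger reconstruction software.

```python
import numpy as np

def combine_gains(high_gain, low_gain, ratio=32.0, sat_level=4000):
    """Merge a two-gain readout into a single trace in high-gain ADC units.

    high_gain, low_gain : baseline-subtracted FADC traces of equal length.
    ratio               : nominal high/low gain ratio (32 for WCD LPMTs, 128 for SSD).
    sat_level           : ADC value above which the 12-bit high-gain channel is
                          considered saturated (illustrative; 2**12 = 4096).
    """
    high_gain = np.asarray(high_gain, dtype=float)
    low_gain = np.asarray(low_gain, dtype=float)
    saturated = high_gain >= sat_level
    combined = np.where(saturated, low_gain * ratio, high_gain)
    return combined, saturated

# Example: a toy pulse that saturates the high-gain channel near its peak.
t = np.arange(2048)
pulse = 6000.0 * np.exp(-0.5 * ((t - 500) / 20.0) ** 2)
trace, sat = combine_gains(np.clip(pulse, 0, 4095), pulse / 32.0)
print(f"{sat.sum()} saturated bins replaced by the rescaled low-gain trace")
```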
A block diagram of the front-end electronics channels is shown in Fig. 3 and a scheme of the dynamic ranges is shown in Fig. 4.
The intrinsic electronic noise measured in the laboratory is about 2 LSB on the high-gain channels and 1/2 LSB on the low-gain channels.
### Timing
Synchronization of the detectors is provided by tracking variations of the local 120 MHz clock with respect to the 1 PPS signal of the Global Positioning System (GPS). For the upgraded electronics we have selected the Synergy SSR-6TF timing GPS receivers. This receiver is functionally compatible with the Motorola Oncore UT+ GPS, the one that was used with the former electronics. The fundamental architecture of the time-tagging firmware module parallels the time-tagging design concept used in the former electronics and is implemented in the UUB board FPGA. The on-board software for initialization of the time-tagging modules, GPS hardware control, and timing data handling is similar to the former one, with minor modifications needed for the new UUB hardware. The manufacturer claims an intrinsic GPS device accuracy after the applied granularity correction (the so-called negative saw-tooth) of \(\sim\)2 ns.
Figure 3: Block diagram of the front-end electronics. The total gain factors are indicated.
Figure 4: Overlap of dynamic ranges.
The timing performance of the SSR-6TF GPS receiver has been verified in the laboratory, relative to an FS275 GPS-disciplined rubidium atomic clock. The one-standard-deviation _absolute_ timing accuracy is found to range from 2.3 ns when measured over timescales of a few seconds to about 6 ns when measured over timescales of several hours.
More importantly, the _relative_ timing accuracy (variance of the timing of a common signal between two SSR-6TF receivers) is measured to range from better than 1.8 ns in a temperature-controlled environment to 2.1 ns when measured in a thermal chamber where temperature variations are programmed to simulate those expected on the Observatory site (from \(-\)20\({}^{\circ}\)C up to +70\({}^{\circ}\)C under the electronics dome). Additionally, a laboratory test stand that reproduces the time-tagging architecture as implemented in the UUB was developed. This test stand is used to verify the timing accuracy and measure any timing offsets for each receiver before it is deployed to the field. Results from these measurements show a relative timing accuracy ranging between 4 and 6 ns.
The verification of the timing performance of GPS SSR-6TF receivers deployed in the field within UUB prototypes was done by using a synchronization cable to send timing signals between two closely positioned (\(\sim\)20 m) UUB-equipped SD stations. Using this method, a timing accuracy of about 5 ns was achieved, a result consistent with the lab measurements and the timing granularity as implemented on the UUB.
### Control and monitoring
An ultra-low-power 16-bit RISC micro-controller (MSP430) is used for the control and monitoring of the PMT high voltages, the supervision of the various supply voltages, and the reset functionality. The power-on sequence of the several supplies for the FPGA is quite complex and is also controlled by the micro-controller. This device is optimized for low-power-budget environments.
For all these purposes, it controls 16 logic I/O lines, steers a 12 bit digital-to-analog converter (DAC) with eight analog outputs, and senses through multiplexers up to 64 analog signals with its internal 12 bit ADC. The MSP430 also provides a USB interface, which can be used to monitor and control the various power supplies through a command line interface. This is used for maintenance. The MSP430 is tied via an I\({}^{2}\)C-bus to a 256 kbit EEPROM and an on-board pressure/temperature/humidity sensor. The system is also in charge of managing the master reset, part of the watchdog and the radio reset. The slow-control system is able to restart the UUB after a low-battery state due to long periods of bad weather (typically one week without sun on the Observatory site).
More than 90 monitoring variables, including currents and voltages of the power supplies and the PMTs, are managed by the slow-control firmware and stored in a central database. The firmware also includes diagnostics and safety features. The block diagram of the slow-control electronics is shown in Fig. 5.
### Firmware and trigger implementation
The heart of the UUB is a Xilinx Zynq-7020 All Programmable SoC (Artix-7 FPGA and associated Cortex A9 Dual 333 MHz ARM co-processor) instead of the older Altera Cyclone series FPGAs
used in the previous electronics. Whereas the logic code of the previous FPGAs is written in an Altera-specific variant of the hardware description language VHDL called AHDL, the logic code of the AugerPrime version is primarily written in IEEE-standard synthesizable Verilog. Xilinx Vivado is used for the overall framework, and for standard modules such as memories, UARTs (Universal Asynchronous Receiver Transmitter), and processor bus interfaces. Xilinx PetaLinux runs on the embedded ARM processor.
The FPGA implements in programmable logic basic digital functions like the readout of the ADCs, the generation of triggers, and the interfaces to the LED flasher, GPS receiver, and memories. High-level functions like data handling and interactions with the communications radio transceiver are implemented under Linux. The addition of accessible trigger IN/OUT signals and high-speed USB facilitates tests both in the laboratory and on the Observatory site.
A multi-level triggering scheme is used. The lowest trigger level for each trigger type is denoted T1. This is formed by the programmable logic and causes the traces to be transferred to the ARM processor. The higher level triggers (T2, T3,...) are performed in software and discussed in Section 4.6.
Figure 5: Block diagram of the slow-control system.
The previous local triggers [12, 13, 14] (threshold trigger, time-over-threshold trigger (ToT), time-over-threshold deconvolved (ToTd), multiplicity of positive steps (MoPS) trigger) are transferred to the new electronics. The ToT trigger requires an extended-duration signal. The ToTd variation of the ToT removes to first order the tails of signals from a single particle due to multiple reflections from the station walls. The MoPS trigger aims to do a similar operation by only looking at the rising edge of signals. All of these have higher purity and are more efficient for electromagnetic showers and in stations away from the shower core than the simple threshold trigger. The triggers are implemented by using digitally filtered and down-sampled waveforms to reproduce the previous trigger characteristics. This consists of taking the full-band traces of the UUB with 2048 bins and filtering them, using an FIR Nyquist filter with a 20 MHz cut-off, to approximate the frequency response of the previous electronics. In addition, to reproduce the sampling at 40 MHz of the former electronics, the FADC traces are down-sampled by choosing every third bin on which to apply the trigger algorithm used in the former electronics. This allows detectors with the new electronics to behave identically to the former configuration at the trigger level and allows deployment of new electronics during the maintenance of the existing system without disturbance to the data taking. To distinguish these down-sampled triggers from newer triggers that utilize the full ADC sampling, we include the modifier "compatibility". The T2 rates are about 20 Hz for both the previous and the new electronics, and the shower trigger (ToT) rates are around one Hz for both.
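The compatibility-mode emulation described above can be sketched as follows: the 120 MS/s trace is low-pass filtered around 20 MHz, every third bin is kept to emulate the former 40 MS/s sampling, and a single-bin threshold of 1.75 VEM\({}_{\rm pk}\) is applied. The FIR length, the toy trace, and the VEM\({}_{\rm pk}\) value are assumptions made for this example; the multi-PMT coincidence logic of the real firmware is not reproduced.

```python
import numpy as np
from scipy import signal

FS = 120e6          # UUB sampling frequency [Hz]
F_CUT = 20e6        # cut-off approximating the former electronics' bandwidth [Hz]
DECIMATION = 3      # 120 MS/s -> 40 MS/s by keeping every third bin

# Illustrative FIR low-pass; the number of taps is an assumption, not the firmware value.
fir = signal.firwin(numtaps=31, cutoff=F_CUT, fs=FS)

def compatibility_trace(full_band_trace):
    """Filter and down-sample a 2048-bin full-band trace to emulate the old 40 MHz readout."""
    filtered = signal.lfilter(fir, 1.0, full_band_trace)
    return filtered[::DECIMATION]

def single_bin_trigger(trace, baseline, vem_peak, alpha=1.75):
    """Compatibility single-bin (threshold) trigger: any bin above alpha * VEM_pk."""
    return np.any(trace - baseline > alpha * vem_peak)

# Toy usage with a simulated trace (baseline, noise and pulse amplitudes are arbitrary).
rng = np.random.default_rng(0)
trace = 350 + rng.normal(0, 2, 2048)           # ~350 ADC baseline, 2 LSB noise
trace[1000:1010] += 400                        # injected pulse
ds = compatibility_trace(trace)
print("T1 threshold trigger fired:", single_bin_trigger(ds, 350, vem_peak=180))
```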
The increased local processing capabilities allow new triggers, targeted to neutral primaries, to be implemented such as asymmetry-based triggers, and combined SSD and WCD triggers. Short traces triggered by muon-like signals are stored in so-called "muon buffers". These buffers are read into the processor to facilitate online calibration. Scalers keep a continuous record of a "scaler trigger" rate, and are used to search for correlated increases in rate across the array. A "random" trigger allows acquisition of background data to assist in noise characterization and trigger design. Finally, the FPGA allows playback of previously recorded or simulated traces to test and verify the implemented trigger algorithms.
### Local processing software
The upgraded CPU is more than 10 times faster than the previous one, a PowerPC 403GCX [1], with a similar increase in memory. This allows more sophisticated processing in the local station. The previous UB code, which used the OS9 operating system, has been ported to Linux. In this process, the code was adjusted to account for the differences between OS9 and Linux system calls and for the different design structures in the UUB. Fig. 6 gives an overview of the local processing software implemented in the UUB.
In the following, a short description of the local processing software is given. The short names refer to those used in Fig. 6.
The data satisfying the T1-trigger condition in "SHWR Buffer" is transferred to a temporary event buffer ("temp. Evt. Buff.") in the RAM memory. The process "Trigger Ctrl" determines if the event passes the T2-trigger condition and calculates the calibration parameters. In case the T2-trigger condition is fulfilled, the event is copied to the main event buffer ("Main Evt. Buff.").
To decouple trigger rates from station-to-station and PMT-to-PMT gain variations, (most of) the trigger thresholds are computed as a multiple of the most probable peak value of the background vertical equivalent muons (VEM\({}_{\text{pk}}\)) generated in each PMT (see Section 6). The calculation of VEM\({}_{\text{pk}}\) by the "Trigger Ctrl" process proceeds as follows: It starts by setting VEM\({}_{\text{pk}}\) to a default value. With this value the threshold above the baseline for each PMT is set as
\[\text{Th}_{\text{pmt}}^{\text{type}}=\alpha^{\text{type}}\;\text{VEM}_{\text{pk}} \tag{10}\]
where \(\alpha^{\text{type}}\) is a constant which depends on the trigger type. For the compatibility single-bin trigger threshold,\({}^{1}\) the value is \(\alpha^{\text{T1}}=1.75\).
Footnote 1: The VEM\({}_{\text{pk}}\) for compatibility mode triggers is calculated using filtered and down sampled signals.
After this, "Trigger Ctrl" determines if the signal passes the T1 condition and calculates the rates for those PMTs that pass the threshold Th\({}_{\text{pmt}}^{\text{70 Hz}}\), where \(\alpha^{\text{T70}}=2.5\,\text{VEM}\) (e.g. at a threshold of \(2.5\,\text{VEM}\) the rate has been found empirically to be \(70\,\text{Hz}\)). In case the rate is lower (higher) than \(70\,\text{Hz}\), the VEM\({}_{\text{pk}}\) is decreased (increased) and the thresholds are reset following the Eq. (10).
After some iterations, VEM\({}_{\rm pk}\) stabilizes to the value that corresponds to the PMT gain. At this value, the T1 rate is \(\sim\)100 Hz and the T2 rate (\(\alpha^{\rm T2}=3.2\)) is \(\sim\)20 Hz [15].
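A compact sketch of this rate-feedback logic is given below: the threshold is expressed as a multiple of VEM\({}_{\rm pk}\), and the VEM\({}_{\rm pk}\) estimate is adjusted until the measured rate above 2.5 VEM\({}_{\rm pk}\) matches the empirical 70 Hz. The damped update rule and the toy rate model are illustrative assumptions; the station software uses its own update scheme.

```python
import math

def calibrate_vem_peak(measure_rate, vem_pk=50.0, target_rate=70.0,
                       alpha_t70=2.5, max_iter=200, tol=1.0):
    """Iteratively adjust VEM_pk so that the rate above alpha_t70 * VEM_pk is ~70 Hz.

    measure_rate(threshold) must return the measured rate in Hz above 'threshold'
    (in ADC counts above baseline). The damped multiplicative update is an
    illustrative choice, not the actual station algorithm.
    """
    for _ in range(max_iter):
        rate = measure_rate(alpha_t70 * vem_pk)
        if abs(rate - target_rate) < tol:
            break
        # Rate above 70 Hz -> threshold too low -> current VEM_pk estimate too small.
        vem_pk *= 1.0 + 0.1 * (rate - target_rate) / target_rate
    return vem_pk

# Toy rate model: rate falls exponentially with threshold; "true" VEM_pk = 60 ADC counts.
toy_rate = lambda th: 70.0 * math.exp(-(th - 2.5 * 60.0) / 40.0)
print(f"converged VEM_pk ~ {calibrate_vem_peak(toy_rate):.1f} ADC counts")
```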
The timestamps of all T2 events ("T2 list") are sent to the process "Msg. Server", which then sends the message to the Central Data Acquisition System (CDAS). Furthermore, the "Msg. Server" is responsible for transmitting all the messages from all the processes to CDAS, ordered by priority, following the radio protocol. It also receives the messages from CDAS and delivers them to the corresponding processes.
When CDAS finds a coincidence between different stations in the "T2 list", it emits a level 3 (T3) trigger which goes to the "Evt. Server" of the corresponding stations. This process gets the event from the "Main Evt. Buff.", adds the calibration information and histograms, and sends the complete event information back to CDAS.
Short traces which are acquired from the muon buffers by "Calib. Buffer" are used by "Calib. Hist." to construct histograms of signal amplitude and charge. The "CalMon" process collects the calibration data as well as the power system monitoring data through the process "monitor" and reports them periodically to CDAS.
The process "Control" searches for the acquisition configuration and stores it in the "Config." structure which is shared with all the other processes. Besides this, it oversees the processes through
Figure 6: Block diagram of local processing Software. The rounded corner blocks are the processes which run on the operating system.
the "Status" shared structure. All the acquisition processes update their own information, so that "Control" can identify possible problems.
In addition to sending T2 timestamps, event traces, calibration, and monitoring data to CDAS, as well as accepting T3 requests from CDAS, the communication protocol allows sending files and even arbitrary Linux commands from CDAS to selected stations or as a broadcast. This allows updating the local processing software, and even the compiled programmable logic "bitstreams" for the UUB and RD.
### Implementation and interfaces
All the functions described above, except for the GPS receiver, have been gathered on a single board of \(340\,\mathrm{mm}\times 215\,\mathrm{mm}\) size. The printed circuit board (PCB) is a ten-copper-layer FR4 class 6 board. The board is fully coated after assembly, on both external sides and edges, using a removable silicone coating product that includes a UV marker and is RoHS-2 compliant (free of hazardous substances, following the European directive). This is done to protect the board against the harsh environment (temperature variations from \(-20^{\circ}\mathrm{C}\) to \(+70^{\circ}\mathrm{C}\) under the electronics dome cover, air salinity, and humidity). A photo of the assembled UUB is shown in Fig. 7.
The UUB, together with the GPS receiver board, is mounted inside the existing metallic RF-proof enclosure. A new front panel is designed, integrating existing and new connectors for the additional detectors and features. This allows us to keep the current mechanical components of the SD detectors.
Two 8-bit digital ports are provided for additional detectors. The UUB is interfaced with the existing communication system, which provides a 1200 bps data transmission rate, and with the power system, which provides 24 V from the batteries. The previous power budget of 10 W is increased up to nearly 20 W by installing new solar panels on each SD station. Additionally, the electrical design of the UUB is made to reduce the conducted and emitted electromagnetic interference to an acceptable level for the RD system by using appropriate filters and shielding material. Fig. 8 shows all electrical interfaces.
The embedded software of the UUB is interfaced with the existing radio transceiver, using a proprietary communication protocol; with the new GPS receiver, using a communication language identical to that of the previous receiver; and with the new additional detectors, RD and UMD, via the digital ports.
## 5 Production, tests, and installation
1700 units of SPMTs were procured from Hamamatsu and separate custom-designed HV modules were procured from CAEN. All the modules were tested in Europe prior to shipment to Argentina (see Section 4.1). The production and test strategy of the UUBs is described in detail in the following sub-sections. A short description of the deployment strategy of the UUBs and SPMTs, together with the SSD PMTs, is given at the end of the section.
### Production strategy
For the mass production of the UUBs, only one manufacturer was selected to reduce the risks of discrepancies that could occur if UUB batches were produced by different companies. The selected
manufacturer was the A4F company (Angel for Future, formerly SITAEL), in Italy.
The items requested from the manufacturer were:
* Procurement of all the components and materials, except those already procured by the Auger Collaboration.
* Manufacturing or procurement of the printed-circuit boards.
* Mounting and assembly of the boards according to instructions provided by the Auger Collaboration.
* Quality control and testing of the boards according to instructions, test plan and test benches provided by the Auger Collaboration (Manufacturing tests).
* Packing and shipment of the boards with a delivery to the Observatory according to a staged schedule.
* Warranty on manufacturing and behavior of the boards for a defined period.
### Tests and verification strategy
The UUB validation and test process has three steps (see Fig. 9):
* The manufacturer test, to verify the proper behavior of almost all the functions of the UUB after assembly, performed at the manufacturer plant.
* The Environmental and Stress Screening test, to stress the UUB in a climate chamber after manufacturing and to measure performance. This test is performed in Europe, in a Pierre Auger collaboration laboratory.
* The final test, performed in Malargue after delivery, together with the final assembly and before the deployment on site.
Figure 7: Assembled UUB, equipped with front panel, GPS receiver, cables and shielding covers.
Therefore, three types of test benches have been developed, each one designed to perform one of the tests described above.
### Manufacturing tests
The manufacturer test aims at verifying that all functional blocks of the UUB are correctly assembled and in operation. Two identical test benches were developed and installed at the manufacturer site. Each of them allows testing one UUB at a time.
The UUB to be tested is mounted on a plastic support frame and locked with two clamps to a support structure. It is connected to a multi-channel pulse generator through SMA quick-fit connectors mounted on a slider, and it is powered by a programmable power supply. All test results are recorded via an Ethernet interface or via a digital oscilloscope. Adapters are connected to the UUB during the test to provide specific voltage levels or feedback control signals. Fig. 10 shows the schematics of the manufacturer test bench.
Figure 8: Electrical interfaces of the Upgraded Unified Board.
As a first step of the testing procedure, all UUBs must pass an initial automated optical inspection (AOI) with a system provided by the manufacturer. The inspection can detect problems related to the soldering process (such as excessive or insufficient solder paste) and issues related to component assembly (such as missing components, wrong orientation or distortion of integrated circuits, wrong component polarity) with high efficiency.
Figure 10: Test bench: Loop-back adapters in green, SMA quick fit connectors in red.
Figure 9: Three steps of the UUB testing strategy.
Once the automatic procedure is complete, the operator moves the board to a semi-automatic test bench. After connecting all inputs to the test system, a script is executed to install updated firmware for the MSP microcontroller and the FPGA. The UUB then reboots and is ready for the full functionality test. Through a web page it is possible to execute specific tests on the UUB using the Application Programming Interface (API) running under PetaLinux. The test results are automatically analysed and the data is loaded into the web page for inspection. Information about the configuration of the UUB is automatically acquired, formatted and saved together with the test results in a database. The database allows the export of results into spreadsheets to produce statistics about parameter variations (e.g. voltages and currents).
### Environmental stress screening
After the manufacturing test, the UUBs are submitted to an Environmental Stress Screening (ESS) which is performed to characterize the behaviour of the new electronics under changing environmental conditions typically observed at the Observatory site and to provoke early failures. ESS tests consist of a burn-in procedure followed by temperature cycling, using a dedicated climate chamber. A batch of ten UUBs can be submitted to this test at a time. During the burn-in, the UUBs are subjected to rapid temperature changes for 24 hours. Noise, baselines and temperature readings are monitored regularly. This is followed by 10 cycles, from \(-20^{\circ}\)C to +70\({}^{\circ}\)C (temperature change of 3\({}^{\circ}\)C/min). At five temperature points the performance of each UUB is monitored. The tests performed include noise, baseline and linearity dependence on temperature, stability of the ADCs and the anti-alias filter, and over/under-voltage protection tests.
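The thermal programme implied by these numbers can be sketched as follows; the dwell times at the temperature extremes, the starting temperature, and the exact monitoring temperatures are assumptions made for the example, since they are not specified above.

```python
import numpy as np

T_MIN, T_MAX = -20.0, 70.0     # cycling extremes [deg C] from the ESS specification
RAMP = 3.0                     # ramp rate [deg C per minute]
N_CYCLES = 10
DWELL = 20.0                   # minutes held at each extreme (assumed, not specified)
MONITOR_POINTS = [-20.0, 0.0, 25.0, 50.0, 70.0]   # assumed five monitoring temperatures

def cycle_profile():
    """Return (time [min], temperature [deg C]) breakpoints of the 10-cycle programme."""
    t, temp = [0.0], [T_MAX]
    for _ in range(N_CYCLES):
        for target in (T_MIN, T_MAX):
            ramp_time = abs(target - temp[-1]) / RAMP
            t += [t[-1] + ramp_time, t[-1] + ramp_time + DWELL]
            temp += [target, target]
    return np.array(t), np.array(temp)

time_min, temperature = cycle_profile()
print(f"cycling time: {time_min[-1] / 60:.1f} h (the 24 h burn-in comes on top of this)")
print("performance sampled at:", MONITOR_POINTS, "deg C")
```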
The scheme of the ESS test bench is shown in Fig. 11. Communication with the boards is done via Ethernet connection through a Gigabit switch placed inside the climate chamber. The test signal is issued by a function generator (AFG3252C, Tektronix), amplified/attenuated and distributed into 60 channels via a custom-made distribution unit. To power the boards, a commercial power supply is used, passing through another custom interlock unit allowing monitoring of the current drawn by individual boards and switching the boards off one by one in case of a failure. The last custom-made unit generates the trigger signal for all ten boards as well as for the pulse generator, with an appropriate time offset.
Figure 11: ESS test bench scheme. The arrows indicate the information flow: communication with the instruments (USB or Ethernet) is depicted in blue, analog test signal is shown in violet, test data acquisition is shown in turquoise. Powering of the devices under test is represented in red.
The handling of the UUBs follows all the ESD-safe precautions specified by the IEC EN 61340-5-1 standard. Further details on the tests performed can be found in [16].
The complete test procedure is fully automatic, takes 45 hours and is monitored online using the Grafana package ([https://grafana.com](https://grafana.com)), which permits observation from any part of the world. All test results are summarized in a database.
Among the most frequent failures encountered during the ESS test process, the following issues can be listed:
* The ADCs were not correctly initialized after rebooting at 70\({}^{\circ}\)C. This problem is overcome by a software patch, which re-initializes the ADCs when a stuck value is identified.
* Flipping ADC bits were observed especially at low temperatures. This issue was attributed to faulty components amounting to about 2% of the batch. Affected ADCs were replaced.
* Baseline instabilities were occurring mainly at high temperatures. However, this issue should not affect the data since the baseline is taken from the same trace as the signal.
* Some 3.3 V DC/DC converter failures were also traced to faulty components, which were replaced.
* Other, less frequent faults (individual cases) were detected such as soldering issues, broken/missing components or booting problems.
### Assembly, final verification, and deployment
When the UUBs are delivered to the Pierre Auger Observatory site, they undergo a verification process before being deployed. The first step is to visually inspect each board, searching for minor manufacturing issues or transportation damage. Once the UUBs successfully pass this inspection, they are integrated with the GPS receiver module and the so-called loose parts (i.e. cables, connectors, front panel, etc.). The complete setup is mounted inside a sturdy metal RF-enclosure, which acts both as physical protection and as electromagnetic shielding for any radiated RF energy from the UUB (especially from switch-mode power supplies) that may interfere with other detectors, in particular the RD.
After the assembly, a final end-to-end verification is performed. This phase comprises more than 70 measurements and routines, including the communication via Ethernet and USB ports, the connection with the radio transceiver, the monitoring of the ADC signals, the power-supply voltages and currents, and the functioning of external connectors. Any non-conformance detected during the visual inspection or tests initiates a more detailed diagnostic process, allowing us to resolve the issues. This process is performed by expert technicians, fully experienced with the UUB and also with the former electronics (UB). All the assembly, test, and diagnostic processes are performed in an electrostatically safe environment, following the usual standards (JEDEC, IEC).
The deployment of the electronics boxes in the field encompasses the integration of the new electronics into the SD array and the data acquisition. This is performed by specific technician teams, fully trained to install the UUB together with the SPMT. Severe weather and site constraints can occur in this phase, challenging the optimization of resources and schedule. The deployment rate per team is roughly between 3 and 4 stations per day.
## 6 Calibration
The calibration of the large WCD PMTs is performed by using atmospheric muons. The Vertical Equivalent Muon signal (VEM, the signal corresponding to a vertical muon crossing the WCD in the center) is the reference unit of the WCD high-gain calibrations and was previously determined on a test tank with an external trigger hodoscope to give on average 95 photoelectrons at the cathode of the XP1805 PMTs [15]. This corresponds to \(\sim\)1380 integrated ADC counts above the pedestal after signal digitization on the UUB (see Section 7).
The SSD calibration is based on the signal of a minimum ionizing particle (MIP) going through the detector. About 40% of the triggered muons of the WCD produce a MIP in the SSD, corresponding to the ratio of the SSD surface to the WCD surface. The sensitivity to the muon component used for the calibration can also be increased via a coincidence calibration between WCD and SSD. An example of the MIP and VEM calibration histograms is shown in Fig. 12. The muon calibration data is continuously recorded and allows us to compensate for the effect of outside temperature variations. The cross-calibration between the high-gain and low-gain channels is set by the electronics design, 32 in the case of the WCD and 128 in the case of the SSD. This cross-calibration was verified in the ESS test-bench to be \(32.2\pm 0.3\) for the WCD channels and \(126.7\pm 1.3\) for the SSD channels (at room temperature and 10 MHz frequency).
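A minimal sketch of extracting the muon charge peak (the VEM or MIP hump) from such a calibration histogram is shown below; the smoothing, the search window, and the toy histogram are illustrative assumptions and do not reproduce the actual Auger calibration fit.

```python
import numpy as np

def charge_peak(bin_centers, counts, search_min=500, smooth=5):
    """Locate the muon charge hump in a calibration histogram.

    bin_centers : histogram bin centers in integrated ADC counts above pedestal.
    counts      : histogram contents.
    search_min  : ignore the steeply falling low-charge background below this value
                  (illustrative; the real procedure fits the hump region).
    """
    kernel = np.ones(smooth) / smooth
    smoothed = np.convolve(counts, kernel, mode="same")
    mask = bin_centers > search_min
    return bin_centers[mask][np.argmax(smoothed[mask])]

# Toy histogram: falling background plus a hump near 1380 ADC counts (the VEM charge).
bins = np.arange(0, 4000, 20.0)
toy = 5e4 * np.exp(-bins / 100.0) + 800 * np.exp(-0.5 * ((bins - 1380) / 350.0) ** 2)
print(f"estimated VEM charge peak: {charge_peak(bins, toy):.0f} ADC counts")
```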
Due to its operating parameters, no direct calibration of the SPMT with atmospheric muons is feasible. In this case, the absolute scale in physical units is obtained by cross-calibrating the SPMT using the VEM-calibrated signals of the three large PMTs. A dedicated selection of local small showers\({}^{2}\) is set up for this aim, and the cross-calibration is performed in a superposition region limited at the lower end by imposing a minimum threshold of \(\sim\)80 VEM on the WCD PMTs to guarantee a reasonably large signal in the SPMT, only marginally affected by statistical fluctuations, and at the higher end by the LPMT saturation. The logarithm of the charge spectrum in one of the upgraded AugerPrime WCD stations is shown in Fig. 13. The dynamic range is extended to more than 20 000 VEM, as one can see by comparing the unsaturated spectrum from the LPMTs to the one obtained with the SPMT. The cross-calibration is performed in 8-hour sliding windows in order to follow the daily evolution of the SPMT gain due to temperature variations. The choice of 8-hour intervals ensures a precision of about 2.2%. As such, it can be considered as an optimal trade-off between a large integration period, granting the stability, and a shorter one, needed to compensate for the effects of temperature variations. The relative difference between the calibrated SPMT and LPMT signals is always better than 5% in the whole inter-calibration region.
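The cross-calibration idea can be illustrated with the following sketch, which selects events in the overlap region (LPMT signal between roughly 80 VEM and the onset of LPMT saturation) and averages the LPMT-to-SPMT signal ratio in 8-hour sliding windows. The event selection, the saturation value, and the toy gain drift are simplified assumptions and do not reproduce the actual Auger procedure.

```python
import numpy as np

def spmt_cross_calibration(timestamps, lpmt_vem, spmt_adc,
                           vem_min=80.0, vem_sat=650.0, window_h=8.0):
    """Return per-event SPMT calibration factors (VEM per SPMT ADC count).

    Events are kept if the unsaturated LPMT signal lies in [vem_min, vem_sat]; for
    each kept event the conversion factor is the mean ratio LPMT[VEM] / SPMT[ADC]
    over the surrounding 8-hour sliding window.
    """
    timestamps, lpmt_vem, spmt_adc = map(np.asarray, (timestamps, lpmt_vem, spmt_adc))
    sel = (lpmt_vem > vem_min) & (lpmt_vem < vem_sat)
    t_sel, ratio = timestamps[sel], lpmt_vem[sel] / spmt_adc[sel]
    half = window_h * 3600.0 / 2.0
    factors = np.array([ratio[np.abs(t_sel - t) < half].mean() for t in t_sel])
    return t_sel, factors

# Toy data: one day of events with a slow, temperature-like drift of the SPMT gain.
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 86400, 3000))
true_vem = rng.uniform(50, 1000, t.size)
gain_drift = 1.0 + 0.05 * np.sin(2 * np.pi * t / 86400)   # +/-5% daily gain variation
spmt = true_vem / 0.8 * gain_drift                         # nominal ~0.8 VEM per ADC count
t_cal, k = spmt_cross_calibration(t, true_vem, spmt)
print(f"calibration factor varies between {k.min():.3f} and {k.max():.3f} VEM/ADC")
```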
The accuracies of the LPMT and SPMT signals both contribute to the final SPMT signal accuracy. This final accuracy is better than 5% above about 3000 VEM, a value corresponding to the signal produced at 250 m from the core by showers with an energy of \(10^{19}\) eV.
## 7 Obtained performances
All the SSD detectors have been installed atop the WCDs. The deployment of UUBs, together with SPMTs and SSD PMTs, was completed in July 2023. Data taking for commissioning has been in progress since the end of 2020. In parallel, the CDAS program, the monitoring program, and the data analysis pipeline have been updated for AugerPrime. The data from AugerPrime is continuously monitored and analyzed to obtain resolutions and to assess the uniformity of detector stations and their long-term performance. In the following, some results obtained in the commissioning studies are presented.
Figure 12: Calibration histogram for SSD (left) and the three large PMTs of WCD (right).
Figure 13: Extension of the dynamic range to 20 000 VEM using the small PMT. In blue, the average of the signals from the 3 large PMTs. The slight discrepancy between the large and small PMT signals at around 1000 VEM is due to a saturation effect of the large PMT ADCs.
### Noise performance
Fig. 14 shows the baseline RMS value of the high gain channel of the three large PMTs. The RMS value is an average over about 500 detector stations. As can be seen in the figure, the noise for the high-gain channel is below 2 ADC channels, meeting the requirements. Similarly, the noise of the high-gain channel of the SSD PMTs is below 2 ADC channels. The SPMT and the low-gain channels of LPMTs and SSD PMTs are well below 1 ADC channel.
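For illustration, a per-trace baseline RMS of the kind summarized in Fig. 14 could be estimated from the pre-trigger bins as in the short sketch below; the number of pre-trigger bins used is an assumption made for the example.

```python
import numpy as np

def baseline_rms(trace, n_pretrigger=200):
    """RMS of the pre-trigger baseline region of an FADC trace, in ADC channels.

    n_pretrigger is an illustrative choice of quiet bins before the signal window.
    """
    pre = np.asarray(trace[:n_pretrigger], dtype=float)
    return pre.std(ddof=1)

# Toy check against the requirement of < 2 ADC channels on the high-gain channel.
rng = np.random.default_rng(2)
toy_trace = 350 + rng.normal(0.0, 1.6, 2048)
rms = baseline_rms(toy_trace)
print(f"baseline RMS = {rms:.2f} ADC channels ->", "OK" if rms < 2.0 else "too noisy")
```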
Thunderstorms induce noise in the ADC traces, which increases the trigger rates, and can lead to loss of data if the communications bandwidth becomes saturated. It is currently estimated that this noise would lead to an acceptance loss of about 2% per year. Studies are in progress to better identify thunderstorm events in order to limit the triggering on noise.
### Dynamic range
The excellent correlation between the calibrated signals of the WCD and SSD is shown in Fig. 15, which includes reconstructed data from the Infill region of the SD. Both scales are expressed in the corresponding physics units (VEM for the WCD and MIP for the SSD). The signals in the WCD are measured by the LPMTs up to saturation and by the SPMT in the region above (from \(\sim\)650 to 20 000 VEM and above). The required dynamic range is reached in both detectors; the obtained correlation clearly shows the validity of the two independent calibrations.
### Uniformity and long-term performance
The various performance parameters are continuously monitored to ensure good detector uniformity and long-term performance. The mean charge values measured for VEM and MIP are about 1400 and 110 ADC channels, respectively.
Figure 14: The noise of the high gain channel of the three large PMTS.
The day/night temperature variation can be larger than 20\({}^{\circ}\)C. This induces typical day/night variations of a few ADC channels for the PMT signals, mainly due to the sensitivity of the PMTs to temperature. The muon calibration for both the WCD and the SSD is performed online every minute, allowing the correction of these temperature effects.
## 8 Conclusions
To accommodate new detectors and to increase experimental capabilities, the AugerPrime station electronics has been upgraded. This includes better timing with up-to-date GPS receivers with 5 ns timing resolution and a higher sampling frequency (120 MHz instead of 40 MHz) for the ADC traces. Furthermore, more powerful local processing of the data is obtained by using a Xilinx Zynq-7020 FPGA. The station electronics is gathered on a single board, called the UUB. In addition, an SPMT is added to the WCD detectors to increase the dynamic range. The deployment of the electronics together with the SPMTs was completed in mid-2023.
The test results as well as the commissioning studies show that the design meets the requirements. In particular, the noise for the high-gain channel is below 2 ADC channels for all PMTs and the results of the commissioning data analysis show good uniformity and stable long-term performance. To reproduce the trigger behavior of the previous electronics, a compatibility mode was designed for UUB triggering in the FPGA firmware. This allows a smooth transition from the previous SD array to the AugerPrime array.
Figure 15: Correlation between SSD and WCD signals. The WCD signal are measured up to saturation by the LPMTs (blue dots), and by the SPMT above it (red dots).
## Appendix A Diagram of the UUB architecture
Fig. 16 shows a more detailed diagram of the UUB architecture.
Figure 16: Functional diagram of the Upgraded Unified Board. |
2309.11749 | Zigzag chain order of LiVSe$_2$ developing away from the vanadium trimer
phase transition boundary | The phenomenon of self-assembly of constituent elements to form molecules at
low temperatures appears ubiquitously in transition metal compounds with
orbital degrees of freedom. Recent progress in local structure studies using
synchrotron radiation x-rays is shifting the interest in structural studies in
such molecule-forming systems from the low-temperature ordered phase to the
short-range order that appears like a precursor at high temperatures. In this
study, we discuss both experimentally and theoretically the relationship
between the trimer structure that appears in the layered LiV$X_2$ ($X$ = O, S,
Se) system with a two-dimensional triangular lattice of vanadium and the zigzag
chain-like local structure that appears near the phase transition boundary
where molecular formation occurs. The vanadium trimerization that persistently
appears in both low-temperature phases of LiVO$_2$ and LiVS$_2$ disappears in
LiVSe$_2$, and a regular triangular lattice is thought to be realized in
LiVSe$_2$, but this study reveals that the zigzag chain local distortion
appears with a finite correlation length. These zigzag chain state local
distortions are similar to the motif of local distortions in the
high-temperature phase of LiVS$_2$, indicating that the local distortions are
persistent away from the trimer phase transition boundary. On the other hand,
it is concluded that the zigzag chain order appearing in LiVSe$_2$ is more
stable than that in LiVS$_2$ in terms of the temperature variation of atomic
displacement and correlation length. The zigzag chain order is considered to be
competitive with the trimer order appearing in the LiV$X_2$ system. In this
paper, we discuss the similarities and differences between the parameters that
stabilize these electronic phases and the local distortions that appear in
other molecular formation systems. | K. Kojima, N. Katayama, K. Sugimoto, N. Hirao, Y. Ohta, H. Sawa | 2023-09-21T03:14:43Z | http://arxiv.org/abs/2309.11749v1 | Zigzag chain order of LiVSe\({}_{2}\) developing away from the vanadium trimer phase transition boundary
###### Abstract
The phenomenon of self-assembly of constituent elements to form molecules at low temperatures appears ubiquitously in transition metal compounds with orbital degrees of freedom. Recent progress in local structure studies using synchrotron radiation x-rays is shifting the interest in structural studies in such molecule-forming systems from the low-temperature ordered phase to the short-range order that appears like a precursor at high temperatures. In this study, we discuss both experimentally and theoretically the relationship between the trimer structure that appears in the layered LiV\(X_{2}\) (\(X\) = O, S, Se) system with a two-dimensional triangular lattice of vanadium and the zigzag chain-like local structure that appears near the phase transition boundary where molecular formation occurs. The vanadium trimerization that persistently appears in both low-temperature phases of LiVO\({}_{2}\) and LiVS\({}_{2}\) disappears in LiVSe\({}_{2}\), and a regular triangular lattice is thought to be realized in LiVSe\({}_{2}\), but this study reveals that the zigzag chain local distortion appears with a finite correlation length. These zigzag chain state local distortions are similar to the motif of local distortions in the high-temperature phase of LiVS\({}_{2}\), indicating that the local distortions are persistent away from the trimer phase transition boundary. On the other hand, it is concluded that the zigzag chain order appearing in LiVSe\({}_{2}\) is more stable than that in LiVS\({}_{2}\) in terms of the temperature variation of atomic displacement and correlation length. The zigzag chain order is considered to be competitive with the trimer order appearing in the LiV\(X_{2}\) system. In this paper, we discuss the similarities and differences between the parameters that stabilize these electronic phases and the local distortions that appear in other molecular formation systems.
## I Introduction
Since the discovery of the Verwey transition of magnetite in the 1930s [1], many physical and structural studies have been devoted to transition metal compounds that undergo molecular formation at low temperatures [2; 3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. The molecular formation phenomena in transition metal compounds often appear in pyrochlore and triangular lattice compounds with high symmetry. High \(d\)-orbital degeneracy and associated orbital degrees of freedom, the low dimensionality of electronic structures such as hidden one-dimensionality produced by orbital linkages, and competition between itinerancy and localization are considered to be important factors in molecular formation phenomena [2; 3; 4; 5], and together with drastic physical properties such as metal-insulator transition and giant entropy change that appear with molecular formation [22; 23; 18; 24], they have attracted much attention from both basic and applied perspectives. With recent progress in structural analysis techniques using synchrotron radiation x-rays and neutrons, the interest in structural studies of these molecular formation systems is shifting from elucidating the complex molecular formation patterns that appear in the low-temperature phase to the precursor local distortions that appear in the high-temperature phase. For example, in the spinel lattice system AlV\({}_{2}\)O\({}_{4}\), a vanadium heptamer (trimer + tetramer), which appears at low temperatures, persists as short-range order at high temperatures far beyond the phase transition [8], and in CuIr\({}_{2}\)S\({}_{4}\), which forms complex dimer patterns with charge ordering at low temperatures, a short-ranged tetragonal distortion appears in the high-temperature phase [25], as revealed by a pair distribution function (PDF) analysis using synchrotron radiation x-rays. The existence of such local distortions indicates that orbital ordering is localized in the high-temperature phase, which was believed to maintain high orbital degeneracy [25], and may affect our understanding of the mechanism of molecular formation and the thermodynamics associated with phase transitions.
Layered LiV\(X_{2}\) (\(X\) = O, S, Se) with a two-dimensional triangular lattice provides a unique playground for studying the relationship between the molecular formations that appear in the low-temperature phase and the local distortions that appear in the high-temperature phase. The electronic phase diagram characterizing the LiV\(X_{2}\) system is shown in Fig. 1, which was modified from Fig.1 in the Ref. [9] based on the zigzag chain order of LiVSe\({}_{2}\), which will be presented in this article later. LiVO\({}_{2}\) is an insulator in the whole temperature range and shows a nonmagnetic insulator transition at about 480 K. It has long been argued that a vanadium trimer state appears |
2302.14543 | RRT and Velocity Obstacles-based motion planning for Unmanned Aircraft
Systems Traffic Management (UTM) | In this paper, an algorithm for Unmanned Aircraft Systems Traffic Management
(UTM) for a finite number of unmanned aerial vehicles (UAVs) is proposed. This
algorithm is developed by combining the Rapidly-Exploring Random Trees (RRT)
and Velocity Obstacle (VO) algorithms and is referred to as the RRT-VO UTM
algorithm. Here, the RRT algorithm works offline to generate obstacle-free
waypoints in a given environment with known static obstacles. The VO algorithm,
on the other hand, operates online to avoid collisions with other UAVs and
known static obstacles. The boundary of the static obstacles are approximated
by small circles to facilitate the formulation of VO algorithm. The proposed
algorithm's performance is evaluated using numerical simulation and then
compared to the well-known artificial potential field (APF) algorithm for
collision avoidance. The advantages of the proposed method are clearly shown in
terms of lower path length and collision avoidance capabilities for a
challenging scenario. | Himanshu, Jinraj V Pushpangathan, Harikumar Kandath | 2023-02-28T13:08:11Z | http://arxiv.org/abs/2302.14543v1 | RRT and Velocity Obstacles-based motion planning for Unmanned Aircraft Systems Traffic Management (UTM)
###### Abstract
In this paper, an algorithm for Unmanned Aircraft Systems Traffic Management (UTM) for a finite number of unmanned aerial vehicles (UAVs) is proposed. This algorithm is developed by combining the Rapidly-Exploring Random Trees (RRT) and Velocity Obstacle (VO) algorithms and is referred to as the RRT-VO UTM algorithm. Here, the RRT algorithm works offline to generate obstacle-free waypoints in a given environment with known static obstacles. The VO algorithm, on the other hand, operates online to avoid collisions with other UAVs and known static obstacles. The boundaries of the static obstacles are approximated by small circles to facilitate the formulation of the VO algorithm. The proposed algorithm's performance is evaluated using numerical simulation and then compared to the well-known artificial potential field (APF) algorithm for collision avoidance. The advantages of the proposed method are clearly shown in terms of lower path length and collision avoidance capabilities for a challenging scenario.
## I Introduction
Over the last decade, unmanned aerial vehicles (UAVs) have been widely used in both the military (intelligence, surveillance and reconnaissance missions) [1] and civilian (agriculture, mapping and surveying, entertainment, deliveries, disaster relief missions etc.) sectors [2]. This sudden increase in UAVs poses a serious risk to civilians on the ground, as well as piloted aircraft and other UAVs operating in the same airspace. The rise in the number of UAVs in a given airspace necessitates a system referred to as the Unmanned Aircraft Systems Traffic Management (UTM) system that monitors and manages these UAVs, especially at low altitudes. The UTM's objective is to provide a safe, efficient, and secure airspace management system for the UAVs in the same airspace.
The primary responsibility of UTM is to ensure that UAVs keep a safe distance from other aircraft and obstacles. To accomplish this, several methods are developed. The graph-based methods are used in [3] to plan the flight paths of multiple UAVs and maintain a minimum distance between them. In [4], the problem of collision avoidance is solved by decomposing a large Markov decision process (MDP) into small MDPs to find a solution offline and then combined online iteratively to produce a locally optimal solution. The A* algorithm is utilized in [5] to generate flight paths for multiple UAVs in urban airspace. Thereafter, an evolutionary algorithm is employed to schedule the flights to avoid conflicts. One of the most commonly used algorithms in UTM is the Artificial Potential Field (APF). The dynamic artificial potential field was used for collision avoidance in [6]. In [7], authors investigate the use of modified APF along with a haptic-based tele-operation. Authors of [8] use the APF and priority allocation rules for conflict resolution in UAVs. Trajectory prediction method and APF was used for maintaining a safe distance between UAVs and non cooperative obstacles in [9]. A computationally intensive collision avoidance algorithm using predictive control is proposed in [10] to avoid collision among multiple UAVs in the context of UTM.
The main contribution of this paper is the new UTM algorithm, termed the RRT-VO UTM algorithm, for a finite number of UAVs operating in a 2D environment with known 2D static obstacles. This algorithm combines elements of an offline path planning algorithm and an online collision avoidance algorithm to enable efficient and safe coordination of multiple UAVs. Here, the offline path planning is accomplished using the Rapidly-Exploring Random Tree (RRT) algorithm [11], while the Velocity Obstacle (VO) algorithm [12] provides collision avoidance. The RRT is chosen among the existing offline path planning algorithms due to its efficient exploratory ability. Similarly, the Velocity Obstacle (VO) is used for collision avoidance due to its ease of implementation and ability to handle multiple UAVs, making it well-suited for the UTM. To avoid collisions, the VO algorithm in the RRT-VO UTM algorithm searches for a suitable velocity vector for each UAV. In the proposed algorithm, the boundaries of 2D known static obstacles are represented using overlapping circles in order to apply the VO algorithm against the 2D known static obstacles. The proposed algorithm's effectiveness is demonstrated by generating feasible paths for five quadcopters operating in the same 2D environment with known static obstacles. Also, collision avoidance among UAVs and between UAVs and static obstacles is demonstrated for navigation through a challenging environment. A comparison is provided with the well-known artificial potential field (APF) method. The APF method fails to avoid collisions, whereas, using the proposed method, all the UAVs were able to navigate successfully while avoiding collisions.
In Velocity Obstacle technique there are several ways to find a safe velocity to prevent collisions. In [13], the authors describe two main methods: a global search and a heuristic search. The global search method is a comprehensive approach that finds the safe velocity by building a tree of possible maneuvers over time intervals. The heuristic search, on the other hand, is a more targeted approach that only
searches for safe velocities at one time step using certain rules, or heuristics, to choose the best velocity from the options available. These heuristics include "To Goal", which prioritizes reaching the target with the highest velocity along the direct path; "Maximum Velocity", which chooses the highest avoidance velocity within a certain angle from the line to the goal; and "Structure", which selects the velocity that best avoids obstacles based on the level of risk they pose.
Our proposed RRT-VO UTM algorithm uses a hybrid approach that blends the "Structure" and "To Goal" heuristics. This means it tries to reach the goal as quickly as possible while taking into account the presence of obstacles. If an obstacle is detected, the algorithm will choose a velocity that not only avoids the obstacle but is also similar to the original velocity that was heading toward the goal. This hybrid approach considers both the direction and speed of the new velocity, allowing the unmanned aerial vehicle (UAV) to slow down or stop if necessary.
The rest of this paper is organized as follows: Section II gives the preliminaries and problem formulation. Section III describes the development of the RRT-VO UTM algorithm using the RRT and VO algorithms. The simulation results are discussed in Section IV. The key findings of this paper and future research directions are summarized in Section V.
## II Preliminaries and Problem formulation
The increase in the volume of UAVs used for urban applications like package delivery, air ambulance etc. (as shown in Figure 1) demands low-altitude flight. Efficient low-altitude flight in the presence of a large number of buildings and multiple UAVs requires algorithms that can generate optimal offline paths and also real-time collision avoidance. Static obstacles like buildings are approximated by rectangles (as illustrated in Figure 2) for offline path generation.
The UAVs considered are here of quadrotor type with first order velocity control implemented in the outer loop as given in equations (1) and (2) for X-axis and Y-axis respectively.
\[\dot{X}=V_{x} \tag{1}\]
\[\dot{Y}=V_{y} \tag{2}\]
Here \([X,Y]\) are the 2-dimensional coordinates of the UAV with respect to the inertial frame and the control input is the velocity given by \(\mathbf{V}=[V_{x},V_{y}]^{T}\). The following assumptions are made in this paper.
1. The obstacles in the environment are static and their positions are known.
2. The known static obstacles are approximated by rectangles.
3. The position and velocity of each UAV are broadcasted to all other UAVs.
4. Furthermore, the UAVs are approximated by a circle with a given radius from their centre of gravity location.
5. The starting and end waypoint location for each UAV is known and is given as input to the offline planner.
6. The UAVs fly at a constant altitude from the ground. Thus, the motion planning is done in 2D space.
The first objective here is to generate a safe offline path for UAVs to traverse from starting location to the destination location avoiding static obstacles. The second objective is to avoid UAV-to-UAV collision as well as UAV-to-static obstacle collision by generating a safe trajectory online, with a low computational complexity.
## III RRT-VO UTM Algorithm
In this section, a new UTM algorithm called the RRT-VO UTM algorithm based on the RRT and VO algorithms is presented. This algorithm's goal is to generate feasible paths for N numbers of UAVs that operate in the same environment (airspace) with known static obstacles. The feasible path of a UAV is the one that has starting and goal positions and does not collide with known static obstacles or other UAVs. The RRT-VO UTM algorithm generally has an offline path planning module and a collision avoidance module. Given the environment with known static obstacles, the RRT algorithm acts as an offline path planner that determines a path that avoids collision with known static obstacles for UAVs before their take off and starting their mission. The VO algorithm, on the other hand, is used in real-time to avoid collisions between UAVs and static obstacles in the environment. Next, the RRT algorithm is briefly described.
### _Rapidly-Exploring Random Tree Algorithm_
Rapidly-Exploring Random Tree (RRT) is a sampling-based motion planning algorithm that is well-suited for high-dimensional complex configuration spaces (environment) with a large number of obstacles or other constraints. The main idea behind RRT is to construct a tree-like structure in the configuration space by incrementally adding vertices that represent the possible states of the system. In each iteration, a
Fig. 1: UAVs trying to navigate the airspace with other UAVs and buildings.
new random configuration is chosen, and the nearest vertex in the tree is found. The algorithm then attempts to move towards the random configuration from the nearest vertex, creating a new vertex in the tree if the new configuration is in free space and not in collision. The process is repeated until a path between the start and goal configurations is found. Figure 2 shows the path generated by the RRT algorithm. In this figure, the tree-like structure is represented with green lines, and the small blue crosses represent the random configuration. The red squares represent the waypoints generated by the RRT algorithm.
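For illustration, a minimal 2D RRT can be written in a few lines of Python, as sketched below. The obstacle model (circles), the step size, and the goal tolerance are illustrative choices and do not correspond to the authors' implementation.

```python
import math, random

def rrt(start, goal, obstacles, bounds, step=0.5, goal_tol=0.5, max_iter=5000):
    """Minimal 2D RRT. obstacles: list of (cx, cy, r) circles; bounds: (xmax, ymax)."""
    def collision_free(p):
        # Point-based check only (edges are not sampled), a simplification for the sketch.
        return all(math.hypot(p[0] - cx, p[1] - cy) > r for cx, cy, r in obstacles)

    nodes, parent = [start], {0: None}
    for _ in range(max_iter):
        rnd = (random.uniform(0, bounds[0]), random.uniform(0, bounds[1]))
        i_near = min(range(len(nodes)),
                     key=lambda i: math.hypot(nodes[i][0] - rnd[0], nodes[i][1] - rnd[1]))
        near = nodes[i_near]
        theta = math.atan2(rnd[1] - near[1], rnd[0] - near[0])
        new = (near[0] + step * math.cos(theta), near[1] + step * math.sin(theta))
        if not collision_free(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i_near
        if math.hypot(new[0] - goal[0], new[1] - goal[1]) < goal_tol:
            # Walk back up the tree to recover the list of waypoints.
            path, i = [goal], len(nodes) - 1
            while i is not None:
                path.append(nodes[i])
                i = parent[i]
            return path[::-1]
    return None

waypoints = rrt(start=(1, 1), goal=(18, 18), obstacles=[(10, 10, 3)], bounds=(20, 20))
print("found path with", len(waypoints) if waypoints else 0, "waypoints")
```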
### _Velocity Obstacle Algorithm_
The Velocity Obstacle algorithm considers the relative motion of other UAVs in the environment and determines a set of safe velocities for the UAVs to avoid collisions. This algorithm creates a virtual "cone" around each UAV based on its current velocity and position, as well as the velocities and positions of other UAVs/obstacles in the environment. The algorithm then finds the set of safe velocities for the UAV to avoid collisions with all of the other UAVs and static obstacles by finding the intersection of the feasible velocities for the UAV. This intersection is known as the "safe velocity set" and the UAV can choose its velocity from this set to avoid collision.
Figure 3 shows the collision cone formed between UAV A and UAV B. The collision cone, which is used to determine potential collisions between a UAV (referred to as UAV A) and other UAVs (UAV B) or obstacles (Obstacle B), is defined using two tangents on a circle from an external point. The radius of this circle is the sum of the radii of UAV A and UAV B (Obstacle B). Let \(\mathbf{V_{A}}\) be the velocity of UAV A with respect to (w.r.t) a reference frame, \(\mathbf{V_{B}}\) be the velocity of UAV B w.r.t the same reference frame, and \(\mathbf{V_{AB}}\) be the relative velocity vector of UAV A w.r.t UAV B. If \(\mathbf{V_{AB}}\) falls within the collision cone, then a collision between UAV A and UAV B is guaranteed. To prevent this collision, the algorithm selects the \(\mathbf{V_{AB}}\) that does not belong to this collision cone. Using this new \(\mathbf{V_{AB}}\), the algorithm calculates new velocities (\(\mathbf{V_{A}}\)) for UAV A as potential solutions.
To compute the cone, let \(\angle ABX\), \(\angle DAB\), and \(\angle CAB\) denote the angle between the line AB and the X-axis, the angle between the lines AD and AB, and the angle between the lines CA and AB, respectively. Moreover, the position of UAV A in 2D space is represented by the coordinates \((X_{A},Y_{A})\), and the position of UAV B is represented by the coordinates \((X_{B},Y_{B})\). Also, the radii of UAV A and UAV B are represented by \(R_{A}\) and \(R_{B}\), respectively. Furthermore, \(\angle C_{left}\) denotes the angle of the left cone boundary, \(\angle C_{right}\) denotes the angle of the right cone boundary, and \(D_{AB}\) represents the distance between A and B. Now, the collision cone is computed using the equations given below.
\[\angle ABX=\arctan\bigg{(}\frac{Y_{B}-Y_{A}}{X_{B}-X_{A}}\bigg{)} \tag{3}\]
\[\angle DAB=\angle CAB=\arcsin\bigg{(}\frac{R_{A}+R_{B}}{D_{AB}}\bigg{)} \tag{4}\]
\[\angle C_{left}=\angle ABX+\angle DAB \tag{5}\]
Fig. 3: Formation of Collision cone between the UAV A and UAV B (or Obstacle B).
Fig. 2: Offline path generated by RRT algorithm for navigation of UAV from starting to goal position.
\[\angle C_{right}=\angle ABX-\angle CAB \tag{6}\]
Here, Eqn. 3 calculates \(\angle ABX\), and Eqn. 4 calculates \(\angle DAB\) and \(\angle CAB\). Equations 5 and 6 calculate the angles of the left (\(\angle C_{left}\)) and right (\(\angle C_{right}\)) cone boundaries, respectively.
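A small Python sketch of Eqns. 3-6 is given below as an illustration only; `arctan2` is used instead of the plain `arctan` of Eqn. 3 so that the quadrant of \(\angle ABX\) is handled correctly.

```python
import numpy as np

def collision_cone(pos_a, pos_b, r_a, r_b):
    """Left/right cone boundary angles of Eqns. 3-6; None if the circles already overlap."""
    dx, dy = pos_b[0] - pos_a[0], pos_b[1] - pos_a[1]
    d_ab = np.hypot(dx, dy)
    if d_ab <= r_a + r_b:
        return None
    angle_abx = np.arctan2(dy, dx)                 # Eqn. 3
    half_angle = np.arcsin((r_a + r_b) / d_ab)     # Eqn. 4
    return angle_abx + half_angle, angle_abx - half_angle   # Eqns. 5 and 6

def in_cone(v_ab, cone):
    """True if the relative velocity V_AB points between the two cone boundaries."""
    left, right = cone
    axis = 0.5 * (left + right)                    # direction of the line AB
    wrap = lambda a: (a - axis + np.pi) % (2.0 * np.pi) - np.pi
    return wrap(right) <= wrap(np.arctan2(v_ab[1], v_ab[0])) <= wrap(left)
```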
### _RRT-VO UTM Algorithm_
The RRT-VO UTM algorithm is presented in this section. To explain it, first assume that the starting and goal positions of all the UAVs are known. Following that, the environment is formed by placing rectangular obstacles. The RRT algorithm is then used offline to generate obstacle-free (collision-free with respect to the known static obstacles) waypoints from the starting to the goal positions for all the UAVs in this environment. Collisions between known static obstacles and UAVs can be avoided if the UAVs follow these waypoints. However, collisions between the UAVs may still happen. Moreover, if UAVs attempt to avoid these collisions near static obstacles, they may collide with the known static obstacles in the environment. Hence, the UTM requires collision-avoidance capabilities that also account for these 2D static obstacles. The velocity obstacle (VO) method is used in the RRT-VO UTM algorithm for collision avoidance. To provide the desired collision-avoidance capabilities using the velocity obstacle method, the boundary of each rectangular static obstacle is represented by circular obstacles of radius \(R_{obs}\) placed a distance \(L\) apart, as shown in Fig. 4. This figure depicts the boundary of a known static obstacle as overlapping circles. The RRT-VO UTM algorithm can compute these circles in advance, since the static obstacle's boundary and the circle radius are known. Therefore, using the circular representations of the known static obstacles' boundaries, the VO algorithm is capable of avoiding collisions between UAVs even when they are close to known static obstacles. Likewise, for collision avoidance between UAVs, each UAV is represented as a circular obstacle, and the collision cone is built with the combined radius \(R_{A}+R_{B}\) as shown in Fig. 3.
After converting the boundaries of the rectangular obstacles into overlapping circular obstacles, the new UTM algorithm assigns the first waypoint to each UAV and calculates the velocity for tracking the assigned waypoint. Subsequently, the UTM algorithm computes the relative distance between each pair of UAVs using the position information of all the UAVs. When the relative distance of one UAV, say UAV A, from some other UAV, say UAV B, is less than a specific value (\(Dist_{uav}\)), the relative velocity \(\mathbf{V_{AB}}\) is computed and checked to see whether it lies inside the collision cone angles [\(\angle C_{left}\), \(\angle C_{right}\)]. If it falls within the cone, the UTM algorithm searches for a new relative velocity, \(\mathbf{V^{\prime}_{AB}}\). The UTM algorithm has a search loop to identify an appropriate angle (denoted by \(\theta\)) for \(\mathbf{V^{\prime}_{AB}}\), ranging from 0 to 2\(\pi\) radians with a step size of 0.2 radians. Within this loop, another search loop determines an appropriate magnitude (represented by \(M\)) of the new relative velocity \(\mathbf{V^{\prime}_{AB}}\), ranging from 0 to \(\|\mathbf{V_{AB}}\|_{2}\) with a step size of 0.2. Now, for each \(\theta\) and \(M\), \(\mathbf{V^{\prime}_{AB}}\) is constructed as per Eqn. 7.
\[\mathbf{V^{\prime}_{AB}}=[M*cos(\theta),M*sin(\theta)] \tag{7}\]
If \(\mathbf{V^{\prime}_{AB}}\) falls outside of the cone, then it is considered a potential solution and the new velocity for UAV A, \(\mathbf{V^{\prime}_{A}}\in\mathbb{R}^{2}\), is calculated as given in Eqn. 8.
\[\begin{split}\mathbf{V^{\prime}_{A}}=[M*cos(\theta)+\|\mathbf{V_ {B}}\|_{2}*cos(\angle\mathbf{V_{B}}),\\ M*sin(\theta)+\|\mathbf{V_{B}}\|_{2}*sin(\angle\mathbf{V_{B}})] \end{split} \tag{8}\]
The computed feasible velocities are then stored in an array \(fV_{A}\). Similarly, the collision cones are constructed for all other UAVs or obstacles. However, no further search for a new \(\mathbf{V_{AB}}\) is performed. Instead, each entry in \(fV_{A}\) is evaluated, and the corresponding relative velocity \(\mathbf{V^{\prime}_{AB}}\) is calculated. If this \(\mathbf{V^{\prime}_{AB}}\) falls within the cone, the corresponding \(\mathbf{V^{\prime}_{A}}\) is removed from \(fV_{A}\). After this process has been completed for all other UAVs and obstacles, the velocity \(\mathbf{V^{\prime}_{A}}\) is selected from \(fV_{A}\) based on its similarity to the original velocity \(\mathbf{V_{A}}\), as measured by the Euclidean norm. Then, the position of UAV A is updated using the selected \(\mathbf{V^{\prime}_{A}}\) as given in Equations 9 and 10.
\[X_{A}=X_{A}+dt*VX^{\prime}_{A} \tag{9}\]
\[Y_{A}=Y_{A}+dt*VY^{\prime}_{A} \tag{10}\]
In Eqn. 9 & 10, \(VX^{\prime}_{A}\) and \(VY^{\prime}_{A}\) are the x and y components of \(\mathbf{V^{\prime}_{A}}\) and \(dt\) is the time-step used for integrating the
Fig. 4: Representation of a rectangular obstacle using circular obstacles to apply Velocity obstacle algorithm for collision avoidance with known static obstacles.
velocities. In the RRT-VO algorithm, the aforementioned process of determining a new velocity to avoid collision is repeated for each UAV in the airspace at each time-step. The pseudocode of the RRT-VO algorithm is given in Algorithm 1.
```
1:Input: UAV start, goal position \((X,Y)\) and initial velocity \((VX,VY)\) and obs. pos., width, height
2: Calculate waypoints for each UAV using RRT
3: Generate circular obstacles from rectangular obstacles
4: Assign initial waypoints to each UAV
5:while goals not reached do
6:for each UAV A do
7:if distance between A, waypoint\(<Dist_{wp}\)then
8: Assign next waypoint \((X_{Awp},Y_{Awp})\) to A
9:endif
10:endfor
11:for each UAV A do
12: Calculate velocity \(\mathbf{V_{A}}=[kp*(X_{Awp}-X_{A}),kp*(Y_{Awp}-Y_{A})]\)
13: Initialize feasible velocity set \(fV_{A}\)
14:for each remaining UAV B do
15:if distance between A, B \(<Dist_{uav}\)then
16: Calculate relative velocity \(\mathbf{V_{AB}}=[VX_{A}-VX_{B},VY_{A}-VY_{B}]\)
17: Calculate angle bound for collision cone from Eqn. 5 and 6
18:if\(\mathbf{V_{AB}}\) lies in cone and \(fV_{A}\) is empty then
19: Search for \(\mathbf{V^{\prime}_{AB}}\) outside cone by Eqn.7
20: Calculate \(\mathbf{V^{\prime}_{A}}\) from \(\mathbf{V^{\prime}_{AB}}\) using Eqn. 8 and store in \(fV_{A}\)
21:endif
22:if\(\mathbf{V_{AB}}\) lies in cone & \(fV_{A}\) not empty then
23:for each \(\mathbf{V^{\prime}_{A}}\) in \(fV_{A}\)do
24: Calculate \(\mathbf{V^{\prime}_{AB}}\) from \(\mathbf{V^{\prime}_{A}}\)
25:if\(\mathbf{V^{\prime}_{AB}}\) lies in cone then
26: Remove \(\mathbf{V^{\prime}_{A}}\) from \(fV_{A}\)
27:endif
28:endfor
29:endif
30:endif
31:endfor
32:for each obstacle B do
33:if distance between A, B\(<Dist_{obs}\)then
34:Repeat steps 16-29
35:endif
36:endfor
37: Select \(\mathbf{V^{\prime}_{A}}\) from \(fV_{A}\) similar to original \(\mathbf{V_{A}}\)
38: Update UAV A's position with \(\mathbf{V^{\prime}_{A}}\) using Eqns. 9 and 10
39:endfor
40:endwhile
```
**Algorithm 1** RRT-VO UTM Algorithm
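To make the velocity search and position update of Algorithm 1 (Eqns. 7-10) concrete, a minimal single-neighbour Python sketch is given below. It is an illustration only and reuses the `collision_cone`/`in_cone` helpers sketched after Eqns. 3-6; the full Algorithm 1 additionally prunes the feasible set \(fV_{A}\) against every other UAV and obstacle before selecting the new velocity.

```python
import numpy as np

def vo_step(pos_a, vel_a, pos_b, vel_b, r_a, r_b, dt=0.1):
    """One velocity-obstacle avoidance step of UAV A against a single neighbour B."""
    pos_a, vel_a = np.asarray(pos_a, float), np.asarray(vel_a, float)
    vel_b = np.asarray(vel_b, float)
    cone = collision_cone(pos_a, pos_b, r_a, r_b)
    v_ab = vel_a - vel_b
    if cone is None or not in_cone(v_ab, cone):
        new_vel = vel_a                                  # current velocity is already safe
    else:
        feasible = []                                    # the set fV_A
        for theta in np.arange(0.0, 2.0 * np.pi, 0.2):   # angle search
            for m in np.arange(0.0, np.linalg.norm(v_ab) + 1e-9, 0.2):  # magnitude search
                v_ab_new = np.array([m * np.cos(theta), m * np.sin(theta)])   # Eqn. 7
                if not in_cone(v_ab_new, cone):
                    feasible.append(v_ab_new + vel_b)    # Eqn. 8: V'_A = V'_AB + V_B
        if not feasible:                                 # defensive fallback: match B's velocity
            feasible = [vel_b.copy()]
        # select the feasible velocity closest (Euclidean norm) to the original V_A
        new_vel = min(feasible, key=lambda v: np.linalg.norm(v - vel_a))
    return pos_a + dt * new_vel, new_vel                 # Eqns. 9 and 10
```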
## IV Simulation Results
In this section, the effectiveness of the RRT-VO UTM algorithm is evaluated for multiple UAVs operating in the same environment with a dimension of 400 m \(\times\) 400 m. The environment is made up of multiple rectangular obstacles that are converted into overlapping circular obstacles. The performance of the proposed algorithm is assessed in terms of safe, collision-free trajectory generation and path length for five and seven quadcopters in the given environment with known static obstacles. For comparison with the proposed algorithm, the Artificial Potential Field (APF)-based UTM algorithm is used to generate paths for these UAVs operating in the same environment. APF works on the basis of an attractive force toward the goal/waypoint and a repulsive force to avoid collisions with other UAVs and obstacles. Suppose \((X_{A},Y_{A})\) is the position of UAV A, \((X_{Awp},Y_{Awp})\) is the position of the waypoint assigned to UAV A, and \((X_{B},Y_{B})\) is the position of another UAV or obstacle B. The attractive and repulsive forces are calculated using Eqns. 12 and 14.
\[Dir_{att}=[X_{Awp}-X_{A},Y_{Awp}-Y_{A}] \tag{11}\]
\[F_{att}=k_{att}*\frac{Dir_{att}}{\|Dir_{att}\|} \tag{12}\]
\[Dir_{rep}=[X_{A}-X_{B},Y_{A}-Y_{B}] \tag{13}\]
\[F_{rep}=k_{rep}*\frac{Dir_{rep}}{\|Dir_{rep}\|} \tag{14}\]
Here, Eqns. 11 and 13 calculate the direction vectors (\(Dir_{att}\) and \(Dir_{rep}\)) for the attractive and repulsive forces, respectively.
The attractive and repulsive forces are added together to calculate the total force (shown in Eqn. 15), which is then used to update the position of UAV A as shown in Eqns. 16 and 17.
\[TF=F_{att}+F_{rep} \tag{15}\]
\[X_{A}=X_{A}+dt*TF_{X} \tag{16}\]
\[Y_{A}=Y_{A}+dt*TF_{Y} \tag{17}\]
where \(TF_{X}\) and \(TF_{Y}\) are the x and y components of the total force (\(TF\)). The position of each UAV is updated similarly using Eqns. 11-17 to avoid obstacles and other UAVs.
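A minimal Python sketch of this APF update is given below as an illustration; the default gains and activation distances follow Table II, and the `neighbours` list of `(position, is_obstacle)` pairs is an assumed input format.

```python
import numpy as np

def apf_step(pos_a, waypoint, neighbours, k_att=8.0, k_rep=15.0, dt=0.1,
             dist_uav=50.0, dist_obs=20.0):
    """One RRT-APF position update for UAV A following Eqns. 11-17."""
    pos_a = np.asarray(pos_a, dtype=float)
    dir_att = np.asarray(waypoint, dtype=float) - pos_a              # Eqn. 11
    total = k_att * dir_att / (np.linalg.norm(dir_att) + 1e-12)      # Eqn. 12
    for pos_b, is_obstacle in neighbours:                            # other UAVs and obstacle circles
        dir_rep = pos_a - np.asarray(pos_b, dtype=float)             # Eqn. 13
        dist = np.linalg.norm(dir_rep)
        if dist < (dist_obs if is_obstacle else dist_uav):           # activation distance
            total += k_rep * dir_rep / (dist + 1e-12)                # Eqns. 14-15
    return pos_a + dt * total                                        # Eqns. 16-17
```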
The RRT-VO UTM algorithm (**Algorithm 1**) is implemented in Matlab and uses the parameters given in Table I for the simulations. The APF-based UTM algorithm uses the same waypoints generated by the RRT algorithm. It also uses the same basic mechanisms, such as activating collision avoidance when another UAV or obstacle is closer than a certain distance (\(Dist_{uav}\)/\(Dist_{obs}\)) to the current UAV, and assigning a new waypoint when the distance of the UAV from the previous waypoint is less than a certain value (\(Dist_{wp}\)). The parameters used in the APF algorithm are given in Table II.
The feasible paths generated by the RRT-VO UTM algorithm are shown in Fig. 5. These paths do not collide with any static obstacles. The paths shown in Fig. 5 also indicate that the proposed UTM algorithm is able to avoid collisions with known static obstacles and other UAVs. This is substantiated by the relative distances between UAVs shown in Fig. 6: the relative distances are non-zero, which indicates that no collisions occur between UAVs. For the RRT-APF method, the non-zero relative distances (shown in Fig. 8) make it clearly evident that each UAV is able to avoid collisions with the other UAVs. However, Fig. 7 shows that UAV 2 and UAV 5 collided with obstacles (represented by circles) while trying to navigate through the environment.
Another point of comparison between the two algorithms is the path length. Table III shows the path lengths of both algorithms for the environment with five UAVs, as shown in Fig. 5 and Fig. 7. The "-" entries in the RRT-APF column for UAV 2 and UAV 5 indicate the occurrence of a collision, so the path length values are ignored for these cases. For each of the five UAVs, the RRT-VO UTM algorithm generates a shorter path than the RRT-APF UTM algorithm. Similarly, Table IV shows the path lengths of seven UAVs navigating the same environment. In this experiment, UAV 3, UAV 4, and UAV 6 collided with obstacles when using the RRT-APF method, so the path lengths of these UAVs are marked as "-". The UAVs using the RRT-VO UTM algorithm, on the other hand, have shorter path lengths without any collisions in the seven-UAV case as well.
In this paper, only quadcopters are used for simulations. However, it is important to note that the RRT-VO UTM algorithm can be used to generate paths for the fixed-wing aircraft or a combination of fixed wing and quadcopters.
## V Conclusion
A new UTM algorithm, referred to as the RRT-VO UTM algorithm, is developed by combining the Rapidly-Exploring Random Tree (RRT) and Velocity Obstacle (VO) algorithms. The tractability and effectiveness of this algorithm are demonstrated by generating safe and collision-free paths for multiple quadrotors. Furthermore, the comparison study indicates that the RRT-VO UTM algorithm outperforms the APF-based UTM algorithm in terms of safe navigation and shorter path lengths. The proposed algorithm can also generate feasible paths for fixed-wing aircraft or a combination of fixed-wing aircraft and quadcopters. Future research will be focused
\begin{table}
\begin{tabular}{|c||c|} \hline Parameter & Value \\ \hline \(k_{att}\) & 8 \\ \(k_{rep}\) & 15 \\ \(dt\) & 0.1 \\ \(Dist_{wp}\) & 10 \\ \(Dist_{uav}\) & 50 \\ \(Dist_{obs}\) & 20 \\ \hline \end{tabular}
\end{table} TABLE II: Parameters for simulating the RRT-APF UTM algorithm
\begin{table}
\begin{tabular}{|c||c|} \hline Parameter & Value \\ \hline \(k_{p}\) & 0.2 \\ \(dt\) & 0.1 \\ \(R_{uav}\) & 12 \\ \(R_{obs}\) & 12 \\ \(L\) & 15 \\ \(Dist_{wp}\) & 10 \\ \(Dist_{uav}\) & 50 \\ \(Dist_{obs}\) & 20 \\ \hline \end{tabular}
\end{table} TABLE I: Parameters for simulating the RRT-VO UTM algorithm
Fig. 5: Five UAVs navigating the environment using RRT-VO UTM algorithm.
\begin{table}
\begin{tabular}{|c||c||c|} \hline & RRT-VO (\(m\)) & RRT-APF (\(m\)) \\ \hline UAV 1 & 603.35 & 676.67 \\ UAV 2 & 623.7 & – \\ UAV 3 & 534.77 & 632.73 \\ UAV 4 & 469.23 & 616.39 \\ UAV 5 & 696.16 & – \\ \hline \end{tabular}
\end{table} TABLE III: Path Length of five UAVs navigating the environment using RRT-VO UTM and RRT-APF UTM algorithm.
Fig. 6: Distance between the pair of UAVs while navigating the environment using RRT-VO UTM algorithm.
on extending the RRT-VO UTM algorithm to generate feasible 3D paths for multiple UAVs.
## Acknowledgment
The authors would like to acknowledge the funding provided by ROCKWELL COLLINS (INDIA) ENTERPRISES PVT LTD, under the project "Online Planner for UTM", to carry out this research work.
|
2309.16740 | Three-body decays of $π_{2}(1670)$ and $ρ_{3}(1690)$ via the Sill
distribution | We present the three-body decays for the resonances $\pi_{2}(1670)$ and $
\rho_3(1690)$. We use an effective model based on the flavor symmetry and have
assumed the Sill distribution for describing the line-shape of intermediate
unstable resonances. The vector-pseudoscalar meson decay channel is considered
as an intermediate step to explain the decay of three-pseudoscalar meson
channels. | Shahriyar Jafarzade, Enrico Trotti | 2023-09-28T08:01:27Z | http://arxiv.org/abs/2309.16740v3 | # Three-body decays of \(\pi_{2}(1670)\) and \(\rho_{3}(1690)\) via the Sill distribution
###### Abstract
We present the three-body decay results for the resonances \(\pi_{2}(1670)\) and \(\rho_{3}(1690)\). We use an effective model based on flavor symmetry and assume the Sill distribution for the line shape of the intermediate unstable resonances. A vector-pseudoscalar meson decay channel is used as an intermediate step to describe the decays into three pseudoscalar mesons.
## 1 Introduction
The light ground state mesons such as \(\pi_{2}(1670)\) and \(\rho_{3}(1690)\) (with \(J^{PC}=2^{-+}\,,\,3^{--}\) ) are well-established according to the Particle Data Group (PDG) [1]. Their total decay widths are \(\Gamma^{\rm tot}_{\pi_{2}(1670)}=258^{+8}_{-9}\) MeV and \(\Gamma^{\rm tot}_{\rho_{3}(1690)}=161\pm 10\) MeV. Experimental data for the decay processes of these resonances was used in [2; 3; 4; 5] to test the corresponding spontaneous and anomalous breaking features of QCD symmetries used in the extended Linear Sigma Model (eLSM) [6]. We list some of the vector-pseudoscalar meson decay data of these resonances in Tab. 1.
In this work, we present new results for three-body decays of these resonances based on the effective model constructed in Refs. [3; 4]. To this end, we use the Sill approximation developed in Ref. [7] and subsequently used in the case of, e.g., spin-2 mesons [8] and hybrids [9] (for other works that have applied the Sill distribution, see also [10; 11; 12]).
\begin{table}
\begin{tabular}{|c|c|c|} \hline Decay process & Model [3; 4] & PDG [1] \\ \hline \(\pi_{2}(1670)\to\rho(770)\,\pi\) & \(80.6\pm 10.8\) & \(80.6\pm 10.8\) \\ \hline \(\pi_{2}(1670)\to\bar{K}^{*}(892)\,K+{\bf c.c}\) & \(11.7\pm 1.6\) & \(10.9\pm 3.7\) \\ \hline \(\rho_{3}(1690)\to\omega(782)\,\pi\) & \(35.8\pm 7.4\) & \(25.8\pm 9.8\) \\ \hline \(\rho_{3}(1690)\to\rho(770)\,\eta\) & \(3.8\pm 0.8\) & – \\ \hline \(\rho_{3}(1690)\to\bar{K}^{*}(892)\,K+{\bf c.c.}\) & \(3.4\pm 0.7\) & – \\ \hline \end{tabular}
\end{table}
Table 1: Decay rates (MeV) of \(\pi_{2}(1670)\) and \(\rho_{3}(1690)\). (The first entry has been used to determine the model parameter.)
## 2 Effective Model
The following effective interaction term describes the decay of the pseudotensor mesons \(\left(P_{2}{=}\left\{\pi_{2}(1670),K_{2}(1770),\eta_{2}(1870),\eta_{2}(1645) \right\}\right)\) into the vector \(\left(V{=}\left\{\rho(770),K^{*}(892),\omega(782),\phi(1020)\right\}\right)\) and pseudoscalar \(\left(P{=}\left\{\pi,K,\eta(547),\eta^{\prime}(958)\right\}\right)\) mesons [4]:
\[\mathcal{L}_{p_{2}ep}=g_{p_{2}ep}\,\mathrm{tr}\left[P_{2}^{\mu\nu}[V_{\mu}\,, (\partial_{\nu}P)]_{-}\right]\,, \tag{1}\]
which leads to the decay rate formula
\[\Gamma_{P_{2}\to V+P}(m_{p_{2}},m_{v},m_{p})=\frac{g_{p_{2}ep}^{2}\,|\vec{k}_{p_{2},v,p}|^{3}}{120\,\pi\,m_{p_{2}}^{2}}\Big{(}5+\frac{2\,|\vec{k}_{p_{2},v,p}|^{2}}{m_{v}^{2}}\Big{)}\,\kappa_{i}\,\Theta(m_{p_{2}}-m_{v}-m_{p})\,, \tag{2}\]
where \(\kappa_{i}\) is the Clebsch-Gordan coefficient, extended from the eLSM. The coupling constant is determined to be:
\[g_{p_{2}ep}^{2}=(11.9\pm 1.6)\,. \tag{3}\]
A similar Lagrangian describing the decay of spin-3 tensor mesons \(\left(W_{3}{=}\left\{\rho_{3}(1690),K_{3}^{*}(1780),\phi_{3}(1850),\omega_{3}( 1670)\right\}\right)\) has the following form [3]:
\[\mathcal{L}_{w_{3}ep}=g_{w_{3}ep}\,\varepsilon^{\mu\nu\rho\sigma}\,\mathrm{tr }\left[W_{3,\mu\alpha\beta}\left\{(\partial_{\nu}V_{\rho}),\,(\partial^{\alpha }\partial^{\beta}\partial_{\sigma}P)\right\}_{+}\right]. \tag{4}\]
The tree-level decay rate formula in this scenario is:
\[\Gamma_{w_{3}\to V+P}(m_{w_{3}},m_{v},m_{p})=g_{w_{3}ep}^{2}\,\frac{|\vec{k}_ {w_{3},v,p}|^{7}}{105}\,\kappa_{i}\,\Theta(m_{w_{3}}-m_{v}-m_{p})\,. \tag{5}\]
As a consequence of the parameter determination [3], we obtain the following value for the coupling
\[g_{w_{3}ep}^{2}=(9.2\pm 1.9)\cdot 10^{-16}\,\,\mathrm{MeV}^{-6}\,. \tag{6}\]
Some decay channels are listed in Tab. 1 (see Ref. [3] for more details).
## 3 Results for three-body decays
One can find the following decay rates in PDG [1]:
\[\Gamma_{\pi_{2}(1670)\to\pi^{\pm}\pi^{\mp}\pi^{-}}=137\pm 11\,\,\mathrm{MeV}\,,\qquad\Gamma_{\rho_{3}(1690)\to K\overline{K}\pi}=6.1\pm 2.0\,\mathrm{MeV}\,. \tag{7}\]
In order to describe them theoretically, we introduce the "Sill" spectral function \(d^{\mathrm{Sill}}\), which is chosen because of the following features: it is normalized even for broad states, the real part of the loop contribution of virtual particles vanishes, and it takes the decay threshold into account. For further details, see Ref. [7].
Considering the spectral function for \(\rho(770)\)
\[d_{\rho}^{\mathrm{Sill}}(y)=\frac{2y}{\pi}\frac{\sqrt{y^{2}-4m_{\pi}^{2}}\, \tilde{\Gamma}_{\rho}}{(y^{2}-m_{\rho}^{2})^{2}+(\sqrt{y^{2}-4m_{\pi}^{2}}\, \tilde{\Gamma}_{\rho})^{2}}\,\Theta(y-2m_{\pi})\,,\qquad\qquad\int_{2m_{\pi}}^ {\infty}dy\,d_{\rho}^{\mathrm{Sill}}(y)=1\, \tag{8}\]
where the definition of \(\tilde{\Gamma}_{\rho}\) is
\[\tilde{\Gamma}_{\rho}\equiv\frac{\Gamma^{\rm PDG}_{\rho\to 2\pi}\,m_{\rho}}{ \sqrt{m_{\rho}^{2}-4m_{\pi}^{2}}}\,\,\,\,, \tag{9}\]
with the PDG values \(\Gamma^{\rm PDG}_{\rho\to 2\pi}=149.1\) MeV and \(m_{\rho}=775.11\) MeV, we obtain the following result
\[\Gamma_{\pi_{2}\to\rho\pi\to\pi\pi\pi}\simeq\int_{2m_{\pi}}^{\infty}dy\,d^{\rm Sill}_{\rho}(y)\,\Gamma_{\pi_{2}\to\rho\pi}(m_{\pi_{2}},y,m_{\pi})\simeq(73.9\pm 9.9)\,{\rm MeV}\,. \tag{10}\]
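A minimal numerical sketch of this convolution (Eqns. 8-10) is given below as an illustration; the two-body width `gamma_two_body`, i.e. Eqn. 2 with the fitted coupling and the Clebsch-Gordan factor, is assumed to be supplied by the user, and the upper integration cut-off is a numerical choice.

```python
import numpy as np

M_PI, M_RHO = 139.57, 775.11      # MeV, masses used in the text
GAMMA_RHO = 149.1                 # PDG width of rho -> 2 pi in MeV
GAMMA_TILDE = GAMMA_RHO * M_RHO / np.sqrt(M_RHO**2 - 4.0 * M_PI**2)   # Eqn. 9

def _trapz(f, x):
    """Simple trapezoidal rule, kept local to stay independent of numpy version details."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def sill_rho(y):
    """Sill spectral function of the rho(770), Eqn. 8; zero below the 2 pi threshold."""
    y = np.asarray(y, dtype=float)
    k = np.sqrt(np.clip(y**2 - 4.0 * M_PI**2, 0.0, None))
    num = (2.0 * y / np.pi) * k * GAMMA_TILDE
    den = (y**2 - M_RHO**2)**2 + (k * GAMMA_TILDE)**2
    return np.where(y > 2.0 * M_PI, num / den, 0.0)

def three_body_width(gamma_two_body, m_parent, m_p, y_max=3000.0, n=20000):
    """Eqn. 10: fold the tree-level two-body width with the rho(770) line shape."""
    y = np.linspace(2.0 * M_PI + 1e-6, y_max, n)
    gam = np.array([gamma_two_body(m_parent, yi, m_p) for yi in y])
    return _trapz(sill_rho(y) * gam, y)

# normalization check of Eqn. 8: close to one, the small deficit is the truncated tail
grid = np.linspace(2.0 * M_PI, 20000.0, 400000)
print(_trapz(sill_rho(grid), grid))
```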
The Sill distribution for the vector \(K^{\star}(892)\)-meson reads:
\[d^{\rm Sill}_{K^{\star}}(y)=\frac{2y}{\pi}\frac{\sqrt{y^{2}-(m_{K}+m_{\pi})^{2} }\,\tilde{\Gamma}_{K^{\star}}}{(y^{2}-m_{K^{\star}}^{2})^{2}+(\sqrt{y^{2}-(m_{ K}+m_{\pi})^{2}}\,\tilde{\Gamma}_{K^{\star}})^{2}}\,\Theta(y-m_{K}-m_{\pi})\,, \tag{11}\]
with the normalization
\[\int_{m_{k}+m_{\pi}}^{\infty}dy\,d^{\rm Sill}_{K^{\star}}(y)=1\,\,. \tag{12}\]
In this case, three-body decay of \(\rho_{3}(1690)\) can be calculated as:
\[\Gamma_{\rho_{3}(1690)\to\bar{K}^{\star}(892)\,K\to K\overline{K}\pi}\simeq \int_{m_{k}+m_{\pi}}^{\infty}dy\,d^{\rm Sill}_{K^{\star}}(y)\,\Gamma_{\rho_{3} \to\bar{K}^{\star}\,K}(m_{\rho_{3}},y,m_{K})\simeq(3.43\pm 0.70)\,{\rm MeV}\,\,, \tag{13}\]
where \(\tilde{\Gamma}_{K^{\star}}\) is linked to the PDG values \(\Gamma^{\rm PDG}_{K^{\star}\to K\pi}=51.4\) MeV and \(m_{K^{\star}}=890\) MeV according to:
\[\tilde{\Gamma}_{K^{\star}}\equiv\frac{\Gamma^{\rm PDG}_{K^{\star}\to K\pi}\,m _{K^{\star}}}{\sqrt{m_{K^{\star}}^{2}-(m_{K}-m_{\pi})^{2}}}\,\,\,\,. \tag{14}\]
The decay channel \(\Gamma_{\rho_{3}\to\eta\pi\pi}\) has also been seen experimentally [13]; we estimate it as:
\[\Gamma_{\rho_{3}(1690)\to\rho(770)\eta\to\eta\pi\pi}\simeq\int_{2m_{\pi}}^{ \infty}dy\,d^{\rm Sill}_{\rho}(y)\,\Gamma_{\rho_{3}\to\rho\eta}(m_{\rho_{3}},y, m_{\eta})\simeq(3.95\pm 0.81)\,{\rm MeV}\,\,. \tag{15}\]
Using the decay rates given in Tab. 1, we present various three-body decay channels, computed assuming the Sill distribution, in Tab. 2.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Decay channel & Sill distribution & PDG \\ \hline \(\pi_{2}(1670)\to\rho(770)\pi\to\pi^{\pm}\pi^{+}\pi^{-}\) & \(73.9\pm 9.9\) & \(137\pm 11\) \\ \hline \(\pi_{2}(1670)\to\overline{K}^{\star}(892)K\to K\overline{K}\pi\) & \(10.4\pm 1.4\) & \\ \hline \(\rho_{3}(1690)\to\rho(770)\eta\to\eta\pi\pi\) & \(3.88\pm 0.80\) & seen \\ \hline \(\rho_{3}(1690)\to\overline{K}^{\star}(892)K\to K\overline{K}\pi\) & \(3.43\pm 0.70\) & \(6.1\pm 2.0\) \\ \hline \end{tabular}
\end{table}
Table 2: Three body decay rates of \(\pi_{2}(1670)\) and \(\rho_{3}(1690)\) mesons (MeV).
## 4 Conclusion
In this note, we used a hadronic model and the Sill distribution, applied to the intermediate vector mesons, to evaluate the three-body decays of \(\pi_{2}(1670)\) and \(\rho_{3}(1690)\). The results are compared to the ones listed in the PDG: they agree within the error bars for \(\rho_{3}(1690)\to K\overline{K}\pi\), while the result for \(\pi_{2}(1670)\to 3\pi\) is about a factor of two below the experimental value, which implies that a multi-channel version of the Sill distribution [7] should be considered in the future.
## Acknowledgement
We are thankful to Francesco Giacosa, Adrian Koenigstein, and Vanamali Shastry for useful discussions. We acknowledge financial support through the project "Development Accelerator of the Jan Kochanowski University of Kielce", co-financed by the European Union under the European Social Fund, with no. POWR.03.05.00-00-Z212 / 18.
|
2308.16469 | Link Prediction for Wikipedia Articles as a Natural Language Inference
Task | Link prediction task is vital to automatically understanding the structure of
large knowledge bases. In this paper, we present our system to solve this task
at the Data Science and Advanced Analytics 2023 Competition "Efficient and
Effective Link Prediction" (DSAA-2023 Competition) with a corpus containing
948,233 training and 238,265 for public testing. This paper introduces an
approach to link prediction in Wikipedia articles by formulating it as a
natural language inference (NLI) task. Drawing inspiration from recent
advancements in natural language processing and understanding, we cast link
prediction as an NLI task, wherein the presence of a link between two articles
is treated as a premise, and the task is to determine whether this premise
holds based on the information presented in the articles. We implemented our
system based on the Sentence Pair Classification for Link Prediction for the
Wikipedia Articles task. Our system achieved 0.99996 Macro F1-score and 1.00000
Macro F1-score for the public and private test sets, respectively. Our team
UIT-NLP ranked 3rd in performance on the private test set, equal to the scores
of the first and second places. Our code is publicly for research purposes. | Chau-Thang Phan, Quoc-Nam Nguyen, Kiet Van Nguyen | 2023-08-31T05:25:04Z | http://arxiv.org/abs/2308.16469v2 | # Link Prediction for Wikipedia Articles as a Natural Language Inference Task
###### Abstract
The link prediction task is vital to automatically understanding the structure of large knowledge bases. In this paper, we present our system to solve this task at the Data Science and Advanced Analytics 2023 Competition "Efficient and Effective Link Prediction" (DSAA-2023 Competition) [1] with a corpus containing 948,233 training and 238,265 for public testing. This paper introduces an approach to link prediction in Wikipedia articles by formulating it as a natural language inference (NLI) task. Drawing inspiration from recent advancements in natural language processing and understanding, we cast link prediction as an NLI task, wherein the presence of a link between two articles is treated as a premise, and the task is to determine whether this premise holds based on the information presented in the articles. We implemented our system based on the Sentence Pair Classification for Link Prediction for the Wikipedia Articles task. Our system achieved 0.99996 Macro F1-score and 1.00000 Macro F1-score for the public and private test sets, respectively. Our team UIT-NLP ranked 3rd in performance on the private test set, equal to the scores of the first and second places. Our code1 is publicly available for research purposes.
Footnote 1: [https://github.com/phanchauthang/dsaa-2023-kaggle/](https://github.com/phanchauthang/dsaa-2023-kaggle/)
Footnote 2: [https://www.wikipedia.org/](https://www.wikipedia.org/)
Link prediction, Natural Language Inference, DSAA2023 Competition
## I Introduction
Wikipedia1, the world's largest collaborative encyclopedia, has become an invaluable resource for obtaining knowledge on a wide range of topics. With millions of articles covering diverse subjects, Wikipedia offers a vast repository of information that is constantly expanding. However, despite its impressive size, the interlinking between articles within Wikipedia is not always comprehensive, leading to gaps in information connectivity.
Footnote 1: [https://github.com/phanchauthang/dsaa-2023-kaggle/](https://github.com/phanchauthang/dsaa-2023-kaggle/)
Link prediction, a fundamental task in network analysis, aims to predict missing links in a given network based on existing connections. In the context of Wikipedia, link prediction becomes particularly relevant as it can help enhance the navigability of the encyclopedia, improve information accessibility, and promote a deeper understanding of related topics. The DSAA 2023 challenge focuses on the link prediction task applied to Wikipedia articles. Additionally, the focus of the DSAA 2023 challenge [1] is to propose methods for link prediction in network-like data structures, such as network reconstruction and network development, using Wikipedia articles as the primary data source.
Natural Language Inference (NLI) is the task of determining whether a "hypothesis" is true (entailment), false (contradiction), or undetermined (neutral) given a "premise" [2]. The link prediction for Wikipedia articles task is defined as follows: given a sparsified sub-graph of the Wikipedia network, predict whether a link exists between two Wikipedia pages. In this paper, we leverage the inherent similarities between the Natural Language Inference task and link prediction for Wikipedia articles. Drawing upon this connection, we apply the widely used Sentence Pair Classification (SPC) technique, commonly employed in NLI, to the specific context of the link prediction task.
This paper addresses the link prediction challenge for Wikipedia articles by combining SPC based on NLI and pre-processing techniques. Traditional methods for link prediction in Wikipedia often rely on graph-based algorithms or textual similarity measures [3], which may overlook the nuanced relationships embedded within the text of articles. By integrating sentence pair classification and pre-processing techniques, we aim to capture the semantic and contextual information in Wikipedia articles more effectively, improving link prediction accuracy.
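As a sketch of this formulation (an illustration only: the checkpoint name, maximum sequence length, and label convention below are placeholders, and the actual system's model choice and training procedure are not reproduced here), the inference step of a sentence-pair classifier can be written with the Hugging Face transformers API as follows.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"   # placeholder checkpoint, fine-tuned on labelled article pairs

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def predict_link(source_text: str, target_text: str) -> int:
    """Treat the two article texts as a premise/hypothesis pair and classify link vs. no link."""
    enc = tokenizer(source_text, target_text, truncation=True,
                    max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return int(logits.argmax(dim=-1).item())   # assumed convention: 1 = link, 0 = no link
```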
Our contributions are summarized as follows:
* Firstly, we adopted efficient data pre-processing techniques to cleanse the comments obtained from Wikipedia. The utilization of these techniques serves to elevate the overall quality of the data and yields significant improvements in the extraction of relevant information before training the model for the Link Prediction task.
* Secondly, we leverage the similarities between NLI and link prediction for Wikipedia articles by applying the widely used technique of SPC from NLI to link prediction.
* Finally, we achieved Macro F1-scores of 0.99996 on the public test set and 1.00000 on the private test set, ranking 8th and 3rd, respectively, with the private-test score equal to those of the first and second places.
## II Related Works
### _Link Prediction_
Link prediction, a critical issue in network-structured data [3], was initially introduced as a supervised learning task by |
2309.09660 | A family of stabilizer-free virtual elements on triangular meshes | A family of stabilizer-free $P_k$ virtual elements are constructed on
triangular meshes. When choosing an accurate and proper interpolation, the
stabilizer of the virtual elements can be dropped while the quasi-optimality is
kept. The interpolating space here is the space of continuous $P_k$ polynomials
on the Hsieh-Clough-Tocher macro-triangle, where the macro-triangle is defined
by connecting three vertices of a triangle with its barycenter. We show that
such an interpolation preserves $P_k$ polynomials locally and enforces the
coerciveness of the resulting bilinear form. Consequently the stabilizer-free
virtual element solutions converge at the optimal order. Numerical tests are
provided to confirm the theory and to be compared with existing virtual
elements. | Xuejun Xu, Shangyou Zhang | 2023-09-18T10:55:24Z | http://arxiv.org/abs/2309.09660v2 | # A family of stabilizer-free virtual elements on triangular meshes
###### Abstract.
A family of stabilizer-free \(P_{k}\) virtual elements are constructed on triangular meshes. When choosing an accurate and proper interpolation, the stabilizer of the virtual elements can be dropped while the quasi-optimality is kept. The interpolating space here is the space of continuous \(P_{k}\) polynomials on the Hsieh-Clough-Tocher macro-triangle, where the macro-triangle is defined by connecting three vertices of a triangle with its barycenter. We show that such an interpolation preserves \(P_{k}\) polynomials locally and enforces the coercivity of the resulting bilinear form. Consequently the stabilizer-free virtual element solutions converge at the optimal order. Numerical tests are provided to confirm the theory and to be compared with existing virtual elements.
Key words. virtual element, stabilizer free, elliptic equation, Hsieh-Clough-Tocher macro-triangle, triangular mesh.
AMS subject classifications. 65N15, 65N30
## 1. Introduction
In this work, we construct a family of stabilizer-free \(P_{k}\) virtual elements ([4, 5, 9, 10, 12, 13, 14, 16, 17, 21, 22, 23]) on triangular meshes.
For solving the following model equation,
\[-\Delta u =f\ \ \ \text{in}\ \ \Omega,\] \[u =0\ \ \ \text{on}\ \ \partial\Omega, \tag{1.1}\]
where \(\Omega\subset\mathbb{R}^{2}\) is a bounded polygonal domain and \(f\in L^{2}(\Omega)\), the weak form reads: Find \(u\in H^{1}_{0}(\Omega)\) such that
\[(\nabla u,\nabla v)=(f,v)\ \ \ \forall v\in H^{1}_{0}(\Omega), \tag{1.2}\]
where \((\cdot,\cdot)\) denotes the \(L^{2}\) inner product on \(\Omega\) and we have \(|v|_{1}^{2}=(\nabla v,\nabla v)\).
Let \(\mathcal{T}_{h}=\{K\}\) be a quasi-uniform triangular mesh on \(\Omega\) with \(h\) as the maximum size of triangles \(K\). Let \(\mathcal{E}_{h}\) be the set of edges \(e\) in \(\mathcal{T}_{h}\). For \(k\geq 1\), the virtual element space is defined as
\[\tilde{V}_{h}=\{v\in H^{1}_{0}(\Omega):\tilde{v}|_{\partial K}\in\mathbb{B}_{k }(\partial K),\Delta\tilde{v}|_{K}\in P_{k-2}(K)\}, \tag{1.3}\]
In place of the polynomial space \(P_{k}(K)\), the interpolation space on each triangle \(K\) is chosen as
\[\mathbb{V}_{k}(K)=\{v\in C(K):v|_{K_{i}}\in P_{k}(K_{i}),\ i=1,2,3\}, \tag{1.7}\]
where \(K\) is split into three triangles \(K_{1},K_{2},K_{3}\) by connecting its barycenter with the three vertices, cf. Figure 1. We call \(K\) a Hsieh-Clough-Tocher macro-triangle [15, 32, 37, 48, 49].
The interpolation operator \(\Pi_{h}^{\nabla}\) in (1.5) is naturally defined by \(v_{h}=\Pi_{h}^{\nabla}\tilde{v}\in\mathbb{V}_{k}(K)\) of (1.7) satisfying
\[\begin{split} v_{h}&=\tilde{v}\qquad\qquad\qquad \text{ on }\ \partial K,\\ (\nabla v_{h},\nabla w_{h})_{K}&=(\nabla\tilde{v}, \nabla w_{h})_{K}\quad\forall w_{h}\in H^{1}_{0}(K)\cap\mathbb{V}_{k}(K).\end{split} \tag{1.8}\]
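For the lowest-order case \(k=1\) the local problem (1.8) is particularly simple: with the usual convention \(P_{-1}(K)=\{0\}\), \(\Delta\tilde{v}=0\) on \(K\), so that, since \(w_{h}\) vanishes on \(\partial K\), the right-hand side of the second equation in (1.8) is zero and \(\Pi_{h}^{\nabla}\tilde{v}\) is the discrete harmonic extension of the boundary data into \(\mathbb{V}_{1}(K)\). A minimal numpy sketch of the resulting local matrix \((\nabla\Pi_{h}^{\nabla}\varphi_{i},\nabla\Pi_{h}^{\nabla}\varphi_{j})_{K}\) (our own illustration, not code from the paper) assembles the piecewise-\(P_{1}\) stiffness matrix on the three sub-triangles and eliminates the barycenter degree of freedom:

```python
import numpy as np

def p1_stiffness(tri):
    """Exact P1 stiffness matrix on a triangle given its 3x2 vertex coordinates."""
    x, y = tri[:, 0], tri[:, 1]
    area = 0.5 * abs((x[1] - x[0]) * (y[2] - y[0]) - (x[2] - x[0]) * (y[1] - y[0]))
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])   # gradients of the barycentric
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])   # coordinates, up to 1/(2*area)
    return (np.outer(b, b) + np.outer(c, c)) / (4.0 * area)

def sf_element_matrix_k1(vertices):
    """Local matrix (grad Pi v_i, grad Pi v_j)_K for k = 1 on the HCT macro-triangle."""
    v = np.asarray(vertices, dtype=float)        # 3 x 2 vertex coordinates of K
    nodes = np.vstack([v, v.mean(axis=0)])       # node 3 is the barycenter
    A = np.zeros((4, 4))
    for t in [(0, 1, 3), (1, 2, 3), (2, 0, 3)]:  # the three sub-triangles K_1, K_2, K_3
        At = p1_stiffness(nodes[list(t), :])
        for a, i in enumerate(t):
            for b_, j in enumerate(t):
                A[i, j] += At[a, b_]
    # (1.8) with k = 1: the interior residual is zero, so eliminate the barycenter DOF
    Avv, Avb, Abv, Abb = A[:3, :3], A[:3, 3:], A[3:, :3], A[3:, 3:]
    return Avv - Avb @ np.linalg.solve(Abb, Abv)

# example on the reference triangle; row sums vanish since constants are in the kernel
print(sf_element_matrix_k1([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]))
```

The same condensation idea extends to \(k\geq 2\), where the interior part of \(H^{1}_{0}(K)\cap\mathbb{V}_{k}(K)\) has more than one degree of freedom.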
We note that a different interpolation space only changes the numerical quadrature formula for computing \((\nabla u_{h},\nabla v_{h})=(\nabla\Pi_{h}^{\nabla}\tilde{u},\nabla\Pi_{h}^{ \nabla}\tilde{v})\) in the virtual elements equation. An accurate calculation of local interpolation does not increase the computational cost once the stiffness matrix is generated. One may see no advantage of this stabilizer-free virtual element over the Lagrange finite element. But the virtual elements are mainly for polygonal and polyhedral meshes. In [24], this stabilization technique is applied to 2D polygons and 3D polyhedra. We separate the case of triangles because it shows the idea clearly while the polygons and polyhedra have natural triangular and tetrahedral subdivisions, cf. [24].
Eliminating the stabilizer not only reduces the computational cost, but is also likely to improve the condition number of the resulting system. In the numerical tests, we show how the condition number of the standard virtual element is improved by three methods. But the improved condition number is still worse than that of this stabilizer-free virtual element.
Eliminating the stabilizer also makes full use of every degree of freedom in the discrete approximation. Thus it often leads to discoveries of superconvergence. In [25], it is shown that only this stabilizer-free \(P_{1}\) virtual element converges three orders above the optimal order in the \(H^{1}\)-norm, and two orders above the optimal order in the \(L^{2}\)-norm and \(L^{\infty}\)-norm, when solving the Poisson equation on honeycomb meshes.
The stabilizer is eliminated first in the weak Galerkin finite element method [2, 19, 20, 26, 33, 34, 38, 39, 41], then in the \(H(\mathrm{div})\) finite element method [27, 40], in the \(C^{0}\) or \(C^{-1}\) finite element methods for the biharmonic equation [43] and in the discontinuous Galerkin finite element method [18, 29]. It leads to two-order superconvergent WG finite elements [3, 35, 36, 47] and two-order superconvergent DG finite elements [44, 45] for second order elliptic equations, one or two-order superconvergent WG finite elements for the Stokes equations [28, 42], four-order superconvergent WG finite elements [46] and four-order superconvergent DG finite elements [44, 47] for the biharmonic equation. That is, for an example, a \(P_{3}\) discontinuous finite element method, with the stabilizer-free technique, produces the same order approximate solution as a \(C^{1}\)-\(P_{7}\) finite element method does, in solving a 2D biharmonic equation.
In this paper, we show that with the new interpolation (1.8), the stabilizer-free virtual element equation (1.6) has a unique and quasi-optimal solution, on triangular meshes. Numerical tests on the new stabilizer-free virtual elements are performed. Numerical comparisons are presented, with the other stabilizer-free virtual elements and with the standard virtual elements.
## 2. The well-posedness
We show in this section that the stabilizer-free virtual element equation has a unique solution.
**Lemma 2.1**.: _The interpolation operator \(\Pi_{h}^{\nabla}\) is well defined in (1.8) and it preserves \(P_{k}\) polynomials,_
\[\Pi_{h}^{\nabla}\tilde{v}=\tilde{v}\ \ \ \ \mathrm{if}\ \ \tilde{v}\in P_{k}(K). \tag{2.1}\]
Proof.: Because \(\tilde{v}|_{\partial K}\in\mathbb{B}_{k}(\partial K)\), \(v_{h}\) can assume the boundary condition \(v_{h}=\tilde{v}\) exactly on \(\partial K\). The linear system of equations in (1.8) is a finite dimensional square system. The existence is implied by the uniqueness. To show the uniqueness, we let \(\tilde{v}=0\) in (1.8). Letting \(w_{h}=v_{h}\) in (1.8), we get
\[\nabla v_{h}=\mathbf{0}\ \ \ \ \mathrm{on}\ \ K.\]
Thus \(v_{h}=c\) is a constant on \(K\). By the boundary condition in (1.8), \(v_{h}=\tilde{v}=0\) on \(\partial K\), so that \(c=0\). Hence \(v_{h}=0\) and (1.8) has a unique solution.
If \(\tilde{v}\in P_{k}(K)\subset\mathbb{V}_{k}(K)\), defined in (1.4), then the solution of (1.8) says, letting \(w_{h}=v_{h}-\tilde{v}\),
\[\nabla(v_{h}-\tilde{v})=\mathbf{0}.\]
Thus \(v_{h}-\tilde{v}\) is a global constant which must be zero as it vanishes at all \(\partial K\). (2.1) is proved.
**Lemma 2.2**.: _The stabilizer-free virtual element equation (1.6) has a unique solution, where the interpolation \(\Pi_{h}^{\nabla}\) is defined in (1.8)._
Proof.: As both \(\tilde{u},\tilde{v}\in\tilde{V}_{h}\), (1.6) is a finite square system of linear equations. The uniqueness of solution implies the existence. To show the uniqueness, we let \(f=0\) and \(\tilde{v}=\tilde{u}\) in (1.6). It follows that
\[|\Pi_{h}^{\nabla}\tilde{u}|_{1,h}=0.\]
Thus \(\Pi_{h}^{\nabla}\tilde{u}=c\) is constant on each \(K\). But \(\Pi_{h}^{\nabla}\tilde{u}\) is continuous on the whole domain. By the boundary condition, we get \(0=\Pi_{h}^{\nabla}\tilde{u}|_{\partial\Omega}=c\). That is,
\[\Pi_{h}^{\nabla}\tilde{u}=0. \tag{2.2}\]
On one triangle \(K=\cup_{i=1}^{3}K_{i}\),
\[\tilde{u}=\Pi_{h}^{\nabla}\tilde{u}=0\quad\text{ on }\ \partial K. \tag{2.3}\]
Inside the triangle, by (1.8), (2.2), (2.3) and integration by parts, we have
\[(-\Delta\tilde{u},w_{h})=(\nabla\tilde{u},\nabla w_{h})=0\quad\forall w_{h}\in H _{0}^{1}(K)\cap\mathbb{V}_{k}(K). \tag{2.4}\]
By the space \(\tilde{V}_{h}\) definition (1.3), we denote
\[p_{k-2}=-\Delta\tilde{u}\in P_{k-2}(K). \tag{2.5}\]
Let the \(w_{h}\) in (2.4) be
\[w_{h}=p_{k-2}\phi_{\mathbf{x}_{0}}(\mathbf{x})\in H_{0}^{1}(K)\cap\mathbb{V}_ {k}(K), \tag{2.6}\]
where \(\phi_{\mathbf{x}_{0}}(\mathbf{x})\) is the \(P_{1}\) Lagrange basis function at node \(\mathbf{x}_{0}\), cf. Figure 2. That is, \(\phi_{\mathbf{x}_{0}}(\mathbf{x})|_{K_{i}}=\lambda_{i,0}\) is the barycentric coordinate at \(x_{0}\) on triangle \(K_{i}\), i.e., a linear function which assumes value \(1\) at \(\mathbf{x}_{0}\) and vanishing on the line \(e_{i}\), cf. Figure 2.
With the \(w_{h}\) in (2.6), we get from (2.4) and (2.5) that
\[\int_{K}p_{k-2}^{2}\phi_{\mathbf{x}_{0}}(\mathbf{x})d\mathbf{x}=0.\]
As \(\phi_{\mathbf{x}_{0}}(\mathbf{x})>0\) inside \(K\), it follows that
\[p_{k-2}^{2}=0\ \ \text{and}\ \ p_{k-2}=0\ \ \text{on}\ \ K.\]
By (2.3) and (2.5), \(\Delta\tilde{u}=0\) in \(K\) and \(\tilde{u}=0\) on \(\partial K\). Thus, by the unique solution of the Laplace equation, \(\tilde{u}=0\). The lemma is proved.
## 3. Convergence
We prove the optimal order convergence of the stabilizer-free virtual element solutions in this section.
**Theorem 3.1**.: _Let \(u\in H^{k+1}\cap H^{1}_{0}(\Omega)\) be the exact solution of (1.2). Let \(u_{h}\) be the stabilizer-free virtual element solution of (1.6). It holds that_
\[|u-u_{h}|_{1}\leq Ch^{k}|u|_{k+1}. \tag{3.1}\]
Proof.: Since \(w_{h}\in V_{h}\subset H^{1}_{0}(\Omega)\), we subtract (1.6) from (1.2) to get
\[(\nabla(u-u_{h}),\nabla w_{h})_{h}=0\ \ \ \forall w_{h}\in V_{h}.\]
Applying the Schwarz inequality, it follows that
\[|u-u_{h}|_{1}^{2} =(\nabla(u-u_{h}),\nabla(u-I_{h}u))\] \[\leq|u-u_{h}|_{1}|u-I_{h}u|_{1}\leq Ch^{k}|u|_{k+1}|u-u_{h}|_{1},\]
where \(I_{h}u\) is the Scott-Zhang interpolation on quasi-uniform triangulation \(\mathcal{T}_{h}\)[30]. The proof is complete.
To get the optimal order \(L^{2}\) error bound, we need a full regularity of the dual equation that the solution of the equation,
\[-\Delta w =g\ \ \ \ \text{in}\ \ \Omega,\] \[w =0\ \ \ \ \text{on}\ \ \partial\Omega,\]
satisfies
\[|w|_{2}\leq C\|g\|_{0}. \tag{3.2}\]
**Theorem 3.2**.: _Let \(u\in H^{k+1}\cap H^{1}_{0}(\Omega)\) be the exact solution of (1.2). Let \(u_{h}\) be the stabilizer-free virtual element solution of (1.6). Assuming (3.2), it holds that_
\[\|u-u_{h}\|_{0}\leq Ch^{k+1}|u|_{k+1}.\]
Proof.: Let \(w\in H^{2}(\Omega)\cap H^{1}_{0}(\Omega)\) be the dual solution,
\[(\nabla w,\nabla v)=(u-u_{h},v),\ \forall v\in H^{1}_{0}(\Omega). \tag{3.3}\]
Thus, by (3.3), (3.2) and (3.1), we get
\[\|u-u_{h}\|_{0}^{2} =(\nabla w,\nabla(u-u_{h}))=(\nabla(w-w_{h}),\nabla(u-u_{h}))\] \[\leq Ch|w|_{2}h^{k}|u|_{k+1}\leq Ch^{k+1}|u|_{k+1}\|u-u_{h}\|_{0},\]
where \(w_{h}\) is the virtual element solution to the equation (3.3). We obtain the \(L^{2}\) error bound.
## 4. Numerical test
We solve numerically the Poisson equation (1.1) on the domain \(\Omega=(0,1)\times(0,1)\), where an exact solution is chosen as
\[u(x,y)=\sin(\pi x)\sin(\pi y). \tag{4.1}\]
The computation is done first on a family of slightly irregular triangular meshes shown in Figure 3.
Figure 3. The first two levels of grids in the computation of Tables 1–4.
In Table 1, we first list the errors and the computed order of convergence for the \(P_{1}\) and \(P_{2}\) stabilizer-free virtual elements (1.4) with the Hsieh-Clough-Tocher macro-triangle interpolation space \(\mathbb{V}_{k}\) in (1.7). Optimal orders are achieved for both elements and in both \(L^{2}\) and \(H^{1}\) norms. Here we use \(\Pi_{h}^{\nabla}u\) instead of \(u\) to check the error so that we can detect possible superconvergence. At the bottom of Table 1, we test a method of [8] where the interpolation space is \(P_{2}\cup H_{3}\), i.e., enriching the \(P_{2}(K)\) space by two harmonic \(P_{3}\) polynomials. The method is not proved yet. The numerical test shows it works well, producing optimal order errors.
In Table 2, we first list the errors and the computed order of convergence for the \(P_{3}\) stabilizer-free virtual elements, \(k=3\) in (1.4) with the Hsieh-Clough-Tocher macro-triangle interpolation space \(\mathbb{V}_{k}\) in (1.7). Optimal orders are achieved for the element in both \(L^{2}\) and \(H^{1}\) norms. At the bottom of Table 2, we test a method of [8] where the interpolation space is \(P_{3}\cup H_{5}\), i.e., enriching the \(P_{3}(K)\) polynomial space by four harmonic \(P_{4}\) and \(P_{5}\) polynomials. The method is not proved yet. The numerical test shows it works well, producing optimal order errors.
In Table 3, we first list the errors and the computed orders of convergence for the \(P_{4}\) stabilizer-free virtual elements, \(k=4\) in (1.4) with the Hsieh-Clough-Tocher macro-triangle interpolation space \(\mathbb{V}_{k}\) in (1.7). Optimal orders are achieved for the element in both \(L^{2}\) and \(H^{1}\) norms. Here the
\begin{table}
\begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi_{h}^{\nabla}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\nabla}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & \multicolumn{3}{c}{By the \(P_{1}\) virtual element with HTC interpolation.} \\ \hline
7 & 0.8208E-04 & 2.00 & 0.7621E-02 & 1.00 \\
8 & 0.2052E-04 & 2.00 & 0.3817E-02 & 1.00 \\
9 & 0.5129E-05 & 2.00 & 0.1911E-02 & 1.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{2}\) virtual element with HTC interpolation.} \\ \hline
7 & 0.7537E-07 & 3.00 & 0.6668E-04 & 1.99 \\
8 & 0.9423E-08 & 3.00 & 0.1670E-04 & 2.00 \\
9 & 0.1178E-08 & 3.00 & 0.4179E-05 & 2.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{2}\) virtual element with 2 h.p. [8].} \\ \hline
7 & 0.8627E-07 & 3.00 & 0.7861E-04 & 2.00 \\
8 & 0.1079E-07 & 3.00 & 0.1968E-04 & 2.00 \\
9 & 0.1350E-08 & 3.00 & 0.4925E-05 & 2.00 \\ \hline \end{tabular}
\end{table}
Table 1. Error profile on Figure 3 meshes for (4.1).
computer accuracy is exhausted when computing the last grid solution. At the bottom of Table 3, we test a method of [8] where the interpolation space is \(P_{4}\cup H_{10}\), i.e., enriching the \(P_{4}(K)\) polynomial space by 12 harmonic \(P_{5}\), \(P_{6}\), \(P_{7}\), \(P_{8}\), \(P_{9}\) and \(P_{10}\) polynomials. Since the method does not work, we tested by adding more harmonic polynomials until the error cannot be reduced anymore.
In Table 4, we first list the errors and the computed orders of convergence for the \(P_{5}\) stabilizer-free virtual elements, \(k=5\) in (1.4) with the Hsieh-Clough-Tocher macro-triangle interpolation space \(\mathbb{V}_{k}\) in (1.7). Optimal orders are achieved for the element in both \(L^{2}\) and \(H^{1}\) norms. At the middle of Table 4, we test a method of [8] where the interpolation space is \(P_{5}\cup H_{10}\), i.e., enriching the \(P_{5}(K)\) polynomial space by 10 harmonic \(P_{6}\), \(P_{7}\), \(P_{8}\), \(P_{9}\) and \(P_{10}\) polynomials. Since the method does not work, we tested by
\begin{table}
\begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi_{h}^{\nabla}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\nabla}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & By the \(P_{3}\) virtual element with HTC interpolation. & & & \\ \hline
6 & 0.4664E-08 & 4.00 & 0.3068E-05 & 2.99 \\
7 & 0.2916E-09 & 4.00 & 0.3844E-06 & 3.00 \\
8 & 0.1824E-10 & 4.00 & 0.4811E-07 & 3.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{3}\) virtual element with 8 h.p. [8].} \\ \hline
6 & 0.5676E-08 & 4.01 & 0.3392E-05 & 2.99 \\
7 & 0.3542E-09 & 4.00 & 0.4257E-06 & 2.99 \\
8 & 0.2222E-10 & 3.99 & 0.5372E-07 & 2.99 \\ \hline \end{tabular}
\end{table}
Table 2. Error profile on Figure 3 meshes for (4.1).
\begin{table}
\begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi_{h}^{\nabla}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\nabla}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & By the \(P_{4}\) virtual element with HTC interpolation. & & & \\ \hline
5 & 0.7612E-09 & 5.00 & 0.3811E-06 & 3.99 \\
6 & 0.2374E-10 & 5.00 & 0.2389E-07 & 4.00 \\
7 & 0.7616E-12 & — & 0.1495E-08 & 4.00 \\ \hline & \multicolumn{3}{c}{By the \(P_{4}\) virtual element with 12 h.p. [8].} \\ \hline
4 & 0.4589E-07 & 5.01 & 0.7336E-05 & 3.94 \\
5 & 0.1525E-08 & 4.91 & 0.5830E-06 & 3.65 \\
6 & 0.9493E-10 & 4.01 & 0.9239E-07 & 2.66 \\ \hline \end{tabular}
\end{table}
Table 3. Error profile on Figure 3 meshes for (4.1).
adding more harmonic polynomials until the error cannot be reduced anymore. Compared to the previous case, the convergence order deteriorates considerably. At the bottom of Table 4, we list the errors and the computed orders of convergence for the \(P_{6}\) stabilizer-free virtual elements, \(k=6\) in (1.4) with the Hsieh-Clough-Tocher macro-triangle interpolation space \(\mathbb{V}_{k}\) in (1.7). Optimal orders are achieved for the element in both \(L^{2}\) and \(H^{1}\) norms.
The next part of the computation is done on uniform triangular meshes, shown in Figure 4. This is mainly for detecting possible superconvergence.
In Table 5, we first list the errors and the computed orders of convergence for the \(P_{1}\) stabilizer-free virtual elements, \(k=1\) in (1.4) with the
\begin{table}
\begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi_{h}^{\nabla}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\nabla}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & By the \(P_{5}\) virtual element with HTC interpolation. \\ \hline
3 & 0.3339E-07 & 6.01 & 0.5275E-05 & 4.96 \\
4 & 0.5170E-09 & 6.01 & 0.1665E-06 & 4.99 \\
5 & 0.8033E-11 & 6.01 & 0.5223E-08 & 4.99 \\ \hline & \multicolumn{3}{c}{By the \(P_{5}\) virtual element with 10 h.p. [8].} \\ \hline
2 & 0.4262E-05 & 5.18 & 0.2158E-03 & 4.24 \\
3 & 0.1566E-06 & 4.77 & 0.2319E-04 & 3.22 \\
4 & 0.1819E-07 & 3.11 & 0.5702E-05 & 2.02 \\ \hline & \multicolumn{3}{c}{By the \(P_{6}\) virtual element with HTC interpolation.} \\ \hline
2 & 0.1696E-06 & 7.30 & 0.1576E-04 & 6.19 \\
3 & 0.1320E-08 & 7.01 & 0.2515E-06 & 5.97 \\
4 & 0.1025E-10 & 7.01 & 0.3956E-08 & 5.99 \\ \hline \end{tabular}
\end{table}
Table 4. Error profile on Figure 3 meshes for (4.1).
Figure 4. The first three levels of grids for the computation in Tables 5–7.
Hsieh-Clough-Tocher macro-triangle interpolation space \(\mathbb{V}_{k}\) in (1.7). Optimal orders are achieved for the element in both \(L^{2}\) and \(H^{1}\) norms. In fact, we have one-order superconvergence in \(H^{1}\) semi-norm. At the bottom of Table 5, we list the errors and the computed orders of convergence for the \(P_{1}\) standard virtual elements, \(k=1\) in (1.4). Optimal orders are achieved for the element in both \(L^{2}\) and \(H^{1}\) norms. Again, we have one-order \(H^{1}\) superconvergence for this element. Comparing the errors, the new method is slightly better which is understandable as their interpolation space \(P_{1}(K)\) is a subspace of our interpolation space \(\mathbb{V}_{1}=C(K)\cap\cup_{i=1}^{3}P_{1}(K_{i})\).
In Table 6, we first list the errors and the computed orders of convergence for the \(P_{2}\) stabilizer-free virtual elements, \(k=2\) in (1.4) with the Hsieh-Clough-Tocher macro-triangle interpolation space \(\mathbb{V}_{k}\) in (1.7). Optimal orders are achieved for the element in both \(L^{2}\) and \(H^{1}\) norms. Unlike the traditional \(P_{2}\) finite element, we do not have any superconvergence. Comparing to the traditional \(P_{2}\) finite element, we compute a solution in a larger vector space but get a worse result. This is because the added Hsieh-Clough-Tocher macro-bubbles destroy the symmetry of \(P_{2}\) finite element equations on uniform triangular meshes. At the bottom of Table 6, we list the errors and the computed orders of convergence for the standard \(P_{2}\) virtual elements, \(k=2\) in (1.4). Optimal orders are achieved for the element in both \(L^{2}\) and \(H^{1}\) norms. Comparing the two errors, the new method is much better which is understandable as there is no stabilizer here.
In Table 7, we list the errors and the computed orders of convergence for the \(P_{3}\), \(P_{4}\), \(P_{5}\) and \(P_{6}\) stabilizer-free virtual elements, \(k=3,4,5\), or \(6\) in (1.4) with the Hsieh-Clough-Tocher macro-triangle interpolation space
\begin{table}
\begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi_{h}^{\nabla}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\nabla}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & By the \(P_{1}\) SF virtual element with HTC interpolation. \\ \hline
7 & 0.3032E-03 & 2.00 & 0.1382E-02 & 2.00 \\
8 & 0.7586E-04 & 2.00 & 0.3457E-03 & 2.00 \\
9 & 0.1897E-04 & 2.00 & 0.8644E-04 & 2.00 \\ \hline & \multicolumn{3}{c}{By the standard \(P_{1}\) virtual element [4].} \\ \hline
6 & 0.1210E-02 & 1.98 & 0.5518E-02 & 1.99 \\
7 & 0.3032E-03 & 2.00 & 0.1382E-02 & 2.00 \\
8 & 0.7586E-04 & 2.00 & 0.3457E-03 & 2.00 \\ \hline \end{tabular}
\end{table}
Table 5. Error profile on Figure 4 meshes for (4.1).
\(\mathbb{V}_{k}\) in (1.7). Optimal orders are achieved for all the elements in both \(L^{2}\) and \(H^{1}\) norms. Comparing the errors of same virtual elements, the uniform triangular meshes are much better than the triangular meshes shown in Figure 3.
We further compare the stabilizer-free virtual element with the standard virtual elements of [4]. The standard \(H^{1}\) interpolation is defined by
\begin{table}
\begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi_{h}^{\mathbb{V}}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\mathbb{V}}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & \multicolumn{3}{c}{By the \(P_{2}\) SF virtual element with HTC interpolation.} \\ \hline
7 & 0.5604E-07 & 3.72 & 0.2302E-04 & 2.48 \\
8 & 0.5261E-08 & 3.41 & 0.4990E-05 & 2.21 \\
9 & 0.5901E-09 & 3.16 & 0.1194E-05 & 2.06 \\ \hline & \multicolumn{3}{c}{By the standard \(P_{2}\) virtual element [4].} \\ \hline
6 & 0.1885E-05 & 3.15 & 0.3225E-03 & 2.08 \\
7 & 0.2285E-06 & 3.04 & 0.7958E-04 & 2.02 \\
8 & 0.2833E-07 & 3.01 & 0.1983E-04 & 2.00 \\ \hline \end{tabular}
\end{table}
Table 6. Error profile on Figure 4 meshes for (4.1).
\begin{table}
\begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi_{h}^{\mathbb{V}}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\mathbb{V}}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & \multicolumn{3}{c}{By the \(P_{2}\) SF virtual element with HTC interpolation.} \\ \hline
8 & 0.5261E-08 & 3.41 & 0.4990E-05 & 2.21 \\
9 & 0.5901E-09 & 3.16 & 0.1194E-05 & 2.06 \\ \hline & \multicolumn{3}{c}{By the standard \(P_{2}\) virtual element [4].} \\ \hline
6 & 0.1885E-05 & 3.15 & 0.3225E-03 & 2.08 \\
7 & 0.2285E-06 & 3.04 & 0.7958E-04 & 2.02 \\
8 & 0.2833E-07 & 3.01 & 0.1983E-04 & 2.00 \\ \hline \end{tabular}
\end{table}
Table 7. Error profile on Figure 3 meshes for (4.1).
\(\Pi_{h}^{1}\tilde{u}\in P_{k}(K)\) such that
\[(\nabla\Pi_{h}^{1}\tilde{u},\nabla p)_{K} =-(\tilde{u},\Delta p)_{K}+\langle\tilde{u},\nabla p\cdot\mathbf{n} \rangle_{\partial K}\quad\forall p\in P_{k}(K)\setminus P_{0}(K),\] \[\langle\Pi_{h}^{1}\tilde{u},p\rangle_{\partial K} =\langle\tilde{u},p\rangle_{\partial K}\quad\forall p\in P_{0}(K), \tag{4.2}\]
where \(\mathbf{n}\) is the unit outer normal vector. \(\tilde{u}\) is defined by the degrees of freedom on triangle \(K=\mathbf{x}_{1}\mathbf{x}_{2}\mathbf{x}_{3}\), \(\{F_{i},i=1,\ldots,N_{K}\}\), cf. [4],
\[F_{i}(\tilde{u})=\begin{cases}\tilde{u}(\mathbf{x}_{j}),&j=1,2,3,\\ \tilde{u}\big{(}\frac{l\mathbf{x}_{j}+(k-l)\mathbf{x}_{\mathrm{mod}(j,3)+1}}{k}\big{)},&l=1,\ldots,k-1;\ j=1,2,3,\\ \frac{\int_{K}\tilde{u}(x-x_{0})^{j}(y-y_{0})^{l}d\mathbf{x}}{\int_{K}1\,d\mathbf{x}},&0\leq j+l\leq k-2,\end{cases} \tag{4.3}\]
where \(\mathbf{x}_{0}=(x_{0},y_{0})\) is the barycenter of \(K\). The standard stabilizer in [4] is defined as
\[S(\tilde{u}-\Pi_{h}^{1}\tilde{u},\tilde{v}-\Pi_{h}^{1}\tilde{v})_{K}=\sum_{i= 1}^{N_{K}}F_{i}(\tilde{u}-\Pi_{h}^{1}\tilde{u})F_{i}(\tilde{v}-\Pi_{h}^{1} \tilde{v}). \tag{4.4}\]
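In matrix form (a standard way of assembling such dofi-dofi stabilizers, written here as our own illustration rather than in the notation of [4]), if \(D\) collects the degrees of freedom of the polynomial basis and \(\Pi^{*}\) is the matrix of \(\Pi_{h}^{1}\) in that basis, then \(F_{i}(\tilde{\varphi}_{j}-\Pi_{h}^{1}\tilde{\varphi}_{j})=(I-D\Pi^{*})_{ij}\), so that (4.4) becomes \((I-D\Pi^{*})^{\top}(I-D\Pi^{*})\); the mesh-size power used later in (4.7) enters as a scalar factor.

```python
import numpy as np

def stabilizer_matrix(D, Pi_star, h=1.0, alpha=0.0):
    """Dofi-dofi stabilizer of (4.4), optionally scaled by h**alpha as in (4.7).

    D       : (n_dof, n_poly) matrix with D[i, a] = F_i(m_a) for the polynomial basis m_a,
    Pi_star : (n_poly, n_dof) matrix of the projection Pi_h^1 in that basis.
    """
    R = np.eye(D.shape[0]) - D @ Pi_star    # R[i, j] = F_i(phi_j - Pi_h^1 phi_j)
    return (h ** alpha) * (R.T @ R)         # S[i, j] = h^alpha * sum_r F_r(.) F_r(.)
```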
In Table 8, we compute the solution (4.1) again by the HTC-interpolating VM method and by the standard virtual element method [4] defined by (4.2), (4.3) and (4.4). We first read the condition number \(\kappa_{2}(A)\) in the \(l^{2}\) norm for the stiffness matrix \(A\) in Table 8. We can see the condition number is huge for the [4]\(P_{3}\) VM, with (4.3) and (4.4). It would make a direct solver fail on higher-level meshes. When we read the stiffness matrix of the [4]\(P_{3}\) VM, we find the term for the basis function associated with the \((x-x_{0})^{1}\)-moment is much bigger than that associated with the \((x-x_{0})^{0}\)-moment. Therefore we replace the degrees of freedom (4.3) by a better scaled set,
\[F_{i}(\tilde{u})=\begin{cases}\tilde{u}(\mathbf{x}_{j}),&j=1,2,3,\\ \tilde{u}\big{(}\frac{l\mathbf{x}_{j}+(k-l)\mathbf{x}_{\mathrm{mod}(j,3)+1}}{k}\big{)},&l=1,\ldots,k-1;\ j=1,2,3,\\ \frac{\int_{K}\tilde{u}(x-x_{0})^{j}(y-y_{0})^{l}d\mathbf{x}}{(\int_{K}[(x-x_{0})^{j}(y-y_{0})^{l}]^{2}d\mathbf{x})^{1/2}},&0\leq j+l\leq k-2.\end{cases} \tag{4.5}\]
The condition number of the [4]\(P_{3}\) VM (with (4.5) and (4.4)) is improved, see the third part of Table 8. When we read the new stiffness matrix of the [4]\(P_{3}\) VM, we find the term for the basis function associated with the \((x-x_{0})^{0}\)-moment is much bigger than that associated with the degree of freedom \(\tilde{u}(\mathbf{x}_{1})\). Therefore we scale the degrees of freedom (4.3) again,
\[F_{i}(\tilde{u})=\begin{cases}\tilde{u}(\mathbf{x}_{j}),&j=1,2,3,\\ \tilde{u}\big{(}\frac{l\mathbf{x}_{j}+(k-l)\mathbf{x}_{\mathrm{mod}(j,3)+1}}{k}\big{)},&l=1,\ldots,k-1;\ j=1,2,3,\\ \frac{10\int_{K}\tilde{u}(x-x_{0})^{j}(y-y_{0})^{l}d\mathbf{x}}{(\int_{K}[(x-x_{0})^{j}(y-y_{0})^{l}]^{2}d\mathbf{x})^{1/2}},&0\leq j+l\leq k-2.\end{cases} \tag{4.6}\]
The condition number is reduced again, as seen in the fourth part of Table 8. This is nearly the best we can do about the conditioning, yet it beats the condition number of the new virtual element only on the first-level mesh. In exact arithmetic, changing the basis does not change the solution, but here the error of the solution in the fourth part of Table 8 does change (it becomes smaller). This indicates that the bad condition number of the [4]\(P_{3}\) VM does increase the round-off error.
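To make the effect of such a rescaling concrete, the following small Python sketch (our own illustration with an invented toy matrix, not from the paper) shows how a diagonal change of basis, analogous to rescaling the moment degrees of freedom, changes \(\kappa_{2}(A)\) by many orders of magnitude without changing the underlying operator.

```python
# Toy illustration: rescaling the degrees of freedom (a diagonal change of basis)
# changes the l2 condition number of a stiffness-like matrix.
import numpy as np

M = np.array([[2.0, 0.5, 0.1],
              [0.5, 2.0, 0.5],
              [0.1, 0.5, 2.0]])            # well-conditioned SPD "reference" matrix
D = np.diag([1.0, 1e-3, 1e-6])             # badly scaled basis, mimicking unscaled moment dofs
A = D @ M @ D                              # the same operator expressed in the badly scaled basis

print(np.linalg.cond(A))                   # on the order of 1e12: a direct solver loses accuracy
D_inv = np.diag(1.0 / np.diag(D))
print(np.linalg.cond(D_inv @ A @ D_inv))   # small again: this is just cond(M)
```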
The [4]\(P_{3}\) virtual element solution does not converge at the correct order in Table 8. This problem does not happen to the \(P_{1}\) and \(P_{2}\) VM solutions, cf. Tables 5 and 6. Thus we increase the power of the stabilizer \(S(\cdot,\cdot)_{K}\) in
\begin{table}
\begin{tabular}{c|c c|c c|c} \hline Grid & \(\|\Pi_{h}^{\nabla}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\nabla}u-u_{h}|_{1}\) & \(O(h^{r})\) & \(\kappa_{2}(A)\) \\ \hline & \multicolumn{4}{c}{By the \(P_{3}\) virtual element with HTC interpolation.} \\ \hline
[MISSING_PAGE_POST]
\hline \end{tabular}
\end{table}
Table 8. Error profile on Figure 3 meshes for (4.1).
(4.4) by a scaling,
\[S(\tilde{u}-\Pi^{1}_{h}\tilde{u},\tilde{v}-\Pi^{1}_{h}\tilde{v})_{K}=h^{\alpha}\sum_{i=1}^{N_{K}}F_{i}(\tilde{u}-\Pi^{1}_{h}\tilde{u})F_{i}(\tilde{v}-\Pi^{1}_{h}\tilde{v}), \tag{4.7}\]
where \(\alpha\) is to be specified, depending on the polynomial degree \(k\). In Table 9, the error and the computed order of convergence are listed for the [4]\(P_{3}\) VM, with the stabilizer exponent \(\alpha=0\) and \(\alpha=-1\) in (4.7). We can see that, for the latter, the method converges at the optimal order. The errors of the stabilizer-free VM are slightly smaller, see Table 2.
In Table 10, the error and the computed order of convergence are listed for the [4]\(P_{4}\) VM, with the stabilizer exponent \(\alpha=0\), \(-1\) and \(-2\) in (4.7). We can see that the standard stabilizer (4.4) does not work when \(k=4\), while for the last choice \(\alpha=-2\) (which depends on the polynomial degree, \(k=4\) here) the method does converge at the optimal order. The errors of the stabilizer-free VM are slightly smaller, see Table 3.
## 5. Ethical Statement
### Compliance with Ethical Standards
The submitted work is original and is not published elsewhere in any form or language.
\begin{table}
\begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi^{\nabla}_{h}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi^{\nabla}_{h}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & \multicolumn{2}{c}{By the [4]\(P_{3}\) VM, with (4.6) and (4.7), \(\alpha=0\).} \\ \hline
1 & 0.4089E-02 & 0.00 & 0.7384E-01 & 0.00 \\
2 & 0.4278E-03 & 3.26 & 0.1359E-01 & 2.44 \\
3 & 0.3176E-04 & 3.75 & 0.2216E-02 & 2.62 \\
4 & 0.3018E-05 & 3.40 & 0.4332E-03 & 2.35 \\ \hline & \multicolumn{2}{c}{By the [4]\(P_{3}\) VM, with (4.6) and (4.7), \(\alpha=-1\).} \\ \hline
1 & 0.4093E-02 & 0.00 & 0.7393E-01 & 0.00 \\
2 & 0.4069E-03 & 3.33 & 0.1261E-01 & 2.55 \\
3 & 0.2529E-04 & 4.01 & 0.1681E-02 & 2.91 \\
4 & 0.1583E-05 & 4.00 & 0.2172E-03 & 2.95 \\
5 & 0.9949E-07 & 3.99 & 0.2765E-04 & 2.97 \\
6 & 0.6245E-08 & 3.99 & 0.3491E-05 & 2.99 \\ \hline \end{tabular}
\end{table}
Table 9. Error profile on Figure 3 meshes for (4.1).
### Funding
Xuejun Xu was supported by National Natural Science Foundation of China (Grant No. 12071350), Shanghai Municipal Science and Technology Major Project No. 2021SHZDZX0100, and Science and Technology Commission of Shanghai Municipality.
### Conflict of Interest
There is no potential conflict of interest.
### Ethical approval
This article does not contain any studies involving animals. This article does not contain any studies involving human participants.
### Informed consent
This research does not involve any human participants.
### Availability of supporting data
This research does not use any external or author-collected data.
\begin{table}
\begin{tabular}{c|c c|c c} \hline Grid & \(\|\Pi_{h}^{\nabla}u-u_{h}\|_{0}\) & \(O(h^{r})\) & \(|\Pi_{h}^{\nabla}u-u_{h}|_{1}\) & \(O(h^{r})\) \\ \hline & \multicolumn{2}{c}{By the [4] \(P_{4}\) VM, with (4.6) and (4.7), \(\alpha=0\).} \\ \hline
1 & 0.1655E-02 & 0.00 & 0.3146E-01 & 0.00 \\
2 & 0.8753E-04 & 4.24 & 0.3987E-02 & 2.98 \\
3 & 0.1025E-04 & 3.09 & 0.9204E-03 & 2.12 \\
4 & 0.1298E-05 & 2.98 & 0.2294E-03 & 2.00 \\ \hline & \multicolumn{2}{c}{By the [4] \(P_{4}\) VM, with (4.6) and (4.7), \(\alpha=-1\).} \\ \hline
1 & 0.1655E-02 & 0.00 & 0.3146E-01 & 0.00 \\
2 & 0.6940E-04 & 4.58 & 0.3162E-02 & 3.31 \\
3 & 0.5106E-05 & 3.76 & 0.4866E-03 & 2.70 \\
4 & 0.3993E-06 & 3.68 & 0.7665E-04 & 2.67 \\ \hline & \multicolumn{2}{c}{By the [4] \(P_{4}\) VM, with (4.6) and (4.7), \(\alpha=-2\).} \\ \hline
1 & 0.1655E-02 & 0.00 & 0.3146E-01 & 0.00 \\
2 & 0.5695E-04 & 4.86 & 0.2511E-02 & 3.65 \\
3 & 0.2260E-05 & 4.66 & 0.2115E-03 & 3.57 \\
4 & 0.7827E-07 & 4.85 & 0.1489E-04 & 3.83 \\
5 & 0.2514E-08 & 4.96 & 0.9577E-06 & 3.96 \\
6 & 0.7946E-10 & 4.98 & 0.6012E-07 & 3.99 \\ \hline \end{tabular}
\end{table}
Table 10. Error profile on Figure 3 meshes for (4.1).
### Authors' contributions
All authors made equal contributions.
### Acknowledgments
None.
|
2309.16998 | Natural dualities for varieties generated by finite positive MV-chains | We provide a simple natural duality for the varieties generated by the
negation- and implication- free reduct of a finite MV-chain. We study these
varieties through the dual equivalence thus obtained. For example, we fully
characterize their algebraically closed, existentially closed and injective
members. We also explore the relationship between this natural duality and
Priestley duality in terms of distributive skeletons and Priestley powers. | Wolfgang Poiger | 2023-09-29T05:54:05Z | http://arxiv.org/abs/2309.16998v2 | # Natural dualities for varieties generated by finite positive MV-chains
###### Abstract.
We provide a simple natural duality for the varieties generated by the negation- and implication- free reduct of a finite MV-chain. We study these varieties through the dual equivalence thus obtained. For example, we fully characterize their algebraically closed, existentially closed and injective members. We also explore the relationship between this natural duality and Priestley duality in terms of distributive skeletons and Priestley powers.
Key words and phrases: Positive MV-algebras, Natural dualities, Finite-valued Lukasiewicz logics, Priestley duality, Boolean power.
2020 Mathematics Subject Classification: 08C20, 06D35, 06D50
## 1. Introduction
Just like distributive lattices are the negation-free subreducts of Boolean algebras, _positive MV-algebras_ (recently introduced in [1]) are the negation-free subreducts of MV-algebras. While the variety MV of MV-algebras provides algebraic semantics for Lukasiewicz infinite-valued logic (see, _e.g._, [7, Chapter 4]), its subvarieties, generated by finite MV-chains, provide algebraic semantics for Lukasiewicz finitely-valued logics (these subvarieties were first studied in [12]). In this paper, we study the varieties generated by finite _positive_ MV-chains (that is, negation-free reducts of ). Our main tool for this study is the theory of _natural dualities_.
In its simplest form, natural duality theory [8] provides a general framework to obtain a dual equivalence between a quasi-variety generated by a finite algebra and a category of structured Stone spaces generated by a discrete structure based on the same set as (and therefore called an _alter ego_ of ). Examples of natural dualities are _Stone duality_, which arises if is the two-element Boolean algebra and its alter ego is the two-element discrete space, and _Priestley duality_, which arises if is the two-element distributive lattice and its alter-ego is the two-element discrete space with order. The great utility of these dualities may be seen as a consequence of the fact that the alter-egos described above are very simple structures. A general theme of natural duality theory may be phrased as _simple structures yield useful dualities_.
For the varieties, natural dualities were developed and studied in [20]. Since finite MV-chains are _semi-primal_, it is easy to come up with simple alter-egos. Indeed, by [8, Theorem 3.3.14] the only structure relevant for this duality is given by the collection of subalgebras. One reason why this is sufficient is that every subalgebra of is a direct product of subalgebras. This is _not_ true anymore in the case of (for example, the order relation is is a subalgebra of ). We show that, instead, a simple alter-ego of can be obtained |
2309.10331 | Hardness results for decoding the surface code with Pauli noise | Real quantum computers will be subject to complicated, qubit-dependent noise,
instead of simple noise such as depolarizing noise with the same strength for
all qubits. We can do quantum error correction more effectively if our decoding
algorithms take into account this prior information about the specific noise
present. This motivates us to consider the complexity of surface code decoding
where the input to the decoding problem is not only the syndrome-measurement
results, but also a noise model in the form of probabilities of single-qubit
Pauli errors for every qubit.
In this setting, we show that quantum maximum likelihood decoding (QMLD) and
degenerate quantum maximum likelihood decoding (DQMLD) for the surface code are
NP-hard and #P-hard, respectively. We reduce directly from SAT for QMLD, and
from #SAT for DQMLD, by showing how to transform a boolean formula into a
qubit-dependent Pauli noise model and set of syndromes that encode the
satisfiability properties of the formula. We also give hardness of
approximation results for QMLD and DQMLD. These are worst-case hardness results
that do not contradict the empirical fact that many efficient surface code
decoders are correct in the average case (i.e., for most sets of syndromes and
for most reasonable noise models). These hardness results are nicely analogous
with the known hardness results for QMLD and DQMLD for arbitrary stabilizer
codes with independent $X$ and $Z$ noise. | Alex Fischer, Akimasa Miyake | 2023-09-19T05:29:01Z | http://arxiv.org/abs/2309.10331v3 | # Hardness results for decoding the surface code with Pauli noise
###### Abstract
Real quantum computers will be subject to complicated, qubit-dependent noise, instead of simple noise such as depolarizing noise with the same strength for all qubits. We can do quantum error correction more effectively if our decoding algorithms take into account this prior information about the specific noise present. This motivates us to consider the complexity of surface code decoding where the input to the decoding problem is not only the syndrome-measurement results, but also a noise model in the form of probabilities of single-qubit Pauli errors for every qubit.
In this setting, we show that Maximum Probability Error (MPE) decoding and Maximum Likelihood (ML) decoding for the surface code are NP-hard and #P-hard, respectively. We reduce directly from SAT for MPE decoding, and from #SAT for ML decoding, by showing how to transform a boolean formula into a qubit-dependent Pauli noise model and set of syndromes that encode the satisfiability properties of the formula. We also give hardness of approximation results for MPE and ML decoding. These are worst-case hardness results that do not contradict the empirical fact that many efficient surface code decoders are correct in the average case (i.e., for most sets of syndromes and for most reasonable noise models). These hardness results are nicely analogous with the known hardness results for MPE and ML decoding of arbitrary stabilizer codes with independent \(X\) and \(Z\) noise.
###### Contents
* 1 Introduction
* 1.1 Summary of results
* 2 Background
* 2.1 The surface code
* 2.2 Noise models
* 2.3 Maximum Probability Error (MPE) decoding versus Maximum Likelihood (ML) decoding
* 3 Reduction from SAT to Maximum Probability Error (MPE) decoding
* 3.1 Overview of the reduction
* 3.2 Convert between \(X/Z\) string gadgets
* 3.3 \(X/Z\) wire crossing gadget
* 3.4 FAN-OUT gadget
* 3.5 NOT gadget
* 3.6 AND gadget
* 3.7 Putting it all together: spacing between the gadgets
* 3.8 Hardness of approximate MPE decoding
* 4 Reduction from #SAT to Maximum Likelihood (ML) decoding
* 4.1 Hardness of approximate ML decoding
* 5 Hardness of decoding with more regular noise models
* 5.1 More regular noise models for MPE reduction
* 5.2 More regular noise models for ML reduction
* 6 Conclusion and discussion
* 7 Acknowledgments
* A Case analysis of the AND gadget
* A.1 Case 1: all 3 locations have \(Z\) errors
* A.2 Case 2: of those 3 possible \(Z\) errors, only location Z1 has a \(Z\) error
* A.3 Case 2a: location Z1 has a \(Z\) error, and XZ1 has a \(Z\) error
* A.4 Case 2b: location Z1 has a \(Z\) error, and location Y1 has a \(Y\) error
* A.5 Case 3: of those 3 possible \(Z\) errors, only location Z2 has a \(Z\) error
* A.6 Case 4: of those 3 possible \(Z\) errors, only location Z3 has a \(Z\) error
* B Proof of Lemma 1
* B.1 Case 1: left input variables is changed, right input variable is false
* B.2 Case 2: left input variables is changed, right input variable is true
* B.3 Putting it all together: change in the error for the whole circuit
## 1 Introduction
Quantum computers need to perform long, error-free computations in order to solve problems of interest. All the physical qubits we can build now (and all those we are likely to be able to build in the future) have error rates that are much too high to run long computations error-free. Therefore in order to run computations of interest with these physical qubits, we have to use quantum error correction to do fault-tolerant quantum computation. This involves encoding individual logical qubits with many physical qubits using quantum error correcting codes, performing computations by using logical operations that act on the logical qubits, and continuously correcting errors on the encoded logical qubits as they occur throughout the computation.
The surface code[1] is one of the most promising candidates for a quantum error correcting code on which to do large-scale fault-tolerant quantum computation, due to its high error thresholds and locality of all operations necessary for fault tolerance. The locality of the error correction operations is important because many physical implementations of qubits, such as superconducting qubits and other solid-state qubits, directly support only local, nearest-neighbor 2-qubit operations. Fault-tolerant quantum computers made with solid-state qubits are likely to use surface codes or related topological codes[2, 3, 4]. Small
versions of the surface code have been experimentally implemented on several superconducting quantum computers[5, 6, 7].
Decoding is the computational task of determining what error correction operations to apply, given a set of syndrome measurement results extracted from the code. For superconducting qubits, syndromes are measured (and need to be decoded) on the order of once every microsecond[5, 6, 7]. Additionally, it may be required to use surface codes with up to thousands[8] or even tens of thousands[9] of physical qubits per logical qubit in order to perform quantum computations of interest. Processing thousands of syndromes per microsecond (per logical qubit) is a formidable computational task that has motivated the study of efficient and accurate decoding algorithms for the surface code, and for quantum codes in general.
There are very few special cases where any surface code decoding algorithms are known to be optimal or even approximately optimal in any rigorous sense. Existing decoding algorithms either make simplifying assumptions about the noise (such as independence of \(X\) and \(Z\) noise[10, 11]) that allow them to decode optimally, or are heuristic algorithms without rigorous performance guarantees. See [12] for a review of surface code decoding algorithms. Given all the effort put into efficient decoders for the surface code, the lack of success in finding provably optimal algorithms suggests that there may be computational complexity reasons blocking progress. In this paper we establish computational complexity results that explain this lack of progress.
Related work on the complexity of maximum probability error and maximum likelihood decoding of quantum codes has focused on general stabilizer codes, rather than the surface code. Informally, Maximum Probability Error (MPE) decoding is the task of actually finding the error that occurred, while Maximum Likelihood (ML) decoding is attempting to merely find a good correction for the error, taking into account that many different corrections can correct the same error, and many different errors can be corrected by the same correction. What we call ML decoding is also sometimes called degenerate quantum maximum likelihood decoding, and what we call MPE decoding is also sometimes called non-degenerate quantum maximum likelihood decoding, or simply quantum maximum likelihood decoding. See Section 2.3 for formal definitions of MPE and ML decoding.
Decoding general stabilizer codes is known to be NP-hard for MPE decoding and #P-hard for ML decoding. Hsieh and Le Gall[13] showed that both MPE and ML decoding for general stabilizer codes are NP-hard when every qubit has independent \(X\) and \(Z\) noise with some constant probability, by reducing from the problem of decoding classical binary linear codes, which is also NP-hard[14]. Kuo and Lu[15, 16] showed that MPE and ML decoding of stabilizer codes remain NP-hard with depolarizing noise. Iyer and Poulin[17] strengthened the former result by showing that ML decoding of stabilizer codes with independent \(X\) and \(Z\) noise is actually #P-hard, not merely NP-hard. #P is the complexity class of function problems that count the number of accepting paths for a nondeterministic Turing machine. The canonical complete problem for #P is #SAT: counting the number of satisfying assignments of a boolean formula.
The above results show that decoding general stabilizer codes is hard by embedding a hard computational problem into a set of stabilizers and measurement results for those stabilizers (i.e., syndromes). They show that an optimal decoding algorithm that works for any quantum stabilizer code can be used to solve NP-hard (or #P-hard) problems. Our results on the hardness of decoding the surface code instead show that an optimal decoding algorithm that only works on the surface code (but importantly, works for any Pauli noise model) can be used to solve NP-hard (or #P-hard) problems. We need the Pauli noise ingredient for our results; because we don't have the freedom to choose stabilizers in a
way that cleverly encodes a hard computational problem, we instead choose a Pauli noise model that encodes a hard computational problem in our reductions.
Much of the research on decoders for the surface code and related topological codes has focused on algorithms that use information about the specific noise present in order to decode better. Real quantum computers will not be subject to simple noise such as depolarizing noise, but rather more complicated noise that may be different for each qubit. We would like our decoders to take advantage of this fact. Such _biased noise_ decoders, as they are called, generally take as input not only the syndromes but also a noise model, often in the form of Pauli error probabilities for each qubit. With physically realistic variation in the noise, these decoders can have much better error correction properties than what can be attained by approximating the noise with something like independent identically distributed depolarizing noise[18, 19], so these decoders will play an important role in building large fault-tolerant quantum computers. Therefore it is natural to consider the hardness of surface code decoding where the input to the problem is not only the syndromes but also Pauli error probabilities for each qubit. This is the exact problem we show to be hard in this work.
Decoding the surface code is known to be closely related to computing partition functions of Ising models, which is known to be #P-hard[20]. Specifically, the coset probability of an error is exactly the partition function of an Ising model constructed from that error and the noise model[10, 21, 22]. So computing partition functions allows one to determine which error coset has maximum coset probability, i.e., to do ML decoding. However, this does not directly imply any hardness results for ML decoding, since the task for ML decoding is merely to decide which error coset has the greatest coset probability (i.e., which partition function out of several related ones is the greatest), rather than to exactly compute any coset probabilities.
### Summary of results
We consider the complexity of surface code decoding where every qubit \(j\) has a Pauli \(X\) error with probability \(p_{X}^{(j)}\), a Pauli \(Y\) error with probability \(p_{Y}^{(j)}\), a Pauli \(Z\) error with probability \(p_{Z}^{(j)}\), and no error with probability \(1-p_{X}^{(j)}-p_{Y}^{(j)}-p_{Z}^{(j)}\). In general the probabilities \(p_{X}^{(j)}\), \(p_{Y}^{(j)}\), \(p_{Z}^{(j)}\) will be different for each qubit \(j\), and they are part of the input to the computational problem along with the syndromes. We prove 2 theorems about the worst-case hardness of decoding the surface code with this simple but general class of noise models, which we call single-qubit, site-dependent Pauli noise.
**Theorem 1**.: _Maximum Probability Error (MPE) decoding the surface code with single-qubit, site-dependent Pauli noise is NP-hard._
**Theorem 2**.: _Maximum Likelihood (ML) decoding the surface code with single-qubit, site-dependent Pauli noise is #P-hard under Turing reductions._
We prove Theorem 1 by reducing directly from SAT. The strategy of the reduction is to construct a noise model that simulates a boolean formula in the sense that there is one possible error for each possible setting of the input variables for the boolean formula. We set the error probabilities such that errors corresponding to satisfying assignments of the formula have higher probabilities than errors corresponding to unsatisfying assignments of the formula, so a MPE decoder will find an error corresponding to a satisfying assignment of the formula, if one exists.
We prove Theorem 2 by reducing directly from #SAT. The noise model outputted by this reduction is similar to, but not the same as, the noise model outputted by the
reduction for Theorem 1--the possible errors are the same, but the probabilities of those errors are different. We set the error probabilities such that errors corresponding to unsatisfying assignments (which are all in one error coset) all have the same probability, and likewise with errors corresponding to satisfying assignments (which are all in a different error coset). Since all these probabilities are the same, finding the error coset with maximum sum of probabilities is equivalent to counting satisfying versus unsatisfying assignments of the formula.
We also prove 2 hardness of approximation results, which follow from straightforward modifications of the proofs of Theorems 1 and 2. Here the approximation measures are the probability of the error found (for Corollary 1) and the coset probability of the coset found (for Corollary 2).
**Corollary 1**.: _Approximate MPE decoding of the surface code with single-qubit, site-dependent Pauli noise, up to any exponential approximation factor (i.e., with approximation factor \(2^{\ell^{c}}\) for any constant \(c\), where \(\ell\) is the number of qubits in the surface code instance), is NP-hard._
**Corollary 2**.: _There exists a constant \(c\) such that approximate ML decoding of the surface code with single-qubit, site-dependent Pauli noise, with approximation factor \(M(\ell)=2^{\ell^{c}}\), is NP-hard, where \(\ell\) is the number of qubits in the surface code instance._
This means that although many decoders typically approximate the problem they are trying to solve very well, they cannot hope to approximate the problem in all scenarios (i.e., for all noise models and for all sets of syndromes). If the approximation measures are log probabilities (which is a common metric that decoders work with), instead of just probabilities, then these results translate into hardness of approximation results for polynomial additive approximation factors, not exponential multiplicative approximation factors.
In Section 5, we show that we maintain the hardness results of Theorem 1 and Corollary 1 if we restrict all the error probabilities to be either \(0\), or some constant \(p\), which we can choose to be any constant in \((0,0.25]\). We also show that we maintain the hardness result of Theorem 2 if we restrict the error probabilities to be either \(0\), \(\frac{1}{2}\), \(\frac{1}{3}\), or \(\frac{1}{4}\), although we lose the hardness of approximation result Corollary 2 in this case.
All of these hardness results are about worst-case hardness, not average-case. Therefore they have nothing to say about the fact that many surface code decoding algorithms tend to work well in practice; in particular, well enough to achieve fault tolerance. In order to achieve fault tolerance it is only required that decoders be correct in the average case, i.e., for almost all sets of syndromes with reasonable noise models. Therefore, like all hardness results for practical problems of interest, the consequence of these results is merely to inform our search for surface code decoding algorithms: we need to search for heuristic algorithms that are usually but not always correct, or search for special cases that we can solve exactly.
## 2 Background
### The surface code
Here we review the surface code concepts necessary to understand our reductions. We do not provide a comprehensive introduction to the surface code; see [9] for an excellent overview.
The surface code is defined on a rectangular grid of qubits, with qubits at the intersections of grid lines. For every face of the grid adjacent to 4 qubits there is a 4-qubit stabilizer, either an \(X^{\otimes 4}\) stabilizer or a \(Z^{\otimes 4}\) stabilizer1--see Fig. 1(a). In that figure, and all figures in this paper, green (lighter) faces indicate \(X\)-stabilizers and blue (darker) faces indicate \(Z\) stabilizers2. After Fig. 1(a), we don't draw the lines and points on the grid, to make the figures less cluttered.
Footnote 1: We use this definition of the surface code, rather than the original which involved qubits at edges and stabilizers at points and faces, because it is easier to visualize.
Footnote 2: Readers with trouble viewing our color scheme should view our paper in black and white.
The code space for the surface code is the joint \(+1\) eigenspace of all of those stabilizer operators. Therefore, if no error has occurred on any qubits, then all stabilizer measurement results, or _syndromes_, will be \(+1\). A single Pauli error \(X\), \(Y\), or \(Z\) that occurs on a qubit causes all adjacent stabilizers that anticommute with that error to have measurement result \(-1\). For example, if an \(X\) (\(Z\)) error occurs on a qubit, then all adjacent \(Z\) (\(X\)) stabilizers will have their measurement result flipped to \(-1\). If a \(Y\) error occurs on a qubit, then all adjacent stabilizers will have their measurement result flipped to \(-1\).
If errors occur on multiple qubits adjacent to some stabilizer, then the \(\pm 1\) measurement result depends on the parity of the number of anticommuting errors adjacent to that stabilizer. If an odd number of qubits adjacent to a stabilizer have a Pauli error that anticommutes with that stabilizer, then that stabilizer will have the measurement result \(-1\). If an even number of qubits adjacent to a stabilizer have a Pauli error that anticommutes with that stabilizer, then that stabilizer will have the measurement result \(+1\). See Fig. 1(b) for an example of the stabilizer measurement results caused by a particular Pauli error.
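As an illustration of this parity rule, the following short Python sketch (our own, not from the paper; the data-structure conventions are assumptions) computes the \(\pm 1\) measurement result of a single stabilizer from a single-qubit Pauli error pattern.

```python
# Minimal sketch: a stabilizer is a dict {qubit: 'X' or 'Z'} over its support,
# and an error is a dict {qubit: 'X', 'Y' or 'Z'} for the qubits with errors.

def anticommutes(p, q):
    """Two non-identity single-qubit Paulis anticommute iff they are different."""
    return p != q

def measurement_result(stabilizer, error):
    """Return +1 or -1: parity of anticommuting errors on the stabilizer's support."""
    flips = sum(1 for qubit, pauli in stabilizer.items()
                if qubit in error and anticommutes(pauli, error[qubit]))
    return -1 if flips % 2 else +1

# A 4-qubit Z stabilizer with X errors on two of its qubits and a Y error on a third:
# three anticommuting errors (odd), so the measurement result is -1.
z_stabilizer = {0: 'Z', 1: 'Z', 2: 'Z', 3: 'Z'}
error = {0: 'X', 1: 'X', 2: 'Y'}
print(measurement_result(z_stabilizer, error))  # -1
```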
Although several choices of boundary stabilizers are possible for the surface code, here we work with surface codes that have 2-qubit \(Z\) stabilizers at the left and right boundaries and 2-qubit \(X\) stabilizers at the top and bottom boundaries, as in Fig. 1(a). This choice of boundary conditions has an important consequence: strings of \(X\) (\(Z\)) errors can run to the top or bottom (left or right) boundary without having a \(-1\) syndrome at that boundary endpoint. See Fig. 1(b) for an example. Another important consequence is that strings of \(X\) (\(Z\)) errors that start and end at the same top/bottom (left/right) boundary are products of stabilizers. This contrasts with the toric code with periodic boundary conditions, where products of stabilizers form closed loops of errors.
The standard choice of logical operators for these boundary conditions is that the logical \(\bar{X}\) operator is a string of \(X\) errors running from the top to the bottom boundary, and the logical \(\bar{Z}\) operator is a string of \(Z\) errors running from the left to the right boundary.
### Noise models
While many types of noise are possible on quantum computers, here we focus on the restricted class of **single-qubit, site-dependent Pauli noise**. Single qubit Pauli noise means that every qubit \(j\) has a Pauli \(X\) error with probability \(p_{X}^{(j)}\), a Pauli \(Y\) error with probability \(p_{Y}^{(j)}\), a Pauli \(Z\) error with probability \(p_{Z}^{(j)}\), and no error with probability \(1-p_{X}^{(j)}-p_{Y}^{(j)}-p_{Z}^{(j)}\). Errors on different qubits are independent. Site-dependent means that the probabilities \(p_{X}^{(j)}\), \(p_{Y}^{(j)}\), \(p_{Z}^{(j)}\) may be different for each qubit \(j\). Equivalently, we may think of each qubit \(j\) as undergoing the quantum channel,
\[\mathcal{E}^{(j)}(\rho)=(1-p_{X}^{(j)}-p_{Y}^{(j)}-p_{Z}^{(j)})\rho+p_{X}^{(j) }X\rho X+p_{Y}^{(j)}Y\rho Y+p_{Z}^{(j)}Z\rho Z.\]
When we refer to a **noise model**, we mean a list of probabilities \(p_{X}^{(j)}\), \(p_{Y}^{(j)}\), \(p_{Z}^{(j)}\) for each qubit \(j\). This class of noise models has also been referred to as independent non-identically distributed (i.n.i.d.) noise[18]. Because we are restricting ourselves to this class of noise models, all errors we consider in this paper will be Pauli operators.
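The following short Python sketch (our own illustration, not from the paper) represents such a noise model as a list of per-qubit probabilities \((p_{X}^{(j)},p_{Y}^{(j)},p_{Z}^{(j)})\), samples an error from it, and evaluates the probability of a given Pauli error.

```python
# Minimal sketch of single-qubit, site-dependent Pauli noise.
import random

def sample_error(noise_model):
    """noise_model: list of (pX, pY, pZ), one triple per qubit. Returns {qubit: Pauli}."""
    error = {}
    for j, (px, py, pz) in enumerate(noise_model):
        r = random.random()
        if r < px:
            error[j] = 'X'
        elif r < px + py:
            error[j] = 'Y'
        elif r < px + py + pz:
            error[j] = 'Z'
        # otherwise qubit j has no error
    return error

def error_probability(noise_model, error):
    """Probability of one specific Pauli error; errors on different qubits are independent."""
    prob = 1.0
    for j, (px, py, pz) in enumerate(noise_model):
        prob *= {'X': px, 'Y': py, 'Z': pz}.get(error.get(j), 1.0 - px - py - pz)
    return prob

noise_model = [(0.01, 0.0, 0.02), (0.1, 0.1, 0.1), (0.0, 0.0, 0.0)]
print(error_probability(noise_model, {0: 'Z', 1: 'Y'}))  # 0.02 * 0.1 * 1.0 = 0.002
```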
### Maximum Probability Error (MPE) decoding versus Maximum Likelihood (ML) decoding
In classical error correction, the decoding task is to find an error (i.e., a set of bits that were flipped) that is consistent with the received corrupted codeword, then reverse that error by flipping those same bits. The analogous decoding strategy with quantum error correction is, given a set of syndromes and a noise model (i.e., an algorithm to compute probabilities of errors), find the highest probability error \(E\) consistent with those syndromes, then reverse that error by applying \(E\) (since every Pauli is its own inverse). This decoding strategy is known as maximum probability error decoding.
**Definition 1** **Maximum Probability Error (MPE) decoding.** Given a surface code instance, a set of syndromes, and a Pauli noise model, find the highest probability Pauli error consistent with those syndromes.
This is also sometimes referred to as non-degenerate quantum maximum likelihood decoding, or simply quantum maximum likelihood decoding (QMLD).
However, maximizing the probability of that single error \(E\) ignores the fact that there are many errors other than \(E\) that are logically equivalent to \(E\) and thus are also corrected by \(E\). Specifically, given a quantum code with stabilizer group \(\mathcal{S}\), for any error \(E\), the coset of the stabilizer group \(E\mathcal{S}\) is the set of all errors that are logically equivalent to \(E\), i.e., are the same up to a stabilizer and thus share the same correction.
Instead of finding a single error \(E\) with maximum probability, a better strategy is to find an error \(E\) such that the coset of logically equivalent errors \(E\mathcal{S}\) has maximum sum of probabilities. That is, we want to maximize
\[\Pr\left(E\mathcal{S}\right)=\sum_{E^{\prime}\in E\mathcal{S}}\Pr\left(E^{ \prime}\right) \tag{1}\]
Figure 1: (a) The surface code with the boundary conditions we use. Qubits are at the intersections of lines. The blue (darker) faces are \(Z\) stabilizers and the green (lighter) faces are \(X\) stabilizers. The boundary consists of 2-qubit stabilizers: \(X\) stabilizers on the bottom and top boundaries, and \(Z\) stabilizers on the left and right boundaries. (b) This choice of boundary conditions has an important consequence. Strings of \(X\) errors can run to the top or bottom boundary without having a \(-1\) syndrome at the end of the string. Likewise, strings of \(Z\) errors can run to the left or right boundary without having a \(-1\) syndrome at the end of the string. Red labels are a possible error consistent with these syndromes.
where \(\Pr\left(E^{\prime}\right)\) is the probability of the Pauli error \(E^{\prime}\) occurring. (1) is called the **coset probability**. The task of the maximum likelihood decoder is to find an error \(E\) that maximizes that coset probability \(\Pr\left(E\mathcal{S}\right)\).
**Definition 2 Maximum Likelihood (ML) decoding.** Given a surface code instance with stabilizer group \(\mathcal{S}\), a set of syndromes, and a Pauli noise model, find a Pauli error \(E\) consistent with the syndromes that maximizes the coset probability \(\Pr\left(E\mathcal{S}\right)\).
This is also sometimes referred to as degenerate quantum maximum likelihood decoding (DQMLD).
Finding an error that maximizes \(\Pr\left(E\mathcal{S}\right)\) is equivalent to finding an error that maximizes the probability that we end up doing the right correction, because \(E\mathcal{S}\) is the set of all errors that have logically equivalent corrections to \(E\). This is the sense in which the ML decoder is optimal--it maximizes the probability that we do the right correction.
Recall that \(N(\mathcal{S})\), the normalizer of \(\mathcal{S}\), is generated by \(\mathcal{S}\) and the logical operators of the code \(\bar{X}_{i}\) and \(\bar{Z}_{i}\). For surface codes with one logical qubit, which are what we work with in this paper, this normalizer is \(N(\mathcal{S})=\left\langle S,\bar{X},\bar{Z}\right\rangle\). For any error \(E\) that is consistent with the syndromes, the coset \(EN(\mathcal{S})\) is the set of all errors consistent with the syndromes. Since \(N(\mathcal{S})=\left\langle S,\bar{X},\bar{Z}\right\rangle=\mathcal{S}\cup\bar{X}\mathcal{S}\cup\bar{Z}\mathcal{S}\cup\bar{X}\bar{Z}\mathcal{S}\) (up to some global phases), the coset \(EN(\mathcal{S})\) is \(E\mathcal{S}\cup E\bar{X}\mathcal{S}\cup E\bar{Z}\mathcal{S}\cup E\bar{X}\bar{Z}\mathcal{S}\).
So, given any Pauli error \(E\) consistent with the syndromes, the set of all errors consistent with those syndromes is the union of the following 4 cosets of \(\mathcal{S}\): \(E\mathcal{S}\), \(E\bar{X}\mathcal{S}\), \(E\bar{Z}\mathcal{S}\), \(E\bar{X}\bar{Z}\mathcal{S}\). Each one of those cosets consists of errors that are all logically equivalent and thus have equivalent corrections. If we can find which one of those 4 cosets has maximum coset probability, then our correction procedure is just to pick some error \(E^{\prime}\) from that coset and apply \(E^{\prime}\) as our correction. The task of the ML decoder is to decide which of those 4 cosets has the highest coset probability.
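The following Python sketch (our own toy illustration with made-up numbers, not from the paper) makes the distinction concrete: given the errors consistent with the syndromes, each with its probability and its logical class (which of the four cosets \(E\mathcal{S}\), \(E\bar{X}\mathcal{S}\), \(E\bar{Z}\mathcal{S}\), \(E\bar{X}\bar{Z}\mathcal{S}\) it lies in), MPE decoding maximizes over individual errors while ML decoding maximizes over summed coset probabilities, and the two can disagree.

```python
# candidates: list of (error_label, probability, logical_class) for all errors
# consistent with the observed syndromes.
from collections import defaultdict

def mpe_decode(candidates):
    """Return the single most probable error."""
    return max(candidates, key=lambda c: c[1])[0]

def ml_decode(candidates):
    """Return a representative of the coset with the largest total probability."""
    coset_prob = defaultdict(float)
    representative = {}
    for error, prob, cls in candidates:
        coset_prob[cls] += prob
        representative.setdefault(cls, error)
    return representative[max(coset_prob, key=coset_prob.get)]

# One coset holds a single likely error; the other holds several less likely ones.
candidates = [("E0", 0.10, "trivial"),
              ("E1", 0.04, "Xbar"), ("E2", 0.04, "Xbar"), ("E3", 0.04, "Xbar")]
print(mpe_decode(candidates))  # E0  (largest single probability)
print(ml_decode(candidates))   # E1  (coset 'Xbar' has total probability 0.12 > 0.10)
```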
Both MPE decoding and ML decoding are known to be exactly solvable in the case of independent \(X\) and \(Z\) noise. In the case of MPE decoding, the standard matching decoder[10] finds the maximum probability error by finding separately the minimum weight matching for the \(X\) syndromes and for the \(Z\) syndromes and combining those matchings into one error. ML decoding can be reduced to matchgate quantum circuit simulation in the case of independent \(X\) and \(Z\) noise[11].
Another special case where MPE decoding can be solved exactly is low weight errors with depolarizing noise. With depolarizing noise, the maximum probability Pauli error is the minimum weight Pauli error. The Union-Find decoder[23] finds the maximum probability error for depolarizing noise when the error is known to have weight \(\leq\frac{d-1}{2}\), where \(d\) is the distance of the surface code instance being considered.
Our reduction outputs noise models with explicitly non-independent \(X\) and \(Z\) noise, and outputs syndromes such that errors consistent with those syndromes have weight \(>\frac{d-1}{2}\). This means all known decoders are not guaranteed to exactly solve MPE or ML decoding for the instances outputted by our reduction.
## 3 Reduction from SAT to Maximum Probability Error (MPE) decoding
In this section we prove our first main theorem.
**Theorem 1**.: _Maximum Probability Error (MPE) decoding the surface code with single-qubit, site-dependent Pauli noise is NP-hard._
We reduce directly from SAT. Before we give the reduction from SAT to MPE decoding, we review some facts about SAT and explain our graphical notation for writing down noise models.
SAT (boolean formula satisfiability) is the problem of determining whether a boolean formula made of AND, OR, and NOT gates has an assignment of true/false values to the variables that makes the formula output true. For example, the boolean formula \((x_{1}\lor x_{2})\wedge(\bar{x}_{2}\lor x_{3})\wedge(\bar{x}_{1}\vee\bar{x}_{3} )\)3 is satisfiable because one can assign the input variables the values \(x_{1}=\text{TRUE}\), \(x_{2}=\text{FALSE}\), \(x_{3}=\text{FALSE}\). SAT is the canonical NP-complete problem[24]. This means that all problems in the class NP can be efficiently transformed into a SAT problem, and thus an efficient algorithm for SAT implies efficient algorithms for every problem in NP.
Footnote 3: Recall \(\wedge=\text{AND}\), \(\vee=\text{OR}\), \(\bar{x}=\text{NOT}(x)\).
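As a tiny worked example (ours, not the paper's), the example formula above can be checked by brute force in Python; the same loop also counts satisfying assignments, which is the #SAT quantity used in Section 4.

```python
# Brute-force check of (x1 or x2) and (not x2 or x3) and (not x1 or not x3).
from itertools import product

def formula(x1, x2, x3):
    return (x1 or x2) and ((not x2) or x3) and ((not x1) or (not x3))

satisfying = [xs for xs in product([False, True], repeat=3) if formula(*xs)]
print((True, False, False) in satisfying)  # True, the assignment given in the text
print(len(satisfying))                     # number of satisfying assignments (#SAT)
```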
We can think of any boolean formula as a boolean circuit consisting of a layer of FAN-OUT gates after the input wires, then the relevant AND, OR, and NOT gates after that layer. Without loss of generality, we can assume that no wires cross each other after the layer of FAN-OUT gates; this is because the circuit graph of the part of the circuit after the FAN-OUT gates is a tree (it is the parse tree of the boolean formula), and all trees are planar. See Fig. 2 for an example. The planarity of this part of the circuit is important because we will have to carefully argue how we can handle the wire crossings after the FAN-OUT gates.
Now we describe our notation for writing down noise models and errors. We consider Pauli noise models where every qubit \(j\) has specific probabilities \(p_{X}^{(j)}\), \(p_{Y}^{(j)}\), \(p_{Z}^{(j)}\) of \(X\), \(Y\), and \(Z\) errors respectively. In all the figures in this paper, drawing red (bold-faced) operators on a qubit means we are writing down a specific Pauli error. Drawing purple (non-bold) operators on a qubit means we are writing down a noise model--that is, the list of probabilities \(p_{X}^{(j)}\), \(p_{Y}^{(j)}\), \(p_{Z}^{(j)}\) for each qubit. If we draw a single operator (e.g., \(X\)) on a qubit when describing a noise model, that means that that error occurs with the fixed probability \(p\), and no error occurs with probability \(1-p\). Our results hold if \(p\) is any fixed constant in \((0,0.25]\). If we draw multiple operators on a qubit (e.g., if we draw \(X,Z\), or
Figure 2: Any boolean formula can be viewed as a boolean circuit with a layer of FAN-OUT gates on the input wires, then some wire crossings, then AND, OR, and NOT gates without any wire crossings. This is because the part of the circuit graph above the FAN-OUT gates is a tree, and all trees are planar.
if we draw \(X,Y,Z\)), that means that each of those operators can occur as an error, each with probability \(p\), and no error occurs with probability \(1-2p\) or \(1-3p\). If we draw a noise model on a surface code without a boundary, it is to be understood that it is only a segment of our construction and will be stitched together with the other gadgets.
We give an example of a noise model using our graphical notation in Fig. 3(a). There are 6 qubits with 1, 2, or 3 Pauli errors possible, each with probability \(p\), and all other qubits have 0 probability of error. Fig. 3(b) shows one possible error that could result from this noise model (we do not draw the syndromes caused by this error). There we draw the error with red, bold-faced letters, on top of the noise model, which we often do to make it easier to visualize what possible errors could arise from a noise model. That error occurs with probability \(p^{3}(1-p)(1-2p)(1-3p)\). This is because there are 3 qubits with an error (each of which happens with probability \(p\)), one qubit with one possible error that does not have an error occur (this occurs with probability \(1-p\)), one qubit with two possible errors that does not have an error occur (this occurs with probability \(1-2p\)), and one qubit with three possible errors that does not have an error occur (this occurs with probability \(1-3p\)).
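For concreteness (our own numeric check, not from the paper), plugging in a sample value of \(p\) gives this probability explicitly:

```python
# The error of Fig. 3(b): three qubits with an error (probability p each), and
# error-free qubits that had 1, 2 and 3 possible errors respectively.
p = 0.1
print(p**3 * (1 - p) * (1 - 2*p) * (1 - 3*p))  # 0.000504
```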
### Overview of the reduction
Our reduction is a polynomial time algorithm that takes as input a SAT instance with \(n\) variables, and outputs a surface code instance, noise model, and set of syndromes with the following properties:
1. There are exactly \(2^{n}\) Pauli errors with nonzero probability consistent with the given syndromes: one for each setting of the \(n\) variables of the boolean formula.
2. For any Pauli error with nonzero probability consistent with the given syndromes, it is easy (i.e., can be done in polynomial time) to tell whether that error corresponds to a satisfying or unsatisfying assignment of the variables of the boolean formula.
3. The noise model is such that the maximum probability error consistent with the given syndromes is always an error corresponding to a satisfying assignment of the variables of the boolean formula, if such an assignment exists.
This reduction means one could use an oracle for MPE decoding to solve SAT in polynomial time by transforming the SAT instance into a decoding problem instance,
Figure 3: (a) An example of our graphical notation for writing down noise models, which in this paper are just probability distributions of Pauli errors. For every operator drawn on a qubit, that error occurs on that qubit with the probability \(p\), which we can choose to be any fixed constant in \((0,0.25]\). If no operators are drawn on a qubit, then errors have 0 probability for that qubit. The errors for different qubits are independent. (b) An example of an error that could occur with this noise model. We denote errors with bold red letters, and often draw them on top of the noise model, as done here. This error occurs with probability \(p^{3}(1-p)(1-2p)(1-3p)\). Here we do not draw the syndromes that result from this error.
finding a MPE for that instance, and outputting satisfiable/unsatisfiable depending on whether the error returned by the decoder corresponds to a satisfying or unsatisfying assignment of the formula. Therefore MPE decoding is NP hard.
Here we only concern ourselves with Turing reductions (i.e., using an oracle for MPE decoding to solve SAT in polynomial time), rather than many-one/Karp reductions (i.e., transforming a SAT instance into decoding problem instance with the same yes/no answer). This is because MPE decoding is most naturally thought of as a function problem (outputting a Pauli error), rather than a yes/no decision problem. While one could formulate MPE decoding as a yes/no decision problem4, we choose to only think about the more natural function problem version of MPE decoding, for clarity.
Footnote 4: For example, the yes/no decision problem “given a surface code instance, a noise model, and set of syndromes, does there exist a Pauli error consistent with the syndromes with probability \(\geq p^{\prime}\)?” is an NP-complete decision problem that is polynomial-time equivalent to the function problem of actually finding a MPE, provided that there are some reasonable technical restrictions on the error probabilities.
With this high-level overview of the reduction in mind, we can now define the first gadget in our reduction, the variable gadget, which is in Fig. 4. In this gadget the noise model and lack of \(-1\) syndromes mean there will either be one string of \(X\) errors starting at the bottom boundary and continuing up to the rest of the construction, or no errors at all in this gadget. If the string of \(X\) errors goes up to the rest of the circuit, then that variable is true, and if there is no such string of \(X\) errors, then that variable is false. In general a "wire" in this reduction will be a string of possible \(X\) errors--if there actually are \(X\) errors present in that string in the error, then that wire is true, otherwise that wire is false.
With this initial gadget defined, we can now give a more detailed overview of the reduction before diving into the rest of the gadget constructions. See Fig. 5(a)--in that figure, the "gadgets that simulate the circuit" are AND, OR, NOT, and FAN-OUT gates that express the boolean formula as a circuit, as in Fig. 2. These gadgets "simulate the circuit" in the following sense:
* If the presence of \(X\) errors in the variable gadgets is set corresponding to a satisfying assignment of the formula, then the only possible error consistent with the syndromes has \(X\) errors in the output wire. Fig. 5(b) is a hypothetical example of such an error.
* If the presence of \(X\) errors in the variable gadgets is set corresponding to an unsatisfying assignment of the formula, then the only possible error consistent with the syndromes has no \(X\) errors present in the output wire. Fig. 5(c) is a hypothetical example of such an error.
We give a special probability to the topmost possible \(X\) error in the output wire of the circuit. We set this probability such that that error appearing is so likely (and it not appearing is so unlikely) that it forces the maximum probability error to correspond to a satisfying assignment of the circuit, if such a satisfying assignment exists.
Figure 4: The variable gadget. The noise model and the given lack of \(-1\) syndromes mean that either there is one string of \(X\) errors starting at the bottom boundary and continuing up (which corresponds to the variable being true), or there are no errors (which corresponds to the variable being false).
To make this notion precise, let \(\ell\) denote the number of qubits in the surface code instance outputted by the reduction, and note that the probability of any error corresponding to a satisfying assignment of the circuit is lower bounded by
\[p^{\#\text{ of qubits with an error, except topmost qubit of output wire}}\] \[\times(1-p)^{\#\text{ of qubits without an error, and 1 error was possible, except topmost qubit of output wire}}\] \[\times(1-2p)^{\#\text{ of qubits without an error, and 2 errors were possible}}\] \[\times(1-3p)^{\#\text{ of qubits without an error, and 3 errors were possible}} \tag{2}\] \[\times\left(1-p^{\ell}\right)\] \[\geq \left(1-p^{\ell}\right)p^{\ell-1}\] \[> p^{\ell}.\]
And the probability of any error corresponding to an unsatisfying assignment of the circuit is at most \(p^{\ell}\), since the lack of presence of an \(X\) error at the top of the output wire causes a \(p^{\ell}\) term to appear in the probability of such an error. Therefore, any satisfying assignment of the circuit will correspond to an error with higher probability than any error corresponding to an unsatisfying assignment of the circuit. Therefore a MPE decoder will always find an error corresponding to a satisfying assignment of the circuit, if one exists. Since it is easy to determine whether an error corresponds to a satisfying or unsatisfying assignment of the circuit (by looking at the output wire), this means an oracle for MPE decoding can be used to solve SAT in polynomial time, which establishes MPE decoding as NP hard.
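The following Python sketch (hypothetical interfaces, not real code from the paper: `sat_to_decoding_instance` stands in for the reduction of this section, `mpe_oracle` for an assumed optimal decoder, and the returned error is assumed to be a dict mapping qubits to Paulis) spells out how such an oracle call would be used:

```python
def solve_sat_with_mpe_oracle(formula, sat_to_decoding_instance, mpe_oracle):
    """Decide satisfiability of `formula` with one call to an MPE-decoding oracle."""
    # Build the surface code instance, noise model and syndromes of the reduction,
    # remembering which qubits form the circuit's output wire.
    code, noise_model, syndromes, output_wire_qubits = sat_to_decoding_instance(formula)
    # The oracle returns a maximum probability error consistent with the syndromes.
    error = mpe_oracle(code, noise_model, syndromes)
    # Property 2: the error encodes a satisfying assignment iff X errors appear on
    # the output wire; property 3 guarantees the oracle returns such an error
    # whenever a satisfying assignment exists.
    return any(error.get(q) == 'X' for q in output_wire_qubits)
```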
This establishes what ingredients we need to construct to complete the reduction. We need gadgets for AND, OR, NOT, and FAN-OUT gates, and for the wire crossings above the FAN-OUT gates. Recall that OR gates can be constructed from AND gates and NOT gates, because \(\text{OR}(x,y)=\text{NOT}(\text{AND}(\text{NOT}(x),\text{NOT}(y)))\), so really all we need is AND, NOT, and FAN-OUT gates, and wire crossings. The remainder of this section is constructing these gadgets and arguing how they can be stitched together in order to complete the proof that MPE decoding is NP hard.
Before we construct those gadgets, we construct 2 gadgets used as ingredients in the other gadgets. These are the "convert to \(Z\) (\(X\)) string" and "\(X/Z\) wire crossing" gadgets.
### Convert between \(X/Z\) string gadgets
Strings of \(Z\) errors that act as wires in the circuit, instead of strings of \(X\) errors, will be useful. So we have 2 gadgets to convert between them in Fig. 6. In these gadgets, the only 2 Pauli errors with nonzero probability consistent with the syndromes (all syndromes are \(+1\)) are: either all the depicted errors are present, or none of the errors are present. These gadgets result in an extra \(X\) string and \(Z\) string that have to be routed to the appropriate boundaries.
Note that these gadgets contain _non-independent \(X\) and \(Z\) error probabilities_ for some qubits: specifically, the qubits where a \(Y\) error is possible but not an \(X\) nor \(Z\) error. This non-independence of \(X\) and \(Z\) error probabilities means that the known decoders are not guaranteed to find the most likely error.
Figure 5: Overview of the reduction, with examples of errors corresponding to a satisfying assignment and an unsatisfying assignment. The topmost \(X\) error in the output wire occurs with probability \(1-p^{\ell}\), where \(\ell\) is the number of qubits in the code.
Figure 6: Two gadgets to convert between \(X\) and \(Z\) strings. (a) converts its input \(X\) string to a \(Z\) string, and (b) converts \(Z\) to \(X\). The only 2 Pauli errors with nonzero probability consistent with the syndromes are: either all the depicted errors are present, or none of the errors are present. These gadgets result in 2 other strings that will have to be routed to the appropriate boundary when stitching all the gadgets together.
Figure 7: Gadget that lets \(X\) strings cross \(Z\) strings.
### \(X/Z\) wire crossing gadget
Fig. 7 has a gadget that lets \(X\) strings cross \(Z\) strings. The 4 possible errors here are no error, just the \(X\) string, just the \(Z\) string, or both strings (with a \(Y\) error at their intersection).
### FAN-OUT gadget
The FAN-OUT gadget is in Fig. 8. This gadget can be made arbitrarily wide, instead of having the finite width as shown. This is necessary because the 2 output wires may have to be separated by a large horizontal distance, as in Fig. 2. In order to achieve the wire crossings above the FAN-OUT gates in Fig. 2, we only need to cross vertical wires (i.e., \(X\) wires) and horizontal wires (i.e., \(Z\) wires), which we can achieve with the \(X/Z\) wire crossing gadget (Fig. 7).
This property of the FAN-OUT gadget, that \(Z\) error strings are the only possible error strings that propagate horizontally across the whole width of the circuit and \(X\) error strings are the only possible error strings that propagate vertically across the whole height of the circuit, will be shared across all the gadgets we construct. We already see this property in the "convert to \(Z\) string" gadget (Fig. 6). That gadget has a possible string of \(Z\) errors that goes horizontally all the way to the left boundary, and a possible string of \(X\) errors that goes vertically all the way to the bottom boundary. When stitching all these gadgets together, these error strings can cross each other via the \(X/Z\) wire crossing gadget (Fig. 7).
### NOT gadget
The NOT gadget is simple, just a \(-1\) syndrome on a \(Z\) stabilizer adjacent to the wire--see Fig. 9.
Clearly, if there is a string of \(X\) errors going up from the bottom of the gadget, then there will be no string of \(X\) errors going up above the \(-1\) syndrome. And if there is no string of \(X\) errors going up from the bottom of the gadget, then there will be a string of \(X\) errors going up above the \(-1\) syndrome.
Figure 8: The FAN-OUT gadget. This gadget can be made arbitrarily wide by moving the input wire and output 1 to the left.
Figure 10: The AND gadget. The “convert to \(X\) string” gadget results in an extra \(X\) string coming out of the right of that gadget and an extra \(Z\) string coming out of the left (both not shown for clarity), that will have to be routed to the appropriate boundaries.
Figure 9: The NOT gadget. If there is a string of \(X\) errors coming up the input wire from below, that string will end at the \(-1\) syndrome. If there is no string of \(X\) errors coming up from below, then a string of \(X\) errors will propagate upwards from the \(-1\) syndrome, up the output wire.
### AND gadget
The AND gadget, given in Fig. 10, is complicated, and we save the explanation for how it works for Appendix A. The 4 possible errors for this gadget (corresponding to the 4 possible values of the 2 input wires) are given in Fig. 14 in Appendix A.
### Putting it all together: spacing between the gadgets
When we stitch all these gadgets together, we need to leave enough space for all the relevant \(X\) and \(Z\) strings to cross each other and go to the boundary. Each gadget has a constant number of strings leaving it that go to the boundary, and the number of gadgets is given by some polynomial in the number of variables and clauses in the formula. Therefore we only need to add a polynomial amount of space between all the gadgets to leave space for all those strings to have space to cross each other and go to the boundary. Thus we can stitch together all these gadgets and get a surface code instance with syndromes and a noise model with total size that is bounded by some polynomial in the number of clauses and variables in the formula. See Fig. 11 for an example of stretching the circuit to leave space for the extra \(X\) and \(Z\) strings.
Figure 11: Our reduction stretches out the circuit horizontally and vertically in order to leave space for the \(X\) and \(Z\) strings to be routed to the appropriate boundaries. (a) is the original circuit, and (b) is the stretched version of the circuit embedded in the surface code outputted by the reduction, including the extra \(X\) and \(Z\) strings going to the boundaries. In the stretched version of the circuit (b), solid lines are the \(X\) wires between gadgets, and dashed lines are the extra \(X\) and \(Z\) strings routed to the boundaries. \(X\) strings are those that go to the bottom boundary, and \(Z\) strings are those that go to the left boundary. Recall that \(X\) strings can cross the FAN-OUT gadgets, because the FAN-OUT gadgets (Fig. 8) consist of large horizontal stretches of \(Z\) strings that can be crossed by \(X\) strings via the \(X/Z\) wire crossing gadget (Fig. 7). This stretching increases the size of the output of the reduction by at most \(O(m)\) in both the horizontal and vertical directions (where \(m\) is the number of gates in the circuit), so at most the polynomial factor \(O(m^{2})\) in total.
Once we stitch all the gadgets together, we have constructed a noise model and set of syndromes that satisfies properties 1, 2, and 3 from Section 3.1. Property 1 is satisfied because the \(n\) variable gadgets (each with their own binary choice of error present or not present) give \(2^{n}\) possible errors, and all of the other gadgets in the circuit above the variable gadgets have their error present uniquely determined by their input, so those \(2^{n}\) errors are the only possible errors. Property 2 is satisfied because we can tell whether an error corresponds to a satisfying or unsatisfying assignment by looking at whether \(X\) errors are present in the output wire. Property 3 is satisfied because of the argument in Section 3.1 starting at (2). This completes the reduction.
### Hardness of approximate MPE decoding
Our proof admits a straightforward generalization that establishes the hardness of approximate decoding. Here, **approximate MPE decoding** is the task of finding an error with probability at least \(\frac{p^{\prime}}{M}\), where \(p^{\prime}\) is the probability of the maximum probability error, and \(M>1\) is the approximation factor. Here we consider \(M\) to be a function of \(\ell\), the number of qubits in the surface code instance, rather than \(n\), to avoid confusion as we use \(n\) to denote the number of variables in the boolean formula.
**Corollary 1**.: _Approximate MPE decoding of the surface code with single-qubit, site-dependent Pauli noise, up to any exponential approximation factor (i.e., with approximation factor \(2^{\ell^{c}}\) for any constant \(c\), where \(\ell\) is the number of qubits in the surface code instance), is NP-hard._
Proof.: For the sake of contradiction, assume we have a decoder that is guaranteed to output an error with probability at least \(\frac{p^{\prime}}{M(\ell)}\), where \(p^{\prime}\) is the probability of the maximum probability error, \(M\) is some function always greater than \(1\) that grows at most exponentially fast (i.e., \(2^{\ell^{O(1)}}\)), and \(\ell\) is the number of qubits in the surface code instance. Then replace the probability \(1-p^{\ell}\) in Fig. 4(a) with \(1-\frac{p^{\ell}}{M(\ell)}\). This real number can be written down with a number of bits that is polynomial in \(\ell\) (and thus polynomial in the size of the original boolean formula). Then the probability of any error corresponding to a satisfying assignment of the circuit is lower bounded by
\[\left(1-\frac{p^{\ell}}{M(\ell)}\right)p^{\ell-1}>p^{\ell}.\]
And the probability of any error corresponding to an unsatisfying assignment of the circuit is upper bounded by \(\frac{p^{\ell}}{M(\ell)}\). Therefore this approximate decoder will still find an error corresponding to a satisfying assignment of the circuit, if such an assignment exists.
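To spell out the final comparison (a short check using only the two bounds above, assuming the formula is satisfiable so that at least one satisfying-assignment error exists):
\[p^{\prime}\geq\left(1-\frac{p^{\ell}}{M(\ell)}\right)p^{\ell-1}>p^{\ell}\quad\Longrightarrow\quad\frac{p^{\prime}}{M(\ell)}>\frac{p^{\ell}}{M(\ell)}\geq\Pr(\text{any unsatisfying-assignment error}),\]
so any error the approximate decoder returns, having probability at least \(p^{\prime}/M(\ell)\), must correspond to a satisfying assignment.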
Note that it is common for decoders to optimize for \(\log p^{\prime}\), where \(p^{\prime}\) is the probability of the error they find, rather than to optimize for \(p^{\prime}\) itself. In this setting our hardness of approximation result for any exponential multiplicative approximation factor translates to a hardness of approximation result for any polynomial additive approximation factor.
## 4 Reduction from #SAT to Maximum Likelihood (ML) decoding
In this section we prove our second main theorem.
**Theorem 2**.: _Maximum Likelihood (ML) decoding the surface code with single-qubit, site-dependent Pauli noise is #P-hard under Turing reductions._
We reduce from #SAT, the canonical #P-complete problem. #SAT is the problem of, given a boolean formula with \(n\) variables, determining how many assignments of true/false values to the input variables satisfy the formula (i.e., make the formula output true), out of the \(2^{n}\) possible assignments. This is more general than the SAT problem, which is merely determining whether that number of satisfying assignments is zero or nonzero. #SAT (and its associated class of computational problems #P) is generally thought of as much harder than SAT (and its associated class of computational problems NP).
The strategy of the reduction is as follows. We start with the same noise model and syndromes as those outputted by the reduction in Section 3. We observe that all errors corresponding to satisfying assignments are in the same coset, as with all errors corresponding to unsatisfying assignments, which are in a different coset. Thus determining which coset has maximum coset probability already seems related to counting satisfying versus unsatisfying assignments. Given any integer \(a\in\{0,1,\cdots,2^{n}\}\), we show how to modify the error probabilities in the noise model (but not which Pauli errors have nonzero probability) so that the maximum likelihood coset is the coset corresponding to satisfying assignments iff the formula has \(\geq a\) satisfying assignments. Thus if we have a ML decoder, we can use that to determine whether the formula has \(\geq a\) satisfying assignments for any integer \(a\). This lets us exactly determine the number of satisfying assignments to any boolean formula using \(O(n)\) calls to a ML decoder via binary search: we first try \(a=\frac{1}{2}2^{n}\) to determine whether the formula has \(\geq\) or \(<\frac{1}{2}2^{n}\) satisfying assignments, then we try \(a=\frac{3}{4}2^{n}\) in the former case or \(a=\frac{1}{4}2^{n}\) in the latter case, and we continue this process of narrowing down the range of possible numbers of satisfying assignments until we exactly determine how many satisfying assignments the formula has.
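For illustration only, the following is a minimal sketch of this binary search (it is not part of the formal reduction); the function `ml_decoder_says_satisfying` is a hypothetical stand-in for tuning the noise model to threshold \(a\), running an ML decoder, and checking whether the returned error corresponds to a satisfying assignment.

```python
def count_satisfying(n, ml_decoder_says_satisfying):
    """Sketch: exact #SAT count via O(n) calls to a hypothetical ML-decoding oracle.

    `ml_decoder_says_satisfying(a)` is assumed to return True iff the maximum
    likelihood coset for the noise model tuned to threshold `a` is the
    satisfying-assignment coset, i.e. iff the formula has >= a satisfying
    assignments.
    """
    lo, hi = 0, 2 ** n            # the true count lies in [lo, hi]
    while lo < hi:
        a = (lo + hi + 1) // 2    # candidate threshold
        if ml_decoder_says_satisfying(a):
            lo = a                # formula has >= a satisfying assignments
        else:
            hi = a - 1            # formula has < a satisfying assignments
    return lo
```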
With this high-level strategy in mind, we can begin constructing the reduction from #SAT to ML decoding. First, we modify the error probabilities in the noise model outputted by the reduction from Section 3 in the following way. Instead of giving \(X\), \(Y\), and \(Z\) errors probability \(0\) or \(p\), we give them the following probabilities:
* If only one error is possible for a qubit (e.g., if we only draw \(X\) on a qubit), then that error occurs with probability \(\frac{1}{2}\) and no error occurs with probability \(\frac{1}{2}\). This excludes the topmost possible \(X\) error which is special (we set that probability later).
* If two different errors are possible (e.g., if we draw \(X,Y\) on a qubit), then each of those errors occurs with probability \(\frac{1}{3}\), and no error occurs with probability \(\frac{1}{3}\).
* If all three errors are possible on a qubit (i.e., if we draw \(X,Y,Z\) on a qubit), then each of those errors occurs with probability \(\frac{1}{4}\), and no error occurs with probability \(\frac{1}{4}\).
* The topmost possible \(X\) error on the output wire occurs with probability \(r\) (to be determined later), instead of probability \(1-p^{\ell}\).
The important consequence of setting the error probabilities this way is that all errors corresponding to satisfying assignments of the circuit have the same probability of occurring, and likewise with all errors corresponding to unsatisfying assignments. The probability of any error occurring, ignoring the special topmost qubit of the output wire, is
\[1^{\#\text{ qubits where no error is possible}}\times\left(\frac{1}{2}\right)^{\#\text{ qubits where 1 error is possible, except topmost qubit of output wire}}\times\left(\frac{1}{3}\right)^{\#\text{ qubits where 2 errors are possible}}\times\left(\frac{1}{4}\right)^{\#\text{ qubits where 3 errors are possible}}.\]
Call this probability \(q\). Now taking into account the special topmost qubit of the output wire, all errors corresponding to satisfying assignments have probability \(qr\). All errors corresponding to unsatisfying assignments have probability \(q(1-r)\).
The next ingredient we need in the #P-hardness proof is the following lemma, which we prove in Appendix B.
**Lemma 1**.: _All errors corresponding to satisfying assignments of the circuit are equivalent up to stabilizers, and are thus in the same coset \(C\). All errors corresponding to unsatisfying assignments of the circuit are in the coset \(\bar{X}C\), where \(\bar{X}\) is a logical \(X\) operator._
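Combining this lemma with the probabilities computed above (and noting that errors in either coset other than the \(2^{n}\) possible errors have probability zero, so they do not contribute), the two coset probabilities are
\[\Pr(C)=a\,qr,\qquad\Pr(\bar{X}C)=b\,q(1-r),\]
where \(a\) and \(b=2^{n}-a\) are the numbers of satisfying and unsatisfying assignments; this is the source of the comparison of \(qra\) against \(q(1-r)b\) in the proof below.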
Using this lemma, we can now prove Theorem 2.
Proof.: Let the number of satisfying assignments of the boolean formula be \(a\), and let the number of unsatisfying assignments of the formula be \(b\). Note that \(b=2^{n}-a\), where \(n\) is the number of variables in the formula. Lemma 1 tells us that the coset of errors with maximum likelihood is \(C\) iff \(qra>q(1-r)b\).5 This is equivalent to
Footnote 5: The maximum likelihood coset is undefined if \(qra=q(1-r)b\), because both cosets have the same coset probability in this case. This minor technical issue can be avoided by setting the pivot points in the binary search not to \(\frac{a}{2^{n}}\) for any integer \(a\), but rather \(\frac{a}{2^{n}}+\frac{1}{2^{n+1}}\) for any integer \(a\).
\[\frac{r}{1-r} >\frac{b}{a}\] \[\Longleftrightarrow \frac{r}{1-r} >\frac{b}{2^{n}-b}\] \[\Longleftrightarrow \frac{r}{1-r} >\frac{\frac{b}{2^{n}}}{1-\frac{b}{2^{n}}}.\]
Since \(\frac{r}{1-r}\) is a strictly increasing function of \(r\) when \(r\in(0,1)\), this condition is equivalent to \(r>\frac{b}{2^{n}}\). Thus by setting \(r\) equal to any real number of our choice in \((0,1)\) (which we only need to specify to \(n+1\) bits of precision), performing ML decoding, and seeing whether the error returned corresponds to a satisfying or unsatisfying assignment of the formula, we can determine whether the proportion of assignments that don't satisfy the formula is \(<r\).
Thus if we have a ML decoding algorithm, we can exactly determine the number of satisfying assignments for any boolean formula by doing \(O(n)\) rounds of binary search on \(r\), where \(n\) is the number of variables in the formula. Thus ML decoding is #P-hard.
This reduction involves multiple calls to an oracle for ML decoding, rather than outputting a single instance of a ML decoding problem where the answer to that ML decoding problem encodes the answer to another #P-hard problem. Therefore this is a Turing reduction, not a many-one/Karp reduction. As there are multiple oracle calls in this reduction,
the Turing nature of the reduction seems essential, unlike with the MPE reduction where it could be transformed into a many-one/Karp reduction by formulating MPE decoding as a decision problem. Turing reductions are common for #P-hardness proofs. For example, both Leslie Valiant's original proof that computing the permanent is #P-hard[25], and Iyer and Poulin's result that ML decoding of general stabilizer codes is #P-hard[17], used Turing reductions.
### Hardness of approximate ML decoding
As with MPE decoding, we can strengthen this result to show that even approximate ML decoding is hard. Here, we use the fact that #SAT is hard to approximate to within any nontrivial exponential factor. Specifically, for boolean formulas with \(n\) variables, and for all \(\epsilon>0\), it is NP-hard to approximate the number of solutions to such formulas with approximation ratio \(2^{n^{1-\epsilon}}\)[26].
This hardness of approximation result for #SAT translates directly into a hardness of approximation result for ML decoding. Here, **approximate ML decoding** is the task of finding an error in a coset \(C\) such that the coset probability of \(C\) is at least \(\frac{p^{\prime}}{M}\), where \(p^{\prime}\) is the coset probability of the maximum likelihood coset and \(M>1\) is the approximation factor. As in the previous section, here we consider \(M\) to be a function of \(\ell\), the number of qubits in the surface code instance, rather than \(n\), to avoid confusion as we use \(n\) to denote the number of variables in the boolean formula.
**Corollary 2**.: _There exists a constant \(c\) such that approximate ML decoding of the surface code with single-qubit, site-dependent Pauli noise, with approximation factor \(M(\ell)=2^{\ell^{c}}\), is NP-hard, where \(\ell\) is the number of qubits in the surface code instance._
Proof.: For the sake of contradiction, assume we have a decoder that is guaranteed to output an error in a coset \(C\) with coset probability at least \(\frac{p^{\prime}}{M(\ell)}\), where \(p^{\prime}\) is the coset probability of the maximum likelihood coset. Here \(M(\ell)>1\) is the approximation factor as a function of \(\ell\), the number of qubits in the surface code instance. If we use this decoder to perform binary search on the proportion of unsatisfying assignments as we did in the proof of Theorem 2, then this binary search will settle on a value that is correct up to a factor \(O\left(M(\ell)\right)\).
This means approximate ML decoding is NP-hard if \(M(\ell)=O\left(2^{n^{1-\epsilon}}\right)\) for some \(\epsilon>0\). Since \(\ell\) is \(n^{O(1)}\), or equivalently \(n\) is \(\ell^{O(1)}\) (since we can assume we are dealing with boolean formulas with total size upper bounded by some polynomial in the number of variables), this means there is some constant \(c\) such that approximate ML decoding is NP-hard if the approximation ratio \(M(\ell)\) is \(\operatorname{O}\!\left(2^{\ell^{c}}\right)\).
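To make the choice of \(c\) concrete (ignoring constant factors, and assuming the reduction's output satisfies \(\ell\leq n^{k}\) for some constant \(k\)), it suffices to take
\[c\leq\frac{1-\epsilon}{k}\ \Longrightarrow\ \ell^{c}\leq n^{1-\epsilon}\ \Longrightarrow\ M(\ell)=2^{\ell^{c}}\leq 2^{n^{1-\epsilon}},\]
which keeps the approximation factor within the regime where approximating #SAT is NP-hard.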
## 5 Hardness of decoding with more regular noise models
The noise models outputted by our reduction are contrived and unphysical. This raises the question: what can we say about the hardness of decoding the surface code with more natural, physically realistic noise models? Here, "natural" could mean that the error probabilities don't vary too much between qubits, or that the error probabilities are never too small or too large. In this section we make a small step towards hardness results with more natural noise models by doing away with the "special" top qubit of the output wire that has different error probabilities than the rest of the qubits.
### More regular noise models for MPE reduction
We can modify the MPE reduction so that the topmost qubit of the output wire doesn't have the physically unrealistic error rate \(1-p^{\ell}\). This modification is in Fig. 12. The modification adds a NOT gate on the output wire and a length \(2\ell\) wire onto the original output wire, where \(\ell\) is the number of qubits used in the unmodified version of the reduction as outlined in Fig. 5a. This modification means that the probability of any error corresponding to a satisfying assignment picks up a \((1-p)^{2\ell}\) term, and the probability of any error corresponding to an unsatisfying assignment picks up a \(p^{2\ell}\) term.
This lets us lower bound the probability of any error corresponding to a satisfying assignment by
\[\Pr\left(\text{error corresp. to satisfying assignment}\right) \geq p^{\ell}(1-p)^{2\ell}=p^{\ell}\left((1-p)^{2}\right)^{\ell}>p^{\ell}p^{ \ell}=p^{2\ell}.\]
Here we used the fact that \(p\in(0,0.25]\implies(1-p)^{2}>p\). The probability of any error corresponding to an unsatisfying assignment is trivially upper bounded by \(p^{2\ell}\). This means that any error corresponding to a satisfying assignment will have a higher probability than any error corresponding to an unsatisfying assignment. Thus MPE decoding in this setting remains NP-hard.
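The elementary fact invoked here can be checked directly: \((1-p)^{2}\) is decreasing on \((0,0.25]\), so
\[(1-p)^{2}\geq(1-0.25)^{2}=0.5625>0.25\geq p.\]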
We can maintain the hardness of approximation result for any exponential approximation factor \(M(\ell)\) by giving that wire length \(2\ell+2\log_{p}\frac{1}{M(\ell)}\) instead of length \(2\ell\). Then the probability of any error corresponding to an unsatisfying assignment is upper bounded by
\[p^{2\ell+2\log_{p}\frac{1}{M(\ell)}}=p^{2\ell}\frac{1}{M(\ell)^{2}}.\]
And the probability of any error corresponding to a satisfying assignment is lower bounded by
\[p^{\ell}(1-p)^{2\ell+2\log_{p}\frac{1}{M(\ell)}}>p^{\ell}p^{\ell+\log_{p} \frac{1}{M(\ell)}}=p^{2\ell}\frac{1}{M(\ell)}.\]
Therefore a \(M(\ell)\) approximate MPE decoder will still always find an error corresponding to a satisfying assignment, if one exists.
Figure 12: A modification of the reduction (as outlined in Fig. 5a) that lets all qubits have the same probabilities for possible errors (0 or \(p\)), instead of having one special qubit with error probability \(1-p^{\ell}\).
### More regular noise models for ML reduction
In the ML reduction, the topmost qubit of the output wire is "special", as the probability of that \(X\) error takes on special values. If we don't make that qubit special and instead give it an error probability of \(\frac{1}{2}\), like all other qubits with one possible error, then ML decoding applied to the resulting noise model can be used to decide Majority-SAT, the problem of deciding whether a boolean formula has more satisfying or unsatisfying assignments. Since an oracle for Majority-SAT can be used to solve #SAT ([27] Lemma 17.7 proves this), surface code decoding is still #P-hard even if we restrict all qubits to have error probabilities \(0\), \(\frac{1}{2}\), \(\frac{1}{3}\), or \(\frac{1}{4}\) and don't have a special qubit with more specific error probabilities. However, we then lose the hardness of approximation result Corollary 2. This is because an approximate ML decoder with constant approximation ratio \(M\) will only be able to solve the BPP-complete promise problem of deciding whether a boolean formula has at least \(2^{n}\frac{M}{M+1}\) satisfying assignments or at most \(2^{n}\frac{1}{M+1}\) satisfying assignments, promised that one of those is the case.
Although these minor modifications to the reductions do make the resulting noise models slightly more homogeneous, they still result in many qubits in the reduction having \(0\) error probability, which is clearly physically unrealistic. We leave it as an open problem whether one can show any hardness results for decoding with more physically realistic noise models.
## 6 Conclusion and discussion
We have shown computational hardness results for maximum probability error decoding and maximum likelihood decoding of the surface code, and for approximate versions of those problems. Therefore no efficient surface code decoding algorithm can always solve, or even always approximate, MPE or ML decoding for all possible Pauli noise models and syndromes (modulo the standard computational complexity assumptions \(\mathrm{P}\neq\mathrm{NP}\) and \(\mathrm{FP}\neq\mathrm{\#P}\)). This provides some explanation as to why all known surface code decoding algorithms are not known to be optimal or approximately optimal except in a few special cases.
These no-go results are highly relevant for quantum computing, because the surface code is one of the most promising candidates for an error correcting code with which to do fault-tolerant quantum computation, and we need fast and accurate decoders in order to use the surface code. However, these results are not really bad news for quantum computing, because we can still achieve fault tolerance with the imperfect decoders we have now. This is because the current decoders get the right answer almost all of the time--i.e., for almost all sets of syndromes with physically reasonable noise models. Instead the practical consequence of these results, like all hardness results for practical problems of interest, is to inform research into surface code decoding algorithms. We now know that one cannot hope for an algorithm that always solves the MPE or ML decoding problem (or even always approximates it) for all possible Pauli noise models and all possible syndromes. Rather, one should look for algorithms that take advantage of special properties of the noise or syndromes (independence of \(X\) and \(Z\) errors is one such special property that some decoders take advantage of), or look for heuristic algorithms without rigorous performance guarantees (tensor network decoders[11] and belief-matching[28] are two examples).
A clear open problem suggested by our work is to establish hardness results for surface code decoding with more physically realistic noise models. Even for simple depolarizing noise there are no known optimal decoders. Can we get any hardness results for noise
models as simple as depolarizing noise? Or can we further characterize the special cases where we can get provably optimal (or provably approximately optimal) decoders for the surface code?
## 7 Acknowledgments
This work is supported by a collaboration between the U.S. DOE and the National Science Foundation. The material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Systems Accelerator. It is also supported by the NSF STAQ Project (PHY-1818914).
Alex Fischer would like to thank Milad Marvian for teaching an excellent class on quantum error correction which got him interested in the subject.
|
2309.15249 | High power, frequency agile comb spectroscopy in the mid-infrared
enabled by a continuous-wave optical parametric oscillator | While mid-infrared optical frequency combs have been widely utilized in areas
such as trace gas sensing, chemical kinetics, and combustion science, the
relatively low power per comb tooth has limited acquisition times and
sensitivities. We have developed a new approach in which an electro-optic
frequency comb is utilized to pump a continuous-wave singly-resonant optical
parametric oscillator in order to spectrally translate the comb into the
mid-infrared. Through the use of electro-optic combs produced via chirped
waveforms we have produced mid-infrared combs containing up to 2400 comb teeth.
We show that a comb can be generated on the non-resonant idler when the pump
modulation is non-synchronous, and we use these combs to perform high
resolution spectroscopy on methane. In addition, we describe the underlying
theory of this method and demonstrate that phase matching should allow for
combs as broad as several THz to be spectrally translated to the mid-infrared.
The high power and mutual coherence as well as the relatively low complexity of
this approach should allow for broad application in areas such as chemical
dynamics, quantum information, and photochemistry. | Adam T. Heiniger, Matthew J. Cich, David A. Long | 2023-09-26T20:23:46Z | http://arxiv.org/abs/2309.15249v1 | # High Power, Frequency Agile Comb Spectroscopy in the Mid-Infrared Enabled by a Continuous-Wave Optical Parametric Oscillator
###### Abstract
While mid-infrared optical frequency combs have been widely utilized in areas such as trace gas sensing, chemical kinetics, and combustion science, the relatively low power per comb tooth has limited acquisition times and sensitivities. We have developed a new approach in which an electro-optic frequency comb is utilized to pump a continuous-wave singly-resonant optical parametric oscillator in order to spectrally translate the comb into the mid-infrared. Through the use of electro-optic combs produced via chirped waveforms we have produced mid-infrared combs containing up to 2400 comb teeth. We show that a comb can be generated on the non-resonant idler when the pump modulation is non-synchronous, and we use these combs to perform high resolution spectroscopy on methane. In addition, we describe the underlying theory of this method and demonstrate that phase matching should allow for combs as broad as several THz to be spectrally translated to the mid-infrared. The high power and mutual coherence as well as the relatively low complexity of this approach should allow for broad application in areas such as chemical dynamics, quantum information, and photochemistry.
_1TOPTICA Photonics, Inc., Pittsford, NY 14534, USA_
_2National Institute of Standards and Technology, Gaithersburg, MD 20899, USA_
_*Corresponding authors: A. T. Heiniger. Email: [email protected]. D. A. Long. Email: [email protected]_
## 1 Introduction
Molecular spectroscopy at mid-infrared wavelengths has been widely utilized for gas sensing [1], chemical dynamics [2], quantum information [3], and astrochemistry [4] due to the large absorption cross-sections present in this region. Direct optical frequency comb spectroscopy (DCS) can offer advantages in spectral bandwidth, resolution, and acquisition time over other spectroscopic techniques, but it is difficult to simultaneously realize these capabilities in the mid-infrared. In particular, while broadband, high-resolution DCS has been demonstrated in the mid-infrared, the low power per comb tooth has largely precluded fast acquisition times and high sensitivities [5], [6].
Achieving higher power per comb tooth in the mid-infrared has been an ongoing challenge. Mode-locked quantum cascade lasers (QCLs) can provide milliwatts of power per tooth, but gigahertz tooth spacings limit the spectral resolution [7]. High-resolution mid-infrared comb approaches have traditionally offered tens to hundreds of nanowatts of power per comb tooth [8], [9], with at most 100 \(\upmu\)W per comb tooth generated through the use of a 57 W pump laser [8]. Recently, we have developed a new approach for mid-infrared comb generation in which a continuous-wave singly-resonant optical parametric oscillator (CWSRO) is used for frequency conversion of near-infrared electro-optic frequency combs [9]. This approach allows for high power per tooth (approximately 80 mW), and correspondingly high acquisition rates (up to 50 MHz) while also offering broad wavelength tunability.
These results indicate that this method may fulfill the promise of fast, sensitive absorption spectroscopy in the mid-infrared. In the present study, we describe the theory governing the frequency conversion of the modulated pump and derive the conditions under which the pump comb is replicated on the non-resonant, mid-infrared CWSRO idler. Our initial DCS demonstration [9] utilized over-driven electro-optic phase modulators (EOMs) to produce roughly twelve comb teeth in each frequency comb with a comb tooth spacing of 2.55 GHz. Here we pump the CWSRO with a single electro-optic frequency comb combined with a local oscillator. To generate this comb, a near-infrared EOM is driven with chirped waveforms to produce an optical frequency comb with up to 2400 individual comb teeth of similar amplitude. This flat, high-resolution comb allows us to experimentally confirm the derived theory for replication of the pump comb on the idler, and results in a mid-infrared comb that is widely tunable between 2.19 \(\upmu\)m and 4.00 \(\upmu\)m with Watt-level power and approximately 450 \(\upmu\)W per comb tooth in a relatively simple setup. We have then utilized the comb spectrometer to perform high resolution molecular spectroscopy in the critical mid-infrared spectral region.
## 2 Theory
In this section we derive the conditions under which the spectrum of a pump laser is replicated on the non-resonant idler of a CWSRO. In particular we show that the pump spectrum should contain no spectral components which are separated by the signal cavity's free spectral range (FSR), and that the phase matching bandwidth of the CWSRO provides the ultimate limit on the bandwidth of the idler spectrum.
We follow the derivation of Yariv for three waves interacting in a nonlinear medium [10]. There, differential equations are derived assuming that the three waves have no time dependence except oscillation at their carrier frequencies. Here we allow for variation in time at rates much slower than the pump carrier frequency. The real electric field \(\mathcal{E}\) for pump, signal, and idler waves \(j=(p,s,i)\) is written in terms of a complex field \(E\) which varies slowly in time and space:
\[\mathcal{E}_{j}(z,t)=\tfrac{1}{2}E_{j}(z,t)e^{i\omega_{j}t}e^{-ik_{j}z}+c.\,c. \tag{1}\]
where \(\omega_{j}\) are the center angular frequencies of the three waves and \(k_{j}\) are their wavevectors.
This expression for the electric field represents a plane wave, and we model the field as such for simplicity. In practice, the pump, signal, and idler in our CWSRO have spatial modes defined by the pump laser and optical cavity. However, this plane wave approximation has adequately modeled the behavior of similar CWSROs [11], [12], [13], and the transfer of modulation to the idler that we explore here is independent of the spatial mode.
We employ Type 0 phase matching in periodically-poled lithium niobate (PPLN), so that all electric field components have the same polarization. The equations which govern propagation of the pump, signal, and idler in the nonlinear crystal are then given by
\[\frac{dE_{p}}{dz} =-\frac{n_{p}}{c}\frac{\partial E_{p}}{\partial t}-2\,\frac{d_{eff}}{n_{p}c}\frac{\partial(E_{s}E_{i})}{\partial t}e^{i\Delta kz}-i\,\frac{\omega_{p}d_{eff}}{n_{p}c}E_{s}E_{i}e^{i\Delta kz} \tag{2}\] \[\frac{dE_{s}}{dz} =-\frac{n_{s}}{c}\frac{\partial E_{s}}{\partial t}-2\,\frac{d_{eff}}{n_{s}c}\frac{\partial(E_{p}E_{i}^{*})}{\partial t}e^{-i\Delta kz}-i\,\frac{\omega_{s}d_{eff}}{n_{s}c}E_{p}E_{i}^{*}e^{-i\Delta kz} \tag{3}\] \[\frac{dE_{i}}{dz} =-\frac{n_{i}}{c}\frac{\partial E_{i}}{\partial t}-2\,\frac{d_{eff}}{n_{i}c}\frac{\partial(E_{p}E_{s}^{*})}{\partial t}e^{-i\Delta kz}-i\,\frac{\omega_{i}d_{eff}}{n_{i}c}E_{p}E_{s}^{*}e^{-i\Delta kz} \tag{4}\]
where \(n_{j}\) are the nonlinear crystal refractive indices at \(\omega_{j}\), and \(c\) is the speed of light in vacuum. The wavevector mismatch is \(\Delta k=k_{p}-k_{s}-k_{i}-K_{g}\), where \(K_{g}=2\pi/\lambda_{g}\) is the wavevector of the quasi-phase matching (QPM) grating of period \(\lambda_{g}\), and \(d_{eff}\) is the effective nonlinear coefficient for ideal first-order QPM. In PPLN \(d_{eff}=17\) pm/V [14].
We are concerned with the evolution of the pump, signal, and idler spectra, so we take the Fourier transform of these equations with respect to angular frequency detuning \(\Omega\) to obtain
\[\frac{d\tilde{E}_{p}}{dz}(z,\Omega)+\frac{i\Omega n_{p}}{c}\tilde{E}_{p}(z,\Omega )=-i\frac{(\omega_{p}+2\Omega)d_{eff}}{n_{p}c}e^{i\Delta kz}\int_{-\infty}^{ \infty}d\Omega^{\prime}\tilde{E}_{s}(z,\Omega^{\prime})\tilde{E}_{i}(z,\Omega -\Omega^{\prime}) \tag{5}\]
\[\frac{d\tilde{E}_{s}}{dz}(z,\Omega)+\frac{i\Omega n_{s}}{c}\tilde{E}_{s}(z, \Omega)=-i\frac{(\omega_{s}+2\Omega)d_{eff}}{n_{s}c}e^{-i\Delta kz}\int_{- \infty}^{\infty}d\Omega^{\prime}\tilde{E}_{p}(z,\Omega^{\prime})\tilde{E}_{i} ^{*}(z,\Omega-\Omega^{\prime}) \tag{6}\]
\[\frac{d\tilde{E}_{i}}{dz}(z,\Omega)+\frac{i\Omega n_{i}}{c}\tilde{E}_{i}(z, \Omega)=-i\frac{(\omega_{i}+2\Omega)d_{eff}}{n_{i}c}e^{-i\Delta kz}\int_{- \infty}^{\infty}d\Omega^{\prime}\tilde{E}_{p}(z,\Omega^{\prime})\tilde{E}_{s} ^{*}(z,\Omega-\Omega^{\prime}). \tag{7}\]
As each wave in the nonlinear crystal propagates it generates new spectral components from the convolution of the spectra of the other two waves.
The nonlinear crystal is located inside an optical cavity which is resonant only for the signal wave. The optical cavity supports signal frequencies which correspond to longitudinal modes, and all other signal frequencies will interfere destructively. If the signal oscillates on only a single longitudinal cavity mode, then its spectrum is practically a delta function with respect to the pump spectrum and Eqn. 7 for the idler becomes
\[\frac{d\tilde{E}_{i}}{dz}(z,\Omega)+\frac{i\Omega n_{i}}{c}\tilde{E}_{i}(z, \Omega)=-i\frac{(\omega_{i}+2\Omega)d_{eff}}{n_{i}c}e^{-i\Delta kz}\tilde{E}_{ p}(z,\Omega). \tag{8}\]
Therefore, if the signal is single mode, then a spectral component detuned \(\Omega\) from the pump carrier will be generated on the idler detuned \(\Omega\) from the idler carrier. Thus, the frequency spacing of the idler spectral components will match that of the pump. The relative amplitudes of these spectral components on the idler nearly match the pump. The \(\omega_{i}+2\Omega\) factor on the right-hand side of Eqn. 8 will slightly skew the idler spectrum. As described below, the maximum detuning considered here is on the order of 1 THz, which is much smaller than the 75 THz to 135 THz idler carrier frequency. Thus, if the signal is single mode, then the idler spectrum will replicate the pump spectrum except with a slope in amplitude of approximately 1%.
We note that there are modulation conditions which can cause the signal to oscillate on multiple modes. An etalon in the CWSRO cavity is used to force single longitudinal mode operation in the case of an unmodulated pump. The etalon has a bandpass full-width at half maximum (FWHM) of approximately 30 GHz and the cavity has an FSR near 530 MHz. One cavity mode under the etalon bandpass has the highest gain and dominates when the pump is unmodulated, but approximately 60 modes have significant gain. When the pump is modulated, then cavity modes which neighbor the highest-gain mode also can oscillate.
An extreme case of pump modulation coupling to the resonant wave is a synchronously pumped OPO [17], [18]. There, the pump and OPO cavity are stabilized so that the comb mode spacing exactly matches the cavity FSR. From Eqns. 5-7, a comb spectrum on the pump and signal leads to cascaded generation of comb teeth on the pump, signal, and idler.
To ensure that only a single cavity mode oscillates we avoid synchronous modulation. The signal oscillates on a center cavity mode at \(\omega_{s}\), and in general it can carry modulation and have spectrum \(\tilde{E}_{s}(z,\Omega)\). However, the signal can only support modulation at frequencies which correspond to longitudinal modes, where the detuning \(\Omega\) is equal to multiples of the cavity FSR. If the pump spectrum \(\tilde{E}_{p}(z,\Omega)\) does not have any spectral components which are separated by multiples of the cavity FSR, then pump modulation cannot couple to the signal cavity.
Thus, the first condition for successful replication of the pump spectrum on the idler of a CWSRO is that the pump spectrum does not contain spectral components which are separated by the cavity FSR. The second condition relates to the bandwidth of the pump comb. The PPLN poling period can be chosen so that \(\Delta k=0\) at the center frequencies of the pump,
signal, and idler. However, the second term in Eqn. 3 provides a first-order correction to the phase accumulation in each of the three waves due to dispersion. This will result in non-zero \(\Delta k\) for spectral components which are detuned from the center frequencies. This phase mismatch results in decreased gain, and thus decreased power for idler comb teeth. The full-width at half-maximum (FWHM) of the phase matching gain curve for the CWSRO used here was calculated by SNLO [15] and is plotted as a function of idler wavelength in Fig. 1. It shows that the widest FWHM of the idler spectrum is several terahertz when the idler wavelength is near the degeneracy point and decreases to approximately 100 GHz far from degeneracy.
We note that the CWSRO could be optimized for broader phase matching. A similarly designed CWSRO with a broadband incoherent pump utilized non-collinear phase matching via tight focusing to generate an idler that was 5.7 THz wide (FWHM) at 3.4 \(\upmu\)m [16].
## 3 Experiment
A schematic of the present mid-infrared optical frequency comb spectrometer can be found in Figure 2a. Light from a continuous-wave external cavity diode laser with a wavelength near 1064 nm had its fiber-coupled output split into two paths: a probe path and a local oscillator (LO) path in a self-heterodyne configuration [17], [18]. An electro-optic frequency comb is produced on the probe path by driving an EOM with the output of a chip-scale direct digital synthesizer (DDS) [19]. The DDS generates periodic frequency chirps as shown in Fig. 2b and accepts sweep and timing information from a programmable microcontroller. This periodic phase modulation results in a frequency comb where the comb tooth spacing is equal to the chirp repetition rate and the comb bandwidth is twice the bandwidth of the chirp. Importantly, this approach allows for agile, ultraflat frequency combs to be generated over a wide range of comb tooth spacings (100's of Hz to GHz) [19].
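The following is a minimal numerical sketch (illustrative only, not the experiment's control code) of how periodic chirped phase modulation yields such a comb. The sample rate, number of periods, and the choice to model the net optical frequency excursion as a symmetric \(\pm 1.2\) GHz sawtooth sweep are assumptions made for this sketch; with a 10 MHz repetition rate they reproduce a roughly 2.4 GHz wide, 10 MHz spaced comb comparable to the 240-tooth comb described below.

```python
import numpy as np

# Illustrative sketch: a periodic linear chirp applied as pure phase modulation
# produces a flat comb whose tooth spacing equals the chirp repetition rate.
f_rep = 10e6              # chirp repetition rate -> comb tooth spacing (10 MHz)
f_span = 1.2e9            # one-sided optical frequency excursion (assumed)
fs = 4e9                  # simulation sample rate (assumed)
n_periods = 100

n_samples = int(fs / f_rep) * n_periods
t_mod = (np.arange(n_samples) / fs) % (1.0 / f_rep)    # time within each chirp period
f_inst = -f_span + 2.0 * f_span * f_rep * t_mod        # sawtooth frequency sweep
phase = 2.0 * np.pi * np.cumsum(f_inst) / fs           # integrate to get the optical phase
field = np.exp(1j * phase)                             # unit-amplitude phase-modulated field

spectrum = np.abs(np.fft.fftshift(np.fft.fft(field))) ** 2
freqs = np.fft.fftshift(np.fft.fftfreq(n_samples, d=1.0 / fs))
# `spectrum` shows comb teeth spaced by f_rep spanning roughly +/- f_span about the carrier.
```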
An acousto-optic modulator (AOM) placed on the LO path provided a 54.2 MHz shift to ensure that the positive and negative order comb teeth occur at unique radiofrequencies once combined on a photodiode. A representative near-infrared self-heterodyne comb spectrum can be found in Fig. 2c. 240 comb teeth which are spaced by 10 MHz can be observed as well as the carrier tone which occurs at the AOM frequency.
Figure 1: Full-width at half-maximum phase matching bandwidth of the idler of the continuous-wave singly resonant optical parametric oscillator pumped at 1064 nm.
The combined probe and LO paths were then amplified by an ytterbium fiber amplifier which increased the seed power from 5 mW to 10 W. This amplified pump was then injected into the CWSRO which generates tunable mid-infrared radiation by down-converting near-infrared pump photons into near-infrared signal and mid-infrared idler photons through the use of a PPLN crystal. The CWSRO used here has been previously described in detail in Ref. [20].
Figure 2: (a) Schematic of the continuous-wave optical-parametric-oscillator-based optical frequency comb spectrometer. The output of an external-cavity diode laser (ECDL) is split into two paths, one containing an electro-optic phase modulator (EOM) and the other an acousto-optic modulator (AOM) which serve as probe and local oscillator (LO) paths, respectively. The EOM is driven by a periodic chirp to generate an electro-optic frequency comb. The chirp repetition rate (\(f_{rep}\)) and frequency range are controlled by the output of a direct digital synthesis (DDS) chip (see panel (b)). The LO path passes through an AOM to produce a frequency shift of 54.2 MHz. The probe and LO paths are then recombined and amplified with a fiber amplifier before being injected into the continuous-wave optical parametric oscillator (CWSRO) containing a periodically poled lithium niobate (PPLN) crystal. The resulting mid-infrared (MIR) idler frequency comb was attenuated and then passed through a gas cell containing 48 Pa of methane before being detected on a photodetector (DET). A representative near-infrared (NIR) self-heterodyne spectrum can be seen in panel (c).
The poling period of the PPLN varies along the crystal height in a fan-out structure. Vertical translation of the crystal position relative to the pump beam changes the phase matching conditions, which widely tunes the signal (1450 nm to 2070 nm) and idler (2190 nm to 4000 nm). Rotation of the etalon and continuous tuning of the seed laser finely tunes the idler to a target mid-infrared wavelength. The depleted pump and signal beam were sent to a wavemeter for a continuous wavelength measurement. Measurement of the pump and signal wavelengths with 150 MHz accuracy allowed calculation of the idler wavelength with 210 MHz accuracy. The CWSRO had a nominal output power of 2 W in the depleted pump, 3 W in the signal, and 2 W in the idler.
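The idler frequency follows from energy conservation, and the quoted 210 MHz accuracy is consistent with combining the two 150 MHz wavemeter uncertainties in quadrature (assuming they are independent):
\[\nu_{i}=\nu_{p}-\nu_{s},\qquad\delta\nu_{i}=\sqrt{\delta\nu_{p}^{2}+\delta\nu_{s}^{2}}=\sqrt{(150\ \mathrm{MHz})^{2}+(150\ \mathrm{MHz})^{2}}\approx 212\ \mathrm{MHz}.\]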
Figure 3: Radiofrequency spectra of the pump (a), signal (b), and idler (c) when driven by near-infrared electro-optic combs with 1 MHz (left panels) and 10 MHz repetition rate (right panels). The insets show magnified versions of the radiofrequency spectra which are centered near the OPO cavity’s free spectral range. Each of the shown power spectra is the average of one hundred power spectra each of which were acquired in 0.5 ms.
## 4 Results and Discussion
The three outputs of the CWSRO (depleted pump, signal, and idler) were recorded on fast photodiodes using a digitizer operating at 3 gigasamples/s. Fast Fourier transforms (FFTs) of these interferograms can be found in Figure 3 for both 1 MHz and 10 MHz comb spacings (left and right panels, respectively). The depleted pump spectrum contains both the carrier tone at 54.2 MHz as well as the 2400-MHz-wide comb which was initially produced in the near-infrared prior to amplification (i.e., Fig. 2c). The shown pump combs contained 2400 and 240 individual comb teeth for the 1 MHz and 10 MHz comb spacing cases, respectively. In order to further visualize the flatness and extent of the optical frequency combs we extracted the FFT magnitudes at the known comb tooth frequencies and normalized them against the carrier tone for the 1 MHz pump, signal, and idler combs (see Fig. 4). The shown idler combs had powers per comb tooth of approximately 40 \(\upmu\)W and 450 \(\upmu\)W for the 1 MHz comb and 10 MHz combs, respectively, and are therefore well suited for high signal-to-noise molecular spectroscopy.
As previously described, complete replication of the pump comb on the idler requires that the pump comb does not contain spectral components spaced by multiples of the cavity FSR. The EO comb carrier and the 54.2 MHz-detuned LO are the dominant components of the pump, so separations from these components at multiples of the cavity FSR are the dominant contributors to multimode oscillation of the signal. While the 10 MHz spaced pump comb is replicated with minimal distortion on the idler, the 1 MHz spaced comb contains teeth which are separated from the comb carrier and LO by multiples of the cavity FSR. As a result, we see significant depletion of the pump and enhancement of the signal and idler at multiples of 530 MHz (see Fig. 3b and 4). Weaker features also can be observed which arise at frequencies which are separated from the LO frequency by the cavity FSR (e.g., 476 and 584 MHz). The signal should only have gain at harmonics of the cavity FSR, so the observation of comb teeth on the signal at frequencies between multiples of the cavity FSR was unexpected. We believe these weak comb teeth are present as there can be some transfer of phase to the signal in a single pass through the crystal.
As an initial spectroscopic demonstration, we passed the idler beam through a 40 cm long cell containing 48 Pa of methane. Normalization of the resulting comb spectrum was
Figure 4: Comb tooth magnitudes normalized against the carrier tone for the pump, idler, and signal outputs of the CWSRO for a 1 MHz spaced comb. The shown spectra are the average of ten spectra each of which was acquired in 1 ms.
performed via a background spectrum recorded when the cell was removed. The resulting transmission spectrum is plotted in Fig. 5 and compared to a HITRAN 2020 [21] fit in which only the center wavelength and background level were floated to account for uncertainty associated with the wavelength meter and optical cell coupling losses. This spectrum contains 2400 individual comb teeth and was recorded in only 0.5 s.
The use of the 1 MHz comb for this demonstration allows us to see the impact of the slight idler comb perturbations when used for spectroscopy. Distortions are clearly present on the recorded spectrum at multiples of the cavity FSR, but the features viewed here are much broader than these distortions and an accurate spectrum can be measured. We note that these distortions can be removed by using a wide spaced comb (e.g., 10 MHz) which does not contain comb components at the cavity FSR.
We believe that the present technique holds numerous advantages in comparison to other mid-infrared comb approaches such as QCL combs [7], difference frequency generation (DFG) combs [8], [22], [23], and synchronously pumped femtosecond OPO-based combs (SySRO combs) [24], [25]. The present approach offers far higher tunability and agility with respect to repetition rate than QCL or SySRO combs where the repetition rate is essentially fixed and limited by either the QCL cavity length (generally near 10 GHz) [7] or the SySRO cavity length (generally a few hundred MHz). The repetition rates used in the present work are determined by the driving frequencies of an EOM, which are frequency agile and digitally controlled. Thus, a single EOM-comb CWSRO system can be used for applications requiring spectral resolution ranging from well less than 1 MHz to more than 1 GHz. In addition, since the local oscillator (or a second comb) can be generated from the same near-infrared laser and simultaneously translated into the mid-infrared by the same CWSRO, there is no need for complicated phase locking, phase correction, or a second comb source, in contrast to these other methods. Finally, in comparison to DFG combs the present method offers far higher optical powers for a given pump power [8], [22], [23].
Figure 5: Transmission spectrum (black points) of a gas cell containing 48 Pa of methane as well as a spectral fit (red line) using HITRAN 2020 [21] parameters. Only the center wavelength and background level were adjusted to account for uncertainty in the wavelength meter and coupling losses of the optical cell. The shown spectrum contains 2400 individual comb teeth spaced by 1 MHz and was the average of 500 spectra each of which was acquired in 1 ms. As predicted by the theory, small distortions in the spectrum are observed for comb teeth occurring at multiples of the CWSRO cavity’s free spectral range from the carrier wavelength.
One area of future work will be extending the bandwidth of the pump combs to reach the bandwidth limits imposed by the phase matching. Broader pump combs could be generated via cascaded modulators or non-linear spectral broadening. As shown earlier, the phase matching condition of the CWSRO is very broad and is expected to accommodate combs as wide as several THz, allowing for wideband multiplexed spectroscopy.
The combination of high resolution, high power, and mutual coherence provided by CWSRO EOM combs is ideally suited to applications such as sub-Doppler spectroscopy, where narrow features must be located within a broad spectral region. In addition, we see strong applications for this approach in areas such as chemical kinetics, optical metrology, communications, and quantum sensing where the flexibility, agility, and high optical power are expected to be transformative.
**Acknowledgements.** We thank D. B. Foote and J. T. Hodges for helpful discussions. Certain equipment, instruments, software, or materials are identified in this paper in order to specify the experimental procedure adequately. Such identification is not intended to imply recommendation or endorsement of any product or service by NIST, nor is it intended to imply that the materials or equipment identified are necessarily the best available for the purpose.
**Disclosures.**ATH: TOPTICA Photonics, Inc. (E,P), MJC: TOPTICA Photonics, Inc. (E,P).
**Data availability.** The data underlying this paper will be publicly available at data.nist.gov.
|
2309.14143 | Nonlinear Filtering of Classical and Quantum Spin Systems | In this paper we consider classical and quantum spin systems on discrete
lattices and in Euclidean spaces, modeled by infinite dimensional stochastic
diffusions in Hilbert spaces. Existence and uniqueness of various notions of
solutions, existence and uniqueness of invariant measures as well as
exponential convergence to equilibrium are known for these models. We formulate
nonlinear filtering problem for these classes of models, derive nonlinear
filtering equations of Fujisaki-Kallianpur-Kunita and Zakai tye, and prove
existence and uniqueness of measure-valued solutions to these filtering
equations. We then establish the Feller property and Markov property of the
semigroups associated with the filtering equations and also prove existence and
uniqueness of invariant measures. Evolution of error covariance equation for
the nonlinear filter is derived. We also derive the nonlinear filtering
equations associated with finitely-additive white noise formulation due to
Kallianpur and Karandikar for the classical and quantum spin systems, and study
existence and uniqueness of measure-valued solutions | Sivaguru S. Sritharan, Saba Mudaliar | 2023-09-25T13:56:17Z | http://arxiv.org/abs/2309.14143v1 | # Nonlinear Filtering of Classical and Quantum Spin Systems
###### Abstract.
In this paper we consider classical and quantum spin systems on discrete lattices and in Euclidean spaces, modeled by infinite dimensional stochastic diffusions in Hilbert spaces. Existence and uniqueness of various notions of solutions, existence and uniqueness of invariant measures, as well as exponential convergence to equilibrium are known for these models. We formulate the nonlinear filtering problem for these classes of models, derive nonlinear filtering equations of Fujisaki-Kallianpur-Kunita and Zakai type, and prove existence and uniqueness of measure-valued solutions to these filtering equations. We then establish the Feller property and Markov property of the semigroups associated with the filtering equations and also prove existence and uniqueness of invariant measures. An evolution equation for the error covariance of the nonlinear filter is derived. We also derive the nonlinear filtering equations associated with the finitely-additive white noise formulation due to Kallianpur and Karandikar for the classical and quantum spin systems, and study existence and uniqueness of measure-valued solutions.
_Key words:_ quantum spin systems, quantum lattice systems, stochastic processes, nonlinear filtering, invariant measures, Markov property, transition semigroup, ergodicity.
Mathematics Subject Classification (2010): 60H30, 81T25, 82C10, 93E11
###### Contents
* 1 Introduction
* 2 Unified Mathematical Formulation of Spin Systems - Stochastic Dissipative Systems and Invariant Measures
* 3 Classical Spin Systems on Discrete Lattices
* 3.1 Diffusions on an Infinite Dimensional Torus
* 3.2 Unbounded Classical Spin Systems
* 3.3 Euclidean Field Theory and Spin Systems on Euclidean Spaces
* 4 Quantum Lattice Systems
* 5 Nonlinear Filtering Formulation and Filtering Equations
* 5.1 Stochastic Calculus Method of Nonlinear Filtering
* 5.2 Existence and Uniqueness of Measure-valued solutions to Filtering Equations
* 5.3 Evolution Equation for Error Covariance
* 6 White Noise Theory of Nonlinear Filtering
## 1. Introduction
Classical and quantum spin systems are extensively studied in the literature [46, 49, 50, 16, 17]. In this paper we take a stochastic partial differential equations approach to spin systems as in [19] and develop a classical nonlinear filtering method for these models. For classical spin systems we consider the diffusions on an infinite dimensional torus studied by Holley and Stroock [28] as well as the Euclidean and lattice systems studied by Da Prato and Zabczyk [19]. For quantum spin systems we consider the models studied in [1] and [19]. Nonlinear filtering for infinite dimensional problems related to nonlinear partial differential equations of fluid dynamics was initiated in [51] and later developed for several fluid dynamic models in [52, 53, 22]. Nonlinear filtering for nonlinear stochastic partial differential equations of reaction-diffusion type originated in [27]. In these papers the measure-valued evolution equations of FKK type [23] and Zakai type [58] were derived and existence and uniqueness of the measure-valued nonlinear filter was established. In this paper we establish similar results for classical and quantum spin systems [1, 19] and then prove convergence and ergodicity results analogous to those for finite dimensional nonlinear filtering problems initiated by Kunita [38] and further developed by several other authors [41, 54, 39, 42, 6, 13, 14, 18]. We also consider the white noise nonlinear filtering equations developed by Kallianpur and Karandikar [34, 35, 36], derive the corresponding equations for the spin systems, and prove existence and uniqueness of measure-valued solutions. For other work on nonlinear filtering of classical spin systems from a different perspective see [45]. We also note that in this paper we develop classical nonlinear filtering for quantum spin systems, even though the white noise filtering in Section 6 has some attributes of quantum theory. Quantum nonlinear filtering for open quantum systems has been developed in a series of papers by V. P. Belavkin [8, 9, 10, 11, 12] (see also the introductory exposition [15]) for the Hudson-Parthasarathy equation [29, 43]. Belavkin-type nonlinear filtering theory for unbounded Hamiltonian and noise operators in the Hudson-Parthasarathy equation is an open problem and will be addressed in future work.
## 2. Unified Mathematical Formulation of Spin Systems - Stochastic Dissipative Systems and Invariant Measures
In this section we will describe certain general results for infinite dimensional stochastic equations driven by a cylindrical Wiener process, as presented in [19, 20]. Let \(H\) and \(U\) be separable Hilbert spaces with norms \(\|\cdot\|,\|\cdot\|_{U}\) and scalar products \(\langle\cdot,\cdot\rangle\), \(\langle\cdot,\cdot\rangle_{U}\). The spaces of all bounded operators from \(U\) into \(H\) and from \(U\) into \(U\) will be denoted by \(\mathcal{L}(U,H)\) and \(\mathcal{L}(U)\), respectively. Let \((\Omega,\mathcal{F},\mathcal{F}_{t},m)\) be a complete filtered probability space and \(W(t),t\geq 0\), be an \(\mathcal{F}_{t}\)-adapted Wiener process defined on \(\Omega\) with values in \(U\) and with covariance operator \(Q\in\mathcal{L}(U)\). Hence for arbitrary \(\phi,\psi\in U\) and \(t,s\geq 0\):
\[E[\langle W(t),\phi\rangle_{U}\langle W(s),\psi\rangle_{U}]=t\wedge s\langle Q \phi,\psi\rangle_{U}.\]
We will consider the general stochastic equation
\[dX=(AX+F(X))dt+BdW, \tag{2.1}\]
\[X(0)=x.\]
Here \(B\in\mathcal{L}(U,H)\), \(A\) is a linear operator with domain \(D(A)\subset H\), \(F:D(F)\to H\) is a nonlinear operator with \(D(F)\subset H\), and both \(A\) and \(F\) are dissipative in the sense defined below. An important observation is that in the classes of problems studied in this paper the covariance operator is not of trace class: \(\operatorname{Tr}Q=\sum_{i}\langle Q\varphi_{i},\varphi_{i}\rangle_{U}=+\infty\), where \(\{\varphi_{i}\}\) is an orthonormal basis of \(U\). In particular, \(Q=I\) is often explicitly invoked.
Let \(E,\|\cdot\|_{E}\) be a Banach space and \(E^{*}\) its dual. For arbitrary \(x\in E\) the subdifferential \(\partial\|x\|_{E}\) of the norm \(\|\cdot\|_{E}\) at \(x\) is given by:
\[\partial\|x\|_{E}:=\{x^{*}\in E^{*}:\|x+y\|_{E}-\|x\|_{E}\geq x^{*}(y),\ \forall y\in E\}.\]
A mapping \(G\) from \(D(G)\subset E\) into \(E\) is said to be dissipative in \(E\) if for arbitrary \(x,y\in D(G)\) there exists \(z^{*}\in\partial\|x-y\|_{E}\) such that
\[z^{*}(G(x)-G(y))\leq 0. \tag{2.2}\]
If in addition, for some \(\alpha>0\) the mapping \((I-\alpha G):D(G)\to E\) is surjective then \(G\) is said to be \(m\)-dissipative.
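For orientation, when \(E=H\) is a Hilbert space the subdifferential of the norm at \(x\neq 0\) is the singleton \(\{x/\|x\|\}\), so the dissipativity condition 2.2 reduces to the familiar monotonicity-type condition
\[\langle G(x)-G(y),x-y\rangle\leq 0,\qquad x,y\in D(G).\]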
Let \(K\subset E\) be a Banach space continuously embedded into \(E\). The part \(G_{K}\) of \(G\) in \(K\) is defined as follows:
\[G_{K}(x)=G(x)\text{ for }x\in D(G_{K}),\]
where
\[D(G_{K})=\{x\in D(G)\cap K:G(x)\in K\}.\]
We will now state two essential hypotheses on the operators \(A,F,B\), the Wiener process \(W(t),t\geq 0\), and on a Banach space \(K,\|\cdot\|_{K}\), continuously and densely embedded into \(H\).
**Hypothesis 2.1**.:
1. _There exists_ \(\eta\in\mathbb{R}\) _such that the operators_ \(A+\eta I\) _and_ \(F+\eta I\) _are_ \(m\)_-dissipative on_ \(H\)_._
2. _The parts in_ \(K\) _of_ \(A+\eta I\) _and_ \(F+\eta I\) _are_ \(m\)_-dissipative on_ \(K\)_._
3. \(K\subset D(F)\) _and_ \(F\) _maps bounded sets in_ \(K\) _into bounded sets in_ \(H\)_._
4. \(K\) _is a reflexive space._
5. \(B\in\mathcal{L}(U,H)\)_._
Let \(W_{A}(t),t\geq 0\) be the solution to the linear equation
\[dZ(t)=AZ(t)dt+BdW(t), \tag{2.3}\]
\[Z(t)=W_{A}(t)=\int_{0}^{t}S(t-s)BdW(s),\ \ t\geq 0,\]
where \(S(t),t\geq 0,\) is the semigroup generated by \(A\) on \(H\).
**Hypothesis 2.2**.: _The process \(W_{A}(t),t\geq 0\), is continuous in \(H\), takes values in the domain \(D(F_{K})\) of the part of \(F\) in \(K\) and for any \(T>0\) we have_
\[\sup_{t\in[0,T]}(\|W_{A}(t)\|_{K}+\|F_{K}(W_{A}(t))\|_{K})<+\infty,\ \ m\text{ a.s.} \tag{2.4}\]
**Definition 2.1**.: _An \(H\)-valued continuous \(\mathcal{F}_{t}\)-adapted process \(X(t),t\geq 0\), is said to be a strong solution to 2.1 if it satisfies \(m\)-a.s. the equation_
\[X(t)=x+\int_{0}^{t}(AX(s)+F(X(s)))ds+BW(t),\ \ t\geq 0, \tag{2.5}\]
_and it is a mild solution if it satisfies the following integral equation:_
\[X(t)=S(t)x+\int_{0}^{t}S(t-s)F(X(s))ds+W_{A}(t),\ \ t\geq 0. \tag{2.6}\]
_If for an \(H\)-valued process \(X\) there exists a sequence \(X_{n}\) of mild solutions to 2.1 such that \(m\)-a.s. \(X_{n}\to X\) uniformly on any interval \([0,T]\), then \(X\) is said to be a generalized solution to 2.1._
Note that each strong solution is a mild solution and each mild solution is a generalized solution.
**Theorem 2.1**.: _[_19, 20_]__Assume that Hypothesis 2.1 and 2.2 are satisfied. Then for arbitrary \(x\in K\) there exists a unique mild solution of 2.1 and for arbitrary \(x\in H\) there exists a unique generalized solution \(X(t,x),t\geq 0\) of 2.1. If the operator \(A\) and its part in \(K\) are bounded then solutions for \(x\in K\) are strong._
The above theorem implies the following.
**Theorem 2.2**.: \(X(t,x),t\geq 0\) _is a Markov process with transition semigroup \(P_{t},t\geq 0\), given by_
\[P_{t}\varphi(x)=E[\varphi(X(t,x))],\ \ t\geq 0,x\in H,\varphi\in C_{b}(H), \tag{2.7}\]
_where \(C_{b}(H)\) denotes the space of all uniformly continuous and bounded functions on \(H\). Moreover, the transition semigroup is Feller:_
\[P_{t}:C_{b}(H)\to C_{b}(H),\quad t\geq 0.\]
The next theorem concerns existence and uniqueness of invariant measures for 2.1 and the asymptotic behavior of the transition semigroup \(P_{t},t\geq 0\).
**Theorem 2.3**.: _[_19, 20_]_ _If in addition to Hypotheses 2.1, 2.2,_
1. _there exist_ \(\eta_{1},\eta_{2}\in\mathbb{R}\) _such that_ \(\omega=\eta_{1}+\eta_{2}>0\) _and operators_ \(A+\eta_{1}I\)_,_ \(F+\eta_{2}I\) _are dissipative in_ \(H\)_,_
2. _one has_ \[\sup_{t\geq 0}E(\|W_{A}(t)\|+\|F(W_{A}(t))\|)<+\infty.\]
_Then there exists a unique invariant measure \(\mu\) for the semigroup \(P_{t},t\geq 0\), such that \(P_{t}^{*}\mu=\mu\). Moreover, for all bounded and Lipschitz continuous functions \(\varphi\) on \(H\) one has_
\[|P_{t}\varphi(x)-\int_{H}\varphi(y)\mu(dy)|\leq(C+2\|x\|)e^{-\omega t}\|\varphi\|_{\text{Lip}} \tag{2.8}\]
_where_
\[C=\sup_{t\geq 0}E\bigg{(}\|W_{A}(t)\|+\frac{1}{\omega}\|F(W_{A}(t))\|\bigg{)}.\]
## 3. Classical Spin Systems on Discrete Lattices
### Diffusions on an Infinite Dimensional Torus
We will discuss one of the simplest classical spin systems studied by Holley and Stroock [28]. Let \((\Omega,\Sigma,m)\) be a complete probability space and let \(x(t)=(x_{\gamma}(t))_{\gamma\in\mathbb{Z}^{d}}\in\mathbb{R}^{\mathbb{Z}^{d}}\) (resp. \(x(t)\in\mathbf{T}^{\mathbb{Z}^{d}}\)) be a spin system that evolves according to:
\[x_{\gamma}(t,x)=x_{\gamma}+\int_{0}^{t}\sigma_{\gamma}(x(s,x))dB_{\gamma}(s)+ \int_{0}^{t}b_{\gamma}(x(s,x))ds,\gamma\in\mathbb{Z}^{d},t>0, \tag{3.1}\]
where for fixed \(\gamma\in\mathbb{Z}^{d}\), \(B_{\gamma}\) is a one dimensional standard Brownian motion, \(x\in\mathbb{R}^{\mathbb{Z}^{d}}\) (resp. \(x\in\mathbf{T}^{\mathbb{Z}^{d}}\)), and \(\sigma_{\gamma}(\cdot):\mathbb{R}^{\mathbb{Z}^{d}}\to\mathbb{R}\), \(b_{\gamma}(\cdot):\mathbb{R}^{\mathbb{Z}^{d}}\to\mathbb{R}\) (resp. defined on \(\mathbf{T}^{\mathbb{Z}^{d}}\)) and their derivatives are smooth bounded functions such that \(\sigma_{\gamma}(x)=\sigma_{\gamma}(y)\) if \(x_{l}=y_{l}\) for all \(l\in\mathbb{Z}^{d}\) satisfying \(|l-\gamma|\leq L\). We will start with a pathwise solvability result:
**Lemma 3.1**.: _[_28_]_ _For each \(x\in\mathbb{R}^{\mathbb{Z}^{d}}\) there exists a unique solution \(x_{\gamma}(\cdot,x)\) to 3.1. Moreover, if \(x,y\in\mathbb{R}^{\mathbb{Z}^{d}}\), then for each \(T>0\):_
\[E\Bigg{[}\sum_{\gamma\in\mathbb{Z}^{d}}\sup_{0\leq t\leq T}\frac{1}{2^{|\gamma |}}|x_{\gamma}(t,x)-x_{\gamma}(t,y)|^{2}\Bigg{]}\leq A_{T}\Bigg{(}\sum_{\gamma \in\mathbb{Z}^{d}}\frac{1}{2^{|\gamma|}}|x_{\gamma}-y_{\gamma}|^{2}\Bigg{)},\]
_where \(A_{T}<\infty\) depends only on \(T>0,L,\) and \(\max_{\gamma}\|\sigma_{\gamma}\|_{C^{1}(\mathbb{R}^{2d})}\). Finally, if \(x^{N}(\cdot,x)_{\gamma}\) is defined by 3.1 with \(\sigma_{\gamma}(\cdot)=b_{\gamma}(\cdot)\equiv 0\) for \(|\gamma|>N\) then for all \(T>0\),_
\[E\Bigg{[}\sum_{\gamma\in\mathbb{Z}^{d}}\sup_{0\leq t\leq T}\frac{1}{2^{|\gamma |}}|x_{\gamma}(t,x)-x_{\gamma}^{N}(t,x)|^{2}\Bigg{]}\to 0\text{ as }N\to\infty.\]
Let us now recall the solvability of the martingale problem given by Holley and Stroock [28]. Let \(\mathbf{T}=\{z\in\mathbb{C};|z|=1\}\). For points in \(\mathbf{T}^{\mathbb{Z}}\) we will use \(\eta\) to denote both the point in \(\mathbf{T}^{\mathbb{Z}}\) and the element \(\alpha\) of \(\left([0,2\pi)\right)^{\mathbb{Z}}\) such that \(\eta_{k}=e^{i\alpha_{k}},k\in\mathbb{Z}\). Let \(\mathcal{D}\) denote the set of smooth functions on \(\mathbf{T}^{\mathbb{Z}}\) (sometimes called cylindrical test functions) which depend only on a finite number of coordinates.
**Theorem 3.1**.: _[_28_]_ _Let \(\Omega\) denote the Polish space \(C([0,\infty);\mathbf{T}^{\mathbb{Z}})\). For \(\omega\in\Omega,\eta(t,\omega)\in\mathbf{T}^{\mathbb{Z}}\) is the position of \(\omega\) at time \(t\geq 0\). We set \(\mathcal{F}_{t}:=\sigma(\eta(s);0\leq s\leq t)\) to be the canonical filtration and \(\mathcal{F}=\sigma(\cup_{t\geq 0}\mathcal{F}_{t})\) and then \(\mathcal{F}\) coincides with the Borel field over \(\Omega\). Let \(\sigma_{k},b_{k},k\in\mathbb{Z}\) be smooth functions on \(\mathbf{T}^{\mathbb{Z}}\) such that for a given \(k\in\mathbb{Z}\) the functions \(\sigma_{k}\) and \(b_{k}\) depend only on \(\{\eta_{l};|l-k|\leq L\}\). Also assume that \(\sigma_{k}\) and \(b_{k}\) and each of their derivatives are bounded independently of \(k\in\mathbb{Z}\). For \(f\in\mathcal{D}\), define_
\[\mathcal{L}f(\eta)=\sum_{k\in\mathbb{Z}}\biggl{(}\frac{1}{2}\sigma_{k}^{2}( \eta)\frac{\partial^{2}f}{\partial\eta_{k}^{2}}(\eta)+b_{k}(\eta)\frac{ \partial f}{\partial\eta_{k}}(\eta)\biggr{)},\ \ \eta\in\mathbf{T}^{\mathbb{Z}}.\]
_Then for each \(\eta\in\mathbf{T}^{\mathbb{Z}}\) there is exactly one probability measure \(P_{\eta}\) on \((\Omega,\mathcal{F})\) such that \(P_{\eta}(\eta(0)=\eta)=1\) and \(\left\{f(\eta(t))-\int_{0}^{t}\mathcal{L}f(\eta(s))ds,\mathcal{F}_{t},P_{\eta}\right\}\) is a martingale for each \(f\in\mathcal{D}\). Moreover, if \(\Phi:\mathbb{R}^{\mathbb{Z}}\to\mathbf{T}^{\mathbb{Z}}\) is the map defined so that \(\Phi_{k}(x)=e^{i\alpha_{k}}\), where \(\alpha_{k}\in[0,2\pi)\) and \(\alpha_{k}=x_{k}\) mod \((2\pi)\), then for any \(n\geq 1\), \(F\in B((\mathbf{T}^{\mathbb{Z}})^{n})\), and \(0\leq t_{1}<t_{2}<\cdots<t_{n}\):_
\[E^{P_{\eta}}[F(\eta(t_{1}),\cdots,\eta(t_{n}))]=E^{m}[F\circ\Phi^{n}(x(t_{1},\eta),\cdots,x(t_{n},\eta))],\]
_where \(\Phi^{n}=\Phi\otimes\cdots\otimes\Phi\) (\(n\)-times ), and \(x(\cdot,\eta)\) is the solution to 3.1 with \(x=\eta\) and the \(\sigma_{k}\) and \(b_{k}\) in 3.1 replaced by \(\sigma_{k}\circ\Phi\) and \(b_{k}\circ\Phi\), respectively. Finally, the family \(\left\{P_{\eta};\eta\in\mathbf{T}^{\mathbb{Z}}\right\}\) is Feller continuous and strong Markov._
### Unbounded Classical Spin Systems
Unbounded classical spin systems have been studied by several authors to establish the construction of the Markov semigroup, the existence and uniqueness of invariant measures, ergodicity, and the exponential convergence of the Markov semigroup to the equilibrium state characterized by the invariant measure [57, 19, 20]. We will recall some of the essential results below. We consider the Markov process \(X=\{X_{\gamma}\}_{\gamma\in\mathbb{Z}^{d}}\) satisfying an infinite system of Ito equations
\[dX_{\gamma}(t)=\Bigg{(}\sum_{j}a_{\gamma j}X_{j}(t)+f(X_{\gamma}(t))\Bigg{)} dt+dW_{\gamma}(t), \tag{3.2}\]
\[X_{\gamma}(0)=x_{\gamma},\ \ \gamma\in\mathbb{Z}^{d},t\geq 0,\]
where \(W_{\gamma}\), \(\gamma\in\mathbb{Z}^{d}\), is a Wiener process on \((\Omega,\Sigma,m)\) with values in \(U=l^{2}(\mathbb{Z}^{d})\) and with the covariance operator \(Q\in\mathcal{L}(U)\). In particular, if \(Q=I\) then the processes \(W_{\gamma},\gamma\in\mathbb{Z}^{d}\), are independent standard real-valued Wiener processes. An interesting special case of the above system is:
\[dX_{\gamma}(t)=((\Delta_{d}-\alpha)X_{\gamma}(t)+f(X_{\gamma}(t)))dt+dW_{ \gamma}(t), \tag{3.3}\]
\[X_{\gamma}(0)=x_{\gamma},\ \ \gamma\in\mathbb{Z}^{d},t\geq 0,\]
where \(\Delta_{d}\) is the \(d\)-dimensional discrete Laplacian and \(\alpha\) is a constant.
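To make the lattice dynamics concrete, the following minimal sketch simulates a finite periodic truncation of equation 3.3 in dimension \(d=1\) by an Euler-Maruyama scheme. The box size, the step size, and the cubic drift \(f(\zeta)=-\zeta^{3}\) are illustrative assumptions only, chosen to match the polynomial examples given below; the sketch is not part of the theory above.

```python
import numpy as np

def simulate_truncated_spin_system(n=64, alpha=1.0, dt=1e-3, steps=20_000, seed=0):
    """Euler-Maruyama sketch of a periodic finite truncation of equation (3.3):
    dX_g = ((Delta_d - alpha) X_g + f(X_g)) dt + dW_g, here with f(x) = -x**3."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)                           # initial spins x_gamma
    for _ in range(steps):
        lap = np.roll(x, 1) + np.roll(x, -1) - 2.0 * x   # discrete Laplacian, d = 1
        drift = (lap - alpha * x) - x**3                 # (Delta_d - alpha) X + f(X)
        x = x + drift * dt + np.sqrt(dt) * rng.standard_normal(n)
    return x

if __name__ == "__main__":
    spins = simulate_truncated_spin_system()
    print("empirical moments:", spins.mean(), (spins**2).mean())
```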
In the light of general theory presented in Section 2, we define operators \(A\) and \(F\) as follows:
\[A(X_{\gamma})=\left(\sum_{j\in\mathbb{Z}^{d}}a_{j\gamma}X_{j}\right)\!,\ \ X=(X_{\gamma})\in H,\]
\[F(X_{\gamma})=(f(X_{\gamma})),\ \ X=(X_{\gamma})\in H.\]
The following result [19, 20, 44] is useful in establishing the boundedness of \(A\).
**Theorem 3.2**.: _Assume that_
1. \(\sup_{\gamma\in\mathbb{Z}^{d}}\sum_{j\in\mathbb{Z}^{d}}|a_{\gamma j}|=\alpha<+\infty\)_._
2. _There exists_ \(\beta>0\) _such that_ \[\sum_{\gamma\in\mathbb{Z}^{d}}|a_{\gamma j}|\rho(\gamma)\leq\beta\rho(j),\ \ j\in\mathbb{Z}^{d}.\] _Then the formula_ \[A_{p}(X_{\gamma})=\left(\sum_{j\in\mathbb{Z}^{d}}a_{j\gamma}X_{j}\right)\!,\ \ X=(X_{\gamma})\in l_{\rho}^{p}(\mathbb{Z}^{d}),\] _defines a bounded operator on_ \(l_{\rho}^{p}(\mathbb{Z}^{d})\)_, (i.e._ \(A\in\mathcal{L}(l_{\rho}^{p},l_{\rho}^{p})\)_) for all_ \(p\in[1,+\infty]\)_, with the norm not greater than_ \[\alpha^{1/q}\beta^{1/p},\ \ \frac{1}{p}+\frac{1}{q}=1.\]
_In particular \(A\in\mathcal{L}(l_{\rho}^{2},l_{\rho}^{2})\), with operator norm less than or equal to \(\sqrt{\alpha\beta}\)._
**Corollary 3.1**.: _If, for some \(\alpha<\infty\),_
\[\sup_{k\in\mathbb{Z}^{d}}\sum_{j\in\mathbb{Z}^{d}}|a_{k,j}|\leq\alpha\ \text{and}\ \sup_{j\in\mathbb{Z}^{d}}\sum_{k\in\mathbb{Z}^{d}}|a_{k,j}|\leq\alpha\]
_then \(A\in\mathcal{L}(l^{p},l^{p})\) with norm \(\leq\alpha\)._
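As a quick numerical illustration of Corollary 3.1, the sketch below generates a banded interaction matrix on a finite box and checks that its spectral norm does not exceed the maximum of the absolute row and column sums; the band width and the random entries are assumptions made only for this experiment.

```python
import numpy as np

def schur_bound_check(n=200, bandwidth=2, seed=0):
    """Finite-dimensional check of Corollary 3.1: for a banded matrix (a_{kj}),
    the l^2 operator norm is bounded by alpha = max of row/column absolute sums."""
    rng = np.random.default_rng(seed)
    A = np.zeros((n, n))
    for k in range(n):
        for j in range(max(0, k - bandwidth), min(n, k + bandwidth + 1)):
            A[k, j] = rng.uniform(-1.0, 1.0)
    alpha = max(np.abs(A).sum(axis=0).max(), np.abs(A).sum(axis=1).max())
    op_norm = np.linalg.norm(A, 2)                 # spectral norm of the truncation
    return op_norm, alpha, bool(op_norm <= alpha + 1e-12)

print(schur_bound_check())
```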
For example, the matrix coefficients \(a_{\gamma,j}\) and positive weight \(\rho\) can be assumed to satisfy the following conditions:
**Definition 3.1**.: _The interactions \(a_{\gamma,j}\) have bounded range if there are constants \(R,M>0\) such that_
\[a_{\gamma,j}=0\ \text{if}\ |\gamma-j|>R\ \text{and}\ |a_{\gamma,j}|\leq M\ \text{for all}\ \gamma,j\in\mathbb{Z}^{d}. \tag{3.4}\]
**Lemma 3.2**.: _Assume bounded range and that_
\[|\frac{\rho(\gamma)}{\rho(j)}|\leq M\ \text{if}\ |\gamma-j|\leq R,\ \text{and}\ \sum_{ \gamma\in\mathbb{Z}^{d}}\rho(\gamma)<+\infty, \tag{3.5}\]
_then \(A\in\mathcal{L}(l_{\rho}^{p},l_{\rho}^{p})\) for every \(p\geq 1\)._
Examples of weights \(\rho\) that fit the conditions of Lemma 3.2 are:
\[\rho_{\kappa}(k)=e^{-\kappa|k|}\ \text{or}\ \rho^{\kappa}(k)=\frac{1}{1+ \kappa|k|^{r}},\kappa>0,r>d,k\in\mathbb{Z}^{d}.\]
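For instance, for the exponential weight \(\rho_{\kappa}\) the conditions 3.5 of Lemma 3.2 can be verified directly using only the triangle inequality on \(\mathbb{Z}^{d}\):

\[\frac{\rho_{\kappa}(\gamma)}{\rho_{\kappa}(j)}=e^{-\kappa(|\gamma|-|j|)}\leq e^{\kappa|\gamma-j|}\leq e^{\kappa R}\ \ \text{if}\ |\gamma-j|\leq R,\qquad\sum_{\gamma\in\mathbb{Z}^{d}}e^{-\kappa|\gamma|}<+\infty,\]

so 3.5 holds with the constant \(e^{\kappa R}\); a similar elementary estimate works for \(\rho^{\kappa}\) when \(r>d\).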
**Definition 3.2**.: _The local interaction function \(f:\mathbb{R}\to\mathbb{R}\) is such that \(f=f_{0}+f_{1}\), where \(f_{1}\) is Lipschitz continuous, the function \(\zeta\mapsto f_{0}(\zeta)+\eta\zeta\) is continuous and decreasing for some \(\eta\in\mathbb{R}\), and for some \(s\geq 1\) and \(c_{0}>0\),_
\[|f_{0}(\zeta)|\leq c_{0}(1+|\zeta|^{s}),\zeta\in\mathbb{R}.\]
For example, one may take \(s=2n+1\) and
\[f_{0}(\zeta)=-\zeta^{2n+1}+\sum_{k=0}^{2n}b_{k}\zeta^{2n-k},\ \ \zeta\in\mathbb{R},n \in\mathbb{N}\cup\{0\}.\]
**Theorem 3.3**.: _Let \(H=l_{\rho}^{2}(\mathbb{Z}^{d})\), \(K=l_{\rho}^{2s}(\mathbb{Z}^{d})\) and let the operators \(A\) and \(F\) be as above. Then_
1. _For arbitrary_ \(x\in K=l_{\rho}^{2s}(\mathbb{Z}^{d})\) _there exists a unique strong solution_ \(X(t,x),t\geq 0\)_._
2. _For arbitrary_ \(x\in H=l_{\rho}^{2}(\mathbb{Z}^{d})\)_, there exists a unique generalized solution_ \(X(t,x),t\geq 0\)_, and the transition semigroup_ \[P_{t}\varphi(x)=E(\varphi(X(t,x))),\ \ t\geq 0,x\in H,\varphi\in C_{b}(H),\] _is Feller._
**Theorem 3.4**.: _Assume that conditions 3.4, 3.5 are satisfied. Let \(H=l_{\rho}^{2}(\mathbb{Z}^{d})\), \(K=l_{\rho}^{2s}(\mathbb{Z}^{d})\) and let the operators \(A\) and \(F\) be given as above. Then_
1. _For arbitrary_ \(x\in K=l_{\rho}^{2s}(\mathbb{Z}^{d})\) _there exists a unique strong solution_ \(X(t,x),t\geq 0\) _of_ 3.2_._
2. _For arbitrary_ \(x\in H=l_{\rho}^{2}(\mathbb{Z}^{d})\)_, there exists a unique generalized solution_ \(X(t,x),t\geq 0\) _of_ 3.2 _and the transition semigroup_ \[P_{t}\varphi(x)=E(\varphi(X(t,x))),\ \ t\geq 0,x\in H,\varphi\in C_{b}(H),\] _is Feller._
**Theorem 3.5**.: _Assume, in addition to the conditions of 3.4, that for some \(\eta>0\) the operator \(A+\eta I\), restricted to \(l^{2}(\mathbb{Z}^{d})\), is dissipative, that \(f_{0}\) is decreasing and \(\eta-\|f_{1}\|_{\text{Lip}}>\omega>0\). Then there exists \(\kappa_{0}>0\) such that in the spaces \(l_{\rho}^{2}(\mathbb{Z}^{d})\) with \(\rho=\rho^{\kappa}\) or with \(\rho=\rho_{\kappa}\), given as above, and \(\kappa\in]0,\kappa_{0}[\), equation 3.2 has a unique generalized solution. The semigroup \(P_{t},t\geq 0\) has a unique invariant measure \(\mu\) on \(H\) and there exists \(C>0\) such that, for any bounded and Lipschitz function \(\varphi\) on \(H\), all \(t>0\) and all \(x\in H\):_
\[|P_{t}\varphi(x)-\int_{H}\varphi(x)\mu(dx)|\leq(C+2|x|)e^{-\omega t}\|\varphi\|_{\text{Lip}}.\]
These results have been extended to spin systems forced by Levy noise in [44].
### Euclidean Field Theory and Spin Systems on Euclidean Spaces
Spin systems on Euclidean spaces can be modeled by the following stochastic partial differential equation:
\[dX(t,\zeta)=[(\Delta-\alpha)X(t,\zeta)+f(X(t,\zeta))]dt+dW(t,\zeta), \tag{3.6}\]
\[X(0,\zeta)=x(\zeta),\ \ \zeta\in\mathbb{R}^{d},t>0,\]
where \(\Delta\) is the Laplace operator, \(\alpha\) is a nonnegative constant and \(W(t,\zeta),\zeta\in\mathbb{R}^{d},\)\(t>0,\) an infinite dimensional Wiener process with covariance operator \(Q,\) defined on a probability space \((\Omega,\mathcal{F},m).\)
**Hypothesis 3.1**.:
1. \[Q=I\ \text{on}\ U=L^{2}(\mathbb{R}^{d}),\] (3.7) _or_
2. \(Q\) _is a convolution operator on_ \(U=L^{2}(\mathbb{R}^{d})\)_:_ \[Qu(\zeta)=q*u(\zeta),\zeta\in\mathbb{R}^{d},u\in L^{2}(\mathbb{R}^{d}),\] (3.8) _where_ \(q(\zeta)=\int_{\mathbb{R}^{d}}e^{i\zeta\cdot\eta}g(\eta)d\eta,\zeta\in \mathbb{R}^{d},\) _with_ \(g\geq 0\) _and_ \(q,g\in L^{1}(\mathbb{R}^{d}).\)__
**Theorem 3.6**.: _Assume that \(f\) is as in Definition 3.2 and either_
1. \(d=1\) _and 3.7 or 3.8 holds or_
2. \(d>1\) _and 3.8 holds._
_Then the equation 3.6 has a unique generalized solution in \(L^{2}_{\rho}(\mathbb{R}^{d})\) where \(\rho\) is given either by \(\rho_{\kappa}\) or by \(\rho^{\kappa}\). If \(x\in L^{2s}_{\rho}(\mathbb{R}^{d})\) then the generalized solution is mild._
**Theorem 3.7**.: _In addition to the conditions of Theorem 3.6 assume that the function \(f_{0}\) is decreasing and \(\alpha-\|f_{1}\|_{\text{Lip}}>\omega>0\). Then there exists \(\kappa_{0}>0\) such that the semigroup \(P_{t},t\geq 0\) corresponding to the solution of equation 3.6 has a unique invariant measure both in \(H=L^{2}_{\rho^{\kappa}}(\mathbb{R}^{d})\) and \(H=L^{2}_{\rho_{\kappa}}(\mathbb{R}^{d})\), for any \(\kappa\in]0,\kappa_{0}[.\) Moreover there exists \(C>0\) such that for any bounded Lipschitz function \(\varphi\) on \(H\), all \(t>0\) and all \(x\in H\)_
\[|P_{t}\varphi(x)-\int_{H}\varphi(x)\mu(dx)|\leq(C+2\|x\|)e^{-\omega t}\|\varphi\|_{\text{Lip}}. \tag{3.9}\]
## 4. Quantum Lattice Systems
One of the simplest quantum lattice systems is a combination of the classical and Euclidean spin systems formulated in the previous sections (see [19, 20]):
\[dX_{\gamma}(t,\zeta)=\left(\mathcal{A}X_{\gamma}(t,\zeta)+\sum_{j\in\mathbb{Z}^{d}}a_{\gamma j}X_{j}(t,\zeta)+\mathcal{F}(X_{\gamma}(t,\zeta))\right)dt+dW_{\gamma}(t,\zeta), \tag{4.1}\]
\[X_{\gamma}(0,\zeta)=x_{\gamma}(\zeta),\ \ \zeta\in[0,1],t>0,\]
where \(\mathcal{A}\), \(\mathcal{F}\) are in general unbounded, respectively linear and nonlinear operators on a Hilbert space \(\mathcal{H}\), \((a_{\gamma j})_{\gamma,j\in\mathbb{Z}^{d}}\) is a given matrix with real elements and \((W_{\gamma})_{\gamma\in\mathbb{Z}^{d}}\) is a family of independent cylindrical Wiener processes on \(\mathcal{H}\). We will assume for simplicity that \(\mathcal{H}=L^{2}(0,1)\), \(\mathcal{A}=\frac{d^{2}}{d\zeta^{2}}-\alpha,\)
\[D(\mathcal{A})=\big{\{}x\in H^{2}(0,1):x(0)=x(1),x^{\prime}(0)=x^{\prime}(1) \big{\}},\]
\[\mathcal{F}(x)(\zeta)=f(x(\zeta)),\ \ \zeta\in[0,1],\]
\[D(\mathcal{F})=\big{\{}x\in L^{2}(0,1):f(x)\in L^{2}(0,1)\big{\}},\]
and \(f\) and the matrix \(\{a_{\gamma j}\}_{\gamma,j\in\mathbb{Z}^{d}}\) satisfy the conditions of Definition 3.2 and Definition 3.1, respectively. In order to frame 4.1 into the general form of equation 2.1 we define:
\[H=l_{\rho}^{2}(L^{2}(0,1))=\Bigg{\{}(x_{\gamma})\in\mathcal{H}^{(\mathbb{Z}^{d} )}:\sum_{\gamma\in\mathbb{Z}^{d}}\rho(\gamma)\|x_{\gamma}\|_{\mathcal{H}}^{2} <\infty\Bigg{\}},\]
\[K=l_{\rho}^{2s}(L^{2s}(0,1))=\Bigg{\{}(x_{\gamma})\in(L^{2s}(0,1))^{(\mathbb{Z} ^{d})}:\sum_{\gamma\in\mathbb{Z}^{d}}\rho(\gamma)\|x_{\gamma}\|_{L^{2s}(0,1)}^ {2}<\infty\Bigg{\}},\]
where \(\rho=\rho^{\kappa}\) or \(\rho=\rho_{\kappa}\), \(\kappa>0\) as in Section 3.2. Let \(A=A_{0}+A_{1}\) where \(A_{1}\) is a bounded linear operator on \(H\) given by
\[A_{1}(x_{\gamma})=\left(\sum_{j\in\mathbb{Z}^{d}}a_{\gamma j}x_{j}\right),\ \ x=(x_{\gamma})\in D(A_{1})=H,\]
and \(A_{0}(x_{\gamma})=(\mathcal{A}x_{\gamma})\), \(x=(x_{\gamma})\in D(A_{0})\),
\[D(A_{0})=\Bigg{\{}(x_{\gamma})\in H:\sum_{\gamma\in\mathbb{Z}^{d}}\rho(\gamma)\|\mathcal{A}x_{\gamma}\|_{\mathcal{H}}^{2}<\infty\Bigg{\}}.\]
\(A_{1}\) is bounded by a generalization of Theorem 3.2, and \(A_{0}+\eta I\) on \(H\) and its restriction \(A_{0\rho}\) to \(K\) are \(m\)-dissipative for sufficiently small \(\eta\). Let us define
\[F(x_{\gamma})=(\mathcal{F}x_{\gamma}),\ \ x=(x_{\gamma})\in D(F),\]
\[D(F)=\Bigg{\{}(x_{\gamma})\in H:\sum_{\gamma\in\mathbb{Z}^{d}}\rho(\gamma)\| \mathcal{F}x_{\gamma}\|_{\mathcal{H}}^{2}<\infty\Bigg{\}}.\]
**Theorem 4.1**.: _Assume that conditions of Definition 3.2 hold. Then for arbitrary \(x\in H\) equation 4.1 has a unique generalized solution \(X(\cdot,x)\). If \(x\in K\) the solution is mild._
**Theorem 4.2**.: _In addition to the conditions of Theorem 4.1 assume that the function \(f_{0}\) is decreasing and \(\alpha-\|f_{1}\|_{\text{Lip}}>\omega>0\). Then there exists \(\kappa_{0}>0\) such that the semigroup \(P_{t},t\geq 0\) corresponding to the solution of equation 4.1 has a unique invariant measure \(\mu\) both in \(H=l^{2}_{\rho^{\kappa}}(L^{2}(0,1))\) and \(H=l^{2}_{\rho_{\kappa}}(L^{2}(0,1))\), for any \(\kappa\in]0,\kappa_{0}[\). Moreover there exists \(C>0\) such that for any bounded Lipschitz function \(\varphi\) on \(H\), all \(t>0\) and all \(x\in H\)_
\[|P_{t}\varphi(x)-\int_{H}\varphi(x)\mu(dx)|\leq(C+2\|x\|)e^{-\omega t}\|\varphi\|_{\text{Lip}}. \tag{4.2}\]
## 5. Nonlinear Filtering Formulation and Filtering Equations
### Stochastic Calculus Method of Nonlinear Filtering
Nonlinear filtering theory for nonlinear stochastic partial differential equations of fluid mechanics and reacting and diffusing systems was initiated in [51, 27] and further developed for stochastic models of fluid mechanics with Levy noise in [52, 53, 22]. The nonlinear filtering problem for the class of spin systems studied in this paper is formulated as follows. The key mathematical equations of the stochastic calculus method of nonlinear filtering are the Kallianpur-Striebel formula, the Fujisaki-Kallianpur-Kunita (FKK) equation, the Zakai equation and Kunita's semigroup versions of the FKK and Zakai equations, all of which will be developed below. We will also establish the equivalence of the FKK and Zakai equations with their Kunita counterparts in the spirit of Szpirglas [55]. We call the process \(X(t)\in H,t\geq 0\), the signal process; it is governed by the evolution formulated in the earlier sections as:
\[dX=(AX+F(X))dt+BdW, \tag{5.1}\]
\[X(0)=x.\]
We note here that \(H\) is compact for the finitely truncated classical spin system, locally compact for the classical spin system, and a general Hilbert space for the Euclidean fields and the quantum lattice spin system, as formulated in Sections 2, 3 and 4.
The measurement process \(Y(t)\in\mathbb{R}^{N},t\geq 0\) is defined as
\[dY=h(X)dt+dZ, \tag{5.2}\]
where \(Z(t)\in\mathbb{R}^{N}\) is an \(N\)-dimensional Wiener process which may or may not be correlated with \(W(t)\). Let us denote by \(\mathcal{F}_{t}^{Y}\) the filtration generated by the sensor data over the time period \(0\leq s\leq t\) (the sigma algebra generated by the past measurements):
\[\mathcal{F}_{t}^{Y}=\sigma\{Y_{s},0\leq s\leq t\}.\]
Our goal is to study the time evolution of the conditional expectation \(E[f(X(t))|\mathcal{F}_{t}^{Y}],t\geq 0\), called the nonlinear filter, where \(f\) is some measurable function. It is also the least-squares best estimate of \(f(X(t))\) given the past measurements. We will first consider the special case where the signal (spin system dynamics) noise and the observation noise are independent. The following Bayes formula, known as the Kallianpur-Striebel formula, can be derived following [30]: Let \(g\) be an \(\mathcal{F}_{t}^{X}\)-measurable function \(g(\cdot):H\to\mathbb{R}\) integrable on \((\Omega,\Sigma,m)\), \(0\leq t\leq T\). We may assume that the processes in 5.2 are defined on the product space \((\Omega\times\Omega_{0},\Sigma\times\Sigma_{0},m\times m_{0})\) where \(Z\) is a Wiener process on \((\Omega_{0},\Sigma_{0},m_{0})\). Then
\[E_{m}[g|\mathcal{F}_{t}^{Y}]=\frac{\vartheta_{t}^{Y}(g)}{\vartheta_{t}^{Y}(1)} \tag{5.3}\]
where
\[\vartheta_{t}^{Y}(g)=\int_{\Omega}gq_{t}dm\]
with
\[q_{t}=\exp\biggl{\{}\int_{0}^{t}h(X(s))\cdot dY(s)-\frac{1}{2}\int_{0}^{t}|h( X(s))|^{2}ds\biggr{\}}.\]
From the Kallianpur-Striebel formula we can derive the nonlinear filtering equations of Zakai and FKK type using the Ito formula in the special case where the signal and observation noise are independent. However, we can proceed as follows to obtain the filtering equations in the general case.
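For intuition, the Kallianpur-Striebel formula lends itself to a weighted Monte Carlo (particle) approximation in the independent-noise case: simulate independent copies of the signal under \(m\), weight each path by a discretisation of \(q_{t}\), and form the normalised weighted average. The sketch below is only an illustration of the formula for a finite-dimensional truncation of the signal; the array shapes and the externally supplied signal paths are assumptions, not part of the theory above.

```python
import numpy as np

def ks_filter_estimate(y_increments, dt, h, g, signal_paths):
    """Weighted Monte Carlo sketch of the Kallianpur-Striebel formula:
    E[g(X(t)) | F_t^Y]  ~=  sum_i w_i g(X^i(t)) / sum_i w_i,  where
    log w_i = sum_k h(X^i(t_k)) . dY_k - 0.5 * sum_k |h(X^i(t_k))|^2 dt
    and X^1, ..., X^M are independent signal paths simulated under m."""
    n_particles = signal_paths.shape[0]            # shape (M, n_steps + 1, dim_x)
    log_w = np.zeros(n_particles)
    for k, dY in enumerate(y_increments):          # each dY has shape (dim_y,)
        hx = np.array([h(signal_paths[i, k]) for i in range(n_particles)])
        log_w += hx @ dY - 0.5 * np.sum(hx**2, axis=1) * dt
    w = np.exp(log_w - log_w.max())                # subtract max for stability
    gx = np.array([g(signal_paths[i, -1]) for i in range(n_particles)])
    return float(np.sum(w * gx) / np.sum(w))
```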
Let us define the formal infinitesimal generator \(\mathcal{A}\) of the process \(X(t)\) using the transition semigroup \(P_{t}\) of \(X(t)\) in the following way: for a function \(f(\cdot):H\to\mathbb{R}\), we set \(P_{t}f(x)=E[f(X(t))|X(0)=x]\).
Let us consider the measure space \((H,\mathcal{B}(H))\) where \(\mathcal{B}(H)\) is the Borel algebra generated by closed/open subsets of \(H\). Let \(\mathcal{B}_{b}(H)\) be the class of bounded \(\mathcal{B}(H)\)-measurable functions on \(H\). For \(f_{n},f\in\mathcal{B}_{b}(H)\) we say that \(f_{n}\to f\) weakly if \(f_{n}\to f\) pointwise and the sequence \(f_{n}\) is uniformly bounded.
Let \(\mathcal{D}(\mathcal{A})\) be the class of functions \(f\) in \(\mathcal{B}_{b}(H)\) such that there exists \(f_{0}\in\mathcal{B}_{b}(H)\) satisfying
\[(P_{t}f)(x)=f(x)+\int_{0}^{t}(P_{s}f_{0})(x)ds,\ \ \forall x\in H. \tag{5.4}\]
Note that \(f_{0}\) is uniquely determined by the above equation. Hence we set \(\mathcal{A}f=f_{0}\) and formally define the extended generator \(\mathcal{A}\) as:
\[\mathcal{A}f(x):=\ \text{weak-limit}_{t\downarrow 0^{+}}\frac{P_{t}f(x)-f(x)}{t },\ \ f\in D(\mathcal{A}),\]
where
\[D(\mathcal{A})=\bigg{\{}f\in\mathcal{B}_{b}(H):\ \text{weak-limit}_{\ t \downarrow 0^{+}}\frac{P_{t}f(x)-f(x)}{t}=\mathcal{A}f(x)\ \text{exists}\bigg{\}}.\]
Since the domain \(D(\mathcal{A})\) cannot easily be defined explicitly, we will also work with a sub-class of functions known as cylindrical test functions \(\mathcal{D}_{\text{cyl}}\) defined as follows:
\[\mathcal{D}_{\text{cyl}}:=\big{\{}f\in C_{b}(H):f(X)=\varphi(\langle X,e_{1} \rangle_{H},\cdots,\langle X,e_{N}\rangle_{H}),e_{i}\in D(A),i=1,\cdots,N, \varphi\in C_{0}^{\infty}(\mathbb{R}^{N})\big{\}}.\]
Then \(D_{x}f(x)\in D(A)\) and also \(\mathcal{D}_{\text{cyl}}\subset D(\mathcal{A})\). The formal generator of the stochastic process \(X(t)\) is given by
\[\mathcal{A}\Phi(x)=\frac{1}{2}\text{Tr}(B^{*}QBD_{x}^{2}\Phi(x))+\langle Ax+F(x ),D_{x}\Phi\rangle,\ \ \Phi\in\mathcal{D}_{\text{cyl}}. \tag{5.5}\]
We now note that the solvability theorems established in the previous section imply by Ito's formula the following result.
**Proposition 5.1**.: _Define_
\[M_{t}(f)=f(X(t))-f(x)-\int_{0}^{t}\mathcal{A}f(X(s))ds,\ \ f\in\mathcal{D}_{ \text{cyl}}. \tag{5.6}\]
_where \(X(t)\) is the spin process defined by the stochastic partial differential equation 5.1. Then \((M_{t}(f),\mathcal{F}_{t},m)\) is a martingale._
In fact, applying the Ito formula [21] to the stochastic process defined by 5.1 gives
\[f(X(t))=f(x)+\int_{0}^{t}\mathcal{A}f(X(s))ds+\int_{0}^{t}\left(\frac{\partial f}{\partial x}(X(s)),BdW(s)\right),\ \ f\in\mathcal{D}_{\text{cyl}}.\]
Let us now state a general martingale lemma and then specialize it to the filtering problem.
**Lemma 5.1**.: _Let \(\mathcal{F}_{t}\) and \(\mathcal{G}_{t}\) be filtrations with \(\mathcal{G}_{t}\subset\mathcal{F}_{t}\). Suppose that_
\[M_{t}^{\mathcal{F}}=U(t)-U(0)-\int_{0}^{t}V(s)ds\]
_is an \(\mathcal{F}_{t}\)-martingale. Then_
\[M_{t}^{\mathcal{G}}=E[U(t)|\mathcal{G}_{t}]-E[U(0)|\mathcal{G}_{0}]-\int_{0}^ {t}E[V(s)|\mathcal{G}_{s}]ds\]
_is a \(\mathcal{G}_{t}\)-martingale._
We now state this property in the nonlinear filtering context:
**Theorem 5.1**.: _Define_
\[M_{t}^{Y}(f)=E[f(X(t))|\mathcal{F}_{t}^{Y}]-f(x)-\int_{0}^{t}E[\mathcal{A}f(X(s))| \mathcal{F}_{s}^{Y}]ds,\ \ f\in\mathcal{D}_{cyl}. \tag{5.7}\]
_Then \((M_{t}^{Y}(f),\mathcal{F}_{t}^{Y},m)\) is a martingale._
**Proof:** First note that
\[E\big{[}M_{t}^{Y}(f)|\mathcal{F}_{s}^{Y}\big{]}-M_{s}^{Y}(f)=E\big{[}M_{t}^{Y}(f)-M_{s}^{Y}(f)|\mathcal{F}_{s}^{Y}\big{]}.\]
Noting that \(\mathcal{F}_{t}^{Y}\subset\mathcal{F}_{t}\), we consider, for \(s<t\),
\[E\big{[}M_{t}^{Y}(f)-M_{s}^{Y}(f)|\mathcal{F}_{s}^{Y}\big{]}\]
\[=E\big{[}E[f(X(t))|\mathcal{F}_{t}^{Y}]-E[f(X(s))|\mathcal{F}_{s}^{Y}]| \mathcal{F}_{s}^{Y}\big{]}-E\bigg{[}\int_{s}^{t}E[\mathcal{A}f(X(r))|\mathcal{ F}_{r}^{Y}]dr|\mathcal{F}_{s}^{Y}\bigg{]}.\]
Using the tower property of the conditional expectation we conclude that
\[E\big{[}M_{f}^{Y}(f)-M_{s}^{Y}(f)|\mathcal{F}_{s}^{Y}\big{]}=E[f(X(t))| \mathcal{F}_{s}^{Y}]-E[f(X(s))|\mathcal{F}_{s}^{Y}]-\int_{s}^{t}E[\mathcal{A} f(X(r))|\mathcal{F}_{s}^{Y}]dr\]
\[=E\bigg{[}f(X(t))-f(X(s))-\int_{s}^{t}\mathcal{A}f(X(r))dr|\mathcal{F}_{s}^{Y} \bigg{]}\]
\[=E\big{[}M_{t}(f)-M_{s}(f)|\mathcal{F}_{s}^{Y}\big{]}=E\big{[}E[M_{t}(f)-M_{s}(f)|\mathcal{F}_{s}]|\mathcal{F}_{s}^{Y}\big{]}=0,\]
since \(M_{t}(f)\) is an \((\mathcal{F}_{t},m)\)-martingale. Hence \(E\big{[}M_{t}^{Y}(f)|\mathcal{F}_{s}^{Y}\big{]}=M_{s}^{Y}(f)\).
Using Fujisaki-Kallianpur-Kunita (1972) [23] (in particular Theorem 3.1, Lemma 4.2, Theorem 4.1 and equation 6.11 in that paper) we will characterize \(M_{t}^{Y}\) explicitly using the next set of arguments.
**Definition 5.1**.: _The innovation process is defined as:_
\[d\nu^{Y}(t)=dY(t)-E[h(X(t))|\mathcal{F}_{t}^{Y}]dt,\ \ t\in[0,T]. \tag{5.8}\]
**Lemma 5.2**.: _[_23_]_ _The innovation process \((\nu^{Y},\mathcal{F}_{t}^{Y},m)\) is an \(N\)-vector standard Wiener process. Furthermore \(\mathcal{F}_{s}^{Y}\) and \(\sigma\big{\{}\nu_{v}^{Y}-\nu_{u}^{Y};s\leq u\leq v\leq T\big{\}}\) are independent._
Let us now state an important martingale representation theorem (Theorem 3.1 [23]):
**Theorem 5.2**.: _Every square integrable martingale \((M_{t}^{Y},\mathcal{F}_{t}^{Y},m)\) is sample continuous and has the representation_
\[M_{t}^{Y}-M_{0}^{Y}=\int_{0}^{t}\Xi(s)\cdot d\nu^{Y}(s). \tag{5.9}\]
_where \(\nu^{Y}(t)\) is the innovation process and \(\Xi(s)=(\Xi_{s}^{1},\cdots,\Xi_{s}^{N})\) is jointly measurable and adapted to \(\mathcal{F}_{s}^{Y}\)._
**Lemma 5.3**.: _[_23_]_ _Let \((M_{t}(f),\mathcal{F}_{t},m)\) be the square integrable martingale 5.6. Then there exists a unique sample continuous process \(\langle M(f),Z^{i}\rangle,(i=1,\cdots,N)\) adapted to \(\mathcal{F}_{t}\) such that almost all sample functions are of bounded variation and \(M_{t}(f)Z^{i}(t)-\langle M(f),Z^{i}\rangle_{t}\) are \(\mathcal{F}_{t}\)-martingales. Furthermore each \(\langle M(f),Z^{i}\rangle_{t}\) has the following properties: it is absolutely continuous with respect to Lebesgue measure in \([0,T]\). There exists a modification of the Radon-Nikodym derivative which is \((t,\omega)\)-measurable and adapted to \(\mathcal{F}_{t}\) and which we shall denote by \(\tilde{D}_{t}^{i}f(\omega)\). Then using the vector notation \(\tilde{D}_{t}f=(\tilde{D}_{t}^{1}f,\cdots,\tilde{D}_{t}^{N}f)\),_
\[\langle M(f),Z\rangle_{t}=\int_{0}^{t}\tilde{D}_{s}fds,\quad\text{a.s},\]
_where_
\[\int_{0}^{T}E|\tilde{D}_{s}f|^{2}ds<\infty.\]
_If the process \(X(t)\) and \(Z(t)\) are independent then_
\[\langle M(f),Z\rangle_{t}=0,\text{ a.s.}\]
The following theorem characterizes \(\Xi\) explicitly:
**Theorem 5.3**.: _Let \(f\in D(\mathcal{A})\) and_
\[\int_{0}^{T}|h(X(t))|^{2}dt<\infty,\quad\text{a.s.} \tag{5.10}\]
_Then the evolution of the conditional expectation \(E[f(X(t))|\mathcal{F}_{t}^{Y}]\) (the nonlinear filter) is characterized by the Fujisaki-Kallianpur-Kunita equation:_
\[E[f(X(t))|\mathcal{F}_{t}^{Y}]=E[f(X(0))|\mathcal{F}_{0}^{Y}]+\int_{0}^{t}E[ \mathcal{A}f(X(s))|\mathcal{F}_{s}^{Y}]ds\]
\[+\int_{0}^{t}\Bigl{\{}E[f(X(s))h(X(s))|\mathcal{F}_{s}^{Y}]-E[f(X(s))|\mathcal{ F}_{s}^{Y}]E[h(X(s))|\mathcal{F}_{s}^{Y}]+E[\tilde{D}_{s}f(X(s))|\mathcal{F}_{s}^{ Y}]\Bigr{\}}\cdot d\nu^{Y}(s) \tag{5.11}\]
_where \(\tilde{D}f(X(s))\) is given by_
\[\langle M(f),Z\rangle=\int_{0}^{t}\tilde{D}_{s}fds.\]
_In particular \(\tilde{D}_{t}f=0\) if \(W\) and \(Z\) are independent. Moreover, if_
\[BW=\sigma_{1}W_{1}+\sigma_{2}Z,\]
_then_
\[\tilde{D}_{t}f=\sigma_{2}^{*}\frac{\partial f}{\partial x}.\]
Note that if \(h\) has a growth condition
\[|h(x)|\leq C\|x\|^{p},\text{ for some }p\geq 1,\ \ x\in H,\]
then by Theorem 2.1 the condition 5.10 is satisfied.
### Existence and Uniqueness of Measure-valued solutions to Filtering Equations
A theorem of Getoor [24] provides the existence of a random measure \(\pi_{t}^{Y}\) that is measurable with respect to \(\mathcal{F}_{t}^{Y}\) such that
\[E[f(X(t))|\mathcal{F}_{t}^{Y}]=\pi_{t}^{Y}[f]=\int_{H}f(\zeta)\pi_{t}^{Y}(d \zeta).\]
Substituting in 5.11, the probability measure-valued process \(\pi_{t}^{Y}[\cdot]\) satisfies the Fujisaki-Kallianpur-Kunita (FKK) type equation:
\[d\pi_{t}^{Y}[f]=\pi_{t}^{Y}[\mathcal{A}f]dt+\left(\pi_{t}^{Y}[hf+\tilde{D}_{t }f]-\pi_{t}^{Y}[h]\pi_{t}^{Y}[f]\right)\cdot d\nu^{Y}(t),\quad\text{for }f\in \mathcal{D}_{\text{cyl}}. \tag{5.12}\]
If we set
\[\vartheta_{t}^{Y}[f]:=\pi_{t}^{Y}[f]\exp\biggl{\{}\int_{0}^{t}\pi_{s}^{Y}[h]\cdot dY(s)-\frac{1}{2}\int_{0}^{t}|\pi_{s}^{Y}[h]|^{2}ds\biggr{\}}, \tag{5.13}\]
then the measure-valued process \(\vartheta_{t}^{Y}[\cdot]\) satisfies the Zakai type equation:
\[d\vartheta_{t}^{Y}[f]=\vartheta_{t}^{Y}[\mathcal{A}f]dt+\vartheta_{t}^{Y}[hf+ \tilde{D}_{t}f]\cdot dY(t),\text{ for }f\in\mathcal{D}_{\text{cyl}}. \tag{5.14}\]
Such measure-valued evolutions were first derived by Kunita [38] in the context of compact space valued signal processes.
One method of proving uniqueness of measure-valued solutions to the filtering equations is to start with the unique solution of the backward Kolmogorov equation, as in the papers on nonlinear filtering of the stochastic Navier-Stokes equation [51] and of stochastic reaction-diffusion equations [27]. Here we use the solution \(\Phi(t)\) of the backward Kolmogorov equation with initial data \(\Phi\) and express the Zakai equation as:
\[\vartheta_{t}^{Y}[\Psi]=\vartheta_{0}^{Y}[\Phi(t)]+\int_{0}^{t}\vartheta_{s}^ {Y}[(\frac{\partial}{\partial s}+\mathcal{A})\Phi(s)]ds+\int_{0}^{t}\Bigl{(} \vartheta_{s}^{Y}[(h+\tilde{D}_{t})\Phi(s)]\Bigr{)}\cdot d\nu_{s}, \tag{5.15}\]
and utilize the fact that \(\Phi\) satisfies \((\frac{\partial}{\partial s}+\mathcal{A})\Phi(s)=0\) in the above equation (eliminating the integral involving the generator \(\mathcal{A}\)) and then proceed to prove the uniqueness of the random measures \(\vartheta_{t}^{Y}\) as well as \(\pi_{t}^{Y}\).
We now specialize to the case where the signal noise \(W\) and the observation noise \(Z\) are independent. In this setting we can use the results of Kunita [38] and Szpirglas [55] to obtain equivalent nonlinear filtering equations that do not explicitly involve the generator \(\mathcal{A}\) of the signal Markov process \(X\).
The uniqueness theorem relies on the following lemma [55, 32]:
**Lemma 5.4**.: _Let us define a subclass of \(\mathcal{D}(\mathcal{A})\) as:_
\[\mathcal{D}_{2}(\mathcal{A}):=\{f\in\mathcal{D}(\mathcal{A}):\mathcal{A}f\in \mathcal{D}(\mathcal{A})\}.\]
_Let \(\mu_{1},\mu_{2}\in\mathcal{M}(H)\) be finite measures on \((H,\mathcal{B}(H))\) such that_
\[\langle\mu_{1},f\rangle=\langle\mu_{2},f\rangle,\quad\forall f\in\mathcal{D}_ {2}(\mathcal{A}), \tag{5.16}\]
_where \(\langle\mu,f\rangle=\int_{H}f(x)d\mu(x).\) Then \(\mu_{1}=\mu_{2}\in\mathcal{M}(H)\)._
The study of uniqueness of measure-valued solutions to the nonlinear filtering equations, and of the equivalence of the two forms (one involving the formal generator and the other using the Feller semigroup of the signal Markov process, which eliminates the term involving the generator), started with Kunita [38] and Szpirglas [55]. The following series of theorems can be proven by the same methods as in those original works for the compact, locally compact and general Hilbert space signal state space cases, as in [38, 54, 14, 18, 22]:
**Theorem 5.4**.: _The random probability measure valued process \(\pi_{t}^{Y}\in\mathcal{P}(H)\) uniquely solves the evolution equation:_
\[\pi_{t}^{Y}[f]=\pi_{0}^{Y}[f]+\int_{0}^{t}\pi_{s}^{Y}[\mathcal{A}f]ds+\int_{0 }^{t}\bigl{(}\pi_{s}^{Y}[hf]-\pi_{s}^{Y}[h]\pi_{s}^{Y}[f]\bigr{)}\cdot d\nu^{ Y}(s),\quad\text{for }f\in D(\mathcal{A}). \tag{5.17}\]
_Equivalently, \(\pi_{t}^{Y}\) uniquely solves the evolution equation:_
\[\pi_{t}^{Y}[f]=\pi_{0}[P_{t}f]+\int_{0}^{t}\bigl{(}\pi_{s}^{Y}((P_{t-s}f)h)- \pi_{s}^{Y}(P_{t-s}f)\pi_{s}^{Y}(h)\bigr{)}\cdot d\nu_{s}^{Y},\quad\text{for }f\in \mathcal{B}_{b}(H). \tag{5.18}\]
**Theorem 5.5**.: _The random measure-valued process \(\vartheta_{t}^{Y}\in\mathcal{M}(H)\) uniquely solves the evolution equation:_
\[\vartheta_{t}^{Y}[f]=\vartheta_{0}^{Y}[f]+\int_{0}^{t}\vartheta_{s}^{Y}[ \mathcal{A}f]ds+\int_{0}^{t}\vartheta_{s}^{Y}[hf]\cdot dY(s),\quad\text{for }f\in D( \mathcal{A}). \tag{5.19}\]
_Equivalently, \(\vartheta_{t}^{Y}\in\mathcal{M}(H)\) uniquely solves the evolution equation:_
\[\vartheta_{t}^{Y}[f]=\vartheta_{0}[P_{t}f]+\int_{0}^{t}\vartheta_{s}^{Y}((P_{t -s}f)h)\cdot dY(s),\quad\text{for }f\in\mathcal{B}_{b}(H). \tag{5.20}\]
**Theorem 5.6**.: _Let \(\pi_{t}^{Y}\) be the unique solution of the FKK equation for arbitrary initial condition \(\pi_{0}\). Furthermore,_
1. _The solution_ \(\pi_{t}^{Y}\) _is_ \(\sigma(Y(s)-Y(0);0\leq s\leq t)\vee\sigma(\pi_{0})\) _-measurable._
2. _Let_ \(\pi_{t}^{Y(\nu)}\) _and_ \(\pi_{t}^{Y(\mu)}\) _be solutions with initial conditions_ \(\pi_{0}=\nu\) _and_ \(\pi_{0}=\mu\) _respectively, where_ \(\mu,\nu\in\mathcal{M}(H)\)_. Then for every_ \(t>0\)__ \[\lim_{\nu\to\mu}E\Big{[}\big{|}\pi_{t}^{Y(\nu)}(f)-\pi_{t}^{Y(\mu)}(f)\big{|} \Big{]}=0,\ \ f\in C_{b}(H).\] (5.21)
**Theorem 5.7**.: _The filtering processes \((\pi^{Y},\mathcal{F}_{t}^{Y},P_{\mu})\), \(\mu\in\mathcal{M}(H)\), are Markov processes associated with the transition probabilities \(\Pi_{t}(\nu,\Gamma)\) defined by_
\[\Pi_{t}(\nu,\Gamma)=P\big{(}\pi_{t}^{Y}\in\Gamma\ \big{|}\ \pi_{0}=\nu\big{)},\ \ \Gamma\in\mathcal{B}(\mathcal{M}(H)),\]
_where \(\mathcal{B}(\mathcal{M}(H))\) is the Borel algebra generated by the open (or closed) sets in \(\mathcal{M}(H)\)._
_Furthermore, the transition probabilities \(\Pi_{t}(\nu,\Gamma)\) define a Feller semigroup in \(C_{b}(\mathcal{M}(H))\), where \(C_{b}(\mathcal{M}(H))\) is the space of all real continuous functions over \(\mathcal{M}(H)\)._
The systematic study of the time-asymptotic behaviour of the nonlinear filter was initiated by H. Kunita [38] and further developed by several authors [41, 54, 39, 42, 6, 13, 14, 18]. We will further develop this subject so that it applies to the classical and quantum systems studied in this paper.
**Definition 5.2**.: _A probability measure \(\nu\in\mathcal{P}(H)\) on the Borel algebra \(\mathcal{B}(H)\) is called the barycenter of the probability measure \(\Phi\in\mathcal{P}(\mathcal{P}(H))\) on the Borel algebra \(\mathcal{B}(\mathcal{P}(H))\) if and only if for every \(\phi\in C_{b}(H)\)_
\[\nu(\phi)=\int_{\mathcal{P}(H)}\nu^{\prime}(\phi)\Phi(d\nu^{\prime}). \tag{5.22}\]
The following theorem is inspired by the original paper of Kunita [38], which was in turn technically corrected by several authors [7, 18, 56].
**Theorem 5.8**.: _Let us assume that the following condition regarding the sigma fields is satisfied:_
\[\bigcap_{t\leq 0}\bigl{(}\mathcal{F}_{-\infty,0}^{Y}\vee\mathcal{F}_{-\infty,t}^ {X}\bigr{)}=\mathcal{F}_{-\infty,0}^{Y}\vee\mathcal{F}_{-\infty,-\infty}^{X}, \ \ m\text{ a.s.} \tag{5.23}\]
_The existence and uniqueness of invariant measures for the transition probabilities \(P_{t}(x,A),A\in\mathcal{B}(H)\), associated with the classical (Theorem 2.3, Theorem 3.5, Theorem 3.7) and quantum spin systems (Theorem 4.2) imply that the transition probabilities \(\Pi_{t}(\nu,\Gamma)\) associated with the filtering process for these classical and quantum spin systems have unique invariant measures. The invariant measure \(\mu\) for \(P_{t}(x,A)\) is the barycenter of the invariant measure \(\Phi\) of \(\Pi_{t}(\nu,\Gamma)\):_
\[\mu(f)=\int_{\mathcal{M}(H)}\nu(f)\Phi(d\nu),\ \ \forall f\in C_{b}(H). \tag{5.24}\]
### Evolution Equation for Error Covariance
The seminal paper of Kalman and Bucy [37] also gives a characterization of the evolution of error covariance using a Riccati equation. This concept was generalized by Kunita [39] for nonlinear filters. Let us describe the time evolution of the error covariance, slightly generalizing the results of Kunita [39]. The error covariance of general moments \(f(X(t))\) and \(g(X(t))\) is given by
\[\mathcal{P}_{t}[f,g]=E[(f(X(t))-\pi_{t}(f))(g(X(t))-\pi_{t}(g))]. \tag{5.25}\]
**Theorem 5.9**.: _Let \(f,g\in D(\mathcal{A})\). Then \(\mathcal{P}_{t}[f,g]\) is differentiable with respect to \(t\) and the derivative satisfies_
\[\frac{d}{dt}\mathcal{P}_{t}[f,g]=\mathcal{P}_{t}[f,\mathcal{A}g]+\mathcal{P}_ {t}[\mathcal{A}f,g]+\mathcal{Q}_{t}[f,g]\]
\[-\sum_{i=1}^{N}E\Big{[}\pi_{t}\Big{\{}(f-\pi_{t}(f))(h^{i}-\pi_{t}(h^{i}))+ \tilde{D}f\Big{\}}\pi_{t}\Big{\{}(g-\pi_{t}(g))(h^{i}-\pi_{t}(h^{i}))+\tilde{D} g\Big{\}}\Big{]}, \tag{5.26}\]
_where_
\[\mathcal{Q}_{t}[f,g]=E[\text{ tr}(B^{*}QBD_{x}f\otimes D_{x}g)]. \tag{5.27}\]
**Proof:**
Note that we have:
\[\mathcal{A}(fg)=f\mathcal{A}g+g\mathcal{A}f+\text{ tr}(B^{*}QBD_{x}f\otimes D _{x}g).\]
We first write the FKK equation for \(\pi_{t}(fg)\) as
\[\pi_{t}(fg)=\pi_{0}(fg)+\int_{0}^{t}\pi_{s}(f\mathcal{A}g)ds+\int_{0}^{t}\pi_{s}(g\mathcal{A}f)ds+\int_{0}^{t}\pi_{s}(\text{ tr}(B^{*}QBD_{x}f\otimes D_{x}g))ds+M_{t}^{\pi},\]
where \(M_{t}^{\pi}\) is an \(\mathcal{F}_{t}^{Y}\)-martingale with mean zero and \(f,g\in D(\mathcal{A})\). Similarly, since \(\pi_{t}(f)\) and \(\pi_{t}(g)\) satisfy the FKK equation, we have by the Ito formula for the product \(\pi_{t}(f)\pi_{t}(g)\):
\[\pi_{t}(f)\pi_{t}(g)=\pi_{0}(f)\pi_{0}(g)+\int_{0}^{t}\pi_{s}(f)\pi_{s}(\mathcal{A}g)ds+\int_{0}^{t}\pi_{s}(g)\pi_{s}(\mathcal{A}f)ds\]
\[+\sum_{i=1}^{N}\int_{0}^{t}\pi_{s}\Big{\{}(f-\pi_{s}(f))(h^{i}-\pi_{s}(h^{i}))+\tilde{D}f\Big{\}}\pi_{s}\Big{\{}(g-\pi_{s}(g))(h^{i}-\pi_{s}(h^{i}))+\tilde{D}g\Big{\}}ds+\tilde{M}_{t}^{\pi},\]
where \(\tilde{M}_{t}^{\pi}\) is an \(\mathcal{F}_{t}^{Y}\)-martingale with mean zero. Now noting that
\[\pi_{s}(fg)-\pi_{s}(f)\pi_{s}(g)=\pi_{s}\{(f-\pi_{s}(f))(g-\pi_{s}(g))\},\]
we get
\[\mathcal{P}_{t}(f,g)=E[\pi_{t}(fg)-\pi_{t}(f)\pi_{t}(g)].\]
Substituting for \(\pi_{t}(fg)\) and \(\pi_{t}(f)\pi_{t}(g)\) from the previous two equations and taking expectation we arrive at
\[\mathcal{P}_{t}(f,g)=\mathcal{P}_{0}(f,g)+\int_{0}^{t}\{\mathcal{P}_{s}[f, \mathcal{A}g]+\mathcal{P}_{s}[\mathcal{A}f,g]+\mathcal{Q}_{s}[f,g]\}ds\]
\[-\sum_{i=1}^{N}\int_{0}^{t}E\Big{[}\pi_{s}\Big{\{}(f-\pi_{s}(f))(h^{i}-\pi_{s }(h^{i}))+\tilde{D}f\Big{\}}\pi_{s}\Big{\{}(g-\pi_{s}(g))(h^{i}-\pi_{s}(h^{i}) )+\tilde{D}g\Big{\}}\Big{]}ds.\]
## 6. White Noise Theory of Nonlinear Filtering
White noise theory of nonlinear filtering was initiated by several papers of A. V. Balakrishnan [5, 4, 5] and systematically developed by G. Kallianpur and R. Karandikar [31, 32, 33, 34, 35] (see also Kallianpur's review [36]). We will further develop this method so that it applies to the classical and quantum spin systems discussed in this paper. The key mathematical equations of white noise nonlinear filtering theory are essentially parallel to their stochastic calculus counterparts but are deterministic in nature. The Kallianpur-Striebel formula, the Fujisaki-Kallianpur-Kunita equation, the Zakai equation and Kunita's semigroup versions of the FKK and Zakai equations all have white noise counterparts and will be developed below for the spin system models. We will also establish the equivalence of the FKK and Zakai equations with their Kunita counterparts in the spirit of Szpirglas [55].
We will now describe the theory of Segal [48] and Gross [25, 26] of finitely additive cylindrical measures on separable Hilbert spaces. Let \(\mathcal{H}\) be a separable Hilbert space and \(\mathcal{P}\) be the set of orthogonal projections on \(\mathcal{H}\) having finite dimensional range. For \(P\in\mathcal{P}\), let \(\mathcal{C}_{P}=\{P^{-1}B:B\text{ a Borel set in the range of }P\}\). Let \(\mathcal{C}=\cup_{P\in\mathcal{P}}\mathcal{C}_{P}\). A cylinder measure \(\mathbf{n}\) on \(\mathcal{H}\) is a finitely additive measure on \((\mathcal{H},\mathcal{C})\) such that its restriction to \(\mathcal{C}_{P}\) is countably additive for each \(P\in\mathcal{P}\).
Let \(L\) be a representative of the weak-distribution corresponding to the cylinder measure \(\mathbf{n}\). This means that \(L\) is a linear map from \(\mathcal{H}^{*}\) (identified with \(\mathcal{H}\)) into \(\mathcal{L}_{0}(\Omega_{1},\Sigma_{1},m_{1})\), the space of all random variables on a \(\sigma\)-additive probability space \((\Omega_{1},\Sigma_{1},m_{1})\), such that
\[\mathbf{n}(h\in\mathcal{H}:((h,h_{1}),(h,h_{2}),\cdots,(h,h_{k}))\in B)\]
\[=m_{1}(\omega\in\Omega_{1};(L(h_{1})(\omega),L(h_{2})(\omega),\cdots,L(h_{k}) (\omega))\in B) \tag{6.1}\]
for all Borel sets \(B\) in \(\mathbb{R}^{k}\), \(h_{1},\cdots,h_{k}\in\mathcal{H}\), \(k\geq 1\). Two maps \(L,L^{\prime}\) are said to be equivalent if both satisfy 6.1 and the equivalence class of such maps is the weak distribution corresponding to the cylindrical measure \(\mathbf{n}\).
A function \(f\) on \(\mathcal{H}\) is called a tame function if it is of the form
\[f(y)=\varphi((y,h_{1}),\cdots,(y,h_{k})),\text{ for some }k\geq 1,h_{1},\cdots,h_ {k}\in\mathcal{H} \tag{6.2}\]
and a Borel function \(\varphi:\mathbb{R}^{k}\rightarrow\mathbb{R}\). For a tame function \(f\) given by 6.2, we associate the random variable \(\varphi(L(h_{1}),\cdots,L(h_{k}))\) on \((\Omega_{1},\Sigma_{1},m_{1})\) and denote it by \(\tilde{f}\). We can extend this map \(f\rightarrow\tilde{f}\) to a larger class of functions as follows [26]:
**Definition 6.1**.: _Let \(\mathcal{L}(\mathcal{H},\mathcal{C},\mathbf{n})\) be the class of continuous functions \(f\) on \(\mathcal{H}\) such that the net \(\{(f\tilde{\circ}P):P\in\mathcal{P}\}\) is Cauchy in \(m_{1}\)-measure. Here \(P_{1}<P_{2}\) if the range \(P_{1}\subseteq\) range \(P_{2}\). For \(f\in\mathcal{L}(\mathcal{H},\mathcal{C},\mathbf{n})\), we define:_
\[\tilde{f}=\text{Lim in Prob}_{P\in\mathcal{P}}(f\tilde{\circ}P).\]
_The map \(f\rightarrow\tilde{f}\) is linear, multiplicative and depends only on \(f\) and \(\mathbf{n}\) and is independent of the representative of the weak distribution._
**Definition 6.2**.: _The function \(f\in\mathcal{L}(\mathcal{H},\mathcal{C},\mathbf{n})\) is integrable with respect to \(\mathbf{n}\) if \(\int_{\Omega_{1}}|\tilde{f}|dm_{1}<\infty\), and then for \(C\in\mathcal{C}\) the integral of \(f\) with respect to \(\mathbf{n}\) over \(C\), denoted by \(\int_{C}fd\mathbf{n}\), is defined by_
\[\int_{C}fd\mathbf{n}=\int_{\Omega_{1}}\widetilde{1_{C}}\,\tilde{f}\,dm_{1}.\]
Let us now consider the class of all Gauss measures on \(\mathcal{H}\) defined by
\[\mu\{y\in\mathcal{H}:(y,h)\leq a\}=\frac{1}{\sqrt{2\pi v(h)}}\int_{-\infty}^{a} \exp{(-\frac{x^{2}}{2v(h)})}dx,\ \ \forall h\in\mathcal{H}.\]
The special case of \(v(h)=\|h\|^{2}\) is called the canonical Gauss measure \(\mathbf{m}\). The following lemma from Sato [47] (in particular, Lemma 6 in that paper) clarifies the \(\sigma\)-additivity of Gauss measures on separable Hilbert spaces.
**Lemma 6.1**.: _Let \(\mathcal{H}\) be a separable Hilbert space and let \(\mu\) be a Gaussian cylinder measure on \((\mathcal{H},\mathcal{C})\) with variance \(v(h)\). Then \(\mu\) has a \(\sigma\)-additive extension to \((\mathcal{H},\bar{\mathcal{C}})\) (here \(\bar{\mathcal{C}}\) is the minimal \(\sigma\)-algebra containing \(\mathcal{C}\)) if and only if the characteristic functional of \(\mu\) is of the form:_
\[\int_{\mathcal{H}}e^{i(h,x)}d\mu(x)=\exp\left[i\langle h,m_{e}\rangle-\frac{1}{2}\|Sh\|^{2}\right],\ \ \forall h\in\mathcal{H},\]
_where \(m_{e}\) is an element of \(\mathcal{H}\), and \(S\) is a nonnegative selfadjoint Hilbert-Schmidt operator on \(\mathcal{H}\)._
The canonical Gauss measure \(\mathbf{m}\) corresponds to \(m_{e}=0\) and \(S=I\), and since the identity operator \(I\) is not Hilbert-Schmidt, the canonical Gauss measure \(\mathbf{m}\) is only finitely additive (see also Chapter I, Proposition 4.1 of Kuo [40]).
The identity map \(e\) on \(\mathcal{H}\), considered as a map from \((\mathcal{H},\mathcal{C},\mathbf{n})\) into \((\mathcal{H},\mathcal{C})\), is called the Gaussian white noise.
Let us now start with the abstract version of the white noise filtering model as formulated by Kallianpur and Karandikar [35]:
\[y=\zeta+e \tag{6.3}\]
where \(\zeta\) is an \(\mathcal{H}\)-valued random variable defined on a countably additive probability space \((\Omega,\mathcal{F},m)\), independent of \(e\). To formulate this mathematically precisely, let \(\mathcal{E}=\mathcal{H}\times\Omega\) and
\[\Sigma=\cup_{P\in\mathcal{P}}\mathcal{C}_{P}\otimes\mathcal{F}\text{ where }\mathcal{C}_{P}\otimes\mathcal{F}\text{ is the product sigma field.}\]
For \(P\in\mathcal{P}\), let \(\alpha_{P}\) be the product measure \((\mathbf{m}|\mathcal{C}_{P})\otimes m\) which is countably additive. This defines a unique finitely additive probability measure \(\alpha\) on \((\mathcal{E},\Sigma)\) such that \(\alpha=\alpha_{P}\) on \(\mathcal{C}_{P}\otimes\mathcal{F}\).
Let \(e,\zeta,y\) be \(\mathcal{H}\)-valued maps on \(\mathcal{E}\) defined by
\[e(h,\omega)=h,\] \[\zeta(h,\omega)=\zeta(\omega),\] \[y(h,\omega)=e(h,\omega)+\zeta(h,\omega),\ \ (h,\omega)\in \mathcal{H}\times\Omega. \tag{6.4}\]
The model 6.4 is the abstract version of the white noise filtering on \((\mathcal{E},\Sigma,\alpha)\). Our goal is to first characterize the conditional expectation \(E[g|y]\) in this finitely additive setting.
**Definition 6.3**.: _If there exists a \(v\in\mathcal{L}(\mathcal{H},\mathcal{C},\mathbf{n})\) such that_
\[\int_{\mathcal{H}\times\Omega}g(\omega)1_{C}(y(h,\omega))d\alpha(h,\omega)= \int_{C}v(y)d\mathbf{n}(y),\ \ \forall C\in\mathcal{C}, \tag{6.5}\]
_then we define \(v\) to be the conditional expectation of \(g\) given \(y\) and express it as \(E[g|y]=v\)._
Note that the integrand \(g(\omega)1_{C}(y(h,\omega))\) is \(\mathcal{C}_{P}\otimes\mathcal{F}\)-measurable for \(C\in\mathcal{C}_{P}\) and hence the integrals in 6.5 are well-defined. Let us now state the elegant Bayes formula for the finitely additive white noise framework proved in [31]:
**Theorem 6.1**.: _Let \(y,\zeta\) be as in 6.4. Let \(g\) be an integrable function on \((\Omega,\mathcal{F},m)\). Then_
\[E[g|y]=\frac{\int_{\Omega}g(\omega)exp\big{\{}(y,\zeta(\omega))-\frac{1}{2}\| \zeta(\omega)\|^{2}\big{\}}dm(\omega)}{\int_{\Omega}exp\big{\{}(y,\zeta(\omega ))-\frac{1}{2}\|\zeta(\omega)\|^{2}\big{\}}dm(\omega)}. \tag{6.6}\]
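Since 6.6 is, for each fixed observation \(y\), an ordinary ratio of integrals over \((\Omega,\mathcal{F},m)\), it can be approximated by plain Monte Carlo over independent samples of \(\zeta(\omega)\). The sketch below assumes finite-dimensional array representations of \(y\) and of the samples of \(\zeta\); note the structural parallel with the stochastic calculus sketch of Section 5, except that the weights here depend on the fixed observation through an inner product rather than a stochastic integral.

```python
import numpy as np

def white_noise_bayes_estimate(y, zeta_samples, g_values):
    """Monte Carlo sketch of the finitely additive Bayes formula (6.6):
    E[g|y] ~= sum_i g_i exp((y, zeta_i) - 0.5 ||zeta_i||^2) / sum_i exp(...)."""
    y = np.asarray(y, dtype=float)
    zeta = np.asarray(zeta_samples, dtype=float)    # shape (M, dim)
    g_values = np.asarray(g_values, dtype=float)    # shape (M,)
    log_w = zeta @ y - 0.5 * np.sum(zeta**2, axis=1)
    weights = np.exp(log_w - log_w.max())           # stabilised likelihood weights
    return float(np.sum(weights * g_values) / np.sum(weights))
```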
We now return to our spin system models where the state variable \(X(t)\) is an \((H,\mathcal{B}(H))\)-valued stochastic process (where \(\mathcal{B}(H)\) is the Borel algebra generated by the closed or open subsets of \(H\)) that is defined on the probability space \((\Omega,\mathcal{F},m)\). Let \(\mathcal{K}\) be a separable Hilbert space (including \(\mathbb{R}^{N}\)) and let the observation vector \(\zeta=\zeta(X(t))\) be a measurable function from \(H\to\mathcal{K}\) such that
\[\int_{0}^{T}\|\zeta(X(t))\|_{\mathcal{K}}^{2}dt<\infty,\ \text{a.s..} \tag{6.7}\]
The Hilbert space \(\mathcal{H}\) we started with in the abstract white noise formulation will be in this case \(\mathcal{H}=L^{2}(0,T;\mathcal{K})\).
The white noise sensor measurement model is:
\[Y(t)=\zeta(X(t))+e(t)\in\mathcal{K},\ \ t>0, \tag{6.8}\]
where \(e(t)\in\mathcal{K}\) is a finite or infinite dimensional white noise.
Let us define a Borel measure \(\rho_{t}^{Y}(\cdot)\in\mathcal{M}(H)\) and a probability measure \(\pi_{t}^{Y}(\cdot)\in\mathcal{P}(H)\) as follows:
\[\rho_{t}^{Y}(B)=E\bigg{[}1_{B}(X(t))\exp\int_{0}^{t}\bigg{\{}(\zeta(X(s)),Y(s) )_{\mathcal{K}}-\frac{1}{2}\|\zeta(X(s))\|_{\mathcal{K}}^{2}\bigg{\}}ds\bigg{]},\]
for \(B\in\mathcal{B}(H)\), and
\[\pi_{t}^{Y}(B)=\frac{\rho_{t}^{Y}(B)}{\rho_{t}^{Y}(H)}.\]
Then the measures \(\rho_{t}^{Y}\in\mathcal{M}(H)\) and \(\pi_{t}^{Y}\in\mathcal{P}(H)\) satisfy:
\[<\pi_{t}^{Y},f>=\int_{H}f(x)\pi_{t}^{Y}(dx)=E[f(X(t))|Y(s),0\leq s\leq t], \tag{6.9}\]
\[<\pi_{t}^{Y},f>=\frac{<\rho_{t}^{Y},f>}{<\rho_{t}^{Y},1>},\]
\[<\rho_{t}^{Y},f>=E\bigg{\{}f(X(t))\exp\int_{0}^{t}\mathfrak{C}_{s}^{Y}(X(s))ds \bigg{\}},\]
where
\[\mathfrak{C}_{s}^{Y}(X)=(\zeta(X(s)),Y(s))_{\mathcal{K}}-\frac{1}{2}\|\zeta(X (s))\|_{\mathcal{K}}^{2}.\]
The following series of theorems on the equivalence of the finitely additive nonlinear filtering equations, the uniqueness of the measure-valued filter, the Markov property and robustness are proven by methods similar to those of Kallianpur and Karandikar [31, 32, 33, 35, 36]:
**Theorem 6.2**.: _For \(Y\in C([0,T];\mathcal{H})\), the measure-valued process \(\rho_{t}^{Y}\in\mathcal{M}(H)\), \(0\leq t\leq T\), uniquely solves_
\[<\rho_{t}^{Y},f>=<\rho_{0},f>+\int_{0}^{t}<\rho_{s}^{Y},\mathcal{A}f+ \mathfrak{C}_{s}^{Y}f>ds,\ \ f\in D(\mathcal{A}),0\leq t\leq T, \tag{6.10}\]
_or equivalently, \(\rho_{t}^{Y}\in\mathcal{M}(H)\) uniquely solves_
\[<\rho_{t}^{Y},f>=<\rho_{0},P_{t}f>+\int_{0}^{t}<\rho_{s}^{Y},\mathfrak{C}_{s}^{Y }(P_{t-s}f)>ds,\ \ f\in\mathcal{B}_{b}(H),0\leq t\leq T. \tag{6.11}\]
**Theorem 6.3**.: _For \(Y\in C([0,T];\mathcal{H})\) the probability measure valued process \(\pi_{t}^{Y}\in\mathcal{P}(H)\), \(0\leq t\leq T\), uniquely solves_
\[<\pi_{t}^{Y},f>=<\pi_{0},f>+\int_{0}^{t}\bigl{[}<\pi_{s}^{Y},\mathcal{A}f+ \mathfrak{C}_{s}^{Y}f>-<\pi_{s}^{Y},\mathfrak{C}_{s}^{Y}><\pi_{s}^{Y},f>\bigr{]} ds,f\in D(\mathcal{A}), \tag{6.12}\]
_or equivalently \(\pi_{t}^{Y}\in\mathcal{P}(H)\) uniquely solves_
\[<\pi_{t}^{Y},f>=<\pi_{0},P_{t}f>+\int_{0}^{t}\bigl{[}<\pi_{s}^{Y},\mathfrak{C }_{s}^{Y}(P_{t-s}f)>-<\pi_{s}^{Y},\mathfrak{C}_{s}^{Y}><\pi_{s}^{Y},(P_{t-s}f)> \bigr{]}ds,f\in\mathcal{B}_{b}(H). \tag{6.13}\]
Let
\[\mathcal{H}_{t}=\biggl{\{}\eta\in\mathcal{H}:\int_{t}^{\infty}\|\eta(r)\|_{ \mathcal{K}}^{2}dr=0\biggr{\}}.\]
Note that \(\mathcal{H}_{t}\) is a closed subspace of \(\mathcal{H}\). Let \(Q_{t}\) be the orthogonal projection onto \(\mathcal{H}_{t}\) and let \(\mathcal{C}_{Q_{t}}=\mathcal{C}(\mathcal{H}_{t})\).
**Theorem 6.4**.: \(\pi_{t}^{Y}\) _and \(\rho_{t}^{Y}\) are respectively \(\mathcal{P}(H)\) and \(\mathcal{M}(H)\) -valued \(\{\mathcal{C}_{Q_{t}}\}\) Markov processes on \((\mathcal{H},\mathcal{C},\mathbf{n})\)._
**Theorem 6.5**.: _If \(Y_{n}\to Y\) in \(\mathcal{H}\) then \(\pi_{t}^{Y_{n}}\to\pi_{t}^{Y}\) and \(\rho_{t}^{Y_{n}}\to\rho_{t}^{Y}\) in total variation norm._
**Acknowledgment**: The first author's research has been supported by the U. S. Air Force Research Laboratory through the National Research Council Senior Research Fellowship of the National Academies of Science, Engineering and Medicine.
|
2309.04350 | Exploring Cohesive Subgraphs in Hypergraphs: The (k,g)-core Approach | Identifying cohesive subgraphs in hypergraphs is a fundamental problem that
has received recent attention in data mining and engineering fields. Existing
approaches mainly focus on a strongly induced subhypergraph or edge
cardinality, overlooking the importance of the frequency of co-occurrence. In
this paper, we propose a new cohesive subgraph named (k,g)-core, which
considers both neighbour and co-occurrence simultaneously. The $(k,g)$-core has
various applications including recommendation system, network analysis, and
fraud detection. To the best of our knowledge, this is the first work to
combine these factors. We extend an existing efficient algorithm to find
solutions for $(k,g)$-core. Finally, we conduct extensive experimental studies
that demonstrate the efficiency and effectiveness of our proposed algorithm. | Dahee Kim, Junghoon Kim, Sungsu Lim, Hyun Ji Jeong | 2023-09-08T14:21:46Z | http://arxiv.org/abs/2309.04350v1 | # Exploring Cohesive Subgraphs in Hypergraphs: The \((k,g)\)-core Approach
###### Abstract
Identifying cohesive subgraphs in hypergraphs is a fundamental problem that has received recent attention in data mining and engineering fields. Existing approaches mainly focus on a strongly induced subhypergraph or edge cardinality, overlooking the importance of the frequency of co-occurrence. In this paper, we propose a new cohesive subgraph named \((k,g)\)-core, which considers both neighbour and co-occurrence simultaneously. The \((k,g)\)-core has various applications including recommendation system, network analysis, and fraud detection. To the best of our knowledge, this is the first work to combine these factors. We extend an existing efficient algorithm to find solutions for \((k,g)\)-core. Finally, we conduct extensive experimental studies that demonstrate the efficiency and effectiveness of our proposed algorithm.
keywords: Hypergraph Mining, Cohesive subgraph discovery
## 1 Introduction
Complex systems such as social networks, biological networks, and market transaction systems are becoming increasingly intricate. The ability to accurately model and analyse these systems is important for understanding their behaviour and predicting their evolution. Traditional graph theory [1], which has been widely used to represent relationships between entities in a system, often falls short in capturing the high-order relationships in complex systems [2]. Hypergraphs [3], which generalise traditional graph theory, offer a more flexible framework by allowing edges to connect any number of nodes, unlike traditional graphs where edges connect only two nodes. This characteristic makes hypergraphs particularly suitable for capturing multifaceted relationships and interactions in real-world networks, ranging from social networks to biological systems and beyond.
Real-world networks exhibit complex interactions that go beyond pairwise connections. For example, co-authorship networks [4], co-purchase networks [5], and location-tagged social networks [6] have complex relationships that cannot be adequately represented by traditional graphs. Hypergraphs provide more expressive representations for such networks, enabling a deeper understanding of underlying structures and interactions [2].
Despite the increasing popularity and wide-ranging applications of mining hypergraphs [7], understanding their structural properties remains a challenging task. One of the key aspects of this challenge is to find cohesive subgraphs within a hypergraph, which are more densely connected internally than with the rest of the graph. In general, identifying the cohesive structure is important for understanding the overall structure and function of the graph, as they often represent key communities or roles [8]. In addition, it provides insights into the robustness of networks [9], aiding in network analysis [10] and identifying influential nodes [11], and revealing query-centric communities [12].
Recently, several cohesive models have been proposed for hypergraphs. The first approach involves converting the hypergraph into a clique-like graph and then applying the \(k\)-core algorithm [13; 14]. The \(k\)-core is a maximal set of nodes of which every node has at least \(k\) neighbour nodes in the \(k\)-core. We refer to this approach as the clique-core. Lee et al. [15] introduced the \((k,q)\)-core, a maximal subgraph in which each node has degree at least \(k\) and each hyperedge contains at least \(q\) nodes. Around the same time, the \(\text{nbr}\)-\(k\)-core was proposed by Arafat et al. [13]. The \(\text{nbr}\)-\(k\)-core is the maximal strongly induced connected subhypergraph [16; 17]\(H\) such that every node has at least \(k\) neighbours in \(H\). Note that a strongly induced subhypergraph contains a hyperedge if and only if all nodes of that hyperedge belong to the subhypergraph. Arafat et al. [13] also formulated the \((k,d)\)-core, a maximal strongly induced connected subhypergraph in which every node has at least \(k\) neighbours and the degree of every node (the number of hyperedges incident on it) is greater than or equal to \(d\) in the strongly induced subhypergraph.
While these proposed models have proven useful, they may not capture all aspects of cohesion in hypergraphs. Especially, hyperedges in hypergraphs have some distinct characteristics: (1) _hyperedge nesting_: Hyperedges often nest, with one's vertex set being a subset of another's, representing hierarchical or containment relationships. For instance, in biological networks, smaller modules may nest within larger ones, and in transaction
networks, certain products may repeatedly co-occur in transactions; (2) _nodes belong to multiple hyperedges_: Nodes can engage in multiple relationships, each depicted by a hyperedge. In social networks, an individual may be associated with different communities or groups through distinct hyperedges.
However, existing models largely overlook these characteristics when designing cohesive subgraph models. To effectively address the above issues, we incorporate the frequency of co-occurrence of nodes within hyperedges, which can be a crucial aspect of cohesion in many systems. Taking into account the frequency with which entities co-occur has proven to be effective in domains [18] such as recommendation systems. This is particularly effective in the context of user-based collaborative filtering [19, 20]. In user-based collaborative filtering, it is important to find users who have similar preferences or tastes to a given user, because the fundamental principle of user-based collaborative filtering is to recommend items based on the choices of similar users. Cohesive subgraphs of hypergraphs based on the frequency of co-occurrence among users help to define the target set of similar users for capturing collaborative signals.
Therefore, we propose a new concept, the \((k,g)\)-core, which extends the traditional \(k\)-core [14] by incorporating the co-occurrence frequency of nodes. The \((k,g)\)-core is a maximal subgraph in which each node has at least \(k\) neighbours that appear together with it in at least \(g\) hyperedges within the \((k,g)\)-core. Note that the \((k,g)\)-core can be computed by first converting the hypergraph into a clique-based graph. However, this is not desirable since the conversion step inflates the size of the problem [21]. In the following, we point out the motivating applications of the \((k,g)\)-core.
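As a concrete illustration of the definition, a simple peeling procedure can compute the \((k,g)\)-core: repeatedly delete every node that has fewer than \(k\) surviving neighbours co-occurring with it in at least \(g\) hyperedges, until no node violates the constraint. The sketch below reflects one plausible reading of the definition (co-occurrence is counted over pairs of surviving nodes within the original hyperedges) and is not the paper's official algorithm.

```python
from collections import defaultdict
from itertools import combinations

def kg_core(hyperedges, k, g):
    """Peeling sketch of the (k,g)-core: keep deleting nodes that have fewer than
    k neighbours co-occurring with them in at least g hyperedges."""
    alive = set().union(*map(set, hyperedges)) if hyperedges else set()
    while True:
        co = defaultdict(int)                       # node pair -> co-occurrence count
        for e in hyperedges:
            for u, v in combinations(sorted(set(e) & alive), 2):
                co[(u, v)] += 1
        strong = defaultdict(set)                   # node -> neighbours with count >= g
        for (u, v), c in co.items():
            if c >= g:
                strong[u].add(v)
                strong[v].add(u)
        violators = {v for v in alive if len(strong[v]) < k}
        if not violators:
            return alive
        alive -= violators

# toy usage: a, b, c pairwise co-occur twice, while d has only one strong neighbour (c)
print(kg_core([{"a", "b", "c"}, {"a", "b", "c", "d"}, {"c", "d"}], k=2, g=2))
```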
**Applications.** (1) Fraudster Detection: Within a social commerce service, user purchasing data can be organised into a hypergraph, where edges represent groups of items bought by users. Fraudsters often exhibit similar behaviours and have connections through shared product purchases [22]. For example, a seller may recruit individuals to inflate review ratings for their mobile devices, resulting in fraudulent activity. By identifying shared characteristics among users, it becomes possible to cluster these fraudsters within the hypergraph network; (2) Biomedical Systems: The \((k,g)\)-core model finds extensive application in modeling biological systems, such as protein-protein interaction networks and gene regulatory networks [23]. Leveraging this model helps to identify highly interconnected substructures within these networks, revealing crucial protein complexes or functional modules that play key roles in biological processes; and (3) Recommendation Systems: The \((k,g)\)-core model offers valuable insights in recommendation systems by finding user clusters with similar preferences. By detecting \((k,g)\)-core subgraphs in the hypergraph representing user-item interactions, personalised recommendations can be generated, capitalising on the co-occurrence patterns of items within these subgraphs.
## 2 Related Work
The study of hypergraphs has seen a surge of interest in recent years, with numerous methods proposed based on different criteria [24, 25]. In this section, we review the most relevant previous studies.
\((k,q)\)**-core [15]**. One of the models for finding cohesive subgraphs by considering hyperedges and nodes together in hypergraphs is the \((k,q)\)-core, proposed by Lee et al. This model defines the \((k,q)\)-core as the largest subgraph in which each node has degree at least \(k\) and each hyperedge contains at least \(q\) nodes. While this model has various applications and provides valuable insights into the structure of hypergraphs, it does not consider the frequency of co-occurrence of nodes within hyperedges, which can be a crucial aspect of cohesion in many systems.
**nbr-\(k\)-core [13]**. The nbr-\(k\)-core, proposed by Arafat et al., is the maximal strongly induced subhypergraph [16, 17] in which every node has at least \(k\) neighbours. In a strongly induced subhypergraph, a hyperedge is included if and only if all of its nodes are present in the subhypergraph. Note that utilising a strongly induced subhypergraph may have some issues: by definition, a hyperedge is present either in its entirety or not at all. When a node is included in a large hyperedge, the strongly induced subhypergraph can become very large, leading to difficulties in analysing it and extracting meaningful information.
\((k,d)\)**-core [13]**. Arafat et al. [13] observed that the nbr-\(k\)-core usually returns a very large cohesive subgraph when a node is included in a large-sized hyperedge. To address this issue, they proposed a more comprehensive cohesive subgraph model, the \((k,d)\)-core, which considers neighbourhood and degree constraints simultaneously. The \((k,d)\)-core is defined as the maximal strongly induced subhypergraph in which every node has at least \(k\) neighbours and every node has degree at least \(d\). However, this approach does not capture the strength of neighbour relationships, since co-occurrence is not taken into account.
**Clique-core [26]**. The Clique-core applies the traditional \(k\)-core after converting the hypergraph into a clique-structured graph. After this preprocessing, it finds a maximal set of nodes in which every node has at least \(k\) neighbour nodes in the clique graph. This clique-based approach is known to inflate the problem size [21].
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c} \hline & **nbr-\(k\)-core** & \((k,d)\)**-core** & \((k,q)\)**-core** & **Clique-core** & \((k,g)\)**-core** & \((\alpha,\beta)\)**-core** \\ \hline \hline objective & \multicolumn{6}{c}{maximize subgraph size} \\ \hline
1st param & strongly induced. subgraph & deg. const. & neighbour size const. & deg. const. \\ \hline
2nd param & - & deg. const. & cardinality const. & - & co-occur. const. & cardinality const. \\ \hline \hline \end{tabular}
\end{table}
Table 1: Model comparison
\((\alpha,\beta)\)**-core [27]**. The \((\alpha,\beta)\)-core is another model widely used in the study of bipartite networks. It aims to find a set of nodes in the bipartite graph where every node on the first side is incident to at least \(\alpha\) bipartite edges and every node on the second side to at least \(\beta\). A hypergraph can be converted to a bipartite graph whose first node set consists of the hypergraph nodes and whose second node set consists of the hyperedges, with an edge between a node and a hyperedge whenever the hyperedge contains that node. While this model has proven useful in bipartite networks, it cannot be directly utilised to find cohesive subgraphs in hypergraphs due to the lack of neighbour structure information. In addition, constructing a bipartite graph may suffer from the size-inflation problem [21].
## 3 Problem Statement
A hypergraph network can be modelled as a graph \(G=(V,E)\) with a node set \(V\) and a hyperedge set \(E\). Following previous studies [13, 15], we consider that \(G\) is undirected and unweighted. In hypergraph notation, we use the term _degree_ to represent the count of hyperedges incident to a particular node. Moreover, we employ the term _neighbour_ to refer to any two nodes that appear together in a hyperedge.
**Definition 1**.: _(\((k,g)\)-core). Given a hypergraph \(G\), \(k\geq 1\) and \(g\geq 1\), \((k,g)\)-core is the maximal set of nodes in which each node has at least \(k\) neighbours which appear in at least \(g\) hyperedges together in an induced subhypergraph by the \((k,g)\)-core._
We next discuss two essential properties of (\(k,g\))-core: _uniqueness_ and _containment_. These properties are fundamental for understanding the behaviour and structure of the (\(k,g\))-core.
**Property 1**.: _Uniqueness: \((k,g)\)-core is unique._
Proof.: \((k,g)\)-core is unique due to the maximality constraint, as it represents the maximal set of nodes in which all nodes have at least \(k\) neighbours which appear in at least \(g\) hyperedges, achieved by iteratively removing nodes until satisfying the constraints.
**Property 2**.: _Containment: \((k,g)\)-core has hierarchical structure, i.e., \((k+1,g)\)-core \(\subseteq(k,g)\)-core and \((k,g+1)\)-core \(\subseteq(k,g)\)-core._
Proof.: The hierarchical structure of the \((k,g)\)-core follows from the gradual node removal process: any node that belongs to the \((k+1,g)\)-core or the \((k,g+1)\)-core also satisfies the neighbour and co-occurrence constraints of the \((k,g)\)-core, and therefore belongs to the \((k,g)\)-core as well.
**Example 1**.: _Let us consider an example using a simple hypergraph with \(9\) nodes and \(7\) hyperedges, as illustrated in Figure 1. This example demonstrates the distinct results and characteristics of different cohesive subgraph models._
* _The_ \((k,q)\)_-core yields_ \(\{u_{1},u_{2},u_{3},u_{4},u_{7},u_{8}\}\) _with_ \(k=2\) _and_ \(q=2\)_. This is because the nodes_ \(u_{5}\)_,_ \(u_{6}\)_, and_ \(u_{9}\) _are iteratively removed due to the degree constraint. The remaining graph then satisfies the_ \((k,q)\)_-core constraint._
* _The nbr-\(k\)-core returns the entire graph when \(k=2\), and \(\{u_{2},u_{3},u_{5},u_{6}\}\) when \(k=3\): since there is a hyperedge containing exactly \(u_{2}\), \(u_{3}\), \(u_{5}\), and \(u_{6}\), each of these nodes has \(3\) neighbours._
* _The_ \((k,d)\)_-core returns the entire graph for_ \(k=2\) _and_ \(d=1\)_,_ \(\{u_{2},u_{3},u_{5},u_{6}\}\) _for_ \(k=3\) _and_ \(d=1\)_. When_ \(k=2\) _and_ \(d=2\)_, it returns an empty set. We can observe that_ \(u_{1},u_{2},u_{3},u_{4},u_{7}\)_, and_ \(u_{8}\) _have two or more neighbours and belong to at least two hyperedges. Note that the_ \((k,d)\)_-core is a strongly induced subhypergraph. Thus, hyperedges such as_ \(\{u_{1},u_{3},u_{9}\}\) _and_ \(\{u_{2},u_{3},u_{5},u_{6}\}\) _are not considered. Therefore, the node_ \(u_{3}\) _must be removed. This removal process is repeated iteratively until no nodes remain in the resulting graph._
* _Clique-core returns the entire graph for \(k=2\), and returns \(\{u_{2},u_{3},u_{5},u_{6}\}\) for \(k=3\) due to the neighbour constraint._
* _The \((\alpha,\beta)\)-core returns \(\{u_{1},u_{2},u_{3},u_{4},u_{7},u_{8}\}\) for \(\alpha=2\) and \(\beta=2\), since every remaining node is included in at least \(2\) hyperedges and every hyperedge has at least two nodes; \(u_{5}\), \(u_{6}\), and \(u_{9}\) cannot be included because they each belong to only a single hyperedge._
* _Lastly, the_ \((k,g)\)_-core gives_ \(\{u_{1},u_{2},u_{3}\}\) _for_ \(k=2\) _and_ \(g=2\)_. In this case, each node has at least_ \(k\) _neighbours and each node pair appears in at least_ \(g\) _edges together, signifying a cohesive structure within the hypergraph._
Figure 1: Motivating example
## 4 Peeling Algorithm
In this section, we present the peeling algorithm to find the \((k,g)\)-core. We extend the existing peeling algorithm for the \(k\)-core computation [28]. Basically, it iteratively removes nodes until the cohesiveness constraint is satisfied. In each iteration, it finds nodes which do not satisfy the \((k,g)\)-core criteria and removes them from the hypergraph. The algorithm continues until no more nodes can be removed. Finally, the remaining subgraph is the \((k,g)\)-core.
The pseudocode of the algorithm is given in Algorithm 1. It starts by initialising \(H\) as the set of all nodes in the hypergraph. It then initialises the neighbour occurrence map, \(NOM\), for each node in \(V\) with an empty dictionary data structure. The algorithm iterates over each node in \(H\) and counts the occurrences of its neighbours in the hyperedges. After that, it keeps only the neighbours in \(NOM[v]\) for each node \(v\) where the occurrence count is \(\geq g\). The algorithm proceeds in a loop that continues until no more changes are made. Within the loop, it iterates over each node \(w\) in \(H\). If \(NOM[w]\) does not contain at least \(k\) nodes, indicating that node \(w\) does not satisfy the \((k,g)\)-core criteria, it is removed from \(H\), and the occurrence map is updated accordingly. This process continues until no more nodes can be removed. Finally, it returns the nodes \(H\) as a result.
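Since Algorithm 1 itself is not reproduced here, the following is a minimal Python sketch of the peeling procedure described above. It recomputes the neighbour occurrence map in every round instead of updating it incrementally as Algorithm 1 does, and all function and variable names other than \(NOM\) are our own.

```python
from collections import defaultdict

def kg_core(nodes, hyperedges, k, g):
    """Peel nodes until every remaining node has at least k neighbours that
    co-occur with it in at least g hyperedges of the induced subhypergraph."""
    H = set(nodes)
    changed = True
    while changed:
        changed = False
        # neighbour occurrence map (NOM) restricted to the surviving nodes
        NOM = {v: defaultdict(int) for v in H}
        for e in hyperedges:
            members = [v for v in e if v in H]
            for v in members:
                for w in members:
                    if w != v:
                        NOM[v][w] += 1
        for v in list(H):
            strong = [w for w, cnt in NOM[v].items() if cnt >= g]
            if len(strong) < k:
                H.remove(v)
                changed = True
    return H

# toy usage: only the pair {a, b} co-occurs in two hyperedges
print(kg_core({"a", "b", "c"}, [{"a", "b", "c"}, {"a", "b"}], k=1, g=2))
# -> {'a', 'b'} (in some order)
```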
This peeling method efficiently finds the \((k,g)\)-core in a hypergraph. The time complexity of the peeling algorithm is \(O(|V|^{2}|E|)\). In the worst case, the algorithm iterates over each node and its corresponding hyperedges to construct the occurrence map. As each hyperedge can contain up to \(|V|\) nodes, constructing the occurrence map for each node takes \(O(|V||E|)\) time, resulting in a time complexity of \(O(|V|^{2}|E|)\) for this step. During the iterative removal procedure, the algorithm performs a maximum of \(|V|\) iterations, and each removal operation has a time complexity of \(O(|V|)\). As a result, the overall time complexity is \(O(|V|^{2}|E|)\). However, it is important to note that the algorithm may terminate earlier, as nodes can be removed together.
## 5 Experiments
To validate the effectiveness of the proposed \((k,g)\)-core and the efficiency of the peeling algorithm, we conducted extensive experiments on real-world hypergraphs.
**Experimental setup.** We implemented the \((k,g)\)-core model in Python using the NetworkX library [29]. The experiments were run on a Linux machine with Intel Xeon 6248R and 256GB of RAM.
**Dataset.** Table 2 provides the essential statistics of six real-world datasets. These datasets are publicly available and can be accessed from the sources mentioned in the references [13; 30]. In the table, the term 'nbr' denotes a neighbour, and 'card' indicates cardinality.
**Evaluation measure.** We evaluated the performance of the \((k,g)\)-core model by checking the number of nodes and the running time.
**Performance of \((k,g)\)-core.** We varied the user parameters to analyse the behaviour of the \((k,g)\)-core. In Figure 2(a), we fix the value of \(k\) as 3 and vary \(g\) as 3, 5, 7, and 9. The experimental results report the number of nodes in the \((k,g)\)-core for each \(g\) value. The results show that an increase in the value of \(g\) leads to a decrease in the number of nodes within the \((k,g)\)-core. Similarly, in Figure 2(b), the value of \(g\) is fixed as 3, while \(k\) is varied as 3, 5, 7, and 9. It is observed that when \(k\) increases, the number of nodes in the \((k,g)\)-core decreases. Both results indicate that higher values of \(k\) and \(g\) result in a more densely connected subgraph.
Next, we focus on analysing the running time of the algorithm. Figure 3 presents the results obtained by varying the values of \(k\) and \(g\). The experimental results indicate that increasing \(k\) and \(g\) does not have a significant impact on the running time. This implies that the running time of the \((k,g)\)-core is not primarily determined by the user-defined parameters \(k\) and \(g\). Instead, the majority of the computational complexity arises from calculating the occurrence of the neighbours, as discussed in Section 4.
**Scalability test.** To evaluate the scalability of the algorithm, we conducted a scalability test using \(k\)-uniform hypergraph generation models [31]. With a fixed value of \(k\) (100) and a fixed number of nodes \((10,000)\), we varied the number of hyperedges from 4,000 to \(20,000\) to observe the impact on running time. By converting the hypergraph into a bipartite network, we obtained a
\begin{table}
\begin{tabular}{l||c|c|c|c} \hline Dataset & \(|V|\) & \(|E|\) & **Avg. nbr size** & **Avg. edge card.** \\ \hline
**Contact** & 242 & 12,704 & 68.74 & 2.42 \\ \hline
**Congress** & 1,718 & 83,105 & 494.68 & 8.81 \\ \hline
**Enron** & 4,423 & 5,734 & 25.35 & 5.25 \\ \hline
**Meetup** & 24,115 & 11,027 & 65.27 & 10.3 \\ \hline
**DBLP** & 1,836,596 & 2,170,260 & 9.05 & 3.43 \\ \hline
**Aminer** & 27,850,748 & 17,120,546 & 8.39 & 3.77 \\ \hline \end{tabular}
\end{table}
Table 2: Real-world dataset
Figure 3: Comparison of running time
Figure 2: Comparison of number of nodes
bipartite graph with \(30,000\) nodes and \(2\) million bipartite edges when the number of hyperedges was \(20,000\). Figure 4 shows the experimental result. It reveals a linear increase in running time when the number of hyperedges increases, indicating that the algorithm scales efficiently with larger hypergraphs. This scalability test demonstrates the algorithm's capability to handle large-sized hypergraphs.
## 6 Conclusion
In this paper, we introduce the \((k,g)\)-core model for cohesive subgraph discovery in hypergraphs, an extension of the existing \(k\)-core model that takes into account node co-occurrence within hyperedges. We also propose a peeling algorithm that effectively identifies the \((k,g)\)-core by iteratively removing nodes that do not satisfy the specified criteria. Experimental evaluations on six real-world networks demonstrate the characteristics of the proposed \((k,g)\)-core. In future work, we plan to recommend appropriate values for \(k\) and \(g\) to identify meaningful cohesive subgraphs, thereby eliminating the need for user-specified parameters.
|
2309.12532 | Macroscopic quantum entanglement between an optomechanical cavity and a
continuous field in presence of non-Markovian noise | Probing quantum entanglement with macroscopic objects allows us to test
quantum mechanics in new regimes. One way to realize such behavior is to couple
a macroscopic mechanical oscillator to a continuous light field via radiation
pressure. In view of this, the system that is discussed comprises an
optomechanical cavity driven by a coherent optical field in the unresolved
sideband regime where we assume Gaussian states and dynamics. We develop a
framework to quantify the amount of entanglement in the system numerically.
Different from previous work, we treat non-Markovian noise and take into
account both the continuous optical field and the cavity mode. We apply our
framework to the case of the Advanced Laser Interferometer Gravitational-Wave
Observatory and discuss the parameter regimes where entanglement exists, even
in the presence of quantum and classical noises. | Su Direkci, Klemens Winkler, Corentin Gut, Klemens Hammerer, Markus Aspelmeyer, Yanbei Chen | 2023-09-21T23:10:29Z | http://arxiv.org/abs/2309.12532v2 | Macroscopic quantum entanglement between an optomechanical cavity and a continuous field in presence of non-Markovian noise
###### Abstract
Probing quantum entanglement with macroscopic objects allows us to test quantum mechanics in new regimes. One way to realize such behavior is to couple a macroscopic mechanical oscillator to a continuous light field via radiation pressure. In view of this, the system that is discussed comprises an optomechanical cavity driven by a coherent optical field in the unresolved sideband regime where we assume Gaussian states and dynamics. We develop a framework to quantify the amount of entanglement in the system numerically. Different from previous work, we treat non-Markovian noise and take into account both the continuous optical field and the cavity mode. We apply our framework to the case of the Advanced Laser Interferometer Gravitational-Wave Observatory (Advanced LIGO) and discuss the parameter regimes where entanglement exists, even in the presence of quantum and classical noises.
## I Introduction
Entanglement is one of the hallmarks of the "quantumness" of physical systems. Ideally, it is possible for macroscopic objects, massive and/or containing a high number of degrees of freedom, to be entangled with each other. Yet in practice, such macroscopic entanglement can be very delicate in presence of decoherence. It is an intriguing challenge to create and verify macroscopic entanglement, which is often viewed as expanding the limits of the quantum regime.
Optomechanical systems are promising candidates for experimental demonstration of macroscopic entanglement, partly due to their theoretical robustness against temperature effects [1]. They can also be used to engineer the quantum state of the mechanical system [2], where the entanglement is generated by the momentum exchange between the light and the mechanical oscillator it reflects from, a phenomenon known as radiation pressure. It is theoretically well understood and broadly discussed in the literature [3; 4; 5; 6; 7; 8], see for example [9; 1] for a review.
Entanglement in optomechanical devices has been widely studied and there have been several successful experimental realizations: stationary entanglement between simultaneous light tones mediated by an optomechanical device [10; 11], generation of entanglement between spaced mechanical oscillators both in the micro and macro regime via radiation pressure [12; 13; 14; 15], and optomechanical entanglement between the light field and the mechanical oscillator in a pulsed scheme [16] are examples of such demonstrations. There also exist many proposals in the literature to further study macroscopic quantum phenomena in optomechanical systems [17; 18; 19; 20; 21] and entanglement between coupled oscillators in presence of non-Markovian baths [22]. In this work, we consider stationary optomechanical entanglement, where the system parameters (e.g. the driving), and statistical behavior thereof, are not changing over time. Schemes to verify stationary optomechanical entanglement were proposed in [23; 24; 25; 4; 6], whereas an experimental demonstration has, to our knowledge, not been performed yet.
Our system consists of a single mechanical mode interacting with an optical cavity mode and the quadratures of the light field exiting the cavity. At any time \(t\), we study the bipartite entanglement that is present in the joint quantum state between mechanical mode, optical mode, and the light that has exited the system during \(t^{\prime}\leq t\). See Fig. 1 of Ref. [6], which includes a space-time diagram that illustrates the configuration. In the regime where the dynamics are linear, the state is Gaussian, and the noise processes are Markovian (white), the open-system optomechanical dynamics is solvable analytically and the state of the system can be known exactly [3]. The white-noise model describes well devices with high-frequency oscillators, where only thermal excitations are expected, and in the limit of large bath temperature where \(k_{B}T\gg\hbar\omega_{m}\), \(T\) is the temperature of the bath and \(\omega_{m}\) is the resonance frequency of the oscillator [26].
In this work, we extend the description to non-Markovian Gaussian noise processes where analytical results are, to our knowledge, not available, thus requiring numerical methods. This approach is applicable whenever non-white noises processes, such as structural damping [27; 28], are relevant. We extend on the methods developed in [6], incorporating a cavity, and, more importantly, non-Markovian noise processes. The technique consists in computing the minimal symplectic eigenvalue of the partially transposed covariance matrix of the
system, constructed with numerical methods, which provides an appropriate measure of bipartite entanglement.
We first investigate entanglement in a generalized setting, considering a heavy suspended oscillator with a low mechanical resonance frequency. This corresponds to the free-mass limit, where the mechanical resonance frequency is much smaller than the other characteristic frequencies of the system. Then, we focus our attention on Advanced LIGO (aLIGO) [29] and use it as a case study. It has been recently shown that by injecting squeezed vacuum, the detector's quantum noise can in principle surpass the free-mass standard quantum limit (SQL) by \(3\,\mathrm{dB}\)[30]. It is natural to ask whether this can already imply that aLIGO has built quantum entanglement between the mirrors and the light field. The answer to this question is non-trivial. First, from [30], we see that the level of classical noise is not yet below the SQL [31]. Second, the strict definition of entanglement we use here requires integrating over all frequencies: it remains uncertain whether having noise below the SQL within a certain finite frequency band automatically leads to entanglement. Therefore, we parametrize aLIGO's noise curves to investigate regimes where entanglement, according to its strict definition, exists.
This paper is organized as follows: In Sec. II, we introduce the dynamics of the system and its equations of motion. In Sec. III, we state our entanglement criterion and the covariance matrix of the system for two partitions of interest. To show the usefulness of our technique for systems with low mechanical resonance frequencies, we investigate entanglement in a generalized setting in Sec. IV. In Sec. V, we give details about aLIGO's noise budget, and talk about how we model it in our calculations. Finally, in Sec. VI, we investigate whether there is entanglement between the mechanical oscillator and the light field at aLIGO for the partitions of interest, given different parametrizations of the classical noise curves.
## II System dynamics
Let us consider an optical cavity with a movable mirror, driven by a laser with frequency \(\omega_{0}\) close to one of the resonant frequencies of the cavity, \(\omega_{0}+\Delta\)[32]. The quantity \(\Delta\) is often referred to as the detuning frequency of the cavity. For such a system, the linearized Hamiltonian in the interaction picture with the rotating-wave approximation (RWA) is given by [33]
\[\begin{split}& H=\hbar\omega_{m}\hat{B}^{\dagger}\hat{B}+\hbar \Delta\hat{A}^{\dagger}\hat{A}-\hbar G\hat{x}(\hat{A}^{\dagger}+\hat{A})\\ +&\hbar\sqrt{2\gamma}\int_{-\infty}^{\infty}\frac{d \Omega}{2\pi}\left[\hat{A}^{\dagger}\hat{c}(\omega_{0}+\Omega)-\hat{A}\hat{c} ^{\dagger}(\omega_{0}+\Omega)\right]\\ &\quad+\int_{-\infty}^{\infty}\frac{d\Omega}{2\pi}\left[\hbar \Omega\hat{c}^{\dagger}(\omega_{0}+\Omega)\hat{c}(\omega_{0}+\Omega)\right] \,,\end{split} \tag{1}\]
where \(\hat{B}\) and \(\hat{B}^{\dagger}\) are the annihilation and creation operators of the mechanical mode (center of mass motion of the mirror), \(\hat{x}\) is the position of the center of mass of the mirror, \(\omega_{m}\) is the mechanical resonance frequency, \(\hat{A}\) and \(\hat{A}^{\dagger}\) are the annihilation and creation operators of the cavity mode, \(\hat{c}(\omega_{0}+\Omega)\) and \(\hat{c}^{\dagger}(\omega_{0}+\Omega)\) are the annihilation and creation operators of the external vacuum light field at frequency \(\omega_{0}+\Omega\), \(G\) is the linear optomechanical coupling constant, and \(\gamma\) is the decay rate of the cavity mode. The position and momentum operators of the mirror are related to the creation and annihilation operators of the mechanical mode by
\[\hat{x} =\sqrt{\frac{\hbar}{M\omega_{m}}}\,\frac{\left(\hat{B}+\hat{B}^{ \dagger}\right)}{\sqrt{2}}\,, \tag{2a}\] \[\hat{p} =\sqrt{\hbar M\omega_{m}}\,\frac{\left(\hat{B}-\hat{B}^{\dagger} \right)}{\sqrt{2}i}\,. \tag{2b}\]
Note that for the sake of convenience we chose a displaced frame where all operators have zero mean. The mode operators satisfy the canonical commutation relations,
\[[\hat{A},\hat{A}^{\dagger}]=[\hat{B},\hat{B}^{\dagger}]=1. \tag{3}\]
aLIGO detectors are power- and signal-recycled Fabry-Perot Michelson interferometers, which contain a high number of degrees of freedom. However, the core optomechanics can still be studied by the Hamiltonian given above; this reduction manifests itself in the "scaling-law" relations governing aLIGO's sensitivity as parameters of the signal-recycling cavity are modified [32]. From the scaling-law, the coupling constant \(G\) is related to the parameters of the interferometer by,
\[G=\sqrt{\frac{2\omega_{0}P_{c}}{\hbar Lc}}\,, \tag{4}\]
where \(L\) is the arm length of the interferometer (i.e. the cavity length), \(P_{c}\) is the power circulating inside the cavity, and \(c\) is the speed of light in vacuum.
We can transform the Hamiltonian such that the cavity mode (\(A,A^{\dagger}\)) couples with the traveling wave at \(z=0\) (where the point-wise cavity interface is located). We derive this transformation in Appendix A. We use \(\hat{u}\) and \(\hat{v}\) to label the field right before entering and right after exiting the cavity, respectively.
In this paper, we restrict ourselves to \(\Delta=0\). In this resonant case, the system is unconditionally stable and it reaches a steady state, in which the Heisenberg equations can be solved using Fourier transformation [34]. To write down and solve the Heisenberg equations, instead of annihilation and creation operators we use the Caves-Schumaker quadrature operators [35; 36]:
\[\hat{u}_{1}(\Omega) =\frac{\hat{u}(\omega_{0}+\Omega)+\hat{u}^{\dagger}(\omega_{0}- \Omega)}{\sqrt{2}}\,, \tag{5a}\] \[\hat{u}_{2}(\Omega) =\frac{\hat{u}(\omega_{0}+\Omega)-\hat{u}^{\dagger}(\omega_{0}- \Omega)}{\sqrt{2}i}\,, \tag{5b}\]
where \(\hat{u}_{j}^{\dagger}(\Omega)=\hat{u}_{j}(-\Omega)\). Quadratures \(\hat{v}_{1}(\Omega)\) and \(\hat{v}_{2}(\Omega)\) are defined from \(\hat{v}(\omega_{0}+\Omega)\) and \(\hat{v}(\omega_{0}-\Omega)\) in a similar fashion. After setting \(c=1\), their commutation relations are [37]
\[[\hat{u}_{1}(\Omega),\hat{u}_{2}(\Omega^{\prime})] =[\hat{v}_{1}(\Omega),\hat{v}_{2}(\Omega^{\prime})]=2\pi\delta( \Omega+\Omega^{\prime})\,, \tag{6a}\] \[[\hat{u}_{j}(\Omega),\hat{u}_{j}(\Omega^{\prime})] =[\hat{v}_{j}(\Omega),\hat{v}_{j}(\Omega^{\prime})]=0\,, \tag{6b}\]
for \(j=1,2\). Then, in the time domain, we have
\[\hat{u}_{j}(t) =\int_{-\infty}^{\infty}\frac{d\Omega}{2\pi}\hat{u}_{j}(\Omega)e^{-i\Omega t}\,, \tag{7a}\] \[[\hat{u}_{1}(t),\hat{u}_{2}(t^{\prime})] =i\delta(t-t^{\prime})\,, \tag{7b}\] \[[\hat{u}_{1}(t),\hat{u}_{1}(t^{\prime})] =[\hat{u}_{2}(t),\hat{u}_{2}(t^{\prime})]=0\,, \tag{7c}\]
for \(j=1,2\). We similarly define quadrature operators \(\hat{A}_{1,2}\) and \(\hat{B}_{1,2}\), in the _time domain_, with
\[\hat{A}_{1}(t)=\frac{\hat{A}(t)+\hat{A}^{\dagger}(t)}{\sqrt{2}}\,,\quad\hat{A }_{2}(t)=\frac{\hat{A}(t)-\hat{A}^{\dagger}(t)}{\sqrt{2}i}\,, \tag{8}\]
and similarly for \(\hat{B}_{1,2}\). We also have
\[[\hat{A}_{1}(t),\hat{A}_{1}(t)] =[\hat{A}_{2}(t),\hat{A}_{2}(t)]=0\,, \tag{9}\] \[[\hat{A}_{1}(t),\hat{A}_{2}(t)] =i\,, \tag{10}\]
and the same for \(\hat{B}_{1,2}\). Note here that the commutators are for same-time operators.
We include two classes of "classical" noises [37] in our system: a force noise \(\hat{n}_{F}\) and a sensing noise \(\hat{u}_{X}\), arising from a quantum treatment of the interaction of the system with its environment. Force noise affects the center-of-mass motion of the mechanical oscillator by introducing fluctuations in its momentum. We also introduce a velocity damping of the oscillator, with a damping rate \(\gamma_{m}\). \(\gamma_{m}\) and \(n_{F}\) are associated with the heat bath(s) the mass is coupled to, with the value of \(\gamma_{m}\) and the spectrum of \(n_{F}\) related by the fluctuation-dissipation theorem [38]. Sensing noise affects how the position is measured by the light field. In our model below it arises from fluctuations of the reflecting surface, which introduce noise in the cavity field.
In the Heisenberg picture, the dynamics are given by the Langevin equations of motion. In the Fourier domain they are written as
\[-i\Omega\hat{A}_{1} =-\gamma\hat{A}_{1}+\sqrt{2\gamma}\hat{u}_{1}\,, \tag{11a}\] \[-i\Omega\hat{A}_{2} =-\gamma\hat{A}_{2}+\sqrt{2\gamma}\hat{u}_{2}+\sqrt{2}G(\hat{x}+ \hat{u}_{X})\,,\] (11b) \[-i\Omega\hat{x} =\hat{p}/M\,,\] (11c) \[-i\Omega\hat{p} =-\gamma_{m}\hat{p}-M\omega_{m}^{2}\hat{x}+\sqrt{2}\hbar G\hat{A }_{1}+\hat{n}_{F}\,,\] (11d) \[\hat{v}_{1} =\hat{u}_{1}-\sqrt{2\gamma}\hat{A}_{1}\,,\] (11e) \[\hat{v}_{2} =\hat{u}_{2}-\sqrt{2\gamma}\hat{A}_{2}\,. \tag{11f}\]
We refer to Eqs. (11) as the Heisenberg equations for the rest of the article. It is straightforward to solve them to obtain \((\hat{x},\hat{p},\hat{A}_{1,2},\hat{v}_{1,2})\) in terms of the input fields, \((\hat{u}_{1,2},\hat{u}_{X},\hat{n}_{F})\), referred to as the input-output relations of the system. More specifically, quantum fluctuations in the in-going quadratures \(\hat{u}_{1,2}(\Omega)\) drive the system's quantum noise [39]. From Eqs. (11e) and (11f), a readout of the out-going field quadratures is subject to the noise in \(\hat{u}_{1}\) and \(\hat{u}_{2}\), giving rise to the _shot noise_ (SN) for that readout strategy [39]. On the other hand, from Eq. (11a), we see that \(\hat{u}_{1}\) drives \(\hat{A}_{1}\), which in Eq. (11d) drives the momentum of the test mass, which then shows up in the position of the test mass via Eq. (11c), giving rise to _quantum radiation pressure noise_ (QRPN), also known as backaction noise in the literature. In general, the power spectrum of the SN is inversely proportional to circulating power in the cavity, while that of the QRPN is proportional to circulating power.
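As an illustration of how the input-output relations can be evaluated in practice, the following is a minimal numerical sketch (our own, not the authors' implementation): it solves Eqs. (11) at a single frequency for the transfer matrix mapping the inputs \((\hat{u}_{1},\hat{u}_{2},\hat{u}_{X},\hat{n}_{F})\) to \((\hat{A}_{1},\hat{A}_{2},\hat{x},\hat{p},\hat{v}_{1},\hat{v}_{2})\). The function name, the value of \(\hbar\), and the ordering of inputs and outputs are our own choices.

```python
import numpy as np

hbar = 1.054571817e-34  # J s

def transfer_matrix(Omega, G, gamma, gamma_m, omega_m, M):
    """Solve Eqs. (11) at frequency Omega.  Rows of the returned matrix map
    the inputs (u1, u2, u_X, n_F) to (A1, A2, x, p, v1, v2)."""
    # linear system  L @ (A1, A2, x, p) = R @ (u1, u2, u_X, n_F)
    L = np.array([
        [-1j*Omega + gamma, 0.0, 0.0, 0.0],                   # Eq. (11a)
        [0.0, -1j*Omega + gamma, -np.sqrt(2)*G, 0.0],         # Eq. (11b)
        [0.0, 0.0, -1j*Omega, -1.0/M],                        # Eq. (11c)
        [-np.sqrt(2)*hbar*G, 0.0, M*omega_m**2,
         -1j*Omega + gamma_m],                                # Eq. (11d)
    ], dtype=complex)
    R = np.array([
        [np.sqrt(2*gamma), 0.0, 0.0, 0.0],
        [0.0, np.sqrt(2*gamma), np.sqrt(2)*G, 0.0],
        [0.0, 0.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ], dtype=complex)
    T_sys = np.linalg.solve(L, R)               # (A1, A2, x, p) from inputs
    T_out = np.zeros((2, 4), dtype=complex)     # Eqs. (11e), (11f)
    T_out[0, 0] = 1.0
    T_out[1, 1] = 1.0
    T_out -= np.sqrt(2*gamma) * T_sys[:2, :]
    return np.vstack([T_sys, T_out])
```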
## III Entanglement criteria and partitions
The canonical commutation relations imply that \(\mathbf{V}+\frac{1}{2}\mathbf{K}\) is positive semidefinite, where \(\mathbf{V}\) is the covariance matrix with \(\mathbf{V}_{ij}=\langle\{\hat{X}_{i}-\langle\hat{X}_{i}\rangle,\hat{X}_{j}- \langle\hat{X}_{j}\rangle\}\rangle/2\) and \(\mathbf{K}_{ij}=[\hat{X}_{i},\hat{X}_{j}]\) is the commutator matrix of the quadratures in the system. This relation can be stated as
\[\mathbf{V}+\frac{1}{2}\mathbf{K}\geq 0\,. \tag{12}\]
Here for an \(N\)-partite system containing \(N\) harmonic oscillators, the matrices \(\mathbf{V}\) and \(\mathbf{K}\) are \(2N\)-dimensional.
To test for bipartite entanglement in a multimode system, we use the positivity of the partial transpose (PPT) criterion, which is necessary and sufficient to test for the separability of one of the modes from the rest for Gaussian systems [40; 41; 42]. To use the PPT criterion in this context, one obtains the partial-transposed covariance matrix \(\mathbf{V}_{\text{pt}}\) by reverting the momentum of that one mode (which puts a minus sign on the column and the row that contains the momentum in question) [43]. The PPT criterion for separability is expressed as
\[\mathbf{V}_{\text{pt}}+\frac{1}{2}\mathbf{K}\geq 0\Leftrightarrow\text{ Separability}\,. \tag{13}\]
Figure 1: Figurative representation of the two different partitions that are used while testing for entanglement: partitioning by tracing over (top row) and **not** tracing over (bottom row) the cavity. Note that the system configuration is not changed, i.e. the cavity is still present for both partitions.
The amount of entanglement is quantified by the logarithmic negativity, \(E_{N}\)[44]. For a Gaussian state of N modes, it is defined as
\[E_{N}=\sum_{j=1}^{N}\max\{0,-\log_{2}(\bar{v}_{j})\}\,, \tag{14}\]
where \(\bar{v}_{j}\), \(j=1,\ldots N\) are the symplectic eigenvalues of the partially transposed covariance matrix, \(\mathbf{V}_{\text{pt}}\), which are given by the absolute values of the eigenvalues of \(\mathbf{K}^{-1}\mathbf{V}_{\text{pt}}\). For 1 vs. \(N-1\) mode partitions, only one of the symplectic eigenvalues of \(\mathbf{V}_{\text{pt}}\) can have a magnitude smaller than \(1\)[45], therefore there can be at most one negative eigenvalue of \(\mathbf{V}_{\text{pt}}+\frac{1}{2}\mathbf{K}\). We label the corresponding symplectic eigenvalue as \(\bar{v}_{min}\).
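The following is a minimal numerical sketch (our own helper, not the code used for the paper) of Eqs. (13)-(14): it partially transposes a given covariance matrix by flipping the sign of the chosen momentum rows/columns and sums \(\max\{0,-\log_{2}\bar{v}_{j}\}\) over the symplectic eigenvalues. Since the scale of the symplectic eigenvalues depends on the normalisation convention chosen for \(\mathbf{V}\), the vacuum level is left as a parameter.

```python
import numpy as np

def logarithmic_negativity(V, K, momentum_rows, nu_vac=1.0):
    """E_N from Eqs. (13)-(14).  `momentum_rows` indexes the momentum
    quadrature(s) of the mode being partially transposed; `nu_vac` is the
    symplectic eigenvalue of the vacuum in the chosen normalisation of V."""
    P = np.eye(V.shape[0])
    for i in momentum_rows:
        P[i, i] = -1.0                  # partial transpose = momentum flip
    V_pt = P @ V @ P
    # eigenvalues of K^{-1} V_pt come in +/- pairs, so every symplectic
    # eigenvalue appears twice among the absolute values below
    nu = np.abs(np.linalg.eigvals(np.linalg.solve(K, V_pt))) / nu_vac
    return 0.5 * float(np.sum(np.maximum(0.0, -np.log2(nu))))
```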
Using the PPT criterion, we test for entanglement between the mechanical oscillator and the optical field in two ways: first, we construct the covariance matrix \(\mathbf{V}\) with the mechanical mode and the modes of the light field, essentially tracing out the cavity mode. Here, we perform the partial transpose operation with respect to the mechanical oscillator. Second, we include the cavity mode in the covariance matrix while still taking the partial transpose with respect to the mechanical oscillator, which corresponds to measuring the entanglement between the oscillator and the joint system of the cavity plus external light field. The two ways of partitioning are depicted in Fig. 1. The elements of the covariance matrix \(\mathbf{V}\) for both types of partitions, as well as the discretization of \(\mathbf{V}\) can be found in Appendix D.
## IV Entanglement in presence of non-Markovian noises
Due to the numerical nature of the algorithm, we can tackle any noise spectral density associated with \(\hat{u}_{1}(\Omega)\), \(\hat{u}_{2}(\Omega)\), \(\hat{n}_{F}(\Omega)\) and \(\hat{n}_{X}(\Omega)\) using the PPT criterion defined in Eq. (13) to determine whether entanglement is present for a given partition. Conversely, this problem is analytically solvable only for some simplified noise models to our knowledge, such as assuming all the noise sources to have a white spectrum [23, 6].
To show the usefulness of the method, we investigate entanglement in heavy suspended oscillators with relatively low mechanical resonance frequencies. For such systems, the mechanical resonance frequency, \(\omega_{m}\), is much smaller than the other frequencies of the system, which is referred to as the _free-mass limit_. In this setting, \(\omega_{m}\) essentially does not affect the dynamics. Furthermore, in this limit where \(\Omega\gg\omega_{m},\gamma_{m}\), the tradeoff between shot noise and QRPN gives rise to the SQL [46], given by
\[S_{\text{SQL}}(\Omega)=\frac{2\hbar}{M\Omega^{2}}. \tag{15}\]
In the context of suspended oscillators, \(\hat{n}_{F}(\Omega)\) is the force that gives rise to the suspension thermal noise, whereas \(\hat{n}_{X}(\Omega)\) is an effective displacement that gives rise to coating thermal noise. When thermal noise is due to the internal friction of the suspension or the oscillator, the noise spectra of \(\hat{n}_{F}(\Omega)\) and \(\hat{n}_{X}(\Omega)\) decrease as \(1/\Omega\) above internal resonances, which is referred to as _structural damping_[47, 48, 49]. Evidently, structural damping gives rise to non-Markovian noises, and the position-referred noise spectral densities of \(\hat{n}_{F}(\Omega)\) and \(\hat{n}_{X}(\Omega)\) are given, in the free-mass limit, by
\[S_{F}(\Omega) =\frac{2\hbar}{M}\frac{\Omega_{F}^{3}}{|\Omega|^{5}}\,, \tag{16a}\] \[S_{X}(\Omega) =\frac{2\hbar}{M}\frac{1}{\Omega_{X}|\Omega|}\,, \tag{16b}\]
where \(\Omega_{F}\) and \(\Omega_{X}\) are the frequencies where the respective noise curves cross the SQL, given in Eq. (15). Accordingly, they encode the strength of the noise processes \(n_{F}\) and \(n_{X}\), relative to the SQL level. For \(\hat{n}_{F}(\Omega)\), the position-referred spectrum is related to the noise spectral density, labeled as \(S_{n_{F}}(\Omega)\), by \(S_{F}(\Omega)=S_{n_{F}}(\Omega)/M^{2}\Omega^{4}\). For \(\hat{n}_{X}(\Omega)\), on the other hand, the position-referred spectrum \(S_{X}(\Omega)\) is also the noise spectral density \(S_{n_{X}}(\Omega)\)[50]. The incoming field quadratures \(\hat{u}_{j}\) have uncorrelated white spectra given by Eq. (10), since we assume the incoming field to be in the vacuum state.
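As a small sanity check of this parametrization (our own illustration; the mass and frequencies below are arbitrary), the spectra of Eqs. (15)-(16) can be coded directly, and by construction each classical noise curve crosses the SQL at its crossing frequency.

```python
import numpy as np

hbar = 1.054571817e-34  # J s

def S_SQL(Omega, M):
    return 2.0 * hbar / (M * Omega**2)                       # Eq. (15)

def S_F(Omega, M, Omega_F):
    return 2.0 * hbar / M * Omega_F**3 / np.abs(Omega)**5    # Eq. (16a)

def S_X(Omega, M, Omega_X):
    return 2.0 * hbar / M / (Omega_X * np.abs(Omega))        # Eq. (16b)

# each classical noise curve meets the SQL at its crossing frequency
M, Omega_F, Omega_X = 10.0, 2 * np.pi * 100.0, 2 * np.pi * 500.0
assert np.isclose(S_F(Omega_F, M, Omega_F), S_SQL(Omega_F, M))
assert np.isclose(S_X(Omega_X, M, Omega_X), S_SQL(Omega_X, M))
```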
We list some relevant experiments with low mechanical oscillation frequencies in Table 1. Note that large scale experiments such as aLIGO, KAGRA [52], and VIRGO [53] are subject to seismic noise in addition to thermal noises, which usually does not follow structural damping and is disregarded in calculating \(\Omega_{F}\) for this Table. Seismic noise poses a further challenge to reaching entanglement, as explained in detail in Sec. VI.1.
In the limit of large cavity bandwidth, \(\gamma\gg\Omega\), the cavity can be eliminated adiabatically. Then, the equations of motion in
Figure 2: Logarithmic negativity between the mechanical oscillator and the outgoing light field in the free mass limit as a function of \(\Omega_{X}/\Omega_{F}\) for various \(\Omega_{q}/\Omega_{F}\). We plot the results for Markovian and non-Markovian force and sensing noise with dashed and plain lines respectively. Note that entanglement does not exist for \(\Omega_{X}/\Omega_{F}\lesssim 1\) with Markovian and for \(\Omega_{X}/\Omega_{F}\lesssim 6.3\) with non-Markovian noise sources, for \(\omega_{m}/(2\pi)=1\) Hz, \(\gamma_{m}/(2\pi)=0.01\) Hz, and a cutoff frequency of \(\Omega_{x}/(2\pi)=0.001\) Hz.
(11) are modified as
\[\hat{v}_{1}(\Omega) = \hat{u}_{1}(\Omega)\,, \tag{17a}\] \[\hat{v}_{2}(\Omega) = \hat{u}_{2}(\Omega)+\alpha(\hat{x}(\Omega)+\hat{n}_{X}(\Omega))\,,\] (17b) \[-i\Omega\hat{p}(\Omega) = -\gamma_{m}\hat{p}(\Omega)-M\omega_{m}^{2}\hat{x}(\Omega)+\hbar\alpha\hat{u}_{1}(\Omega)+\hat{n}_{F}(\Omega)\,,\] (17c) \[-i\Omega\hat{x}(\Omega) = \hat{p}(\Omega)/M\,, \tag{17d}\]
where \(\alpha=\Omega_{q}\sqrt{M/\hbar}\) and \(\Omega_{q}=2G\sqrt{\hbar/M\gamma}\) is the characteristic interaction frequency. We fix \(\omega_{m}/(2\pi)=1\) Hz, \(\gamma_{m}/(2\pi)=0.01\) Hz, \(\Omega_{F}/(2\pi)=100\) Hz and vary \(\Omega_{X}\) and \(\Omega_{q}\). Note that the aim of these choices is to ensure that \(\Omega_{F}\gg\omega_{m}\gg\gamma_{m}\), i.e. that the free-mass limit is justified and we are in the large mechanical quality factor regime. Furthermore, \(\Omega_{F}\gg\omega_{m}\) implies the large bath temperature limit where \(k_{B}T\gg\hbar\omega_{m}\), \(T\) being the temperature of the bath. Lastly, \(\Omega_{q}\gg\omega_{m}\) ensures that the measurement of the system performed by light is faster than the dynamics of the system.
Since we work in the free-mass limit, the relevant parameters on which the presence of entanglement depends are \(\Omega_{q}\), \(\Omega_{F}\), and \(\Omega_{X}\). In this limit, scaling all of them by the same constant factor does not have an effect on the resulting dynamics. Therefore, we define the unitless ratios \(\Omega_{q}/\Omega_{F}\) and \(\Omega_{X}/\Omega_{F}\), which describe the system fully, in this limit. Note that the amount of classical noises in the system decreases with increasing \(\Omega_{X}/\Omega_{F}\).
We look for entanglement between the oscillator and the outgoing light field by varying the ratio \(\Omega_{X}/\Omega_{F}\) for various \(\Omega_{q}\) in Fig. 2 and we plot the results with plain lines. We find that entanglement does not exist for \(\Omega_{X}/\Omega_{F}\lesssim 6.3\) for any value of \(\Omega_{q}\). For \(\Omega_{X}/\Omega_{F}\gtrsim 6.3\), the system is entangled for any (finite) \(\Omega_{q}\), and the entanglement increases monotonously with increasing \(\Omega_{X}/\Omega_{F}\). This implies the existence of "universal" entanglement, meaning that whether the system is entangled or not is independent from \(\Omega_{q}\), the interaction frequency (or, in other words, how fast the system is measured by light). When the system is entangled, the amount of entanglement increases when \(\Omega_{q}\) is increased. We also note that the threshold for \(\Omega_{X}/\Omega_{F}\) above which the system is entangled depends on the low-frequency cutoff \(\Omega_{c}\) chosen in our model for the spectra of \(n_{F}\) and \(n_{X}\). Here, our model is in the form \(1/(\Omega+\Omega_{c})\) with \(\Omega_{c}/(2\pi)=0.001\) Hz for both force and the sensing noises, such that the average noise power at DC is \(1/\Omega_{c}=1000/(2\pi)\) s. We saw that the threshold is inversely proportional to the cutoff frequency.
In order to see the significance of non-Markovianity on the results, we repeat the same procedure with a white force and sensing noise. The noise spectra are given by \(S_{n_{F}}(\Omega)=2\hbar M\Omega_{F}^{2}\), \(S_{F}(\Omega)=2\hbar\Omega_{F}^{2}/M\Omega^{4}\), and \(S_{n_{X}}(\Omega)=S_{X}(\Omega)=2\hbar/M\Omega_{X}^{2}\). The results can again be found in Fig. 2, plotted with dashed lines, where the system is not entangled for any \(\Omega_{q}\) when \(\Omega_{X}/\Omega_{F}\lesssim 1\), whereas for \(\Omega_{X}/\Omega_{F}\gtrsim 1\), entanglement exists for all \(\Omega_{q}\) and the amount of entanglement increases with increasing \(\Omega_{q}\). Since we see this behavior for both Markovian and non-Markovian noise, we conjecture that the universality of the entangling-disentangling phase transition is independent of the power spectral densities of the classical noises, and that the power spectral densities only determine the threshold above which we have entanglement for all \(\Omega_{q}\).
## V Noise model of aLIGO
The primary noise sources in aLIGO, other than the quantum noise, are the following [29]: seismic noise and suspension thermal noise are the main constituents of the force noise, and mirror coating thermal noise constitutes the sensing noise. The noise spectrum is dominated by seismic and thermal noise at low frequencies (up to 100 Hz), and quantum noise at high frequencies, cf. Fig. 3. The interferometer noise is stationary and Gaussian to very good approximation in absence of glitches (i.e. transient noise artifacts) [54].
Seismic noise occurs because of the ground motion at the interferometer sites. This motion is \(\sim 10^{-9}\mathrm{m}/\sqrt{\mathrm{Hz}}\) at 10
Figure 3: aLIGO noise budget obtained from pygwinc. Only the dominant classical noise sources are plotted, along with the quantum noise. The total force noise in the system is the sum of the seismic noise and the suspension thermal noise, which is effective at low frequencies. The coating Brownian thermal noise is taken as the main constituent of the sensing noise. As can be seen from the figure, quantum noise dominates over the sensing noise by a large margin at high frequencies.
\begin{table}
\begin{tabular}{c c c c c c} Experiments & \(\omega_{m}/(2\pi)\) [Hz] & \(\Omega_{F}/(2\pi)\) [Hz] & \(\Omega_{X}/(2\pi)\) [Hz] & \(\Omega_{q}/(2\pi)\) [Hz] & \(\Omega_{X}/\Omega_{F}\) \\ \hline LIGO & 1 & 10 & 50 & 60 & 5 \\ KAGRA & 1 & 40 & 300 & 80 & 8 \\ VIRGO & 1 & 10 & 40 & 50 & 4 \\ \end{tabular}
\end{table}
Table 1: Relevant Experiments
Hz [55]. To provide isolation from this motion, the mirrors are suspended from quadruple pendulums [56]. The primary components of thermal noise are suspension thermal noise and coating Brownian noise. Suspension thermal noise occurs due to loss in the fused silica fibres used in the final suspension stage [29], whereas the coating Brownian noise (which is classified as a sensing noise) occurs due to the mechanical dissipation in the coatings [57]. Other types of sensing noise comprise many noise sources that are dominant at high frequencies, such as thermal fluctuations of the mirror's shape, optical losses, or photodetection inefficiency [58]. The noise budget of aLIGO can be found in Figure 3.
Due to the classification above, we represent the sum of the seismic and the suspension thermal noise with \(\hat{n}_{F}(\Omega)\), and the coating thermal noise with \(\hat{n}_{X}(\Omega)\). We use the aLIGO noise budgets as given by the Python Gravitational Wave Interferometer Noise Calculator library (pygwinc) [59] and model them with rational functions of \(\Omega^{2}\). The noise spectra are modelled by
\[S_{F}^{LIGO}(\Omega) =\frac{\tau_{F}\alpha_{F_{1}}}{\left(\frac{\Omega}{\omega_{F}} \alpha_{F_{2}}\right)^{14}+1}\,, \tag{18a}\] \[S_{X}^{LIGO}(\Omega) =\tau_{X_{1}}\left(\frac{\Omega}{\omega_{X}}\right)^{2}\alpha_{X_ {1}}+\tau_{X_{2}}\alpha_{X_{2}}\,, \tag{18b}\]
where \(S_{F}^{LIGO}(\Omega)\) is the spectrum of the force noise and \(S_{X}^{LIGO}(\Omega)\) is the spectrum of the sensing noise. We model \(S_{F}^{LIGO}(\Omega)\) to decay as \(\Omega^{-14}\) instead of \(\Omega^{-16}\) (which is the expected behavior for quadruple suspension systems) since it performs better at approximating the global behavior. The power spectral densities are characterized by the time constants \(\tau_{F}\), \(\tau_{X_{1}}\), \(\tau_{X_{2}}\) and cutoff frequencies \(\omega_{F}\), \(\omega_{X}\). The values of these parameters can be found in Appendix B, Table 2. \(\alpha_{F_{1}}\), \(\alpha_{F_{2}}\), \(\alpha_{X_{1}}\), and \(\alpha_{X_{2}}\) are dimensionless constants that will be used to change the noise curves in Section VI. Their effect on the noise curves can be seen in Fig. 4. If we set all of them to be unity, we get our model of aLIGO noise curves.
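For concreteness, the parametrized model of Eq. (18) can be written as two small functions (a sketch with our own function and argument names; the fitted constants of Appendix B, Table 2 are passed in as arguments).

```python
import numpy as np

def S_F_ligo(Omega, tau_F, omega_F, alpha_F1=1.0, alpha_F2=1.0):
    """Force-noise model of Eq. (18a); alpha_F1 scales the low-frequency
    plateau and alpha_F2 shifts the cutoff frequency."""
    return tau_F * alpha_F1 / ((Omega / omega_F * alpha_F2) ** 14 + 1.0)

def S_X_ligo(Omega, tau_X1, tau_X2, omega_X, alpha_X1=1.0, alpha_X2=1.0):
    """Sensing-noise model of Eq. (18b); alpha_X1 scales the Omega^2 part
    and alpha_X2 the white part."""
    return tau_X1 * (Omega / omega_X) ** 2 * alpha_X1 + tau_X2 * alpha_X2
```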
Even though the parameters of aLIGO are well-known, they need to be recomputed since we are reducing the antisymmetric mode of the interferometer to a single cavity. The relation between the parameters of the antisymmetric mode and the parameters of the reduced cavity have already been computed [32], however, we choose to work numerically and find the parameters of the reduced cavity \(G\), \(\gamma\), \(M\), \(\gamma_{m}\), \(\omega_{m}\), and \(L\) by fitting aLIGO's quantum noise spectrum found in pygwinc. The modeled classical and quantum noise curves can be found in Appendix C, Fig. C.1. The fitted parameters can be found in Appendix B, Table 3.
## VI Entanglement in aLIGO
In aLIGO, the spectrum is dominated by force noise for low frequencies and quantum noise for high frequencies, therefore we expect the sensing noise to not affect the entanglement significantly, which we observed with our numerics. Then, we focus on the effect of the force noise on the entanglement and use the parameters \(\alpha_{F_{1}}\) and \(\alpha_{F_{2}}\) to modify the force noise spectrum, and set \(\alpha_{X_{1}}=\alpha_{X_{2}}=1\) throughout all of the following subsections. We also introduce resonant modes to investigate the system as accurately as possible. Intuitively, we expect the entanglement to be destroyed in presence of high classical noise levels.
### Effect of Force Noise
First, we investigate the effect of the force noise spectrum on entanglement and we calculate the logarithmic negativity \(E_{N}\) as a function of \(\alpha_{F_{1}}\) and \(\alpha_{F_{2}}\), for both of the partitions described in Sec. III. For all pairs \(\alpha_{F_{1}}\) and \(\alpha_{F_{2}}\) here, we find larger logarithmic negativity values when we do not trace over the cavity. The results for the partition where we do not trace
Figure 4: Force (top row) and sensing noise (bottom row) spectra parametrized by \(\alpha_{F_{1}}\), \(\alpha_{F_{2}}\), \(\alpha_{X_{1}}\), and \(\alpha_{X_{2}}\). The effect of \(\alpha_{F_{1}}\) is to rise and lower the nominal noise strength below the cutoff frequency and \(\alpha_{F_{2}}\) shifts the cutoff frequency. Similarly, \(\alpha_{X_{1}}\) shifts the cutoff frequency where the sensing noise starts increasing as \(\Omega^{2}\), whereas \(\alpha_{X_{2}}\) shifts the nominal noise level.
over the cavity can be found in Fig. 5. The amount of entanglement in the system diminishes when the force noise increases: that is towards the bottom-right of the plot where \(\alpha_{F_{1}}\) increases (proportional to the DC noise power) and \(\alpha_{F_{2}}\) decreases (inversely proportional to the noise bandwidth), cf. Sec. V and Fig. 4. Our fit of aLIGO's force noise level is for \(\alpha_{F_{1}}=\alpha_{F_{2}}=1\), hence this plot is for a comparatively low level of force noise. Further to the bottom-right, the numerics become unstable and do not converge due to the wide range of orders of magnitude entering the calculation; see Appendix E for a discussion of the numerical implementation. Therefore, we cannot give a definite answer about optomechanical entanglement in aLIGO with our model yet.
Next, we look for entanglement in the absence of the seismic noise, since it is the dominating contribution to the force noise of the system. Not being subjected to seismic noise is a realistic scenario for space-based gravitational-wave interferometers, such as the Laser Interferometer Space Antenna (LISA) [60]. For aLIGO, in absence of seismic noise, suspension thermal noise dominates the spectrum for low frequencies, which is modelled as
\[S_{ST}(\Omega)=\frac{\tau_{ST}}{\left(\frac{\Omega}{\omega_{ST}}\right)^{8}+1}\,, \tag{19}\]
similar to how the total force noise was modelled in Sec. V. The parameters \(\tau_{ST}\) and \(\omega_{ST}\) can be found in Table 2. Without changing the other parameters in the system, we find that we can achieve negativities of 1.52 and 1.72 for the partitions where we do and do not trace over the cavity respectively. This means that, in the absence of seismic noise, aLIGO has stationary optomechanical entanglement in its current operating regime.
### Effect of Low Frequency Resonances
Both the classical and the quantum noise curves contain many resonances, as can be seen from Fig. 3. The resonances in the seismic and suspension thermal noise arise from the displacement noises of the rigid-body resonant modes of the 4-stage suspension system [56]. These modes can be modeled with a sum of Lorentzians multiplying the force noise spectrum defined in Eq. (18). Then, the new formula defining the spectrum of the force noise is given as
\[S_{F}^{R}(\Omega) =\frac{\tau_{F}\alpha_{F_{1}}}{\left(\frac{\Omega}{\omega_{F}}\alpha_{F_{2}}\right)^{14}+1}\left(1+\sum_{i}\frac{A_{v_{i}}^{2}}{(\Omega-\Omega_{v_{i}})^{2}+(\frac{1}{2}\Gamma_{v_{i}})^{2}}\right)\quad\text{with }\Omega>0\,, \tag{20a}\] \[S_{F}^{R}(\Omega)=S_{F}^{R}(-\Omega)\,, \tag{20b}\]
where the sum is over the resonant modes. The parameters \(\Omega_{v_{i}}\), \(\Gamma_{v_{i}}\), and \(A_{v_{i}}\) are the mode frequencies, full widths at half maximum (FWHM), and the amplitudes of the Lorentzians respectively, and are listed in Appendix B, Table 4 for the modes with the biggest relative amplitudes.
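A minimal sketch of Eq. (20) (our own function and argument names), with the mode frequencies, widths, and amplitudes of Table 4 passed in as arrays:

```python
import numpy as np

def S_F_resonant(Omega, tau_F, omega_F, Omega_v, Gamma_v, A_v,
                 alpha_F1=1.0, alpha_F2=1.0):
    """Force-noise model with Lorentzian suspension resonances, Eq. (20)."""
    Om = np.abs(Omega)                        # enforces S(-Omega) = S(Omega)
    smooth = tau_F * alpha_F1 / ((Om / omega_F * alpha_F2) ** 14 + 1.0)
    lorentz = sum(A_i**2 / ((Om - O_i)**2 + (0.5 * G_i)**2)
                  for O_i, G_i, A_i in zip(Omega_v, Gamma_v, A_v))
    return smooth * (1.0 + lorentz)
```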
We investigate the effect of the resonant modes on the entanglement. In Fig. 6, with low force noise guaranteeing well-behaved numerics and setting \(\alpha_{F_{2}}=1\), we plot \(E_{N}\) for noise curves with and without these modes (orange and gray, resp.) and for both partitions (circles when the cavity is traced out). We see explicitly here that there is more entanglement in the partition where we do not trace over the cavity. For low noise (low \(\alpha_{F_{1}}\)), the negativity remains unchanged, hence the resonant modes do not affect the entanglement significantly in this regime. As the level of noise increases, resonant modes cause the logarithmic negativity to decrease faster than the negativities calculated without the resonant modes for both partitions. The system becomes separable when resonant modes are included for \(\alpha_{F_{1}}\approx 10^{-15}\) and \(\alpha_{F_{1}}\approx 10^{-12}\) when we do and do not trace over the cavity respectively. It seems reasonable to expect that entanglement will not exist as the force noise
Figure 5: The effect of the force noise spectrum on the logarithmic negativity when we do not trace over the cavity. Note that the force noise levels increase toward the bottom-right of the figure and our fit of aLIGO’s operation regime is for \(\alpha_{F_{1}}=\alpha_{F_{2}}=1\).
Figure 6: The effect of resonant modes on the logarithmic negativity, \(E_{N}\), for both partitions. We see that the negativity reduces with increasing \(\alpha_{F_{1}}\) or with the introduction of resonant modes. Note that \(\alpha_{F_{2}}=1\).
becomes stronger. Hence, extrapolating \(\alpha_{F_{1}}\to 1\), this is evidence (but not a rigorous proof) that aLIGO in its current operation regime (but without squeezed input) probably contains no optomechanical entanglement.
## VII Conclusions
In this paper, we developed a framework to determine and quantify bipartite entanglement in an optomechanical system in the non-sideband-resolved regime, in presence of non-Markovian Gaussian noises. The main novelty of our work is to enable the study of non-Markovian noise drives, which are common in devices with low mechanical frequencies, typically associated with large/macroscopic masses. Hence, we focused on macroscopic entanglement and used a free mass with structural damping and coating Brownian noise as an initial example, and then aLIGO as a more detailed case study.
We tested for bipartite entanglement by looking at the separability between the mechanical oscillator and (i) the outgoing light field, (ii) the joint system of the cavity and the outgoing light field. For low levels of classical noise, we saw that the latter partition is more entangled compared to the former. However, for high levels of classical noise, we did not see a significant advantage in using one partition over the other.
In the low mechanical frequency regime, where the free-mass limit approximation holds, we found that the presence of entanglement is independent of the coherent optomechanical interaction strength and depends only on the relative strength of force and sensing noise. This result is similar to that already found in [4] for white noise drive. Furthermore, by parametrizing the noise curves of aLIGO, we were able to find a region of noise curves where entanglement exists, and we showed that there is a trade-off between the overall noise level and the cutoff frequency. Due to the high level of the current classical noise in aLIGO, we were not able to reach a definite conclusion on the existence of entanglement in the system, even though our simulations suggest that a significant amount of entanglement is unlikely. However, we saw that entanglement exists if we assume a system without seismic noise, even when the suspension thermal noise is still present. This is an important result since it shows that classical noises, even at very low frequencies, are able to demolish entanglement.
We also looked at how resonances in the noise curves of the system affect the amount of entanglement, and saw that entanglement is more resistant (i.e. it disappears for higher levels of classical noise) for the partition where we test for the separability between the mechanical oscillator and the joint system of the cavity and the outgoing light field. For future work, we plan to develop better sampling strategies to overcome numerical instabilities.
###### Acknowledgements.
We thank the Chen Quantum Group for helpful discussions. S.D. and Y.C. acknowledge the support by the Simons Foundation (Award Number 568762). K.W., C.G., and M.A. received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement No 951234), and from the Research Network Quantum Aspects of Spacetime (TURIS). K.H. was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through Project-ID 274200144 - SFB 1227 (projects A06) and Project-ID 390837967 - EXC 2123.
## Appendix A Transformation of the Optomechanical Hamiltonian
Strictly speaking, the Hamiltonian in Eq. (1) is written in terms of Schrodinger operators. The symbol \(\Omega\) in Eq. (1) is used to label a spatial mode which has a reduced wavenumber of \(\Omega/c\), and a free oscillation frequency of \(\Omega\). More specifically, \(\omega_{0}+\Omega\) is used to indicate a spatial mode whose wave number is \((\omega_{0}+\Omega)/c\), where \(c\) is the speed of light. We can also write the same Hamiltonian in the spatial domain. Following [33] and setting \(c=1\), we define
\[\hat{c}(z)=\int_{-\infty}^{\infty}\frac{d\Omega}{2\pi}\hat{c}(\omega_{0}+\Omega)e^{i\Omega z}\,, \tag{10}\]
which represents a spatial mode of the light field with wave number \(\Omega\) -- or a temporal mode of the free light field with frequency \(\Omega\), since we assume no dispersion. The Hamiltonian can then be rewritten as
\[\begin{split} H&=\hbar\omega_{m}\hat{B}^{\dagger}\hat{B}+\hbar\Delta\hat{A}^{\dagger}\hat{A}-\hbar G\hat{x}(\hat{A}^{\dagger}+\hat{A})\\ &+i\hbar\sqrt{2\gamma}[\hat{A}^{\dagger}\hat{c}(z=0)-\hat{A}\hat{c}^{\dagger}(z=0)]\\ &\qquad-i\hbar\int_{-\infty}^{\infty}\hat{c}^{\dagger}(z)\partial_{z}\hat{c}(z)\,dz\,.\end{split} \tag{11}\]
The only non-zero commutators for the creation and annihilation operators, as well as the spatial modes of the light field are
\[[\hat{c}(\omega_{0}+\Omega),\hat{c}^{\dagger}(\omega_{0}+\Omega^{ \prime})]=2\pi\delta(\Omega-\Omega^{\prime})\,, \tag{12a}\] \[[\hat{c}(z),\hat{c}^{\dagger}(z^{\prime})]=\delta(z-z^{\prime})\,, \tag{12b}\]
where \(\delta\) is the Dirac delta distribution.
## Appendix B Parameter Tables
This appendix contains the tables with the numerical values of the parameters used in modelling the classical and quantum noise curves, including the resonant modes. The parameters were calculated by minimizing the mean squared error between the actual noise curves of aLIGO taken from pygwinc and the theoretical models, characterized by the parameters of interest, sampled logarithmically in frequency. Table 2 contains the parameters for the classical force and sensing noise in aLIGO defined in Eqs. (18), Table 3 contains the parameters of aLIGO introduced in the optomechanical Hamiltonian
of Sec. II, and lastly, Table 4 contains the parameters of the resonant modes present in aLIGO's noise curves, modeled with Eq. (20).
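The fitting procedure just described can be reproduced with a standard nonlinear least-squares routine. The sketch below is illustrative only: `noise_model` is a placeholder function (not Eqs. (18) or (20) themselves), and the target curve stands in for the aLIGO spectrum exported from pygwinc.

```python
# Illustrative sketch of the noise-curve fit described above; `noise_model` is a
# placeholder (not Eqs. (18) or (20)), and `asd_target` stands in for the aLIGO
# curve exported from pygwinc.
import numpy as np
from scipy.optimize import least_squares

def noise_model(f, level, f_cut):
    # Placeholder: a white level with a low-frequency rise below the cutoff f_cut.
    return level * (1.0 + (f_cut / f) ** 2)

def residuals(params, f, target):
    # Relative residuals, so the tiny absolute scale of the spectrum does not matter.
    return (noise_model(f, *params) - target) / target

# Sample the curve logarithmically in frequency, as described above.
freqs = np.logspace(0, 4, 200)                      # 1 Hz ... 10 kHz
asd_target = 1e-20 * (1.0 + (0.3 / freqs) ** 2)     # stand-in for the pygwinc data

fit = least_squares(residuals, x0=[1e-20, 1.0], args=(freqs, asd_target))
print("fitted parameters:", fit.x)
```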
## Appendix C Noise Model
Our model for the total classical and quantum noise in aLIGO is displayed in Fig. 11. Note that the coating Brownian noise, which is the dominating classical noise source above \(\sim 10\) Hz, is modeled in a piecewise manner: a white noise in the \(10-10^{5}\) Hz band and a noise source increasing as \(\Omega^{2}\) for frequencies larger than \(10^{5}\) Hz. Even though the sensing noise in the \(10-10^{5}\) Hz band decreases as \(\Omega^{0.8}\) in the power spectral density, we choose to model it with a frequency-independent white noise for simplified calculations. Our choice is justified since the quantum noise dominates over the coating Brownian noise in that region.
## Appendix D Structure of the Covariance Matrix
The mirror and the cavity, at \(t=0\), constitute two modes, whereas there is an infinite number of modes in the outgoing light field given by the quadratures \(\hat{v}_{1}(t),\hat{v}_{2}(t),t\in(-\infty,0)\). In continuum coordinates, we can first write down the commutator matrix,
\[\mathbf{K}=\left[\begin{array}{cc}\mathbf{K}^{B}&&\\ &\mathbf{K}^{A}&\\ &&\mathbf{K}^{v}\end{array}\right], \tag{21}\]
with
\[\mathbf{K}^{A}=\mathbf{K}^{B}=\left[\begin{array}{cc}0&i\\ -i&0\end{array}\right],\] \[\mathbf{K}^{v}=\left[\begin{array}{cc}0&i\delta(t-t^{\prime})\\ -i\delta(t-t^{\prime})&0\end{array}\right]\,. \tag{22}\]
Note that \(\mathbf{K}^{v}\) is a \(2\times 2\) block matrix, but each block is infinite dimensional, with rows and columns indexed by \(t\) and \(t^{\prime}\), respectively. The indices \(t\) and \(t^{\prime}\) each run through all negative real numbers, \((-\infty,0]\). In order to represent these matrices in a less ambiguous and more operational way, we shall adopt an index notation, in which \(j,k,l,m\) are discrete and run through \(1\) and \(2\), while \(t\) and \(t^{\prime}\) are continuous and run through negative real numbers. We can then write
\[K^{A}_{jk}=i\varepsilon_{jk}\,,\,\,K^{B}_{lm}=i\varepsilon_{lm}\,,\,\,K^{v}_{lt,mt^{\prime}}=i\varepsilon_{lm}\delta(t-t^{\prime})\,. \tag{23}\]
Note that for \(\mathbf{K}^{v}\) we have a two-dimensional row index \((l,t)\), as well as a two-dimensional column index \((m,t^{\prime})\), to label quadrature and arrival time.
\begin{table}
\begin{tabular}{l c c c} \hline \hline Parameter & Symbol & Value & Units \\ \hline \multirow{7}{*}{Mode Frequency} & \(\Omega_{v_{1}}\) & \(2\pi\cdot 0.441\) & rad/s \\ & \(\Omega_{v_{2}}\) & \(2\pi\cdot 0.995\) & rad/s \\ & \(\Omega_{v_{3}}\) & \(2\pi\cdot 1.98\) & rad/s \\ & \(\Omega_{v_{4}}\) & \(2\pi\cdot 2.37\) & rad/s \\ & \(\Omega_{v_{5}}\) & \(2\pi\cdot 3.38\) & rad/s \\ & \(\Omega_{v_{6}}\) & \(2\pi\cdot 3.81\) & rad/s \\ & \(\Omega_{v_{7}}\) & \(2\pi\cdot 9.73\) & rad/s \\ \hline \multirow{7}{*}{Full Width at Half Maximum} & \(\Gamma_{v_{1}}\) & \(2\pi\cdot 1.92\cdot 10^{-3}\) & rad/s \\ & \(\Gamma_{v_{2}}\) & \(2\pi\cdot 5.63\cdot 10^{-5}\) & rad/s \\ & \(\Gamma_{v_{3}}\) & \(2\pi\cdot 2.11\cdot 10^{-5}\) & rad/s \\ & \(\Gamma_{v_{4}}\) & \(2\pi\cdot 1.44\cdot 10^{-1}\) & rad/s \\ & \(\Gamma_{v_{5}}\) & \(2\pi\cdot 1.45\cdot 10^{-4}\) & rad/s \\ & \(\Gamma_{v_{6}}\) & \(2\pi\cdot 1.65\cdot 10^{-3}\) & rad/s \\ & \(\Gamma_{v_{7}}\) & \(2\pi\cdot 1.03\cdot 10^{-3}\) & rad/s \\ \hline \hline \end{tabular}
\end{table}
Table 4: Resonant Mode Parameters
\begin{table}
\begin{tabular}{l c c c} \hline \hline Parameter & Symbol & Value & Units \\ \hline Force Noise Time Constant & \(\tau_{F}\) & \(1.6\cdot 10^{-20}\) & s \\ Force Noise Cutoff Frequency & \(\omega_{F}\) & \(2\pi\cdot 0.25\) & rad/s \\ Sensing Noise Time Constant 1 & \(\tau_{X_{1}}\) & \(10^{-50}\) & s \\ Sensing Noise Time Constant 2 & \(\tau_{X_{2}}\) & \(10^{-48}\) & s \\ Sensing Noise Cutoff Frequency & \(\omega_{X}\) & \(2\pi\cdot 10^{4}\) & rad/s \\ Suspension Thermal Noise Time Constant & \(\tau_{ST}\) & \(3.1\cdot 10^{-35}\) & s \\ Suspension Thermal Noise Cutoff Frequency & \(\omega_{ST}\) & \(2\pi\cdot 1.9\cdot 10^{3}\) & rad/s \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classical Noise Model Parameters
Figure 11: Modeled noise spectrum for aLIGO.
The covariance matrix \(\mathbf{V}\) of the system contains the steady state correlations between the mechanical mode at \(t=0\), the cavity mode at \(t=0\), and the outgoing light field modes for \(t<0\). It can be written as
\[\mathbf{V}=\left[\begin{array}{cc|c}V^{BB}&V^{BA}&V^{Bv}\\ V^{AB}&V^{AA}&V^{Av}\\ \hline V^{vB}&V^{vA}&V^{vv}\\ \end{array}\right]\equiv\left[\begin{array}{cc}V^{QQ}&V^{Qv}\\ V^{vQ}&V^{vv}\\ \end{array}\right]\,. \tag{10}\]
Here each of \(A\) and \(B\) represent two dimensions, while \(v\) represents an infinite number of dimensions. For computational purposes we can group \(A\) and \(B\) together as \(Q\), or
\[\hat{Q}_{J}=\hat{Q}_{(1,2,3,4)}=(\hat{B}_{1},\hat{B}_{2},\hat{A}_{1},\hat{A}_ {2})\,. \tag{11}\]
Here we shall use upper-case Latin indices to run from 1 to 4, to label quadratures in the mechanical and cavity modes. We can write
\[V^{QQ}_{JK}=\frac{1}{2}\langle\hat{Q}_{J}\hat{Q}_{K}+\hat{Q}_{K}\hat{Q}_{J} \rangle=\frac{1}{2}\int_{-\infty}^{+\infty}\frac{d\Omega}{2\pi}S_{Q_{J}Q_{K}}( \Omega)\,, \tag{12}\]
where the (one-sided) cross spectrum is defined via
\[\frac{1}{2}S_{XY}(\Omega)\delta(\Omega-\Omega^{\prime})=\frac{1}{2}\langle \hat{X}(\Omega)\hat{Y}^{\dagger}(\Omega^{\prime})+\hat{Y}^{\dagger}(\Omega^{ \prime})\hat{X}(\Omega)\rangle\,, \tag{13}\]
and they can be obtained from solutions to the Heisenberg Equations, as well as the spectra
\[S_{u_{i}u_{j}}(\Omega)=\delta_{ij}\,, \tag{14}\]
and the prescriptions we use for the spectra of \(S_{n_{k}}(\Omega)\) and \(S_{n_{H}}(\Omega)\). The uncorrelated white spectra between \(\hat{u}_{1}\) and \(\hat{u}_{2}\) in Eq. (14) result from the ingoing light field being in its vacuum state. We shall discuss the magnitude and frequency dependence of \(S_{n_{k}}(\Omega)\) and \(S_{n_{H}}(\Omega)\) in depth in the next section.
For elements that involve \(v\), we shall still use \(lt\) for row indices and \(mt^{\prime}\) for column indices. We then have
\[V^{Qv}_{J,mt^{\prime}} =\frac{1}{2}\langle\hat{Q}_{J}\hat{v}_{m}(t^{\prime})+\hat{v}_{m} (t^{\prime})\hat{Q}_{J}\rangle\] \[=\frac{1}{2}\int_{-\infty}^{+\infty}\frac{d\Omega}{2\pi}S_{Q_{J}v _{m}}(\Omega)e^{i\Omega t^{\prime}}\,, \tag{15}\]
and
\[V^{vv}_{lt,mt^{\prime}} =\frac{1}{2}\langle\hat{v}_{l}(t)\hat{v}_{m}(t^{\prime})+\hat{v}_ {m}(t^{\prime})\hat{v}_{l}(t)\rangle\] \[=\frac{1}{2}\int_{-\infty}^{+\infty}\frac{d\Omega}{2\pi}S_{v_{l}v_{m }}(\Omega)e^{-i\Omega(t-t^{\prime})}\,. \tag{16}\]
Note that \(V^{Qv}_{J,mt^{\prime}}=V^{vQ}_{mt^{\prime},J}\).
In numerical computations, we will have to convert the continuum of \(t,t^{\prime}\leq 0\) into a finite grid. This means we will sample a finite duration \(T\) with a step size of \(\Delta t\). We shall still use lower-case Latin indices to run from 1 to 2, and upper-case Latin indices to run from 1 to 4, while we use Greek indices, for example, \(\alpha=0,1,2,\ldots,T/\Delta t\equiv\mathcal{N}-1\), to replace \(t\). We shall write
\[K^{v}_{lt,mt^{\prime}}\to K^{v}_{l\alpha,m\alpha^{\prime}}=i\epsilon_{lm}\delta_{\alpha \alpha^{\prime}}\,. \tag{17}\]
Note that a Kronecker delta now replaces the Dirac delta. For the covariance matrix, we replace
\[V^{Qv}_{J,mt^{\prime}}\to V^{Qv}_{J,m\alpha^{\prime}}=\frac{\sqrt{\Delta t}}{2} \langle\hat{Q}_{J}\hat{v}_{m}(t_{\alpha^{\prime}})+\hat{v}_{m}(t_{\alpha^{ \prime}})\hat{Q}_{J}\rangle\,, \tag{18}\]
which are \(4\times 2\mathcal{N}\)- and \(2\mathcal{N}\times 4\)-dimensional matrices (together with \(V^{vQ}_{m\alpha^{\prime},J}\)), and
\[V^{vv}_{lt,mt^{\prime}}\to V^{vv}_{l\alpha,m\alpha^{\prime}}=\frac{\Delta t}{2}\langle \hat{v}_{l}(t_{\alpha})\hat{v}_{m}(t_{\alpha^{\prime}})+\hat{v}_{m}(t_{\alpha^{ \prime}})\hat{v}_{l}(t_{\alpha})\rangle\,, \tag{19}\]
which is a \(2\mathcal{N}\times 2\mathcal{N}\)-dimensional matrix. For the discrete sampling times we have defined
\[t_{\alpha}=-\left(\alpha+\frac{1}{2}\right)\Delta t\,, \tag{20}\]
where the additional \(\frac{1}{2}\Delta t\) provides faster convergence in numerics. The entire covariance matrix is then \((2\mathcal{N}+4)\times(2\mathcal{N}+4)\)-dimensional.
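To make the bookkeeping concrete, the following sketch builds the staggered time grid \(t_{\alpha}\) and the discretized commutator matrix \(\mathbf{K}\) for the block ordering (mechanical mode, cavity mode, light-field bins). It is only a structural illustration with placeholder sizes; the covariance blocks \(V^{QQ}\), \(V^{Qv}\) (scaled by \(\sqrt{\Delta t}\)), and \(V^{vv}\) (scaled by \(\Delta t\)) would be filled from the spectra as described above.

```python
# Structural sketch of the discretization described above (illustrative only).
# Builds the time grid t_alpha = -(alpha + 1/2) * dt and the discretized commutator
# matrix K for the block ordering (mechanical mode B, cavity mode A, light-field bins).
import numpy as np

dt = 0.25e-3                          # time step (s), as used in Appendix E
T = 0.1                               # total sampled duration (s)
N_t = int(round(T / dt))              # number of time bins
t_grid = -(np.arange(N_t) + 0.5) * dt

# 2x2 block i*epsilon = [[0, i], [-i, 0]], as in the commutators above.
eps = 1j * np.array([[0.0, 1.0], [-1.0, 0.0]])

dim = 4 + 2 * N_t                     # 2 quadratures each for B and A, plus 2 per time bin
K = np.zeros((dim, dim), dtype=complex)
K[0:2, 0:2] = eps                     # mechanical mode B
K[2:4, 2:4] = eps                     # cavity mode A
for a in range(N_t):                  # outgoing field: Kronecker delta between time bins
    s = 4 + 2 * a
    K[s:s + 2, s:s + 2] = eps

# The covariance matrix V has the same size; its blocks V^QQ, V^Qv (scaled by
# sqrt(dt)) and V^vv (scaled by dt) would be filled from the spectra as above.
V = np.zeros((dim, dim))
```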
Our particular convention of inserting \(\Delta t\) at various places of the matrix is associated with our convention of discretizing vectors. For a generic variable, in the continuum form, we can always express it as
\[X =\alpha^{j}\hat{A}_{j}+\beta^{j}\hat{B}_{j}+\int_{-\infty}^{0}\xi^ {j}(t)\hat{v}_{j}(t)dt\] \[=\gamma^{J}\hat{Q}_{J}+\int_{-\infty}^{0}\xi^{j}(t)\hat{v}_{ j}(t)dt\,, \tag{21}\]
where we have used upper indices for vector components, and have grouped \(\alpha^{j}\) and \(\beta^{j}\) into \(\gamma^{J}\). The variance of \(X\), which is formally written as \(\mathbf{X}^{\dagger}\mathbf{V}\mathbf{X}\), will then be
\[\mathbf{X}^{\dagger}\mathbf{V}\mathbf{X} =\frac{1}{2}\gamma^{J}\langle\hat{Q}_{J}\hat{Q}_{K}+\hat{Q}_{ K}\hat{Q}_{J}\rangle\gamma^{K}\] \[+\int_{-\infty}^{0}\gamma^{J}\langle\hat{Q}_{J}\hat{v}_{m}(t^ {\prime})+\hat{v}_{m}(t^{\prime})\hat{Q}_{J}\rangle\xi^{m}(t^{\prime})dt^{\prime}\] \[+\frac{1}{2}\iint_{-\infty}^{0}\xi^{l}(t)\langle\hat{v}_{l}(t) \hat{v}_{m}(t^{\prime})+\hat{v}_{m}(t^{\prime})\hat{v}_{l}(t)\rangle\xi^{m}(t^{ \prime})dtdt^{\prime}\,. \tag{22}\]
As we convert the integrals in Eq. (22) into summations, \(\int\) will become \(\Sigma\), while \(dt\) will become \(\Delta t\). We shall take
\[\xi^{m\alpha}=\xi^{m}(t_{\alpha})\sqrt{\Delta t}. \tag{23}\]
Together with Eq. (18)-(19), the fully discretized version of Eq. (22) will then be
\[\mathbf{X}^{\dagger}\mathbf{V}\mathbf{X}=\gamma^{J}V^{QQ}_{JK} \gamma^{K}+\gamma^{J}V^{Qv}_{J,m\alpha}\xi^{m\alpha} +\xi^{l\beta}V^{vQ}_{l\beta,K}\gamma^{K}\] \[+\xi^{l\alpha}V^{vv}_{l\alpha,m\beta}\xi^{m\beta}\,. \tag{24}\]
In this convention, the usual vector norm for the discretized version of a function of time coincides with the \(L^{2}\)-norm of that function. It can also be checked that the discretized matrices in Eqs. (11)-(13), when contracted with vectors in this convention, lead to the appropriate integrals. Note that if a \(\delta(t_{\alpha}-t_{\alpha}^{\prime})\) shows up in Eq. (13), we will take \(\Delta t\,\delta(t_{\alpha}-t_{\alpha}^{\prime})\rightarrow\delta_{\alpha\alpha^{\prime}}\), as in Eq. (11).
Corresponding to the discussion at the end of Sec. III (also shown in Fig. 1), here we consider entanglement between: (i) mass at \(t=0\) and the out-going light field that had emerged during \(t\leq 0\) and (ii) mass and the joint system of the cavity mode as well as light that had emerged during \(t\leq 0\). In case (i), we simply throw away elements involving \(A\) in both \(\mathbf{K}\) and \(\mathbf{V}\), while in case (ii) we consider the full matrices. In both cases, \(\mathbf{V}_{\text{pt}}\) is obtained by adding a minus sign to the column involving \(\hat{B}_{2}\) and the row involving \(\hat{B}_{2}\) - but not the diagonal element at which they intersect.
## Appendix E Numerical Implementation for aLIGO's Noise
In our simulations, we use \(dt=0.25\) ms and \(T=0.1\) s, which corresponds to sampling the light field at \(4000\) Hz and working with the outgoing field emitted from the cavity between \(t=-0.1\) s and \(t=0\) s. We achieve numerical convergence with these parameters. To quantify the amount of entanglement in the system, we use the logarithmic negativity defined in Eq. (14). However, this is possible only for low levels of classical noise. For high levels of classical noise, classical correlations dominate over quantum correlations, which causes the cross-correlation values in the system to cover a wide range of orders of magnitude, mostly due to the 14th power of \(\Omega\) in our force noise model Eq. (18). For aLIGO parameters, the entries of the covariance matrix span about \(20\) orders of magnitude, while we attempt to find a symplectic eigenvalue of order \(1\), which is numerically an extremely challenging problem.
Numerical errors also arise because of time-binning with an insufficient resolution. Thus, we lose precision on the numerically determined covariance matrices. This affects the smallest symplectic eigenvalue \(\tilde{\nu}_{\text{min}}\) to the point that it cannot be used to measure entanglement with the logarithmic negativity. Numerical imprecision can lead to covariance matrices that do not satisfy the Heisenberg uncertainty bound of Eq. (12); they thus do not correspond to a bona fide state, and we call them _nonphysical_. One way to get around this loss of precision is to use the PPT criterion as a yes/no test only, renouncing the magnitude information of \(\tilde{\nu}_{\text{min}}\). The sampling frequency during time binning should be higher than the Nyquist rate of the system (i.e., twice the largest frequency in the system), since the entries of the covariance matrix contain correlations from all frequencies. In our system, the largest frequency is the cavity decay rate \(\gamma=424\) Hz. Therefore, we choose \(dt<1/(2\cdot 424)\approx 1.2\) ms. However, as we decrease \(dt\), we are limited by computational resources, such as the RAM size, or time. The parameter \(dt\) is limited to an optimal range determined by this trade-off. We thus develop the following strategy: we first quantify the amount of numerical error in the system by computing the most negative eigenvalue (if it exists) of \(\mathbf{V}+\frac{1}{2}\mathbf{K}\) before and after the partial transpose operation, denoted as \(\lambda_{\mathcal{B}}\) and \(\lambda_{\mathcal{N}}\), respectively. Then, we decide that entanglement exists if \(\lambda_{\mathcal{B}}>0\) and \(\lambda_{\mathcal{N}}<0\), or if \(\lambda_{\mathcal{B}}<0\) and \(|\lambda_{\mathcal{N}}|\gg|\lambda_{\mathcal{B}}|\). Furthermore, we decide that the system is separable if \(\lambda_{\mathcal{B}}>0\) and \(\lambda_{\mathcal{N}}>0\).
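This decision strategy can be written compactly once \(\mathbf{V}\) and \(\mathbf{K}\) are available as arrays. The sketch below assumes the quadrature \(\hat{B}_{2}\) sits at index 1 of the matrices and uses the quantitative threshold \(|\lambda_{\mathcal{N}}|\geq 100|\lambda_{\mathcal{B}}|\) quoted below; it only illustrates the test and is not the code used to produce the figures.

```python
# Sketch of the lambda_B / lambda_N test described above (illustrative only).
# Assumes V and K are (2N+4)x(2N+4) arrays with the quadrature B_2 at index 1.
import numpy as np

def most_negative_eigenvalue(V, K):
    # V + K/2 is Hermitian, so eigvalsh applies; return its smallest eigenvalue.
    return np.linalg.eigvalsh(V + 0.5 * K).min()

def partial_transpose(V, b2_index=1):
    # Flip the sign of the row and the column of B_2; the diagonal element where
    # they intersect is flipped twice and therefore stays unchanged.
    Vpt = V.copy()
    Vpt[b2_index, :] *= -1
    Vpt[:, b2_index] *= -1
    return Vpt

def entanglement_verdict(V, K, ratio=100.0):
    lam_B = most_negative_eigenvalue(V, K)                       # physicality indicator
    lam_N = most_negative_eigenvalue(partial_transpose(V), K)    # PPT indicator
    if lam_B >= 0 and lam_N < 0:
        return "entangled"
    if lam_B < 0 and lam_N < 0 and abs(lam_N) >= ratio * abs(lam_B):
        return "entangled"
    if lam_B >= 0 and lam_N >= 0:
        return "separable"
    return "undecided (numerical errors dominate)"
```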
Two case studies of this strategy can be found in Fig. E.1, where we fix \(T=0.1\) s, change \(dt\), and examine how \(\lambda_{\mathcal{N}}\) and \(\lambda_{\mathcal{B}}\) change. We decide on entanglement if \(\lambda_{\mathcal{B}}\geq 0\) and \(\lambda_{\mathcal{N}}<0\), or \(\lambda_{\mathcal{B}}<0\), \(\lambda_{\mathcal{N}}<0\), and \(|\lambda_{\mathcal{N}}|\geq 100|\lambda_{\mathcal{B}}|\). Our criterion for convergence is a relative change smaller than \(5\%\) for both \(\lambda_{\mathcal{N}}\) and \(\lambda_{\mathcal{B}}\) as we change \(dt\). In Fig. E.1a, we work with a low level of classical force noise and set \(\alpha_{F_{1}}=10^{-15}\), \(\alpha_{F_{2}}=15\) in Eq. (18). We see that \(\lambda_{\mathcal{N}}\) and \(\lambda_{\mathcal{B}}\) converge, with \(\lambda_{\mathcal{N}}\) changing by \(0.053\%\) and \(0.068\%\), and \(\lambda_{\mathcal{B}}\) changing by \(4.9\%\) and \(0.77\%\), before and after tracing over the cavity, respectively, for \(dt=0.1\) ms. The system is entangled for both partitions since \(\lambda_{\mathcal{B}}\geq 0\) and \(\lambda_{\mathcal{N}}<0\). Furthermore, \(\lambda_{\mathcal{B}}\) becomes positive and converges after \(dt\sim 1\) ms, or a sampling frequency of \(1000\) Hz, consistent with the discussion
above relating physicality to the Nyquist rate of \(\sim 850\) Hz. We also see from Fig. E.1a that \(\lambda_{\mathcal{N}}\) converges for similar values of \(dt\).
In Fig. 2, we plot the light-field section of the eigenvector associated with the (converged) minimal eigenvalue of \(\mathbf{V}_{\mathrm{pt}}+\frac{1}{2}\mathbf{K}\), for the partition where we do not trace over the cavity and with \(\alpha_{F_{1}}=10^{-15}\), \(\alpha_{F_{2}}=15\). It corresponds to a temporal mode of the free electromagnetic field outside the cavity. It is that particular mode, associated with the (sole) negative eigenvalue, that is entangled with the joint system of mechanics plus cavity. The four curves correspond to the real and imaginary parts of \(\hat{v}_{1}(t)\) and \(\hat{v}_{2}(t)\). They have the form of smooth decaying oscillations with the same frequency and decay rate, but differing by a phase. This form of the mode functions was predicted for a white force noise in [6], which gives us confidence in the correctness of our study. Also, exponentially decaying demodulation pulses were used to demonstrate optomechanical entanglement [8; 16] and proposed for a demonstration in the stationary regime [23]. We fit functions of the form \(e^{-\gamma_{*}t}\sin\left(\omega_{*}t+\theta\right)\) to each curve, which results in \(\omega_{*}/(2\pi)\approx 40\) Hz and \(\gamma_{*}/(2\pi)\approx 25\) Hz. In the frequency domain, exponentially decaying harmonic oscillations are Lorentzians, centered at \(\pm\omega_{*}\) and with a bandwidth (FWHM) of \(2\gamma_{*}\). In aLIGO's noise budget (Fig. 3), these Lorentzians are on the low-frequency side of the low-noise band, and their HWHM to the left crosses the quantum noise, where it is not yet dominated by suspension thermal and seismic noises - although we saw in Sec. VI.1 that the latter is probably the main mechanism preventing optomechanical entanglement. We add that Lorentzians are heavy-tailed distributions, which is a possible reason why even lower frequency components matter.
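The damped-sinusoid fit quoted above can be reproduced with a standard curve fit; in the sketch below the data arrays are placeholders standing in for one of the curves of Fig. 2, and an overall amplitude is added as a free parameter.

```python
# Fit of e^{-gamma t} sin(omega t + theta) to one of the mode-function curves;
# the arrays below are placeholders standing in for the data of Fig. 2.
import numpy as np
from scipy.optimize import curve_fit

def damped_sine(t, A, gamma, omega, theta):
    return A * np.exp(-gamma * t) * np.sin(omega * t + theta)

t = np.linspace(0.0, 0.1, 400)                                       # placeholder times (s)
e1_real = damped_sine(t, 1.0, 2 * np.pi * 25, 2 * np.pi * 40, 0.3)   # placeholder curve

p0 = [1.0, 2 * np.pi * 20, 2 * np.pi * 30, 0.0]                      # rough initial guess
(A, gamma, omega, theta), _ = curve_fit(damped_sine, t, e1_real, p0=p0)
print(f"omega/(2 pi) = {omega / (2 * np.pi):.1f} Hz, gamma/(2 pi) = {gamma / (2 * np.pi):.1f} Hz")
```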
In Fig. E.1b, we set \(\alpha_{F_{1}}=10^{-8}\), \(\alpha_{F_{2}}=10\), and \(\alpha_{X_{2}}=10^{3}\), causing the sensing noise to dominate over quantum noise for frequencies in the \(30-2000\) Hz band. We again see that \(\lambda_{\mathcal{N}}\) and \(\lambda_{\mathcal{B}}\) converge, with \(\lambda_{\mathcal{N}}\) changing by \(1.3\%\) and \(1.2\%\), and \(\lambda_{\mathcal{B}}\) changing by \(1.2\%\) and \(1.1\%\), before and after tracing over the cavity, respectively, for \(dt=0.1\) ms. Since \(\lambda_{\mathcal{B}}\) and \(\lambda_{\mathcal{N}}\) are positive for both partitions, we conclude that there is no entanglement in the system for either partition. When we increase the classical noise level, we see that convergence is much harder to achieve. Furthermore, \(\lambda_{\mathcal{B}}\) and \(\lambda_{\mathcal{N}}\) are negative, and \(\lambda_{\mathcal{B}}\sim\lambda_{\mathcal{N}}\) for every value of \(dt\), regardless of whether we do or do not trace over the cavity. Therefore, we cannot conclude that there is entanglement for either of the partitions. These case studies show that we can use \(\lambda_{\mathcal{B}}\) as an indicator of the "non-physicality" of the covariance matrix \(\mathbf{V}\) introduced by finite sampling and high levels of classical noise in the system, and that the negativity of \(\lambda_{\mathcal{N}}\) alone is not enough to decide on entanglement when we consider the numerics.
For our model of aLIGO's non-Markovian noises, Eqs. (18), we study the numerical stability of broad noise regimes, parametrized by the pair \(\alpha_{F_{j}}\), \(j=1,2\). We set \(\alpha_{X_{j}}=1\), \(j=1,2\), since we saw that force noise had a greater impact on numerical stability than sensing noise in our simulations for aLIGO's noise. In Fig. 3, we plot the boundary between noise regimes where the numerics converge and the computed covariance matrices are physical (in gray) and those regimes where the numerics fail (either at converging or at generating physical covariance matrices or both) with the available computing resources (in yellow). As a matter of fact, in all the operation regimes in the gray region where the numerics work, the system is entangled. This means that our model predicts optomechanical entanglement in aLIGO, if its classical noise is in this gray region. Recall that the current status of aLIGO corresponds to \(\alpha_{F_{j}}=\alpha_{X_{j}}=1\), \(j=1,2\), far to the bottom-right in the undecidable yellow region.
If we follow the red dashed line in Fig. 3, we continuously sample the force noise spectrum over the boundary where our numerics converge, and the system is entangled, for the partition where we do not trace over the cavity.
Figure 2: The real and imaginary parts of the eigenvector for the negative eigenvalue of \(\mathbf{V}_{\mathrm{pt}}+\frac{1}{2}\mathbf{K}\) for \(\alpha_{F_{1}}=10^{-15}\), \(\alpha_{F_{2}}=15\), denoted by \(e_{1}\) and \(e_{2}\), which correspond to the first and second halves of the entire eigenvector, respectively. The reason behind this slicing is the block-matrix structure of the light-field sector of the covariance matrix. Furthermore, since the light field modes are continuous in time, \(e_{1}\) and \(e_{2}\) are also functions of time.
Figure 3: Contour plot depicting the effect of the force noise spectrum on our numerical precision for both partitions. The force noise levels increase toward the bottom-right of the figure. The black and the red dashed lines separate the regions where numerics converge from the regions where numerics fail for the partition where we do and do not trace over the cavity respectively. The region where numerics converge for both partitions is marked in gray, whereas numerics fail for both partitions toward the bottom right of the figure, past the red dashed line, in the yellow region. |
2310.20385 | Free energy and metastable states in the square-lattice $J_1$-$J_2$
Ising model | Free energy as a function of polarization is calculated for the
square-lattice $J_1$-$J_2$ Ising model for $J_2 < |J_1|/2$ using the random
local field approximation (RLFA) and Monte Carlo (MC) simulations. Within RLFA,
it reveals a metastable state with zero polarization in the ordered phase. In
addition, the Landau free energy calculated within RLFA indicates a geometric
slab-droplet phase transition at low temperature, which cannot be predicted by
the mean field approximation. In turn, restricted free energy calculations for
finite-size samples, exact and using MC simulations, reveal metastable states
with a wide range of polarization values, but with only two domains. Taking
into account the dependence of the restricted free energy on the
nearest-neighbor correlations allows us to identify several more metastable
states. The calculations also reveal additional slab-droplet transitions at
$J_2 > |J_1|/4$. These findings enrich our knowledge of the $J_1$-$J_2$ Ising
model and the RLFA as a useful theoretical tool to study phase transitions in
spin systems. | V. A. Abalmasov | 2023-10-31T11:49:13Z | http://arxiv.org/abs/2310.20385v2 | # Free energy and metastable states in the square-lattice \(J_{1}\)-\(J_{2}\) Ising model
###### Abstract
We calculate the (restricted) free energy as a function of polarization for the square-lattice \(J_{1}\)-\(J_{2}\) Ising model using the Random local field approximation (RLFA) and Monte Carlo (MC) simulations. Here we consider mainly coupling constants in the range \(0<J_{2}<1/2\) at \(J_{1}=-1\), for which the ground state is ferromagnetic (or Neel antiferromagnetic when \(J_{1}=1\)). Within RLFA, a metastable state with zero polarization is present in the ordered phase, which was recently discussed by V.A. Abalmasov and B.E. Vugmeister, Phys. Rev. E 107, 034124 (2023). In addition, the free energy calculated within RLFA indicates a geometric slab-droplet phase transition at low temperature, which cannot be detected in the mean field approximation. In turn, exact calculations of the free energy for the sample size \(L=6\) and MC simulations for \(L=10\) reveal metastable states with a wide range of polarization values in the ordered phase, the origin of which we discuss. The calculations also reveal additional slab-droplet transitions (at \(J_{2}>0.25\)). These findings enrich our knowledge of the \(J_{1}\)-\(J_{2}\) Ising model and the RLFA as a useful theoretical tool to study phase transitions in spin systems.
## I Introduction
The square-lattice \(J_{1}\)-\(J_{2}\) Ising model is one of the minimal extensions of the standard Ising model, in which the coupling \(J_{1}\) between nearest neighbors is complemented by the coupling \(J_{2}\) between diagonally next-nearest neighbors. The properties of this model are of both fundamental and practical interest, in particular since its quantum Heisenberg counterpart is relevant to the antiferromagnetism in the parent compounds of the cuprate and pnictide families of high-temperature superconductors [1; 2; 3]. Indeed, recent state-of-the-art numerical calculations [4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14] confirm earlier findings [15; 16; 17; 18; 19; 20; 21; 22] that diagonal interactions are important in describing the available experimental data for these compounds. Magnetic frustration due to the \(J_{2}\) coupling leads to the quasi-degeneracy of the ground state [19; 6; 20] and possibly to a quantum spin liquid state at \(J_{2}\) close to \(|J_{1}|/2\)[23; 24].
We recently highlighted the existence of metastable states with arbitrary polarization in the square-lattice \(J_{1}\)-\(J_{2}\) Ising model for \(J_{2}\in(0,|J_{1}|)\) using Monte Carlo (MC) simulations, which was further supported by simple microscopic energy considerations [25]. For the ferromagnetic ground state, i.e. for \(J_{1}<0\) and \(J_{2}<|J_{1}|/2\), these states are rectangles with polarization opposite to the surrounding, briefly considered much earlier in [26; 27]. Significantly, the Random local field approximation (RLFA) [28], also applied in [25], points to a metastable state with zero polarization in the same \(J_{2}\) coupling range, thus reflecting the appearance of microscopic metastable states, which seems impossible for mean field approximations (MFA). Note that the above states differ from the metastable states of the standard Ising model, consisting of straight stripes, into which a system with zero polarization, when applying the single-spin flip MC algorithm and periodic boundary conditions, relaxes after quenching only in about 30% of cases and only in the absence of an external field [29; 30; 31].
Polarization-dependent (called restricted or Landau) free energy \(F(m)\), considered in the framework of Landau's phenomenological theory of phase transitions [32], also provides information on metastable states (including those in an external field) and can be used to calculate the relaxation rate of the system to the ground state via the Landau-Khalatnikov equation [33] (see, e.g., [34] for such calculations in ferroelectrics). It should be noted, however, that for short-range interactions, the restricted free energy obtained within the MFA differs qualitatively from the free energy calculated exactly or using the MC method for finite-size samples [35; 36; 37]. In the former case, below the phase transition temperature, the free energy takes into account only homogeneous states inside the two-phase (spins up and down) coexistence region and, as a consequence, is a double-well shaped function of polarization. In the latter case, all inhomogeneous states contribute to \(F(m)\). Thus, at a temperature close to zero, the barrier between two minima with opposite polarization is determined by the interface energy between two large domains and is proportional to the sample size \(L\). Relative to the total energy, proportional to the number of spins \(N=L^{2}\), it vanishes in the thermodynamic limit [38; 39]. It was shown that, despite the loss of detailed information about microscopic spin configurations, \(F(m)\) can be harnessed to well reproduce the MC polarization dynamics of the Ising model in good agreement with the droplet theory [40]. These ideas were further developed in [41; 42; 43; 44]. The temperature dependence of the free energy barrier in the \(J_{1}\)-\(J_{2}\) Ising model, but only in three dimensions, was analytically estimated in [27] in connection with domain growth and shrinking after low-temperature quenching.
Here we calculate the restricted free energy \(F(m)\) for the square-lattice \(J_{1}\)-\(J_{2}\) Ising model in the framework of RLFA, exactly for a square sample size \(L=6\) and using the MC method for \(L=10\). We pay special attention to the metastable states, which appear in this model and
were studied earlier in [25], and explore how they are reflected in the free energy. The features of the geometric slab-droplet phase transition in the free energy calculated by both methods are also briefly discussed.
## II Model
Thus we consider the Hamiltonian
\[H=J_{1}\sum_{\langle i,j\rangle}s_{i}s_{j}+J_{2}\sum_{\langle\langle i,j\rangle \rangle}s_{i}s_{j}-\sum_{i}h_{i}s_{i}, \tag{1}\]
where the sums are over nearest \(\langle i,j\rangle\) and next-nearest (diagonal) \(\langle\langle i,j\rangle\rangle\) neighbor pairs, as well as over each spin coupled to the external field \(h_{i}\) at its position; the spins take values \(s_{i}=\pm 1\). In what follows, we set \(J_{1}=-1\) and competing coupling constants \(J_{2}<1/2\), which correspond to the ferromagnetic ground state (the case \(J_{2}>1/2\), with a striped antiferromagnetic ground state, is similar in many aspects but has a more complex spin topology and will be considered separately). The model properties remain the same for \(J_{1}=1\) with the replacement of the uniform polarization by the Néel checkerboard one.
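For concreteness, the energy of Eq. (1) for a given spin configuration on a periodic \(L\times L\) lattice can be evaluated as in the following sketch (a minimal illustration with a uniform external field, not the production code used for the results below).

```python
# Minimal sketch of Eq. (1) for a periodic L x L lattice with a uniform field h
# (an illustration only, not the code used for the calculations below).
import numpy as np

def energy(spins, J1=-1.0, J2=0.3, h=0.0):
    """Energy of Eq. (1) for a 2D array of spins +-1 with periodic boundaries."""
    # Nearest-neighbor bonds, each counted once (right and down neighbors).
    e_nn = np.sum(spins * np.roll(spins, -1, axis=0)) + \
           np.sum(spins * np.roll(spins, -1, axis=1))
    # Next-nearest (diagonal) bonds, each counted once (the two downward diagonals).
    e_nnn = np.sum(spins * np.roll(np.roll(spins, -1, axis=0), -1, axis=1)) + \
            np.sum(spins * np.roll(np.roll(spins, -1, axis=0), 1, axis=1))
    return J1 * e_nn + J2 * e_nnn - h * np.sum(spins)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(6, 6))
print(energy(spins))
```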
## III Random Local Field Approximation
RLFA is based on the exact formula for the average spin [28; 45]:
\[\langle s_{i}\rangle=\langle\tanh\beta(h_{i}^{s}+h_{i})\rangle, \tag{2}\]
where \(\beta=1/T\) is the inverse temperature (in energy units, with the Boltzmann constant \(k_{B}=1\)), \(h_{i}^{s}=-\sum_{j}J_{ij}s_{j}\) is the local field acting on the spin \(s_{i}\) due to all spins \(s_{j}\) coupled with it, and the brackets stand for the thermal averaging.
Within RLFA, the fluctuations of each spin are treated as independent, and the averaging in Eq. (2) is carried out with the product of the probability distributions for each spin [28; 46]:
\[P(s_{i})=(1+m_{i}s_{i})/2, \tag{3}\]
where \(m_{i}=\langle s_{i}\rangle=me^{i\mathbf{q}\mathbf{r}_{i}}\) is the thermally averaged polarization at position \(\mathbf{r}_{i}\) with propagation vector \(\mathbf{q}\). The uniform polarization considered here corresponds to \(\mathbf{q}=(0,0)\). The same applies to the spatial dependence of the external field \(h_{i}\).
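The self-consistency condition of Eqs. (2)-(3) for a uniform polarization involves only the four nearest and four next-nearest neighbors of a given spin, so the thermal average can be evaluated by an explicit sum over their \(2^{8}\) configurations, as sketched below (with \(J_{1}=-1\) as above); the simple fixed-point iteration is only one possible way to solve the resulting equation for \(m\).

```python
# Sketch of the RLFA self-consistency for uniform polarization (J1 = -1, as above).
# The average in Eq. (2) is taken over the 2^8 configurations of the 4 nearest and
# 4 diagonal neighbors, weighted by the product of Eq. (3).
import numpy as np
from itertools import product

def rlfa_rhs(m, T, J2, J1=-1.0, h=0.0):
    """Right-hand side of Eq. (2) for uniform polarization m."""
    beta = 1.0 / T
    couplings = [J1] * 4 + [J2] * 4              # 4 nearest + 4 diagonal neighbors
    avg = 0.0
    for neighbors in product([-1, 1], repeat=8):
        prob = np.prod([(1.0 + m * s) / 2.0 for s in neighbors])   # Eq. (3)
        h_loc = -sum(J * s for J, s in zip(couplings, neighbors))   # local field h_i^s
        avg += prob * np.tanh(beta * (h_loc + h))
    return avg

def solve_rlfa(T, J2, m0=0.9, tol=1e-10, max_iter=10000):
    m = m0
    for _ in range(max_iter):
        m_new = rlfa_rhs(m, T, J2)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

print(solve_rlfa(T=0.8, J2=0.3))
```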
Eq. (2) follows from equating to zero the derivative of the restricted free energy \(F(m)\)[40], which corresponds to thermodynamic equilibrium at a fixed value of polarization \(m\)[47]. To obtain the correct dependence of \(F(m)\) on the external field \(h\), we rewrite Eq. (2) in the
Figure 1: (a) The RLFA solution for \(J_{2}=0.3\) (see also Fig. 2 in [25] for different values of \(J_{2}\)). Red circles define temperatures \(T_{0}<T_{1}<T_{2}\). (b) Restricted free energy \(F_{0}(m)\) within RLFA for \(J_{2}=0.3\) as a function of temperature. Red points correspond to the local maximum of \(F_{0}(m)\) at each temperature (a barrier), dark blue points correspond to its global minimum (stable states), and light blue points correspond to its local minimum (metastable states), which are zoomed in on in Fig. 2a. Metastable states with zero polarization appear at temperatures from zero to \(T_{0}\approx 0.6\) and from \(T_{1}\approx 1.1\) to \(T_{c}<T_{2}\) (\(T_{2}\approx 1.26\)), at which a first order phase transition occurs.
form \(\partial F/\partial m=f(m)-h\), integration of which yields \(F(m)-F(0)=\int_{0}^{m}f(m)dm-hm\). Although \(F(0)\) depends on temperature, this is of little interest to us and for convenience we choose \(F(0)=0\) at each temperature and define \(F_{0}(m)=\int_{0}^{m}f(m)dm-hm\).
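One practical way to evaluate \(F_{0}(m)\) numerically is to find, for each fixed \(m\), the field \(f(m)\) at which Eq. (2) is satisfied, and then integrate it over \(m\) with the trapezoidal rule. The sketch below reuses the `rlfa_rhs` helper from the previous snippet; the root-finding bracket and the grid resolution are illustrative choices, not the values used for the figures.

```python
# Sketch of F_0(m): for each fixed m, find the field f(m) at which Eq. (2) holds,
# then integrate over m. Reuses rlfa_rhs from the snippet above.
import numpy as np
from scipy.optimize import brentq

def f_of_m(m, T, J2, bracket=(-20.0, 20.0)):
    """Field f(m) such that Eq. (2) is satisfied at fixed polarization m."""
    return brentq(lambda h: rlfa_rhs(m, T, J2, h=h) - m, *bracket)

def free_energy_curve(T, J2, h=0.0, n=101):
    """F_0(m) = int_0^m f(m') dm' - h*m on a uniform grid of m values."""
    m_grid = np.linspace(0.0, 0.99, n)
    f_vals = np.array([f_of_m(m, T, J2) for m in m_grid])
    segments = 0.5 * (f_vals[1:] + f_vals[:-1]) * np.diff(m_grid)   # trapezoidal rule
    F0 = np.concatenate(([0.0], np.cumsum(segments)))
    return m_grid, F0 - h * m_grid

m_grid, F0 = free_energy_curve(T=0.8, J2=0.3)
```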
The RLFA solution and the restricted free energy \(F_{0}(m)\) calculated in this way for \(J_{2}=0.3\) are shown in Fig. 1. The calculated free energy indeed points to the metastable state with \(m=0\), discussed in [25], which we have zoomed in on in Fig. 2a. With decreasing temperature, a barrier appears at \(T_{0}\approx 0.6\) near \(m=0\), see Fig. 1a. Then its height first increases and its position shifts up to \(m\approx 0.29\), after which, at a temperature slightly less than \(J_{2}\), the height begins to decrease linearly in \(T\) to zero, see Fig. 2b. The maximum barrier height of about \(0.002\) is close to the estimate in [25] based on the value of the coercive field. In Fig. 2b, the barrier height for various values of \(J_{2}\) is shown. At \(J_{2}\approx 0.31\) we have \(T_{0}=T_{1}\), where \(T_{1}\) is the highest temperature at which \(F(m)\) is maximum at \(m=0\), see Fig. 1. For larger \(J_{2}\), these two temperatures are not defined [25], and the unstable RLFA solution \(m=0\) extends from \(0\) to the critical temperature \(T_{c}<T_{2}\), where \(T_{2}\) is the temperature below which the minimum of \(F(m)\) at \(m\neq 0\) appears. It should be noted that within RLFA, the transition turns out to be first order for \(J_{2}\) from about \(0.25\) up to \(1.25\)[25], while it has recently been shown to be second order everywhere, except perhaps for the region \(0.5<g<0.54\), using the tensor network simulation technique [48]. At the same time, a first-order phase transition was also predicted just below \(J_{2}=0.5\) using the same method [49].
At low temperature, the RLFA restricted free energy shows a kink at a polarization value \(m_{c}\approx 0.5\) for \(J_{2}=0.3\), see Fig. 1b. This kink corresponds to a polarization for which the most likely configuration changes from a slab (for \(m<m_{c}\)) to a droplet (for \(m>m_{c}\)) [40]. For \(J_{2}=0\), the RLFA predicted critical polarization \(m_{c}\approx 0.53\) is close to the exact value \(m_{c}=0.5\)[50], see Fig. 3. In general, this effect is called geometric phase transition and is present in finite-size systems when periodic boundary conditions are used in the simulation [50, 51]. For the two-dimensional Ising model, it was thoroughly studied by the Monte Carlo method used to calculate the free energy in [52, 53]. We emphasize that RLFA is able to predict the geometrical phase transition, in contrast to MFA. We have also checked that even the four-spin cluster approximation, formulated as in [54], does not predict this transition, despite the good accuracy of the approximation in describing ferroelectric phase transitions [55, 56, 57, 58]. It is also worth noting that the RLFA free energy becomes flat, signifying the absence of a phase transition, in the limit \(J_{2}\to 0.5\)[25].
Figure 3: Restricted free energy \(F_{0}(m)\) calculated within RLFA (solid lines) for several values of \(J_{2}\) at temperature \(T=0.1\), which shows a kink at magnetization around \(m=0.5\). The dashed lines are the MFA free energy.
Figure 2: (a) Restricted free energy \(F_{0}(m)\) within RLFA for \(J_{2}=0.3\) and polarization limited by \(m\in(0,0.5)\) to show the appearance at low temperature of a barrier at \(m\neq 0\) whose height first increases and then decreases as the temperature approaches zero. (b) The barrier height \(\Delta F_{\rm bar}=F_{\rm bar}-F(0)\) for the metastable state at \(m=0\), which appears in the restricted free energy \(F(m)\) calculated within RLFA, as a function of temperature \(T\). Only \(J_{2}<0.31\) are considered when \(T_{0}<T_{1}\) (for temperatures definition see Fig. 1a), since these two temperatures become undefined at larger \(J_{2}\)[25].
## IV Exact free energy
Now we want to compare the above results with the free energy obtained by definition,
\[F(M)=-T\log\sum_{E}\text{g}(M,E)\exp(-E/T), \tag{4}\]
where the sum is over all possible energies \(E\) and \(\text{g}(M,E)\) is the density of states with total spin \(M=mN\) and energy \(E\) for \(N\) spins.
For small samples, the sum in (4) can be computed exactly. For a square sample of size \(L=6\), yielding the total number of spins \(N=36\), the result for \(J_{2}=0.3\) is shown in Fig. 4. In all calculations we apply periodic boundary conditions. The critical temperature for \(L=6\) is equal to \(T_{c}=1.67\), while for \(L=100\) (which practically corresponds to an infinite sample size) it is \(T_{c}=1.26\)[25].
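For very small samples, the density of states \(\text{g}(M,E)\) in Eq. (4) can be obtained by brute-force enumeration of all \(2^{N}\) spin configurations. The sketch below, reusing the `energy` function from Sec. II, does this for \(L=4\) only, since \(L=6\) already involves \(2^{36}\) configurations and requires a more careful implementation than this illustration.

```python
# Brute-force evaluation of Eq. (4) by enumerating all 2^N spin configurations
# (feasible only for tiny samples, e.g. L = 4); reuses the energy function of Sec. II.
import numpy as np
from collections import defaultdict

def restricted_free_energy_exact(L, T, J2, J1=-1.0):
    g = defaultdict(float)                               # density of states g(M, E)
    for idx in range(2 ** (L * L)):
        bits = (idx >> np.arange(L * L)) & 1
        spins = (2 * bits - 1).reshape(L, L)
        key = (int(spins.sum()), round(float(energy(spins, J1, J2)), 9))
        g[key] += 1.0
    F = {}
    for M in sorted({Mi for (Mi, E) in g}):
        entries = [(E, w) for (Mi, E), w in g.items() if Mi == M]
        E_min = min(E for E, _ in entries)
        Z = sum(w * np.exp(-(E - E_min) / T) for E, w in entries)
        F[M] = E_min - T * np.log(Z)                     # Eq. (4), shifted for stability
    return F

F = restricted_free_energy_exact(L=4, T=0.5, J2=0.3)
```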
Configurations that contribute to the free energy \(F(M)\) at zero temperature for different values of \(M\) are shown in Fig. 5. At zero temperature, at \(m\lesssim 0.5\), the free energy is flat for \(0<J_{2}<0.5\), Fig. 6a. The pits, also mentioned in [40], are due to spin configurations with a completely flat interface between two slabs with opposite spin directions, see configurations with \(M=0\) (\(M_{0}\)) and \(M=12\) (\(M_{12}\)) in Fig. 5. When a spin flips on this interface (configurations \(M_{2}\) and \(M_{14}\) in Fig. 5), the energy increases by \(4|J_{1}|\). When the last spin in the row flips, the energy decreases by this value (see transition \(M_{10}\to M_{12}\) in Fig. 5 and Fig. 6a). The distance between two neighboring pits of \(F(M)\) at \(m\lesssim 0.5\) is equal to \(2L\), since any spin flip changes the total moment by 2.
At \(m\gtrsim 0.5\), the configurations that contribute to the free energy \(F(M)\) at zero temperature correspond mainly to the droplet (starting from configuration \(M_{18}\) in Fig. 5), but depend on \(J_{2}\), as indicated in the bottom row of Fig. 5. The latter also affects the dependence of the energy barrier height, \(\Delta F_{M}=F_{M}-F_{M-2}\), on \(J_{2}\) (Fig. 6a). As shown in Fig. 7a, for \(M=22\) at \(J_{2}<0.33\) and for \(M=26\) at \(J_{2}<0.25\), the barrier height is \(4J_{2}\) and is determined by the spin flip at the corner of the droplet. Note that the exact values of \(J_{2}\) are equal to \(1/3\) and \(1/4\) and follow from the energy ratio of the different configurations \(M_{22}\), \(M_{24}\), and \(M_{26}\) in Fig. 5. At larger values of \(J_{2}\) for both values of \(M\), there is a reverse transition to the slab phase and then back again, as can be seen in the bottom row of Fig. 5. When a spin flips on a side of the droplet whose length \(D>1\), the energy does not change until the last spin on the side flips; then the energy changes by \(\Delta E=4(J_{1}+J_{2})<0\) (see transitions \(M_{22}\to M_{24}\) at \(J_{2}<0.25\) and \(M_{26}\to M_{28}\) at \(J_{2}<0.33\) in Fig. 5 and Fig. 6a). For \(D=1\), \(\Delta E=4(J_{1}+2J_{2})\), which is valid for transitions \(M_{30}\to M_{32}\) and \(M_{32}\to M_{34}\) in Fig. 6a. At \(J_{2}\leq 0\), we see only a decrease in the free energy with \(M\) for the droplet phase in Fig. 6a.
At higher temperatures, other higher energy configurations in addition to those shown in Fig. 5 contribute to the partition function in Eq. (4) for each value of \(M\). This affects the dependence of the above discussed energy barriers on temperature, which for \(J_{2}=0.3\) is shown in Fig. 8a. The metastable state barrier at \(M=22\) disappears at \(T\approx 0.65\), which is close to the corresponding temperature \(T_{0}\approx 0.6\) from the RLFA solution (see Fig. 1). Note that at this value of \(J_{2}\) the barrier at \(M=26\) is determined by the slab-droplet transition and not by the metastable state (see Fig. 5 and Fig. 7a).
Figure 4: Restricted free energy per spin, \(F_{0}/N\), calculated exactly by Eq. (4) at \(J_{2}=0.3\) for sample size \(L=6\) as a function of total spin \(M\) and temperature \(T\). The free energy is defined only for integer even values of \(M\) and linearly interpolated between them. For ease of comparison with the RLFA results in Fig. 1, we set here \(F=0\) at \(M=0\) at each temperature. Dark blue points correspond to the global minimum of \(F_{0}/N\) at each temperature.
Figure 5: Configurations that contribute to the restricted free energy \(F(M)\) at zero temperature, i.e. have minimal energies, for all possible values of the total spin \(M\) at \(L=6\). The configurations that change for \(J_{2}>0.25\) and \(J_{2}>0.33\) are shown separately in the bottom row (they affect the dependence of the energy barrier \(\Delta F_{M}\) on \(J_{2}\) shown in Fig. 7).
## V Monte Carlo simulations
For larger square samples, with \(L=7\) and \(L=8\), the free energy from Eq. (4) can only be calculated using supercomputers, given the large number of \(2^{N}\) configurations for \(N\) spins. Alternatively, it can be calculated approximately with sufficiently high accuracy using the Monte Carlo method. We use the Wang-Landau algorithm [59; 60; 61], which has proven to be very efficient for this purpose at low temperature. It consists in performing a random walk in polarization and energy space to extract an estimate for the density of states \(\textsl{g}(M,E)\) that gives a flat histogram.
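A bare-bones version of the Wang-Landau random walk in the joint \((M,E)\) space is sketched below, again reusing the `energy` function from Sec. II. It only illustrates the accept/reject rule, the update of \(\ln\text{g}\), and the flat-histogram refinement of the modification factor; the parameters and the treatment of reachable \((M,E)\) bins are greatly simplified compared to a production run.

```python
# Bare-bones Wang-Landau walk in the joint (M, E) space (illustrative sketch only);
# reuses the energy function of Sec. II. Production runs need far more steps and a
# careful treatment of which (M, E) bins are actually reachable.
import numpy as np

def wang_landau(L=6, J2=0.3, n_steps=50000, ln_f_final=1e-6, flat=0.8, seed=1):
    rng = np.random.default_rng(seed)
    spins = rng.choice([-1, 1], size=(L, L))
    E, M = energy(spins, J2=J2), int(spins.sum())
    ln_g = {}
    ln_f = 1.0                                            # ln f, starting from f = e
    while ln_f > ln_f_final:
        hist, flat_enough = {}, False
        while not flat_enough:
            for _ in range(n_steps):
                i, j = rng.integers(L, size=2)
                spins[i, j] *= -1                         # trial single-spin flip
                E_new, M_new = energy(spins, J2=J2), int(spins.sum())
                key_old, key_new = (M, round(E, 9)), (M_new, round(E_new, 9))
                # Accept with probability min(1, g(old) / g(new)).
                if np.log(rng.random()) < ln_g.get(key_old, 0.0) - ln_g.get(key_new, 0.0):
                    E, M = E_new, M_new
                else:
                    spins[i, j] *= -1                     # reject: undo the flip
                key = (M, round(E, 9))
                ln_g[key] = ln_g.get(key, 0.0) + ln_f
                hist[key] = hist.get(key, 0) + 1
            counts = np.array(list(hist.values()))
            flat_enough = counts.min() > flat * counts.mean()
        ln_f *= 0.5                                       # refine the modification factor
    return ln_g
```

From the resulting estimate of \(\ln\text{g}(M,E)\), the restricted free energy follows from the same Boltzmann sum as in Eq. (4).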
Using the Wang-Landau algorithm, we reproduce the exact results for \(L=6\) with high accuracy and obtain similar results for \(L=10\), see Fig. 6b and Fig. 9, where the free energy barriers at \(m>m_{c}\) are clearly visible at low temperature and \(J_{2}=0.3\). For \(J_{2}=0\), the calculated free energy at zero temperature, Fig. 6b, agrees with [40]. The barrier heights for metastable states at \(m>m_{c}\) correspond to the spin flip at the corner of the droplet and are equal to \(4J_{2}\) for \(J_{2}<0.25\) (Fig. 7b). For \(J_{2}>0.25\), this dependence changes due to additional slab-droplet and vice versa transitions, as in the case of \(L=6\). The dependence of these barriers on temperature is shown in Fig. 8b. It is almost linear, and the barriers disappear in the temperature range from approximately \(0.6\) to \(0.7\), which is close to the RLFA-predicted \(T_{0}\) in Fig. 1. We verified that the linear dependence of the barrier heights on
Figure 6: Restricted free energy \(F\) as a function of total spin \(M=mL^{2}\) at zero temperature \(T=0\), calculated according to Eq. (4) for several values of \(J_{2}\) (listed in the legend, valid for both plots), (a) exactly for the sample size \(L=6\), (b) using Monte Carlo method for \(L=10\). The solid lines provide guides to the eye.
Figure 7: Restricted free energy barrier height, \(\Delta F_{M}=F_{M}-F_{M-2}\), as a function of \(J_{2}\) for several values of the total spin \(M\) at zero temperature. (a) The sample size is \(L=6\). (b) The sample size is \(L=10\). Lines for different values of \(M\) overlap. The lines corresponding to \(M=62,78\) and overlapping \(\{66,86\}\) are marked separately.
temperature and their disappearance at a temperature close to \(T_{0}\), obtained within RLFA, are also valid for other values of \(J_{2}\). Note that the linear dependence on \(T\) at near-zero temperature follows directly from the definition of the free energy \(F=U-TS\), where \(U\) is the energy and \(S\) is the entropy.
## VI Discussion
The primary goal of this work was to confirm, by calculating the restricted free energy, the presence of metastable states in the \(J_{1}\)-\(J_{2}\) Ising model, recently found in the RLFA solution and MC simulations of low-temperature quenching [25]. The restricted free energy \(F(m)\) as a function of polarization calculated within RLFA indeed shows local minima at zero polarization at low temperature for \(J_{2}>0\), see Fig. 1 and Fig. 2, thus indicating a metastable state.
At the same time, the exact calculations of \(F(m)\) for a small sample size \(L=6\) (Fig. 4 and Fig. 6a) and MC simulations for \(L=10\) (Fig. 6b and Fig. 9) indicate local minima corresponding to metastable states with various values of polarization. Some of them, at \(m\lesssim 0.5\), are due to long stripes with an activation energy of \(4|J_{1}|\) for a spin flip on a flat domain boundary (Fig. 5). In the standard Ising model, the system can become stuck in these states, with a final polarization following a Gaussian distribution, after zero-temperature quenching from an initially random configuration with zero polarization [30].
Metastable states at \(m\gtrsim 0.5\) are caused by droplet-shaped domains with an activation energy of \(4J_{2}\) for a spin flip at their corner, at least at \(J_{2}<0.25\) for both sample sizes \(L=6\) and \(L=10\) (Fig. 7). It is interesting whether this value of \(J_{2}\) holds true for larger sample sizes. At \(J_{2}>0.25\), the dependence of the barrier height on \(J_{2}\) for some metastable states changes. Our exact energy calculations for a sample size of \(L=6\) show that for \(J_{2}>1/4\) and then for \(J_{2}>1/3\) the sequence of minimal energy configurations for increasing total spin \(M\) changes (see Fig. 5), and a return to the slab phase occurs at certain values of \(M\), which may be important for some applications. Indeed, the importance of the geometrical slab-droplet transition for various physical situations, including the dewetting transition between hydrophobic surfaces, was highlighted in [51].
Although the zero polarization of the metastable state and the low barrier height, proportional to temperature near zero (see Fig. 2), are not exactly what follows from the MC calculations, where the barrier heights are much higher and decrease with temperature (see Fig. 8), even a rough indication of the metastable state by RLFA is very valuable.
Figure 8: Restricted free energy barrier height, \(\Delta F_{M}=F_{M}-F_{M-2}\), as a function of temperature for several values of the total spin \(M\) and \(J_{2}=0.3\). (a) The sample size is \(L=6\). (b) \(L=10\). The three upper curves correspond to \(M=2\), \(22\) and \(42\), and in the middle is \(M=62\).
Figure 9: Restricted free energy \(F\) as a function of total spin \(M\) for \(J_{2}=0.3\) calculated by the MC method at several temperatures below and above the phase transition. The sample size is \(L=10\).
Another valuable RLFA prediction that turns out to be quite accurate is the geometric slab-droplet phase transition at low temperature (Fig. 1b and Fig. 3). The reason why RLFA is so effective in this situation, in our opinion, is that by definition it takes into account the local field due to all possible configurations of spins interacting with the central spin, not just the mean field. The probability of these configurations, in turn, is determined by the mean spin.
The distance between striped metastable states along the \(M\) axis (at \(m\lesssim 0.5\)) is equal to \(2L\) and, as a result, their number is proportional to \(L\). For droplet metastable states (at \(m\gtrsim 0.5\)), the distance is determined by the droplet size, which becomes smaller as \(M\) increases. Thus, one can expect that the number of droplet metastable states scales as \(L^{2}\) and they are distributed along the \(M\) axis much more densely. This is confirmed by our MC simulations in Figs. 6, 7, and 9. This could be the reason why, during low-temperature quenching from high temperature in an external field, the system is not captured into striped metastable states in the standard Ising model [30], but gets stuck in droplet metastable states with finite polarization when \(J_{2}\in(0,1/2)\)[25]. Note that the polarization of such a final state turns out to be about \(0.5\) at zero temperature [25], which is close to the slab-droplet phase transition, from where droplet metastable states begin to appear as \(M\) increases (Fig. 9). However, the polarization after quenching (from high temperature) sharply decreases with final temperature [25] and does not correspond to the critical polarization \(m_{c}\) of the slab-droplet transition in the standard Ising model, the temperature dependence of which resembles the equilibrium polarization [50].
It should be noted here that the metastable states into which the system relaxes after low-temperature quenching in [25] are not exactly the same as those shown in Fig. 5, which determine the free energy at zero temperature. The energy of the former is much higher, and the system is more likely to get stuck in them, relaxing in energy during quenching on the way to thermal equilibrium. Metastable states like those in Fig. 5 can in principle be reached after quenching at non-zero temperature after a sufficiently long relaxation time and domain coarsening, with a higher probability for those closer to the equilibrium polarization. However, any of these states will be reached inevitably if the total spin is conserved during quenching, as in the Kawasaki [62] two-spin exchange algorithm, which is relevant for models describing transport phenomena caused by spatial inhomogeneity, such as diffusion, heat conduction, etc.
Finally, we will mention some recent advances in the experimental observation of metastable states using sub-picosecond optical pulses, which we believe can be applied to reveal the metastable states discussed here and in [25]. For instance, in the quasi-two-dimensional antiferromagnet Sr\({}_{2}\)IrO\({}_{4}\), a long-range magnetic correlation along one direction was converted into a glassy condition by a single 100-fs laser pulse [63]. Atomic-scale PbTiO\({}_{3}\)/SrTiO\({}_{3}\) superlattices, counterpoising strain and polarization states in alternate layers, were converted by sub-picosecond optical pulses to a supercrystal phase in [64]. In a layered dichalcogenide crystal of \(1T\)-TaS\({}_{2}\), a hidden low-resistance electronic state with polaron reordering was reached as a result of a quench caused by a single 35-femtosecond laser pulse [65]. See also the references to relevant superconducting and magnetic materials with next-nearest-neighbor interactions mentioned in the Introduction and in [25].
## VII Conclusion
In conclusion, we calculated the restricted free energy \(F(m)\) as a function of polarization \(m\) for the square-lattice \(J_{1}\)-\(J_{2}\) Ising model (at \(J_{2}<|J_{1}|/2\)) within RLFA and using the MC method. Both approaches indicate the appearance of metastable states at low temperature, corresponding to local minima of \(F(m)\) along the \(m\) coordinate. The zero-polarization metastable state predicted by RLFA reflects the true metastable states with various polarization values at \(m\gtrsim 0.5\) that appear in our exact calculation and MC simulations of the restricted free energy. We show that RLFA predicts the slab-droplet phase transition for the \(J_{1}\)-\(J_{2}\) Ising model as a kink in the polarization dependence of \(F(m)\). Exact calculations of \(F(m)\) for a sample size of \(L=6\) also reveal additional slab-droplet transitions at \(J_{2}>0.25\). We believe the easy-to-use RLFA can help reveal the presence of metastable states and geometrical phase transitions in more complex systems, e.g., with site or bond disorder and spin tunneling in a transverse field.
###### Acknowledgements.
I thank B.E. Vugmeister for many useful discussions. The Siberian Branch of the Russian Academy of Sciences (SB RAS) Siberian Supercomputer Center is gratefully acknowledged for providing supercomputer facilities.
|
2302.14231 | CHGNet: Pretrained universal neural network potential for
charge-informed atomistic modeling | The simulation of large-scale systems with complex electron interactions
remains one of the greatest challenges for the atomistic modeling of materials.
Although classical force fields often fail to describe the coupling between
electronic states and ionic rearrangements, the more accurate
\textit{ab-initio} molecular dynamics suffers from computational complexity
that prevents long-time and large-scale simulations, which are essential to
study many technologically relevant phenomena, such as reactions, ion
migrations, phase transformations, and degradation.
In this work, we present the Crystal Hamiltonian Graph neural Network
(CHGNet) as a novel machine-learning interatomic potential (MLIP), using a
graph-neural-network-based force field to model a universal potential energy
surface. CHGNet is pretrained on the energies, forces, stresses, and magnetic
moments from the Materials Project Trajectory Dataset, which consists of over
10 years of density functional theory static and relaxation trajectories of
$\sim 1.5$ million inorganic structures. The explicit inclusion of magnetic
moments enables CHGNet to learn and accurately represent the orbital occupancy
of electrons, enhancing its capability to describe both atomic and electronic
degrees of freedom. We demonstrate several applications of CHGNet in
solid-state materials, including charge-informed molecular dynamics in
Li$_x$MnO$_2$, the finite temperature phase diagram for Li$_x$FePO$_4$ and Li
diffusion in garnet conductors. We critically analyze the significance of
including charge information for capturing appropriate chemistry, and we
provide new insights into ionic systems with additional electronic degrees of
freedom that can not be observed by previous MLIPs. | Bowen Deng, Peichen Zhong, KyuJung Jun, Janosh Riebesell, Kevin Han, Christopher J. Bartel, Gerbrand Ceder | 2023-02-28T01:30:06Z | http://arxiv.org/abs/2302.14231v2 | # CHGNet: Pretrained universal neural network potential for charge-informed atomistic modeling
###### Abstract
The simulation of large-scale systems with complex electron interactions remains one of the greatest challenges for the atomistic modeling of materials. Although classical force-fields often fail to describe the coupling between electronic states and ionic rearrangements, the more accurate _ab-initio_ molecular dynamics suffers from computational complexity that prevents long-time and large-scale simulations, which are essential to study many technologically relevant phenomena, such as reactions, ion migrations, phase transformations, and degradation.
In this work, we present the Crystal Hamiltonian Graph neural Network (CHGNet) as a novel machine-learning interatomic potential (MLIP), using a graph-neural-network-based force-field to model a universal potential energy surface. CHGNet is pretrained on the energies, forces, stresses, and magnetic moments from the Materials Project Trajectory Dataset, which consists of over 10 years of density functional theory static and relaxation trajectories of \(\sim 1.5\) million inorganic structures. The explicit inclusion of magnetic moments enables CHGNet to learn and accurately represent the orbital occupancy of electrons, enhancing its capability to describe both atomic and electronic degrees of freedom. We demonstrate several applications of CHGNet in solid-state materials, including charge-informed molecular dynamics in Li\({}_{x}\)MnO\({}_{2}\), the finite temperature phase diagram for Li\({}_{x}\)FePO\({}_{4}\) and Li diffusion in garnet conductors. We critically analyze the significance of including charge information for capturing appropriate chemistry, and we provide new insights into ionic systems with additional electronic degrees of freedom that can not be observed by previous MLIPs.
## I Introduction
Large-scale simulations, such as molecular dynamics (MD), are essential tools in the computational exploration of solid-state materials [1]. They enable the study of reactivity, degradation, interfacial reactions, transport in partially disordered structures, and other heterogeneous phenomena relevant for the application of complex materials in technology. Technological relevance of such simulations requires rigorous chemical specificity which originates from the orbital occupancy of atoms. Despite their importance, accurate modeling of electron interactions or their subtle effects in MD simulations remains a major challenge. Classical force-fields treat the charge as an atomic property that is assigned to every atom _a-priori_[2; 3]. Methodology developments in the field of polarizable force-fields such as the electronegativity equalization method (EEM) [4], chemical potential equalization (CPE) [5], and charge equilibration (Qeq) [6] realize charge evolution via the redistribution of atomic partial charge. However, these empirical methods are often not accurate enough to capture complex electron interactions.
_Ab-initio_ molecular dynamics (AIMD) with density functional theory (DFT) can produce high-fidelity results with quantum-mechanical accuracy by explicitly computing the electronic structure within the density functional approximation. The charge-density distribution and corresponding energy can be obtained by solving the Kohn-Sham equation [7]. Long-time and large-scale spin-polarized AIMD simulations, which are critical for studying ion migrations, phase transformations, and chemical reactions, are challenging and extremely computationally intensive [8; 9]. These difficulties underscore the need for more efficient computational methods in the field that can account for charged ions and their orbital occupancy at sufficient time and length scales needed to model important phenomena.
Machine-learning interatomic potentials (MLIPs) such as ænet [10; 11] and DeepMD [12] have provided promising solutions to bridge the gap between expensive electronic structure methods and efficient classical interatomic potentials. Specifically, graph neural network (GNN)-based MLIPs such as DimeNet [13], NequIP [14], and MACE [15] have been shown to achieve state-of-the-art performance by incorporating invariant/equivariant symmetry constraints and long-range interaction through graph convolution [16]. Most recently, GNN-based MLIPs trained on the periodic table (e.g., M3GNet) have demonstrated the possibility of universal interatomic potentials that may not require chemistry-specific training for each new application [17; 18]. However, so far none
of these methods has included the important effects that valences have on chemical bonding.
The importance of an ion's valence derives from the fact that it can engage in very different bonding with its environment depending on its electron count. While traditional MLIPs treat the elemental label as the basic chemical identity, different valence states of transition metal ions behave as differently from each other as different elements. For example, high-spin Mn\({}^{4+}\) is a nonbonding spherical ion that almost always resides in octahedral coordination by oxygen atoms, whereas Mn\({}^{3+}\) is a Jahn-Teller active ion that radically distorts its environment, and Mn\({}^{2+}\) strongly prefers tetrahedral coordination [8]. Such strong chemical interaction variability across different valence states exists for almost all transition metal ions and requires specification of an ion beyond its chemical identity. In addition, the charge state is a degree of freedom that can create configurational entropy and whose dynamic optimization can lead to strongly coupled charge and ion motion, impossible to capture with an MLIP that only carries elemental labels. The relevance of explicit electron physics motivates the development of a robust MLIP model with charge information built in.
Charge has been represented in a variety of ways, from a simple oxidation state label to continuous wave functions derived from quantum mechanics [19]. Challenges in incorporating charge information into MLIPs arise from many factors, such as the ambiguity of representations, complexity of interpretation, and impracticality of taking charge as an input (\(E(\{\mathbf{r}_{i}\},\{q_{i}\})\), as the labels \(\{q_{i}\}\) are generally not _a-priori_ available). In this work, we define charge as an atomic property (_atomic charge_) that can be inferred from the inclusion of magnetic moments (magmoms). We show that by explicitly incorporating the site-specific magmoms as the charge-state constraints into the **C**rystal **H**amiltonian **G**raph neural-**N**etwork (CHGNet), one can both enhance the latent-space regularization and accurately capture electron interactions.
We demonstrate the charge constraints and latent-space regularization of atomic charge in Na\({}_{2}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\)
Figure 1: **CHGNet model architecture** (a) CHGNet workflow: a crystal structure with unknown atomic charge is used as input to predict the energy, force, stress, and magnetic moments, resulting in a charge-decorated structure. (b) Atom graph: The pairwise bond information is drawn between atoms; Bond graph: the pairwise angle information is drawn between bonds. (c) Graphs run through basis expansions and embedding layers to create atom, bond, angle features. The features are updated through several interaction blocks, and the properties are predicted at output layers. (d) Interaction block in which the atom, bond, and angle share and update information. (e) Atom convolution layer where neighboring atom and bond information is calculated through weighted message passing and aggregates to the atoms.
and show the applications of CHGNet in the study of charge-transfer and phase transformation in Li\({}_{x}\)MnO\({}_{2}\), electronic entropy in the Li\({}_{x}\)FePO\({}_{4}\) phase diagram, and Li diffusivity in garnet-type Li-superionic conductors Li\({}_{3+x}\)La\({}_{3}\)Te\({}_{2}\)O\({}_{12}\). By critically comparing and evaluating the importance of incorporating charge information in the construction of CHGNet, we offer new insights into the materials modeling of ionic systems with additional electronic degrees of freedom. Our analysis highlights the essential role that charge information plays in atomistic simulations for solid-state materials.
## II Results
### CHGNet architecture
The foundation of CHGNet is a GNN, as shown in Fig. 1, where the graph convolution layer is used to propagate atomic information via a set of nodes \(\{v_{i}\}\) connected by edges \(\{e_{ij}\}\). The translation, rotation, and permutation invariance are preserved in GNNs [20; 21; 22]. Figure 1(a) shows the workflow of CHGNet which takes a crystal structure with unknown atomic charges as input and outputs the corresponding energy, forces, stress, and magmoms. The charge-decorated structure can be inferred from the on-site magnetic moments and atomic orbital theory. The details are described in the following section.
In CHGNet, a periodic crystal structure is converted into an atom graph \(G^{a}\) by searching for neighboring atoms \(v_{j}\) within \(r_{\rm cut}\) of each atom \(v_{i}\) in the primitive cell. The edges \(e_{ij}\) are drawn with information from the pairwise distance between \(v_{i}\) and \(v_{j}\), as shown in Fig. 1(b). Three-body interaction can be computed by using an auxiliary bond graph \(G^{b}\), which can be similarly constructed by taking the angle \(a_{ijk}\) as the pairwise information between bonds \(e_{ij}\) and \(e_{jk}\) (see Methods). We adopt similar approaches to include the angular/three-body information as other recent GNN MLIPs [13; 17; 23].
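As an illustration of the graph construction described above, the following is a minimal sketch of a radius-based neighbor search. It assumes an orthorhombic cell and the minimum-image convention (valid only when \(r_{\rm cut}\) is smaller than half the shortest lattice vector), whereas the actual implementation handles general periodic cells and self-images.

```python
import numpy as np

def atom_graph_edges(frac_coords, lattice_lengths, r_cut=5.0):
    """Directed edges (i, j) and distances r_ij for all atom pairs within r_cut.
    frac_coords: (n_atoms, 3) fractional coordinates; lattice_lengths: (3,) in Angstrom."""
    cart = frac_coords * lattice_lengths
    edges, dists = [], []
    for i in range(len(cart)):
        for j in range(len(cart)):
            if i == j:  # periodic self-images are ignored in this simplified sketch
                continue
            d = cart[j] - cart[i]
            d -= lattice_lengths * np.round(d / lattice_lengths)  # minimum image
            r = np.linalg.norm(d)
            if r < r_cut:
                edges.append((i, j))
                dists.append(r)
    return np.array(edges), np.array(dists)
```

Bond-graph edges can then be formed by pairing atom-graph edges \(e_{ij}\) and \(e_{jk}\) that share the central atom \(j\).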
Figure 1(c) shows the architecture of CHGNet, which consists of a sequence of basis expansions, embeddings, interaction blocks, and output layers (see Methods for details). Figure 1(d) illustrates the components within an interaction block, where the atomic interaction is simulated with the update of atom, bond, and angle features via the convolution layers. Figure 1(e) presents the convolution layer in the atom graph. Weighted message passing is used to propagate information between atoms, where the message weight \(\tilde{e}_{ij}^{a}\) from node \(j\) to node \(i\) decays to zero at the graph cutoff radius to ensure smoothness of the potential energy surface [13].
Unlike other GNNs, where the updated atom features \(\{v_{i}^{n}\}\) after \(n\) convolution layers are directly used to predict energies, CHGNet regularizes the node-wise features \(\{v_{i}^{n-1}\}\) at the \(n-1\) convolution layer to contain the information about magnetic moments. The regularized features \(\{v_{i}^{n-1}\}\) carry rich information about both local ionic environments and charge distribution. Therefore, the atom features \(\{v_{i}^{n}\}\) used to predict energy, force, and stress are charge-constrained by their charge-state information. As a result, CHGNet can provide charge-state information using only the nuclear positions and atomic identities as input, allowing the study of charge distribution in atomistic modeling.
### Materials Project Trajectory Dataset
The Materials Project database contains a vast collection of DFT calculations on \(\sim 146,000\) inorganic materials composed of 94 elements [24]. To accurately sample the universal potential energy surface, we extracted \(\sim 1.37\) million Materials Project tasks of structure relaxation and static calculations using either the generalized gradient approximation (GGA) or GGA+U exchange-correlation (see Methods). This effort resulted in a comprehensive dataset with 1,580,395 atom configurations, 1,580,395 energies, 7,944,833 magnetic moments, 49,295,660 forces, and 14,223,555 stresses. To ensure the consistency of energies within the MPtrj dataset, we applied the GGA/GGA+U mixing compatibility correction, as described by Wang _et al._[25].
The distribution of elements in the MPtrj dataset is illustrated in Fig. 2. The lower-left triangle (warm color) in an element's box indicates the frequency of occurrence of that element in the dataset, and the upper-right triangle (cold color) represents the number of instances where magnetic information is available for the element. With over 100,000 occurrences for 60 different elements and more than 10,000 instances with magnetic information for 76 different elements, the MPtrj dataset provides comprehensive coverage of all chemistries, excluding only the noble gases and actinoids. The lower boxes in Fig. 2 present the counts and mean absolute deviations of energy, force, stress, and magmoms in the MPtrj dataset.
CHGNet with 400,438 trainable parameters was trained on the MPtrj dataset, partitioned by materials into training, validation, and test sets with an 8:1:1 ratio (see Methods). The mean absolute errors of the energy, force, stress, and magmoms on the MPtrj test set structures are shown in Table 1. We observe a similar test set error with slight improvements in the model trained with magmoms.
### Charge-constraints and charge-inference from magnetic moments
In solid-state materials that contain heterovalent ions, it is crucial to distinguish the atomic charge of the ions, as an element's interaction with its environment can depend strongly on its valence state. It is well known that the valence of heterovalent ions cannot be directly calculated through the DFT charge density because the charge density is almost invariant to the valence state due to the
hybridization shift with neighboring ligand ions [26; 27]. Furthermore, the accurate representation and encoding of the full charge density is another demanding task requiring substantial computational resources [28; 29]. An established approach is to rely on the magnetic moment for a given atom site as an indicator of its atomic charge, which can be derived from the difference in localized up-spin and down-spin electron densities in spin-polarized DFT calculations [8; 30]. Compared with the direct use of charge density, magmoms are found to contain more comprehensive information regarding the electron orbital occupancy and therefore the chemical behavior of ions, as demonstrated in previous studies.
To rationalize our treatment of the atomic charge, we used a NASICON-type cathode material Na\({}_{4}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\) as an illustrative example. The phase stability of the (de-)intercalated material Na\({}_{4-x}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\) is associated with Na/vacancy ordering and is highly correlated to the charge ordering on the vanadium sites [31]. We generated a supercell structure of Na\({}_{4}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\) with 2268 atoms and randomly removed half of the Na ions to generate the structure with composition Na\({}_{2}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\), where half of the V ions are oxidized to a V\({}^{4+}\) state. We used CHGNet
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & **Energy** & **Force** & **Stress** & **Magmom** \\ & (meV/atom) & (meV/Å) & (GPa) & (\(\mu_{B}\)) \\ \hline
**With mag** & 30 & 77 & 0.348 & 0.032 \\ \hline
**No mag** & 33 & 79 & 0.351 & N/A \\ \hline \hline \end{tabular}
\end{table}
Table 1: Mean-absolute-errors (MAEs) of CHGNet on MPtrj test set of 157,955 structures from 14,572 materials. ’With mag’ and ’No mag’ indicate whether the model is trained with magmoms (\(\mu_{B}\) is the Bohr magneton).
Figure 2: **Element distribution of Materials Project Trajectory (MPtrj) Dataset.** The color on the lower-left triangle indicates the total number of atoms/ions of an element. The color on the upper right indicates the number of times the atoms/ions are incorporated with magnetic moment labels in the MPtrj dataset. On the lower part of the plot is the count and mean absolute deviation (MAD) of energy, magmoms, force, and stress
to relax the (de-)intercalated structure and analyze its capability to distinguish the valence states of V atoms with the ionic relaxation (see Methods).
Figure 3(a) shows the distribution of predicted magmoms on all V ions in the unrelaxed (blue) and relaxed (orange) structures. Without any prior knowledge about the V-ion charge distribution other than learning from the spatial coordination of the V nuclei, CHGNet successfully differentiated the V ions into two groups of V\({}^{3+}\) and V\({}^{4+}\). Figure 3(b) shows the two-dimensional principal component analysis (PCA) of all the latent space feature vectors of V ions for both unrelaxed and relaxed structures after three interaction blocks. The PCA analysis demonstrates two well-separated distributions, indicating the latent space feature vectors of V ions are strongly correlated to the different valence states of V. Hence, imposing different magmom labels to the latent space (i.e., forcing the two orange peaks to converge to the red dashed lines in Fig. 3(a)) would act as the _charge constraints_ for the model by regularizing the latent-space features.
Because energy, force, and stress are calculated from the same feature vectors, the inclusion of magmoms can improve the featurization of the heterovalent atoms in different local chemical environments (e.g., V\({}^{3+}\) and V\({}^{4+}\) display very distinct physics and chemistry) and therefore improve the accuracy and expressibility of CHGNet.
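The latent-space analysis of Fig. 3(b) can be reproduced with standard tooling; the sketch below assumes the 64-dimensional V-ion feature vectors have already been extracted from the layer before the magmom projection (the random array is only a placeholder).

```python
import numpy as np
from sklearn.decomposition import PCA

# v_features: (n_V_ions, 64) latent vectors collected before the magmom projection layer
v_features = np.random.randn(216, 64)  # placeholder array for illustration

coords_2d = PCA(n_components=2).fit_transform(v_features)
# Plotting or clustering `coords_2d` should reveal two well-separated groups
# corresponding to the V3+ and V4+ valence states.
```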
### Charge disproportionation in Li\({}_{x}\)MnO\({}_{2}\) phase transformation
The long-time and large-scale simulation of CHGNet enables studies of ionic rearrangements coupled with charge transfer [32; 33], which is crucial for ion mobility and the accurate representation of the interaction between ionic species. As an example, in the LiMnO\({}_{2}\) battery cathode material, transition-metal migration plays a central role in its phase transformations, which cause irreversible capacity loss [34; 35]. The mechanism of Mn migration is strongly coupled with charge transfer, with Mn\({}^{4+}\) being an immobile ion, and Mn\({}^{3+}\) and Mn\({}^{2+}\) generally considered to be more mobile [36; 37; 38]. The dynamics of the coupling of the electronic degrees of freedom with those of the ions has been challenging to study but is crucial to understand the phase transformation from orthorhombic LiMnO\({}_{2}\) (_o_-LMO, shown in Fig. 4(a)) to spinel LiMnO\({}_{2}\) (_s_-LMO), as the time scale and computational cost of such phenomena are far beyond any possible _ab-initio_ methods.
In early quasi-static _ab-initio_ studies, Reed _et al._[32] rationalized the remarkable speed at which the phase transformation proceeds at room temperature using a charge disproportion mechanism: 2Mn\({}^{3+}_{\rm oct}\rightarrow\) Mn\({}^{2+}_{\rm tet}+\) Mn\({}^{4+}_{\rm oct}\), where the subscript indicates location in the tetrahedral or octahedral site of a face-centered cubic oxygen packing, as shown in Fig. 4(a). The hypothesis based on DFT calculations was that Mn\({}^{2+}\) had a lower energy barrier for migration between tetrahedral and octahedral sites and preferred to occupy the tetrahedral site. The ability therefore for Mn to dynamically change its valence would explain its remarkable room temperature mobility. However, Jang _et al._[36] showed in a later magnetic characterization experiment that the electrochemically transformed spinel LiMnO\({}_{2}\) has lower-spin (high-valence) Mn ions on the tetrahedral sites, which suggested the possibility that Mn with higher valence can be stable on tetrahedral sites during the phase transformation.
To demonstrate the ability of CHGNet to fully describe such a process, we used CHGNet to run a charge-informed MD simulation at 1100 K for 1.5 ns (see Methods). The MD simulation started from a partially delithiated supercell structure with the _o_-LMO structure (Li\({}_{20}\)Mn\({}_{40}\)O\({}_{80}\)), which is characterized by peaks at 15\({}^{\circ}\), 26\({}^{\circ}\), and 40\({}^{\circ}\) in the X-ray diffraction (XRD) pattern (the bottom line in Fig. 4(b)). As the simulation proceeded, a phase transformation from orthorhombic ordering to spinel-like ordering was observed. Figure 4(b) presents the simulated XRD pattern of MD structures at different time intervals from 0 to 1.5 ns, with a clear increase in the characteristic spinel peaks (18\({}^{\circ}\), 35\({}^{\circ}\)) and a decrease in the orthorhombic peak. The simulated results agree well with the experimental in-situ XRD results [34; 36].
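The simulated XRD patterns of Fig. 4(b) can be obtained from coarse-grained MD snapshots with pymatgen's diffraction utilities; a minimal sketch is shown below, where the snapshot file name is hypothetical.

```python
from pymatgen.core import Structure
from pymatgen.analysis.diffraction.xrd import XRDCalculator

snapshot = Structure.from_file("lmo_md_0.9ns.cif")   # hypothetical coarse-grained MD snapshot
pattern = XRDCalculator(wavelength="CuKa").get_pattern(snapshot, two_theta_range=(10, 45))
for two_theta, intensity in zip(pattern.x, pattern.y):
    print(f"{two_theta:6.2f} deg  {intensity:7.2f}")
```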
Figure 4(d) presents the CHGNet-predicted energy of the LMO supercell structure as a function of simulation time, together with the peak strength of 2\(\theta\) = 15\({}^{\circ}\) and 18\({}^{\circ}\). An explicit correlation between the structural transformation and energy landscape is observed. The predicted energy of the spinel phase is approximately 26 meV/oxygen lower than that of the starting _o_-LMO, suggesting that the phase transformation to spinel is indeed thermodynamically favored.
The advantage of CHGNet is shown in its ability to predict charge-coupled physics, as evidenced by the lower plot in Fig. 4(d). A histogram of the magmoms of all
Figure 3: **Magmom and hidden-space regularization in Na\({}_{2}\)V\({}_{2}\)(PO\({}_{4}\))\({}_{3}\)**. (a) Magmom distribution of the 216 V ions in the unrelaxed structure (blue) and CHGNet-relaxed structure (orange). (b) A two-dimensional visualization of the PCA on V-ion embedding vectors before the magmom projection layer indicates the latent space clustering is highly correlated with magmoms and charge information. The PCA reduction is calculated for both unrelaxed and relaxed structures.
the Mn ions in the structure is presented against time. In the early part of the simulation, the magmoms of Mn ions are mostly distributed between 3\(\mu_{B}\) and 4\(\mu_{B}\), which correspond to Mn\({}^{4+}\) and Mn\({}^{3+}\). At approximately 0.8 ns, there is a significant increase in the amount of Mn\({}^{2+}\), which is accompanied by a decrease in the potential energy and changes in the XRD peaks. Following this major transformation point, the Mn\({}^{3+}\) ions undergo charge disproportionation, resulting in the coexistence of Mn\({}^{2+}\), Mn\({}^{3+}\), and Mn\({}^{4+}\) in the transformed spinel-like structure.
One important observation from the long-time charge-informed MD simulation is the correlation between ionic rearrangements and the charge-state evolution. Specifically, we noticed that the time scale of charge disproportionation (\(\sim\) ns for the emergence of Mn\({}^{2+}\)) is far longer than the time scale of ion hops (\(\sim\) ps for the emergence of Mn\({}_{\rm tet}\)), indicating that the migration of Mn to the tetrahedral coordination is less likely related to the emergence of Mn\({}^{2+}\). Instead, our result indicates that the emergence of Mn\({}^{2+}_{\rm tet}\) is correlated to the formation of the long-range spinel-like ordering. Figure 4(c) shows the average magmoms of Mn\({}_{\rm tet}\) and Mn\({}_{\rm oct}\) as a function of time. The result reveals that Mn\({}^{2+}_{\rm tet}\) only forms over a long time period, which cannot be observed using any conventional
Figure 4: **Li\({}_{0.5}\)MnO\({}_{2}\) phase transformation and charge disproportionation** (a) orthorhombic LiMnO\({}_{2}\) (_o_-LMO) unit cell plotted with the tetrahedral site and the octahedral site. (b) Simulated XRD pattern of CHGNet MD structures as the system transforms from the _o_-LMO phase to the _s_-LMO. (c) Average magmoms of tetrahedral and octahedral Mn ions _vs._ time. (d) Top: total potential energy and the relative intensity of _o_-LMO and _s_-LMO characteristic peaks _vs._ time. Bottom: the histogram of magmoms on all Mn ions _vs._ time. The brighter color indicates more Mn ions distributed at the magmom. (e) Predicted magmoms of tetrahedral Mn ions using r\({}^{2}\)SCAN-DFT (black) and CHGNet (blue), where the structures are drawn from MD simulation at 0.4 ns (left) and 1.5 ns (right).
simulation techniques.
To further validate this hypothesis and the accuracy of CHGNet prediction, we used r\({}^{2}\)SCAN-DFT static calculations to get the magmoms of the structures at 0.4 and 1.5 ns, where the results are shown in Fig. 4(e). The r\({}^{2}\)SCAN-DFT magmoms (black) infer the same Mn\({}_{\text{tet}}\) valence states as the CHGNet prediction (blue). The systematically lower magmoms from r\({}^{2}\)SCAN are expected since CHGNet is trained with GGA+U, which over-localizes the electron density in oxides [39]. The r\({}^{2}\)SCAN-DFT shows a 34 meV/oxygen driving force between the structures at 0.4 and 1.5 ns.
### Electronic entropy effect in the phase diagram of Li\({}_{x}\)FePO\({}_{4}\)
The configurational electronic entropy has a significant effect on the temperature-dependent phase stability of mixed-valence oxides, and its equilibrium modeling therefore requires an explicit indication of the atomic charge. However, no current MLIPs can provide such information. We demonstrate that, using CHGNet, one can include the electronic entropy in the thermodynamics of Li\({}_{x}\)FePO\({}_{4}\) and compute its temperature-dependent phase diagram (PD).
Previous research has shown that the formation of a solid solution in Li\({}_{x}\)FePO\({}_{4}\) is mainly driven by electronic entropy rather than by Li\({}^{+}\)/vacancy configurational entropy [40]. We applied CHGNet as an energy calculator to generate two cluster expansions (CEs), which is the typical approach to studying configurational entropy [41]. One of these is charge-decorated (considering Li\({}^{+}\)/vacancy and Fe\({}^{2+}\)/Fe\({}^{3+}\)) and another is non-charge-decorated (only considering Li\({}^{+}\)/vacancy without consideration of the Fe valence). Semi-grand canonical Monte Carlo was used to sample these cluster expansions and construct Li\({}_{x}\)FePO\({}_{4}\) PDs (see Methods). The calculated PD with charge decoration in Fig. 5(a) features a miscibility gap between FePO\({}_{4}\) and LiFePO\({}_{4}\), with a eutectoid-like transition to the solid-solution phase at intermediate Li concentration, qualitatively matching the experiment result [42; 43]. In contrast, the calculated PD without charge decoration in Fig. 5(b) features only a single miscibility gap without any eutectoid transitions, in disagreement with experiments. This comparison highlights the importance of explicit inclusion of the electronic degrees of freedom, as failure to do so can result in incorrect physics. These experiments show how practitioners may benefit from CHGNet with atomic charge inference for equilibrium modeling of configurationally and electronically disordered systems.
### Activated Li diffusion network in Li\({}_{3}\)La\({}_{3}\)Te\({}_{2}\)O\({}_{12}\)
In this section, we showcase the precision of CHGNet for general-purpose MD. Lithium-ion diffusivity in fast Li-ion conductors is known to show a drastic non-linear response to compositional change. For example, stuffing a small amount of excess lithium into stoichiometric compositions can result in orders-of-magnitude improvement
Figure 5: **Li\({}_{x}\)FePO\({}_{4}\) phase diagram from CHGNet**. The phase diagrams in (a) and (b) are calculated with and without electronic entropy on Fe\({}^{2+}\) and Fe\({}^{3+}\). The colored dots represent the stable phases obtained in semi-grand canonical MC. The dashed lines indicate the two-phase equilibria between solid solution phases.
Figure 6: **Li diffusivity in garnet Li\({}_{3}\)La\({}_{3}\)Te\({}_{2}\)O\({}_{12}\)**. The CHGNet simulation accurately reproduces the dramatic increase in Li-ion diffusivity when a small amount of extra Li is stuffed into the garnet structure, qualitatively matching the activated diffusion network theory and agreeing well with the DFT-computed activation energy.
of the ionic conductivity [44]. Xiao _et al._[45] reported that the activation energy of Li diffusion in stoichiometric garnet Li\({}_{3}\)La\({}_{3}\)Te\({}_{2}\)O\({}_{12}\) decreases from more than 1 eV to \(\sim\)160 meV in a slightly stuffed Li\({}_{3+\delta}\) garnet (\(\delta=1/48\)), owing to the activated Li diffusion network of face-sharing tetrahedral and octahedral sites.
We performed a zero-shot test to assess the ability of CHGNet to capture the effect of such a slight compositional change on the diffusivity and its activation energy. Figure 6 shows the Arrhenius plot from CHGNet-based MD simulations and compares it to AIMD results. Our results indicate that not only is the activated diffusion network effect precisely captured, but the activation energies from CHGNet are also in excellent agreement with the DFT results [45]. This effort demonstrates the capability of CHGNet to precisely capture the strong interactions between Li ions in activated local environments and the ability to simulate highly non-linear diffusion behavior. Moreover, CHGNet can dramatically decrease the error on the simulated diffusivity and enable studies of systems with poor diffusivity, such as the unstuffed Li\({}_{3}\) garnet, by extending to nanosecond-scale simulations [46].
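For reference, the diffusivities behind an Arrhenius plot such as Fig. 6 follow from the mean-squared displacement of the Li trajectories; the sketch below uses the Einstein relation and purely illustrative numbers, not the values reported in the figure.

```python
import numpy as np

def diffusion_coefficient(positions, dt_ps, dim=3):
    """Einstein-relation estimate of D from unwrapped Li trajectories.
    positions: (n_frames, n_Li, 3) in Angstrom; dt_ps: time between frames in ps."""
    msd = ((positions - positions[0]) ** 2).sum(axis=2).mean(axis=1)
    t = np.arange(len(msd)) * dt_ps
    slope = np.polyfit(t, msd, 1)[0]            # A^2 / ps
    return slope / (2 * dim)

# Arrhenius fit D = D0 * exp(-Ea / kB T) on illustrative data
kB = 8.617e-5                                   # eV/K
T = np.array([700.0, 800.0, 900.0, 1000.0])
D = np.array([2e-7, 1e-6, 4e-6, 1e-5])          # A^2/ps, illustrative values only
slope, _ = np.polyfit(1.0 / T, np.log(D), 1)
E_a = -slope * kB                               # activation energy in eV
```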
## III Discussion
Large-scale computational modeling has proven essential in providing atomic-level information in materials science, medical science, and molecular biology. Many technologically relevant applications contain heterovalent species, for which a comprehensive understanding of the atomic charge involved in the dynamics of processes is of great interest. The importance of assigning a valence to ions derives from the fundamentally different electronic and bonding behavior ions can exhibit when their electron count changes. _Ab-initio_ calculations based on DFT are useful for these problems, but the \(\sim\mathcal{O}(N^{3})\) scaling intrinsically prohibits its application to large time- and length-scales. Recent development of MLIPs provides new opportunities to increase computational efficiency while maintaining near DFT accuracy. The present work presents an MLIP that combines the need to include the electronic degrees of freedom with computational efficiency.
In this work, we developed CHGNet and demonstrated the effectiveness of incorporating magnetic moments as a proxy for inferring the atomic charge in atomistic simulations, which results in the integration of electronic information and the imposition of additional charge constraints as a regularization of the MLIP. We highlight the capability of CHGNet in distinguishing Fe\({}^{2+}\)/Fe\({}^{3+}\) in the study of Li\({}_{x}\)FePO\({}_{4}\), which is essential for the inclusion of electronic entropy and finite temperature phase stability. In the study of LiMnO\({}_{2}\), we demonstrate CHGNet's ability to gain new insights into the relation between charge disproportionation and phase transformation in a heterovalent transition-metal oxide system from long-time charge-informed MD.
CHGNet builds on recent advances in graph-based MLIPs [13; 17], but is pretrained with electronic degrees of freedom built in, which provides an ideal solution for high-throughput screening and atomistic modeling of a variety of technologically relevant oxides, including high-entropy materials [47; 48]. As CHGNet is already generalized to broad chemistry during pretraining, it can also serve as a data-efficient model for high-precision simulations when augmented with fine-tuning to specific chemistries.
Despite these advances, further improvements can be achieved through several efforts. First, the use of magnetic moments for valence states inference does not strictly ensure global charge neutrality. The formal valence assignment depends on how the atomic charges are partitioned [19]. Second, although magnetic moments are good heuristics for the atomic charge from spin-polarized calculations in ionic systems, it is recognized that the atomic charge inference for non-magnetic ions may be ambiguous and thus requires extra domain knowledge. As a result, the atom-centered magnetic moments cannot accurately reflect their atomic charges. We posit that it is possible to enhance the model by incorporating more advanced and general approaches into charge representations, such as an electron localization function [49], electric polarization [50], and atomic orbital-based partitioning (e.g. Wannier function [51]). These approaches could be used for atom feature engineering in latent space.
In conclusion, CHGNet enables charge-informed atomistic simulations amenable to the study of heterovalent systems using large-scale computational simulations, expanding opportunities to study charge-transfer-coupled phenomena in computational chemistry, physics, biology, and materials science.
## IV Acknowledgments
This work was funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under Contract No. DE-AC0205CH11231 (Materials Project program KC23MP). The work was also supported by the computational resources provided by the Extreme Science and Engineering Discovery Environment (XSEDE), supported by National Science Foundation grant number ACI1053575; the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory; and the Lawrence Computational Cluster resource provided by the IT Division at the Lawrence Berkeley National Laboratory. The authors would also like to thank Jason Munro for helpful discussions.
## V Methods
### Data parsing
The Materials Project Trajectory Dataset (MPtrj) was parsed from the September 2022 Materials Project database version. We collected all the GGA and GGA+U task trajectories under each material-id and followed the criteria below:
1. We removed deprecated tasks and only kept tasks with the same calculation settings as the primary task, from which the material could be searched on the Materials Project website. To verify if the calculation settings were equal, we confirmed the following: (1) The +U setting must be the same as the primary task. (2) The energy of the final frame cannot differ by more than 20 meV/atom from the primary task.
2. Structures without energy and forces or electronic step convergence were removed.
3. Structures with energy more than 1 eV/atom above or more than 10 meV/atom below that of the relaxed structure from Materials Project's ThermoDoc were filtered out to eliminate large energy discrepancies caused by variations in VASP settings.
4. Duplicate structures were removed to maintain a balanced data distribution. This removal was achieved using a pymatgen StructureMatcher together with an energy matcher to distinguish between structures (a minimal filtering sketch follows this list). The screening criteria of the structure and energy matchers became more stringent as more structures under the same material-id were added to the MPtrj dataset.
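A minimal sketch of the duplicate-removal step is given below; the fixed energy tolerance and default StructureMatcher settings are assumptions for illustration (the production criteria were adaptive), and `candidate_frames` stands for the parsed (structure, energy) pairs of one material-id.

```python
from pymatgen.analysis.structure_matcher import StructureMatcher

matcher = StructureMatcher()       # default tolerances; the production criteria were adaptive
ENERGY_TOL = 0.002                 # eV/atom, assumed value for this sketch

def is_duplicate(structure, e_per_atom, kept):
    """True if an equivalent structure with a similar energy has already been kept."""
    return any(
        abs(e_per_atom - e_ref) < ENERGY_TOL and matcher.fit(structure, s_ref)
        for s_ref, e_ref in kept
    )

kept = []
for structure, e_per_atom in candidate_frames:   # frames parsed from one material-id
    if not is_duplicate(structure, e_per_atom, kept):
        kept.append((structure, e_per_atom))
```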
### Model design
In constructing the crystal graph, the default \(r_{\text{cut}}\) is set to 5 Å, which has been shown to be adequate for capturing long-range interactions [17]. The bond graph is constructed with a cutoff of 3 Å for computational efficiency. The bond distances \(r_{ij}\) were expanded to \(\tilde{e}_{ij}\) by a trainable smooth radial Bessel function (SmoothRBF), as proposed in Gasteiger _et al._[13]. The SmoothRBF forces the radial Bessel function and its derivative to approach zero at the graph cutoff radius, thus guaranteeing a smooth potential energy surface. The angles \(\theta_{ijk}\) were expanded by Fourier basis functions to create \(\tilde{a}_{ijk}\) with trainable frequency. The atomic numbers \(Z_{i}\), \(\tilde{e}_{ij}\), and \(\tilde{a}_{ijk}\) were then embedded into node \(v_{i}^{0}\), edge \(e_{ij}^{0}\), and angle features \(a_{ijk}^{0}\) (all have 64 feature dimensions by default):
\[\begin{split} v_{i}^{0}&=\mathbf{W}_{v}Z_{i}+\mathbf{b}_{v}, \\ e_{ij,n}^{0}&=\mathbf{W}_{e}\tilde{e}_{ij},\;\tilde{e} _{ij}=\sqrt{\frac{2}{5}}\frac{\sin(\frac{n\pi r_{ij}}{5})}{r_{ij}}\odot u(r_{ ij})\\ a_{ijk,\ell}^{0}&=\begin{cases}\frac{1}{\sqrt{2 \pi}}&\text{if }\ell=0\\ \frac{1}{\sqrt{\pi}}\cos\left[\ell\theta_{ijk}\right]&\text{if }\ell=[1,N]\\ \frac{1}{\sqrt{\pi}}\sin\left[(\ell-N)\theta_{ijk}\right]&\text{if }\ell=[N+1,2N]\end{cases}, \end{split} \tag{1}\]
where \(\{\mathbf{W},\mathbf{b}\}\) are the trainable weights and bias. The angle is computed using \(\theta_{ijk}=\arccos\frac{e_{ij}\cdot e_{jk}}{|e_{ij}||e_{jk}|}\). The \(u(r_{ij})\) is a polynomial envelope function that forces the value and the first and second derivatives of \(\tilde{e}_{ij}\) smoothly toward 0 at the graph cutoff radius [13]. The subscripts \(n,\ell\) are the expansion orders, and we set the maximum orders for both \(n\) and \(\ell\) to be \(2N+1=9\). The superscript denotes the index of the interaction block. The \(\odot\) represents the element-wise multiplication. The edge vectors \(e_{ij}^{t}\) are bi-directional, which is essential for \(e_{ij}^{t}\) and \(e_{ji}^{t}\) to be represented as a single node in the bond graph [23].
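A numpy sketch of the smooth radial basis is given below; the polynomial envelope is one common choice (the DimeNet-style form) and is an assumption here rather than the exact function used in the released code.

```python
import numpy as np

R_CUT = 5.0  # atom-graph cutoff in Angstrom

def envelope(r, p=6):
    """Polynomial envelope u(r) whose value and first two derivatives vanish at R_CUT."""
    d = r / R_CUT
    return (1.0
            - (p + 1) * (p + 2) / 2.0 * d**p
            + p * (p + 2) * d**(p + 1)
            - p * (p + 1) / 2.0 * d**(p + 2))

def smooth_rbf(r, n_max=9):
    """Radial Bessel basis sqrt(2/R_CUT) * sin(n*pi*r/R_CUT)/r, damped by the envelope."""
    n = np.arange(1, n_max + 1)
    return np.sqrt(2.0 / R_CUT) * np.sin(n * np.pi * r / R_CUT) / r * envelope(r)

print(smooth_rbf(2.5))   # nine basis values for a 2.5 Angstrom bond
```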
For the atom graph convolution, a weighted message passing layer is applied to the concatenated feature vectors \((v_{i}^{t}||v_{j}^{t}||e_{ij}^{t})\) from two atoms and one bond. For the bond graph convolution, the weighted message passing layer is applied to the concatenated feature vectors \((e_{ij}^{t}||e_{jk}^{t}||a_{ijk}^{t}||v_{j}^{t+1})\) from two bonds, the angle between them, and the atom where the angle is located. For the angle update function, we used the same construction for the bond graph message vector but without the weighted aggregation step. The mathematical form of the atom, bond, and angle updates are formulated below:
\[\begin{split} v_{i}^{t+1}&=v_{i}^{t}+L_{v}^{t}\left[\sum_{j}\tilde{e}_{ij}\cdot\phi_{v}^{t}\left(v_{i}^{t}||v_{j}^{t}||e_{ij}^{t}\right)\right],\\ e_{jk}^{t+1}&=e_{jk}^{t}+L_{e}^{t}\left[\sum_{i}\tilde{e}_{ij}\cdot\tilde{e}_{jk}\cdot\phi_{e}^{t}\left(e_{ij}^{t}||e_{jk}^{t}||a_{ijk}^{t}||v_{j}^{t+1}\right)\right],\\ a_{ijk}^{t+1}&=a_{ijk}^{t}+\phi_{a}^{t}\left(e_{ij}^{t+1}||e_{jk}^{t+1}||a_{ijk}^{t}||v_{j}^{t+1}\right).\end{split} \tag{2}\]
The \(L\) is a linear layer and \(\phi\) is the gated multilayer perceptron (gatedMLP) [22]:
\[\begin{split} L(x)&=\mathbf{W}x+\mathbf{b},\\ \phi(x)&=\left(\sigma\circ L_{\text{gate}}(x)\right) \odot\left(g\circ L_{\text{core}}(x)\right),\end{split} \tag{3}\]
where \(\sigma\) and \(g\) are the Sigmoid and SiLU activation functions, respectively. The magnetic moments are predicted by a linear projection of the atom features \(v_{i}^{3}\) after three interaction blocks by
\[m_{i}=\;L_{m}(v_{i}^{3}). \tag{4}\]
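As an illustration, the gated MLP of Eq. (3) and the magmom projection of Eq. (4) translate directly into PyTorch; the sketch below uses the 64-dimensional default feature size and is not taken from the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedMLP(nn.Module):
    """phi(x) = sigmoid(L_gate(x)) * silu(L_core(x)), cf. Eq. (3)."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.gate = nn.Linear(dim_in, dim_out)
        self.core = nn.Linear(dim_in, dim_out)

    def forward(self, x):
        return torch.sigmoid(self.gate(x)) * F.silu(self.core(x))

# Message function phi_v acting on concatenated (v_i || v_j || e_ij) features
phi_v = GatedMLP(3 * 64, 64)
messages = phi_v(torch.randn(10, 3 * 64))             # 10 edges -> 10 messages of dimension 64

# Magmom head of Eq. (4): a linear projection of the atom features after three blocks
magmom_head = nn.Linear(64, 1)
magmoms = magmom_head(torch.randn(5, 64)).squeeze(-1)  # one predicted magmom per atom
```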
Instead of using a full interaction block, the last convolution layer only includes atom graph convolution
\[v_{i}^{4}=v_{i}^{3}+\sum_{j}\tilde{e}_{ij}\cdot\phi_{v}^{3}\left(v_{i}^{3}||v_ {j}^{3}||e_{ij}^{3}\right). \tag{5}\]
The energy is calculated by a non-linear projection of the site-wise averaged feature vector over all atoms \(\{v_{i}^{4}\}\). The forces and stress are calculated via auto-differentiation of the energy with respect to the atomic Cartesian coordinates and strain:
\[E_{\text{tot}} =\sum_{i}L_{3}\circ g\circ L_{2}\circ g\circ L_{1}(v_{i}^{4}), \tag{6}\] \[\vec{f}_{i} =-\frac{\partial E_{\text{tot}}}{\partial\vec{x}_{i}},\] \[\mathbf{\sigma} =\frac{1}{V}\frac{\partial E_{\text{tot}}}{\partial\mathbf{\varepsilon }}.\]
Overall, with four atom convolution layers, the pre-trained CHGNet can capture long-range interaction up to 20 Å with a small computational cost.
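Equation (6) relies on automatic differentiation; a minimal sketch of the force evaluation is given below, where `model` stands for any differentiable callable that returns the total energy from positions (the stress, which requires differentiating with respect to strain, is omitted for brevity).

```python
import torch

def energy_and_forces(model, positions, cell, atomic_numbers):
    """Forces as the negative gradient of the predicted total energy (cf. Eq. 6)."""
    positions = positions.clone().detach().requires_grad_(True)
    energy = model(positions, cell, atomic_numbers)     # scalar E_tot
    forces = -torch.autograd.grad(energy, positions)[0]
    return energy.detach(), forces
```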
### Model training
The model is trained to minimize the summation of Huber loss (with \(\delta=0.1\)) of energy, force, stress, and magmoms:
\[\mathcal{L}(x,\hat{x})=\begin{cases}0.5\cdot(x-\hat{x})^{2}&\text{if }|x-\hat{x}|<\delta\\ \delta\cdot(|x-\hat{x}|-0.5\delta)&\text{otherwise}\end{cases}. \tag{7}\]
The loss function is a weighted sum of the contributions from energy, forces, stress, and magmoms:
\[\mathcal{L}=\mathcal{L}(E,\hat{E})+w_{f}\mathcal{L}(\mathbf{f},\hat{\mathbf{f}})+w_{ \sigma}\mathcal{L}(\mathbf{\sigma},\hat{\mathbf{\sigma}})+w_{m}\mathcal{L}(m,\hat{m}), \tag{8}\]
where the weights for the forces, stress, and magmoms are set to \(w_{f}=1\), \(w_{\sigma}=0.1\), and \(w_{m}=0.1\), respectively. The DFT energies are normalized with elemental reference energies before fitting to CHGNet to decrease variances [17]. The batch size is set to 40 and the Adam optimizer is used with \(10^{-3}\) as the initial learning rate. The CosineAnnealingLR scheduler is used to adjust the learning rate 10 times per epoch, and the final learning rate decays to \(10^{-5}\) after 20 epochs.
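A sketch of the combined loss of Eqs. (7)-(8) and the optimizer setup is shown below; `chgnet_model` and the batch dictionaries are placeholders for objects defined elsewhere.

```python
import torch

huber = torch.nn.HuberLoss(delta=0.1)
w_f, w_s, w_m = 1.0, 0.1, 0.1

def total_loss(pred, target):
    """Weighted Huber loss over energy, forces, stress, and magmoms."""
    return (huber(pred["energy"], target["energy"])
            + w_f * huber(pred["forces"], target["forces"])
            + w_s * huber(pred["stress"], target["stress"])
            + w_m * huber(pred["magmom"], target["magmom"]))

optimizer = torch.optim.Adam(chgnet_model.parameters(), lr=1e-3)
# A cosine-annealing schedule (torch.optim.lr_scheduler.CosineAnnealingLR) decays the
# learning rate from 1e-3 toward 1e-5, updated several times per epoch.
```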
### Software interface
CHGNet was implemented using pytorch 1.12.0 [52], with crystal structure processing from pymatgen [53]. Molecular dynamics and structure relaxation were simulated using the interface to Atomic Simulation Environment (ASE) [54]. The cluster expansions were performed using the smol package [55].
### Structure relaxation and molecular dynamics
All the structure relaxations were performed with the FIRE optimizer over the potential energy surface provided by CHGNet [56], where the atom positions, cell shape, and cell volume were simultaneously optimized to reach converged interatomic forces of 0.1 eV/Å.
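For orientation, a relaxation of this kind can be set up through the ASE interface roughly as follows; the CHGNetCalculator import path is an assumption about the released package, and the input file name is hypothetical.

```python
from ase.io import read
from ase.optimize import FIRE
from ase.constraints import ExpCellFilter
from chgnet.model.dynamics import CHGNetCalculator   # assumed import path

atoms = read("POSCAR")                  # hypothetical input structure
atoms.calc = CHGNetCalculator()
# ExpCellFilter exposes positions, cell shape, and volume to the optimizer simultaneously
FIRE(ExpCellFilter(atoms)).run(fmax=0.1)   # converge forces to 0.1 eV/Angstrom
```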
For the MD simulations of the _o_-LMO to _s_-LMO phase transformation, the initial structure Li\({}_{20}\)Mn\({}_{40}\)O\({}_{80}\) was generated by randomly removing Li from a Li\({}_{40}\)Mn\({}_{40}\)O\({}_{80}\) supercell of the orthorhombic structure and relaxing with DFT. The MD simulation was run under the NVT ensemble, with a time step of 2 fs at T = 1100 K for 2 ns. For the simulated XRD in Fig. 4(b), the structures at 0.0, 0.3, 0.6, 0.9, 1.2, and 1.5 ns were coarse-grained to their nearest Wyckoff positions to remove noisy peaks. In Fig. 4(c), Mn\({}_{\text{oct}}\) and Mn\({}_{\text{tet}}\) were determined by counting the number of bonding oxygen ions within 2.52 A. If six bonding oxygen ions were found, then the Mn ion is categorized into Mn\({}_{\text{oct}}\); if less than six bonding oxygen ions were found, the Mn ion is coarse-grained into Mn\({}_{\text{tet}}\) for representation of lower coordinated environments. In Fig. 4(e), Mn\({}^{2+}\) and Mn\({}^{3+}\) are classified by CHGNet magmom threshold of 4.2 \(\mu_{B}\)[30].
For the MD simulations of garnet Li\({}_{3}\)La\({}_{3}\)Te\({}_{2}\)O\({}_{12}\) systems, a time step of 2 fs was used. We ramped up the temperature to the targeted temperature in the NVT ensemble with at least 1 ps. Then, after equilibrating the system for 50 ps, the lithium self-diffusion coefficients were obtained by calculating the mean squared displacements of trajectories for at least 2.3 ns. The uncertainty analysis of the diffusion coefficient values was conducted following the empirical error estimation scheme proposed by He _et al._[57]. In Li\({}_{3+\delta}\), the excess lithium was stuffed to an intermediate octahedral (48\(g\)) site to face-share with the fully occupied 24\(d\) tetrahedral sites.
### Phase diagram calculations
The cluster expansions (CEs) of Li\({}_{x}\)FePO\({}_{4}\) were performed with pair interactions up to 11 Å and triplet interactions up to 7 Å based on the relaxed unit cell of LiFePO\({}_{4}\). For better energy accuracy, we first fine-tuned CHGNet with the Materials Project structures in the Li-Fe-P-O chemical space with an MSE loss function for 40 epochs, which resulted in a 12 meV/atom training energy error and 19 meV/atom validation energy error. We applied CHGNet to relax 456 different structures in Li\({}_{x}\)FePO\({}_{4}\) (\(0\leq x\leq 1\)) and predict the energies and magmoms, where the 456 structures were generated via an automatic workflow including CE fitting, canonical CE Monte Carlo for searching the ground state at varied Li\({}^{+}\) composition, and CHGNet relaxation. The charge-decorated CE is defined on coupled sublattices over Li\({}^{+}\)/vacancy and Fe\({}^{2+}\)/Fe\({}^{3+}\) sites, where Fe\({}^{2+}\) and Fe\({}^{3+}\) are treated as different species. In addition, the non-charge-decorated CE is defined only on Li\({}^{+}\)/vacancy sites. In the charge-decorated CE, Fe\({}^{2+}\)/Fe\({}^{3+}\) is classified with magmom in \([3\mu_{B},4\mu_{B}]\) and \([4\mu_{B},5\mu_{B}]\), respectively [30].
The semigrand canonical Monte Carlo simulations were implemented using the Metropolis-Hastings algo
rithm, where 20% of the MC steps were implemented canonically (swapping Li\({}^{+}\)/vacancy or Fe\({}^{2+}\)/Fe\({}^{3+}\)) and 80% of the MC steps were implemented grand-canonically using the table-exchange method [58, 59]. The simulations were implemented on an \(8\times 6\times 4\) supercell of the unit cell of LiFePO\({}_{4}\). In each MC simulation, we scanned the chemical potential in the \([-5.6,-4.8]\) eV range with a step of 0.01 eV and sampled temperatures from 0 to 1000 K. The boundary of the solid-solution stable phases is determined by a change in Li concentration of \(<0.05\) per \(\Delta\mu=0.01\) eV step.
### DFT calculations
DFT calculations were performed with the _Vienna ab initio simulation package_ (VASP) using the projector-augmented wave method [60, 61], a plane-wave basis set with an energy cutoff of 680 eV, and a reciprocal space discretization of 25 \(k\)-points per A\({}^{-1}\). All the calculations were converged to \(10^{-6}\) eV in total energy for electronic loops and 0.02 eV/A in interatomic forces for ionic loops. We relied on the regularized strongly constrained and appropriately normed meta-GGA exchange-correlation functional (r\({}^{2}\)SCAN) [62, 63], which has improved performance on volume, coordination, and formation-energy prediction in solid-state systems. r\({}^{2}\)SCAN provides better computational efficiency than the earlier version of SCAN [64].
### Code availability
The source code of CHGNet is available at [https://github.com/CederGroupHub/chgnet](https://github.com/CederGroupHub/chgnet).
### Data availability
The dataset will be released after review.
|
2309.07659 | Mitigating controller noise in quantum gates using optimal control
theory | All quantum systems are subject to noise from the environment or external
controls. This noise is a major obstacle to the realization of quantum
technology. For example, noise limits the fidelity of quantum gates. Employing
optimal control theory, we study the generation of quantum single and two-qubit
gates. Specifically, we explore a Markovian model of phase and amplitude noise,
leading to the degradation of the gate fidelity. We show that optimal control
with such noise models generates control solutions to mitigate the loss of gate
fidelity. The problem is formulated in Liouville space employing an extremely
accurate numerical solver and the Krotov algorithm for solving the optimal
control equations. | Aviv Aroch, Ronnie Kosloff, Shimshon Kallush | 2023-09-14T12:23:53Z | http://arxiv.org/abs/2309.07659v3 | # Mitigating controller noise in quantum gates using optimal control theory
###### Abstract
All quantum systems are subject to noise from the environment or external controls. This noise is a major obstacle to the realization of quantum technology. For example, noise limits the fidelity of quantum gates. Employing optimal control theory, we study the generation of quantum single and two-qubit gates. Specifically, we explore a Markovian model of phase and amplitude noise, leading to the degradation of gate fidelity. We show that optimal control with such noise models generates control solutions to mitigate the loss of gate fidelity. The problem is formulated in Liouville space, employing a highly accurate numerical solver and the Krotov algorithm to solve optimal control equations.
In reality, any quantum system is open. External intervention transforms the unitary dynamics of a closed system into the non-unitary evolution of an open one. We can classify three significant sources of such intervention: (I) a thermal environment, (II) back-action due to quantum measurement, and (III) noise originating from the external controller. The resulting loss of coherence is the biggest challenge to achieving quantum technology [1, 2]. The aim of this study is to explore the use of optimal control theory to mitigate the degradation of quantum-gate fidelity due to controller noise.
We formulate the dynamics of open quantum systems by considering completely positive trace-preserving maps (CPTP) [3]. In the Markovian limit, CPTP maps are generated by the equation:
\[\frac{\partial}{\partial t}\hat{\mathbf{\rho}}=\mathcal{L}\hat{\mathbf{\rho}}\]
where the generator \(\mathcal{L}\) has the Gorini, Kossakowski, Lindblad, and Sudarshan (GKLS) structure [4, 5]. We adopt this framework in the present study.
Optimal control theory (OCT) has addressed open-system control problems [6, 9, 10, 11, 12, 13, 14, 15, 16, 7, 8]. In such control tasks, the generator of the dynamics \(\mathcal{L}\) becomes explicitly time-dependent, a function of the control field \(\mathcal{L}(\epsilon(t))\)[17]. This observation raises the issue: Are the dissipative and unitary dynamics linked? Does a change in the control field modify the dissipative dynamics?
Consistency with thermodynamics imposes additional restrictions on the form of the master equation of a thermal environment [18]. In this case, the dissipative and the unitary
parts are linked. Employing a thermodynamically consistent master equation, optimal control of quantum state-to-state and quantum gates were studied [19]. It was demonstrated that both entropy-changing and preserving transformations can be achieved. In particular, the negative aspect of dissipation on the fidelity of quantum gates could be mitigated. The mechanism identified can be described as active refrigeration accompanying the control process leading to a unitary quantum gate.
A common approach to mitigating environmental noise is dynamical decoupling [20, 14]. The method can mitigate slow noise by moving the process above the noise spectrum. Such an approach cannot mitigate fast noise, which is addressed in this work.
In the present study, we deal with a different dissipative mechanism of noise originating from the controller. This immediately imposes a link between dissipative and unitary dynamics. Previous studies have shown that such noise leads to loss of purity and is therefore harmful to quantum control [21, 22]. Specifically, we address quantum systems with amplitude and phase noise [23].
Our primary concern is exploring quantum control strategies to mitigate noise and maintain high-fidelity gates. We address this issue by employing optimal control theory first to generate unitary quantum gates in a pure Hamiltonian setup. We study both single and two-qubit gates that form a universal computation set. High-fidelity unitary gates are obtained. We then add noise to the controller and observe the degradation of fidelity. Starting from this reference gate, we apply open-system control, now including the noise, to mitigate its influence as much as possible. The paper is arranged as follows: Section 1 describes the noise model and the optimal control equations. Section 2 describes the results, which are discussed in Section 3 and summarized in the concluding Section 4.
## 1 Quantum noisy dynamics
The noisy dynamics are modeled within the framework of open quantum systems [24]. Specifically, we assume that the Markovian dynamics is described by a quantum dynamical semigroup. The generator of such dynamics has the form of the Gorini, Kossakowski, Lindblad, and Sudarshan (GKLS) master equation [5, 25]:
\[\frac{d}{dt}\hat{\mathbf{\rho}}(t)=\mathcal{L}\hat{\mathbf{\rho}}=-\frac{i}{\hbar}[ \hat{\mathbf{H}},\hat{\mathbf{\rho}}]+\mathcal{D}\hat{\mathbf{\rho}} \tag{1}\]
\(\hat{\mathbf{H}}\) is the system's Hamiltonian, and \(\mathcal{D}\) is the dissipative generator. \(\mathcal{L}\) and \(\mathcal{D}\) are termed superoperators, as they map operators in Hilbert space to other operators.
### Noise model
We consider a quantum system controlled by an external field. The Hamiltonian of the system:
\[\hat{\mathbf{H}}\ =\ \hat{\mathbf{H}}_{0}+\varepsilon(t)\hat{\mathbf{H}}_{c} \tag{2}\]
where \(\hat{\mathbf{H}}_{0}\) is the drift Hamiltonian, \(\hat{\mathbf{H}}_{c}\) is the control Hamiltonian, and \(\varepsilon(t)\) is the time-dependent control field. We aim to study the influence of noise originating from the controller (the pulse generator). We differentiate between two models of control noise: amplitude and phase noise. For amplitude noise, we consider random fluctuations in the amplitude of the control field, \(\varepsilon(t)=\varepsilon_{c}(t)+\xi(t)\). Assuming the controller is fast relative to the system dynamics, we chose to model the noise as a Gaussian random process, \(\langle\xi(t)\xi(t^{\prime})\rangle=\sqrt{2\gamma}\delta(t-t^{\prime})\). Under these conditions, the dissipative part of Eq. (1) is generated from \(\hat{\mathbf{H}}_{c}\)[4, 26, 27]:
\[\mathcal{D}_{A}=-\gamma_{A}\varepsilon_{c}^{2}(t)[\hat{\mathbf{H}}_{c},[\hat {\mathbf{H}}_{c},\bullet]] \tag{3}\]
where \(\gamma\) is the noise strength. Phase noise originates from timing errors in the controller [28, 29] or fluctuations in frequency [30]. Assuming a Gaussian white noise model, the dephasing generator becomes [28]:
\[\mathcal{D}_{P}=-\gamma_{P}[\mathbf{\hat{H}},[\mathbf{\hat{H}},\bullet]] \tag{4}\]
In both cases, the dissipator depends on the control field.
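A minimal numerical sketch of the two dissipators, acting on a density matrix in Hilbert space, is given below; the single-qubit operators and parameter values are chosen only for illustration.

```python
import numpy as np

def comm(a, b):
    return a @ b - b @ a

def amplitude_noise(rho, H_c, eps_c, gamma_a):
    """D_A rho = -gamma_A eps_c(t)^2 [H_c, [H_c, rho]]  (Eq. 3)."""
    return -gamma_a * eps_c**2 * comm(H_c, comm(H_c, rho))

def phase_noise(rho, H, gamma_p):
    """D_P rho = -gamma_P [H, [H, rho]]  (Eq. 4)."""
    return -gamma_p * comm(H, comm(H, rho))

sx = np.array([[0, 1], [1, 0]], dtype=complex)
rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
print(amplitude_noise(rho0, sx, eps_c=0.5, gamma_a=1e-3))
```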
### Vectorizing Liouville space
We describe the dynamics employing the Hilbert space of system operators defined by the scalar product \((\hat{\mathbf{X}},\hat{\mathbf{Y}})=\operatorname{tr}\{\hat{\mathbf{X}}^{\dagger}\hat{\mathbf{Y}}\}\). Superoperators map operators into the same Hilbert space. Hence, we choose to describe operators as vectors and superoperators as matrix-vector multiplications. We can now decompose a general operator \(\hat{\mathbf{X}}\) employing a full orthonormal operator basis,
\[\{\hat{\mathbf{A}}\}=[\hat{\mathbf{A}}_{1},\hat{\mathbf{A}}_{2},\ldots,\hat{\mathbf{A}}_{N^{2}}]\,,\qquad\operatorname{tr}\{\hat{\mathbf{A}}_{i}^{\dagger}\hat{\mathbf{A}}_{j}\}=\delta_{ij}\]
Using this basis, we can represent \(\hat{\mathbf{X}}\) as:
\[\hat{\mathbf{X}}=\sum_{i}\chi_{i}\hat{\mathbf{A}}_{i} \tag{5}\]
where \(\chi_{i}\) is an element of a vector \(\vec{x}\) of expansion coefficients of size \(N^{2}\) that characterizes \(\hat{\mathbf{X}}\). Using this notation, we can define any operator of the dimension \(N\times N\) with a vector of size \(N^{2}\).
In this formalism, the mapping by a super-operator, \(\Lambda\hat{\mathbf{X}}=\hat{\mathbf{Y}}\), translates to a matrix-vector multiplication, \(\tilde{\mathbf{\mathcal{G}}}\vec{\chi}=\vec{v}\), where \(\hat{\mathbf{Y}}=\sum_{i}v_{i}\hat{\mathbf{A}}_{i}\). The dynamical equation Eq. (1) can thus be reduced to a series of matrix-vector multiplications, where \(\overrightarrow{\rho}\) denotes \(\hat{\mathbf{\rho}}\) in vector space and \(\tilde{\mathbf{\mathcal{L}}}\) denotes the superoperator \(\mathcal{L}\) (for additional details, cf. Ref. [31]). This paper uses the tilde "\(\sim\)" to mark a super-operator in Liouville space. The action of the superoperator on the density operator then reads as follows:
\[\mathcal{L}\hat{\mathbf{\rho}}\ \Rightarrow\ \tilde{\mathbf{\mathcal{L}}} \overrightarrow{\rho} \tag{6}\]
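A compact sketch of this vectorization, using column stacking and Kronecker products to build the Hamiltonian super-operator (with \(\hbar=1\)), is given below; it verifies the construction against the direct commutator.

```python
import numpy as np

def vec(op):
    """Column-stack an operator into a Liouville-space vector."""
    return op.reshape(-1, order="F")

def hamiltonian_superop(H):
    """Matrix for the super-operator -i[H, .]: maps vec(rho) to vec(-i[H, rho])."""
    d = H.shape[0]
    eye = np.eye(d)
    return -1j * (np.kron(eye, H) - np.kron(H.T, eye))

H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]])
assert np.allclose(hamiltonian_superop(H) @ vec(rho), vec(-1j * (H @ rho - rho @ H)))
```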
The dephasing super-operators, \(\mathcal{D}_{A}\) and \(\mathcal{D}_{P}\), are defined by the following equations:
\[\tilde{\mathbf{\mathcal{D}}}_{A}=-\gamma_{A}\varepsilon_{c}^{2}(t)(\mathbf{\mathcal{H}}_{c}\mathbf{\mathcal{H}}_{c}\bullet) \tag{7}\] \[\tilde{\mathbf{\mathcal{D}}}_{P}=-\gamma_{P}(\mathbf{\mathcal{H}}\mathbf{\mathcal{H}}\bullet)\]
where \(\tilde{\mathbf{\mathcal{H}}}^{\prime}\) corresponds to the super-operator \(-\frac{i}{\hbar}[\hat{\mathbf{H}}^{\prime},\bullet]\) in Liouville vector space.
More details on vectorization are described in Appendix A.
### Optimal Control Theory of Open systems (OCT)
Our general task is to mitigate the influence of noise on a quantum gate. We employ OCT to provide methods to compute such controls. The external fields \(\{\mathcal{E}_{k}\}\) interacting with the quantum system have the task of steering the system's dynamics from an initial state to a final state. A more ambitious task is to find the driving field that generates a quantum map \(\Lambda(T)\)[32]. The dynamical equation of motion for the map becomes:
\[\frac{d\Lambda(t)}{dt}=\mathcal{L}(t)\Lambda(t)\ \ \Rightarrow\ \ \frac{d\tilde{\mathbf{\mathcal{G}}}}{dt}=\tilde{\mathbf{\mathcal{L}}}(t)\tilde{\mathbf{ \mathcal{G}}}, \tag{8}\]
where \(\tilde{\mathbf{\mathcal{G}}}\) is the evolution operator in the vector space. The initial condition is \(\Lambda(t=0)=\mathcal{I}\), where \(\mathcal{I}\) is the identity super-operator. The objective is to obtain the optimal driving field \(\epsilon(t)\) that induces a given transformation \(\hat{\mbox{O}}\) at \(t=T\). This can be interpreted as mapping a complete set of operators \(\{\hat{\mathbf{A}}\}\) as closely as possible to the desired map \(\hat{\mbox{O}}\).
In OCT, this task is translated to optimizing the objective functional [7]:
\[\begin{split}\mathcal{J}_{max}=\mbox{Tr}\{\hat{\mbox{O}}^{ \dagger}\Lambda(T)\}&=\sum_{j}\mbox{tr}\{(\hat{\mbox{O}}\hat{ \mathbf{A}}_{j})^{\dagger}(\Lambda(T)\hat{\mathbf{A}}_{j}) \}\\ &\Rightarrow\sum_{j}\mbox{tr}\{(\tilde{\mbox{O}}\hat{\mathbf{A}}_{j})^{\dagger}(\tilde{\mathbf{\mathcal{G}}}(T)\hat{ \mathbf{A}}_{j})\}\end{split} \tag{9}\]
where \(\tilde{\mathbf{\mathcal{G}}}\) is the Liouville-space (vector-space) representation of the map \(\Lambda\). We use the symbol Tr to denote the trace of super-operators, while tr denotes the trace of operators in Hilbert space. Two constraints are added to the objective. The first restricts the dynamics to comply with the Liouville-von Neumann equation, Eq. (8),
\[\mathcal{J}_{con}=\int_{0}^{T}\mbox{Tr}\left\{\left(\frac{\partial\Lambda(t) }{\partial t}-\mathcal{L}(t)\Lambda(t)\right)\Upsilon(t)\right\}dt \tag{10}\]
where \(\Upsilon(t)\) is a super-operator Lagrange multiplier. The second constraint restricts the total field energy
\[\mathcal{J}_{penal}=\lambda\int_{0}^{T}\frac{1}{s(t)}|\epsilon(t)|^{2}dt \tag{11}\]
\(\lambda\) is a scalar Lagrange multiplier, and \(s(t)\) is a shape function that smoothly turns the pulse on and off. For this task, we choose a Gaussian profile. Adding these constraints, we obtain the overall functional,
\[\mathcal{J}_{Tot}=\mathcal{J}_{max}+\mathcal{J}_{con}+\mathcal{J}_{penal} \tag{12}\]
The control task is translated to the maximization of the generalized objective \(\mathcal{J}_{Tot}\), \(\delta\mathcal{J}_{Tot}=0\). Functional derivatives with respect to the various functional elements are then taken, \(\Upsilon,\Lambda\), and \(\epsilon\), resulting in the following set of control equations:
1. The Liouville equation, Eq. (8), with the initial condition \(\Lambda(t=0)=\mathcal{I}\) for \(\Lambda\).
2. The inverse Liouville equation \[\frac{\partial\Upsilon(t)}{\partial t}=\mathcal{L}^{*}(t)\Upsilon(t)\;\;\;\Rightarrow\;\;\;\frac{\partial\tilde{\mathbf{Y}}(t)}{\partial t}=\tilde{\mathbf{\mathcal{L}}}^{\dagger}(t)\tilde{\mathbf{Y}}(t) \tag{13}\] where \(\tilde{\mathbf{Y}}\) corresponds to \(\Upsilon\) in the vector space, with the final condition \(\Upsilon(t=T)=\hat{\mbox{O}}^{\dagger}\). Note that the loss of information in the backward equation is also inverted, so that dephasing occurs from both ends of the time interval.
3. The control field update equation for amplitude noise: \[\begin{split}\Delta\mathbf{\epsilon}(t)&=-\operatorname{Im}\frac{s(t)\operatorname{Tr}\{\Upsilon(t)\mathcal{H}_{c}\Lambda(t)\}}{2\left(\lambda+\gamma_{A}\operatorname{Tr}\{\Upsilon(t)\mathcal{H}_{c}^{2}\Lambda(t)\}\right)}\\ &\Rightarrow-s(t)\operatorname{Im}\frac{\operatorname{Tr}\{\tilde{\mathbf{Y}}^{*}\tilde{\mathbf{\mathcal{H}}}_{c}\tilde{\mathbf{\mathcal{G}}}(t)\}}{2\left(\lambda+\gamma_{A}\operatorname{Tr}\{\tilde{\mathbf{Y}}^{*}\tilde{\mathbf{\mathcal{H}}}_{c}^{2}\tilde{\mathbf{\mathcal{G}}}(t)\}\right)}\end{split} \tag{14}\] Here \(\mathcal{H}_{c}=\frac{\partial\mathcal{L}}{\partial\epsilon}\), and \(\tilde{\mathbf{\mathcal{H}}}_{c}\) is the super-operator that corresponds to \(\frac{i}{\hbar}[\hat{\mathbf{H}}_{c},\bullet]\).
The dynamical equations are solved iteratively (counted by k) employing the Krotov method [33, 32], which leads to an update to \(\mathbf{\epsilon}^{(k)}\):
\[\begin{split}\Delta\mathbf{\epsilon}^{(k)}(t)&=-\frac{s(t )}{2\lambda}\operatorname{Im}\left[\operatorname{Tr}\left\{\Upsilon^{(k-1)}(t )\mathcal{L}_{c}\Lambda^{(k)}(t)\right\}\right]\\ &\Rightarrow-\frac{s(t)}{2\lambda}\operatorname{Im}\left[ \operatorname{Tr}\left\{\tilde{\mathbf{Y}}^{(k-1)}(t)\tilde{\mathbf{\mathcal{H}}}_{c} \tilde{\mathbf{\mathcal{G}}}^{(k)}(t)\right\}\right].\end{split} \tag{15}\]
This evaluation continues until a target fidelity (infidelity \(1-\mathcal{J}_{max}\)) is reached. Note that in most cases \(\lambda\gg\gamma_{A}\), so the \(\gamma_{A}\) term in the denominator of Eq. (14) can be neglected, which leads to the update of Eq. (15).
We numerically solve the dynamics in Eqs. (8) and (13) employing a semi-global propagation method [34] described in Appendix B. With these solutions, we use Eq. (15) to update the field and repeat until the desired fidelity is reached. The Krotov method is known to stagnate after many iterations; therefore, we halt the iterative process when stagnation is reached.
## 2 Results
Optimal control theory is employed first to obtain the desired unitary gate without dissipation. This solution serves as a reference to study the influence of noise on the fidelity. Optimal control is then used again, now including the dissipation, to search for control fields that mitigate the impact of noise. The generator of the dynamical map has the form:
\[\tilde{\mathbf{\mathcal{L}}}=\tilde{\mathbf{\mathcal{H}}}_{0}+\tilde{\mathbf{\mathcal{H}}} _{c}+\tilde{\mathbf{\mathcal{D}}}_{A/P}. \tag{16}\]
where \(\tilde{\mathbf{\mathcal{D}}}\) generates the noise, and the indices '\(A\)' and '\(P\)' represent the two types of quantum noise defined in Eq. (3) and Eq. (4), respectively.
We choose a driven system that is completely controllable under unitary dynamics. This controllability results from commutators of the drift Hamiltonian, \(\hat{\mathbf{\mathbf{H}}}_{0}\), and the control Hamiltonian, \(\hat{\mathbf{\mathbf{H}}}_{c}\), which generate the full-rank algebra [35]. Three representative target gates are studied:
1. A Hadamard gate on a single qubit, SU(2).
2. A Pauli-X gate, SU(2) embedded in SU(4).
3. An entangling gate on a two-qubit system, SU(4).
The generator of the dynamical map is given by Eq. (16), and the cases differ in their Hamiltonians. As a consequence, the dephasing elements of the different cases, Eqs. (3) and (4), differ as well.
The first step in OCT is to obtain the reference unitary gate, which also establishes a good guess of the control field \(\varepsilon(t)\). A good guess can become essential for convergence to high fidelity. In this case, the OC dynamics is generated by:
\[\tilde{\mathbf{\mathcal{L}}}=\tilde{\mathbf{\mathcal{H}}}_{0}+\tilde{\mathbf{\mathcal{H}} }_{c}. \tag{17}\]
We define the fidelity as \(F=\operatorname{Tr}\{\hat{\mathrm{O}}^{\dagger}\Lambda(T)\}\) and the infidelity as \(IF=1-\operatorname{Tr}\{\hat{\mathrm{O}}^{\dagger}\Lambda(T)\}\). The unitary infidelity is denoted by \(IF_{U}\).
To test the effect of noise on the infidelity, we used the control field obtained in the unitary optimization and propagated the system with noise, using Eq. (16) as the generator of the dynamical map.
In Fig. 1, the degradation of the fidelity by the noise, from the unitary infidelity \(IF_{U}\) to the noisy infidelity \(IF_{n}\), is presented. We plot the log of the ratio \(R=\frac{IF_{n}}{IF_{U}}\) versus the noise rate \(\gamma\).
Fig. 1 shows that, for the Hadamard gate, the system is more resilient to amplitude noise \(\tilde{D}_{A}\) than to phase noise \(\tilde{D}_{P}\). The degradation becomes significant as the noise strength increases, saturating at about three orders of magnitude in infidelity. The two-qubit entangling gate is two orders of magnitude more sensitive to the noise strength \(\gamma\) than the single-qubit gate.
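The following minimal sketch illustrates how such a degradation ratio can be computed for a single qubit: the same piecewise-constant control field is propagated with and without a GKLS dephasing term and the ratio \(R=IF_{n}/IF_{U}\) is formed. The target gate, the Hamiltonians, the field, and the normalization of the fidelity by \(d^{2}\) are illustrative placeholders and not the choices used in this work; in particular, the field is not an optimized one, so \(IF_{U}\) is not small here.

```python
# Minimal sketch: degradation of a gate map by controller dephasing noise.
# All model choices (target gate, Hamiltonians, field, normalization of the
# fidelity by d^2) are illustrative assumptions, not the paper's.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def commutator_super(H):
    """Row-major vectorization of rho -> -i[H, rho]."""
    return -1j * (np.kron(H, I2) - np.kron(I2, H.T))

def dephasing_super(gamma):
    """GKLS pure-dephasing dissipator with jump operator S_z (vectorized)."""
    return gamma * (np.kron(sz, sz.conj()) - np.eye(4))

def gate_map(eps, dt, gamma):
    """Concatenate piecewise-constant propagators to obtain Lambda(T)."""
    Lam = np.eye(4, dtype=complex)
    for e in eps:
        L = commutator_super(0.5 * sz + e * sx) + dephasing_super(gamma)
        Lam = expm(dt * L) @ Lam
    return Lam

U = expm(-1j * np.pi / 4 * sx)                   # placeholder target unitary
O = np.kron(U, U.conj())                         # its superoperator representation

nt, T = 200, 5.0
eps = 0.3 * np.sin(np.linspace(0.0, np.pi, nt))  # placeholder (non-optimized) field

def infidelity(gamma):
    Lam = gate_map(eps, T / nt, gamma)
    return 1.0 - np.real(np.trace(O.conj().T @ Lam)) / 4.0

IF_U = infidelity(0.0)                           # infidelity of this field without noise
for gamma in [1e-4, 1e-3, 1e-2]:
    IF_n = infidelity(gamma)
    print(f"gamma = {gamma:.0e}:  R = IF_n/IF_U = {IF_n / IF_U:.4f}")
```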
### Hadamard gate-SU(2) space
The Hadamard gate performs a rotation of \(\pi\) about the axis \((\hat{x}+\hat{z})/\sqrt{2}\) of the Bloch sphere. Expressed as a superoperator in the operator basis \(\hat{I},\hat{S}_{X},\hat{S}_{Y},\hat{S}_{Z}\), the Hadamard gate is given by:
\[\Lambda_{U}=\left(\begin{array}{cccc}1&0&0&0\\ 0&0&0&-1\\ 0&0&-1&0\\ 0&-1&0&0\end{array}\right)\ . \tag{18}\]
We chose a Hamiltonian that allows high fidelity for this type of target gate. The drift Hamiltonian is:
\[\hat{\mathbf{H}}_{0}=u\hat{S}_{Z}+a_{X}\hat{S}_{X} \tag{19}\]
and the control Hamiltonian:
\[\hat{\mathbf{H}}_{c}=\varepsilon(t)\hat{S}_{XY} \tag{20}\]
where \(\hat{S}_{XY}\) is a rotation with respect to the \(XY\) plane (it was found that any direction in this plane leads to control). With this generator, we first obtained the reference unitary gate by OCT,
Figure 1: **Destruction of fidelity by noise. The change in infidelity of a quantum gate \(IF=1-\text{Tr}\{\hat{\mathrm{O}}^{\dagger}\Lambda(T)\}\). The reference is a converged unitary control without noise with infidelity \(IF_{U}\). The same field is used to obtain the gate with noise \(IF_{n}\). The log of the ratio \(R=\frac{IF_{n}}{IF_{U}}\), \(\log(R)=\log(IF_{n})-\log(IF_{U})\), is displayed as a function of the noise strength \(\gamma\). Both phase and amplitude noise are displayed. The insert displays a zoom on the low noise level. Left panel: The single qubit Hadamard gate. Right panel: The two-qubit entangling gate.**
with unitary infidelity \(IF_{U}=1\times 10^{-5}\). Using the optimal unitary control field as a guess, solutions under noise are obtained as a function of the noise rate \(\gamma\). For each value of \(\gamma\), we apply OCT to mitigate the influence of noise. Fig. 2 shows the infidelity improvement as a function of the noise rate \(\gamma\) for a short control period (2-Rabi cycles, \(\tau=4\pi/\Omega\), \(\Omega=\sqrt{u^{2}+a_{X}^{2}}\)).
The improvement ratio is defined as \(NC=\frac{IF_{n}}{IF_{F}}\), where \(IF_{n}\) is the infidelity obtained using the unitary guess field subject to noise and \(IF_{F}\) is the infidelity obtained by optimal control mitigating the noise. With OC, the infidelity can be reduced by a factor of two at low noise rates (\(\gamma<\)0.2\(\times 10^{-3}\)) for phase noise and at high noise rates (\(\gamma>\)0.2\(\times 10^{-3}\)) for amplitude noise.
The optimal control formulation restricts the accumulated power (cf. Eq. (11)), so the field intensity is optimized without noise. Nevertheless, we observe that the invested pulse energy in the optimal solution varies slightly with the noise strength \(\gamma\). The inset of Fig. 2 shows this dependence. Especially for amplitude noise, where the source of decoherence is the field strength, optimal control reduces the field strength and, consequently, the controller's noise; the same holds for phase noise at slow noise rates. However, more substantial noise causing significant dephasing can be controlled effectively only by boosting the field's strength. Thus, for phase noise (\(\tilde{D}_{P}\)), OCT increases the field's strength at increased noise and, consequently, the effectivity of the control (cf. Eq. (28)).
Fig. 3 shows the infidelity improvement for a longer control period (6-Rabi-cycles). When increasing the time allocated for control, the reference infidelity was maintained at \(IF_{U}\sim 1\times 10^{-5}\). However, to keep comparable infidelity for both cases, the dephasing rate \(\gamma\) had to be reduced by two orders of magnitude. In both cases, we find that OCT mitigates the influence of noise at low values of \(\gamma\). When the dissipation increases, the ability to mitigate the noise vanishes.
Figure 2: **Reduction of the noise impact via Optimal Control for the Hadamard gate at short times (2-Rabi cycles):** The noise reduction \(NC=\frac{IF_{n}}{IF_{F}}\) is shown vs. the dephasing rate \(\gamma\) on a logarithmic scale. The two dephasing mechanisms, amplitude noise, \(\tilde{D}_{A}\), (red, A) and phase noise, \(\tilde{D}_{P}\), (blue, P), are presented. (inset) The accumulated energy of the control field vs. the dephasing rate for the two dephasing mechanisms.
Optimal control mitigation can reduce \(IF_{F}\) for the phase noise more effectively than \(IF_{F}\) of the amplitude noise at low noise rates. This trend is reversed at high rates. The inset of Fig.3 shows that the optimal control solution reduces the field's energy for higher dephasing rates, resulting in higher fidelity.
Figure 4: **The trajectory of an initial operator on the \(Z\) direction during a Hadamard transformation under phase noise.** The reference unitary evolution in blue has an infidelity of \(IF_{U}=1\times 10^{-5}\). The trajectory with noise in red does not reside on the surface of the Bloch sphere. The optimal control solution in yellow mitigates the infidelity to a value of \(IF_{n}=15\times 10^{-3}\). The right panel zooms in on the target state. The outer sphere represents the purity of \(1\). The inner-sphere purity is \(90\%\).
Figure 3: **Noise reduction using Optimal Control for the Hadamard gate at longer times (6-Rabi cycles):** The two dephasing mechanisms, amplitude noise, \(\tilde{D}_{A}\), (red, A) and phase noise, \(\tilde{D}_{P}\), (blue, P), are shown. (inset) The accumulated energy of the control field vs. the noise rate for the two noise mechanisms.
Fig. 4 follows a trajectory of an element of the operator basis during the Hadamard transformation. When the generator is noiseless, the dynamics stay on the surface of the Bloch sphere, and noise pushes the dynamics into the interior of the Bloch sphere. Optimal control mitigation helps to redirect the trajectory towards the surface.
### Pauli-X gate - SU(2) embedded in SU(4).
The two-qubit entangling and single-qubit gates comprise a universal computational set that can simulate any quantum circuit. We first optimize the control field by generating the single-qubit \(Pauli-X\) gate embedded in a two-qubit Hilbert space:
\[\hat{W}=\left(\begin{array}{cccc}0&0&0&0\\ 0&0&0&0\\ 0&0&0&1\\ 0&0&1&0\end{array}\right) \tag{21}\]
This gate rotates the second qubit about the \(X\) axis, conditioned on the upper projection of the first qubit in the \(Z\) direction. For the lower projection, no restriction is imposed.
For this task, we employ the trivial drift generator:
\[\tilde{\mathbf{\mathcal{H}}}_{0}=\tilde{\mathcal{I}}^{1}\otimes\tilde{\mathcal{I }}^{2} \tag{22}\]
and the following control generator:
\[\tilde{\mathbf{\mathcal{H}}}_{c}=\varepsilon(t)\sum_{i=X,Y}a_{i}(\tilde{\mathcal{ I}}^{1}-\tilde{S}_{Z}^{1})\otimes\tilde{S}_{i}^{2}\ \, \tag{23}\]
which is sufficient to generate the unitary gate. This framework is then employed to study the two noise models \(\tilde{D}_{A}\) and \(\tilde{D}_{P}\).
First, we employed OCT on pure unitary dynamics to obtain a high-fidelity solution converging to infidelities of \(IF_{U}\ \sim 1\times 10^{-5}\).
We now employ this solution as a guess for the OCT protocol on noisy systems, mitigating noise degradation.
Fig. 5 shows the degradation due to noise \(IF_{n}\) relative to \(IF_{U}\) (star) and the mitigation due to OCT (\(IF_{F}\)). The inset quantifies the noise mitigation due to OCT.
In the last example, we examined the case of a single qubit embedded in a two-qubit space but uncoupled from it. Now, we investigate the behavior of a noisy system when the two qubits are coupled. At this level, the simulation controls a single qubit in a noisy environment using a controller embedded in a larger Hilbert space, with the other qubit serving as an ancilla. The objective remains the same, Eq. (21). First, we changed the drift Hamiltonian, removing the resonance condition between the two qubits:
\[\tilde{\mathbf{\mathcal{H}}}_{0}=a\tilde{\mathcal{I}}^{1}\otimes\tilde{\mathcal{I }}^{2}+\omega_{1}\tilde{S}_{Z}^{1}\otimes\tilde{\mathcal{I}}^{2} \tag{24}\]
where \(a\) is a phase factor and \(\omega_{1}\) is the qubit frequency. In addition, to couple the ancilla, we increased the number of control fields so that the two control generators are:
\[{}^{Z}\tilde{\mathbf{\mathcal{H}}}_{c}=\varepsilon_{Z}(t)\sum_{i=X,Y}a_{i}(\tilde {\mathcal{I}}-\tilde{S}_{Z})^{1}\otimes\tilde{S}_{i}^{2} \tag{25}\]
\[{}^{E}\tilde{\mathbf{\mathcal{H}}}_{c}=\varepsilon_{E}(t)(\tilde{S}_{X}^{1} \otimes\tilde{S}_{Z}^{2}) \tag{26}\]
here the field \({}^{E}\tilde{\mathbf{\mathcal{H}}}_{c}\) adds a correlation between the qubits.
Figure 6(a) demonstrates the infidelity achieved by OCT as a function of the noise strength. The inset illustrates the ability of OCT to harness noise to achieve higher fidelity in the mild noise regime.
To understand how OCT works and reduces noise, we analyzed the temporal population in the ancilla. Fig. 6(b) displays the maximum population (%) in the ancilla as a function of noise strength. We observe that more of the population moves to the ancilla states when noise levels increase. To reach the objective, OCT must return the ancilla population to the qubit. Fig. 6(c) shows the time dependence of the transferred population under two distinct noise regimes: high (yellow) and low (green). Initially, the ancilla states are empty, meaning only the qubit states are occupied. We compare this control problem with the one shown in Fig. 5, where the coupling Hamiltonian is different, but the target and the initial state are the same.
Notice that there are now two sources of dephasing (\(\tilde{D}_{P}\)) noise, one of which couples the system out of the qubit states. When introducing higher noise levels, we notice that the coupling to the ancilla affects our system infidelities, making it harder for OCT to achieve its aim. However, at low dephasing rates, we can observe that the ancilla helps to achieve lower infidelities.
### Entangling gate-SU(4) space
The ultimate test of optimal control mitigation employs the full palette of the 16 basis operators of SU(4). The dynamical generators connect all elements of the algebra, using the ones described in Eqs. (24), (25), and (26).
Obtaining converged results was difficult. The entangling gate set as a target is:
\[\hat{U}=\left(\begin{array}{cccc}1&0&0&0\\ 0&1&0&0\\ 0&0&0&i\\ 0&0&-i&0\end{array}\right) \tag{27}\]
Figure 5: **Infidelity as a function of the noise strength \(\gamma\) for the Pauli-X gate.** The dashed line (brown, \(IF_{n}\)): infidelity employing the unitary control field. The solid line (purple, \(IF_{F}\)): infidelity of the OCT solution, mitigating noise. (inset) The noise reduction \(NC\) (log scale) vs. \(\gamma\).
The CNOT+Phase gate can be part of the universal set. We, therefore, employ it to test the influence of noise.
OC could reduce the infidelity to \(IF_{U}\sim 1\times 10^{-6}\) for the unitary case when the Krotov iterative process reached stagnation.
The infidelity changes due to phase noise for the entangling SU(4) gate are shown in Fig. 7. The noise significantly degrades the fidelity, and optimal control can recover high fidelity. When the noise rate is low, we overcome the saturation in the Krotov method and obtain infidelities better than in the unitary case (cf. the inset), improving with the noise rate.
Figure 6: **Single qubit gate with an ancilla: a) Noise reduction due to optimal control as a function of dephasing noise.** The insert shows the small dephasing regime. b) The maximum population in the ancilla as a function of the dephasing rate. c) The population in the ancilla as a function of dimensionless time for two noise strengths. The time is relative to the energy scale of \(\hat{\mathbf{H}}_{0}\): \(\Delta t=2\pi/\Delta E\).
Figure 7: **The noise cancellation \(NC=\frac{IF_{n}}{IF_{F}}\) (log scale) vs. the noise rate \(\gamma\) for the entangling gate under phase noise. The insert focuses on the low noise regime.**
The reason is that Krotov's iterative procedure stagnates much later for systems with noise. New control channels are introduced to the control protocol, as suggested in [36]. For higher noise values, the mitigation due to OCT declines, as expected. The influence of amplitude noise on the infidelity is shown in Fig. 8. Optimal control managed to restore \(\sim 4\) orders of magnitude in infidelity. As expected, this mitigation decreases with the growth of the dephasing rate.
Additional insight was obtained by applying the optimal control field found under noise to a noiseless propagation. The resulting infidelity was \(IF_{U}\sim 10^{-3}\), three orders of magnitude higher than the optimized \(IF_{U}\sim 10^{-6}\). This means that the optimal field obtained with noise represents a new active solution that emerges due to the control under noise.
## 3 Discussion
A theoretical analysis can rationalize the fidelity loss of the gates due to the noise. Two factors can cause the loss. The first is a misguided unitary without loss of purity: on the Bloch sphere, the state reaches an erroneous final position. Such errors are correctable with the aid of an additional coherent tool. The second is an unrecoverable loss of purity, corresponding on the Bloch sphere to motion toward the interior of the sphere.
The instantaneous purity loss is calculated as
\[\frac{d}{dt}\mathrm{tr}\{\hat{\mathbf{\rho}}^{2}\}\ =\ 2\mathrm{tr}\{\hat{\mathbf{\rho}} \mathbf{\mathcal{L}}\hat{\mathbf{\rho}}\}\ =\ 2\mathrm{tr}\{\hat{\mathbf{\rho}}\mathbf{\mathcal{D}}\hat{\mathbf{\rho}}\}\ \, \tag{28}\]
the second equality results from the conservation of purity under unitary transformations.
For state-to-state control under pure dephasing, the optimal solution prefers adiabatic following [37], where \(\mathbf{\mathcal{D}}_{P}\hat{\mathbf{\rho}}=0\). For a gate, however, the objective must consider the purity loss for all possible inputs. Thus, the optimal solution is more involved.
Phase noise is equivalent to the noise generated by imperfect timekeeping, where the fidelity was found to scale as [29]: \(F=(2+e^{-\frac{\theta^{2}}{2N}})/3\), where \(\theta\) is the pulse area and \(N\propto 1/\gamma\) is the ticking accuracy. When calculating the fidelity as a function of \(\gamma\), our results align with this scaling (cf. Fig. 9).
For amplitude noise, the instantaneous purity loss becomes in vector notation:
\[\begin{split}\frac{d}{dt}\mathrm{tr}\{\overrightarrow{\rho}^{2} \}=2\left(\overrightarrow{\rho}\cdot\tilde{\mathbf{\mathcal{D}}}_{A}\overrightarrow {\rho}\right)=\\ -2\gamma_{A}\varepsilon_{c}^{2}(t)\left(\overrightarrow{\rho} \cdot\mathbf{\mathcal{H}}_{c}\mathbf{\mathcal{H}}_{c}\overrightarrow{\rho}\right) \end{split} \tag{29}\]
Figure 8: **The noise cancellation \(NC=\frac{IF_{n}}{IF_{F}}\) (log scale) vs. the noise rate \(\gamma\) for the entangling gate under amplitude noise.**
For a pure state, the rate of purity loss for the amplitude noise model becomes [38]:
\[\frac{d}{dt}\mathrm{tr}\{\hat{\mathbf{\rho}}^{2}\}\ =\ -2\gamma\varepsilon^{2}(t) \Delta\hat{\mathbf{H}}_{c} \tag{30}\]
where \(\Delta\hat{\mathbf{H}}_{c}\) is the variance of the control operator [21]. Examining Eq. (30), the optimal solutions will therefore minimize the control amplitude as the noise rate increases (cf. Fig. 2 and Fig. 3). During the gate transformation, the term \((\overrightarrow{\rho}\cdot\mathbf{\mathcal{H}}_{c}\mathbf{\mathcal{H}}_{c} \overrightarrow{\rho})\) in Eq. (29) was found to be almost constant when averaged over a complete set of initial states.
Two general strategies of control mitigating the noise can be envisioned. The first is to select, from all possible unitary controls, the one that minimizes the noise. The second is to employ the noise to modify the control trajectory. For thermal noise, we found the first mechanism to be operative [19]. For controller noise, we observe that the control solution fails when applied to noiseless dynamics. As observed in Fig. 4, the optimal solution with noise finds a different trajectory to the target.
## 4 Conclusions
Quantum devices employ interference and entanglement as crucial resources. Dissipation is the primary limiting factor in achieving quantum technology [1, 2]. Significant effort is therefore devoted to isolating the quantum device from the environment. However, the noise originating from the controller is typically ignored. Since such noise is unavoidable, we addressed the question: can optimal control theory achieve the desired quantum gate while mitigating the harm due to noise?
What is unique about controller noise is that upon "hands-off", the damage is avoided. Our Markovian noise model assumes the controller is fast. A GKLS master equation describes the quantum dynamics for both amplitude and phase noise. Extension of this study to non-Markovian noise models [27] and to Poissonian noise is feasible.
Employing advanced techniques of OCT for open systems, we first find a control that generates a high-fidelity unitary gate. We then use this as a reference gate to study the degradation of fidelity due to the controller noise. Typically, the system is more sensitive to phase noise. We then apply OCT again to mitigate the fidelity loss due to noise. We have found that quantum control can find control fields that restore the fidelity or even surpass that of the unitary reference.
The present study would not have been possible without a highly accurate propagator addressing explicit time dependence and non-Hermitian operators (see Appendix). The gate infidelity is extremely small but significant, and the differences between the unitary and noisy solutions are minute [8].
Our findings emphasize the significance of error mitigation techniques tailored to specific quantum gates and system characteristics. The varying quality of fidelity improvement based on the target gate and system underscores the need for targeted and adaptable strategies in the face of noise [39].
Notably, the integration of optimal control theory into the framework of open quantum systems led us to discover novel control solutions distinct from noiseless optimal solutions.
In conclusion, our study showcases the potential of Optimal Control Theory in mitigating errors caused by phase and amplitude noise, elevating gate fidelity closer to a feasible working framework within the domain of quantum computing.
## Acknowledgement
We thank Ido Schaefer for his help in implementing the semi-global propagator. In addition we thank Raam Uzdin and Roie Dann for helpful discussions. This study was supported by the Israel Science Foundation (grant nos. 510/17 and 526/21).
## Appendix A Vectorizing Liouville space
Propagators or solvers of dynamical equations of motion approximate the solution in a polynomial series [40]. The basic elementary step is a matrix-vector multiplication. To employ such propagators for the current open-system dynamics in Liouville space, the numerical scheme has to be adapted.
The following appendix describes the execution of superoperators acting on operators. This requires vectorizing Liouville space to adopt the common matrix-vector operation. The vectorizing procedure described here is employed in both the analytical and numerical descriptions.
A Hilbert space composed of operators can be generated by defining a scalar product between operators. This is equivalent to a linear space of matrices, effectively converting the matrices into vectors \((\rho\rightarrow|\rho\rangle\rangle)\). This is the Fock-Liouville space (FLS) [41]. The scalar product of matrices \(\phi\) and \(\rho\) is defined as \(\langle\langle\phi\mid\rho\rangle\rangle\equiv\mathrm{Tr}\left[\phi^{\dagger }\rho\right]\). The Liouville superoperator from Eq. (1) is now an operator acting on the Hilbert space composed of operators. The main utility of the FLS is to allow a matrix representation of the evolution operator. For example, the density matrix of a two-level system can be expressed in the FLS as
\[|\rho\rangle\rangle=\left(\begin{array}{c}\rho_{00}\\ \rho_{01}\\ \rho_{10}\\ \rho_{11}\end{array}\right). \tag{31}\]
The Liouville von Neumann equation describes the time evolution of a mixed state Eq. (1). In vector notation, the Liouvillian superoperator is expressed as a matrix,
\[\tilde{\mathbf{\mathcal{L}}}=\left(\begin{array}{cccc}0&i\Omega&-i\Omega&0\\ i\Omega&iE&0&-i\Omega\\ -i\Omega&0&-iE&i\Omega\\ 0&-i\Omega&i\Omega&0\end{array}\right), \tag{32}\]
Each row is calculated by observing the output of the operation \(-i[H,\rho]\) in the computational basis of the density-matrix space. The system's time evolution corresponds to the matrix equation \(\frac{d|\rho\rangle\rangle}{dt}=\tilde{\mathcal{L}}|\rho\rangle\rangle\), which in matrix notation reads
\[\left(\begin{array}{c}\dot{\rho}_{00}\\ \dot{\rho}_{01}\\ \dot{\rho}_{10}\\ \dot{\rho}_{11}\end{array}\right)=\left(\begin{array}{cccc}0&i\Omega&-i \Omega&0\\ i\Omega&iE&0&-i\Omega\\ -i\Omega&0&-iE&i\Omega\\ 0&-i\Omega&i\Omega&0\end{array}\right)\left(\begin{array}{c}\rho_{00}\\ \rho_{01}\\ \rho_{10}\\ \rho_{11}\end{array}\right). \tag{33}\]
A similar approach is used for the dissipative part \(\tilde{\mathbf{\mathcal{D}}}\).
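The construction above is easy to reproduce numerically. The following short check is a sketch assuming the two-level Hamiltonian with \(H_{00}=0\), \(H_{01}=H_{10}=\Omega\), \(H_{11}=E\) that underlies Eq. (32) and the row-stacking convention \(|\rho\rangle\rangle=(\rho_{00},\rho_{01},\rho_{10},\rho_{11})^{T}\); the numerical values and the dephasing jump operator are illustrative assumptions.

```python
# Sketch: building the vectorized Liouvillian of Eq. (32) with Kronecker
# products and checking the matrix form of Eq. (33) against -i[H, rho].
import numpy as np

Omega, E = 0.7, 1.3                             # arbitrary illustrative values
H = np.array([[0.0, Omega], [Omega, E]], dtype=complex)
I2 = np.eye(2)

# row-major vec: vec(H rho - rho H) = (H (x) I - I (x) H^T) vec(rho)
L = -1j * (np.kron(H, I2) - np.kron(I2, H.T))
print(np.round(L, 3))                           # reproduces the pattern of Eq. (32)

# consistency check for a random density matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)
lhs = (L @ rho.flatten()).reshape(2, 2)         # matrix-vector form, Eq. (33)
rhs = -1j * (H @ rho - rho @ H)                 # Liouville-von Neumann right-hand side
print(np.max(np.abs(lhs - rhs)))                # ~ machine precision

# the dissipative part is vectorized the same way, e.g. pure dephasing
# D(rho) = gamma*(Sz rho Sz - rho) with jump operator Sz (illustrative):
Sz = np.diag([1.0, -1.0]).astype(complex)
gamma = 0.1
D = gamma * (np.kron(Sz, Sz.conj()) - np.eye(4))
```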
## Appendix B Semi-global Propagation method
Obtaining high-fidelity solutions for controlling quantum gates requires highly accurate and efficient numerical propagators for the Liouville-von Neumann equation. For this task, we have modified the semi-global propagator [34] to perform in the Liouville vector space.
The Liouville von Neumann equation describes the dynamics of an open quantum system:
\[\frac{d}{dt}\hat{\boldsymbol{\rho}}=\mathcal{L}\hat{\boldsymbol{\rho}} \tag{34}\]
where \(\hat{\boldsymbol{\rho}}\) is the density operator and \(\mathcal{L}\) the Liouvillian. For a driven open system, the Liouvillian can be partitioned to:
\[\begin{split}\frac{d}{dt}\hat{\boldsymbol{\rho}}=\mathcal{L}_{t} \hat{\boldsymbol{\rho}}=\left(\mathcal{L}_{H}(t)+\mathcal{L}_{D}(t)\right) \hat{\boldsymbol{\rho}}\\ \mathcal{L}_{H}=\mathcal{L}_{H_{0}}+\mathcal{L}_{H_{t}}\end{split} \tag{35}\]
\(\mathcal{L}_{H}(t)\) is the generator of the unitary part of the dynamics and can be decomposed into time-independent and time-dependent components. The dissipative part \(\mathcal{L}_{D}(t)\) implicitly describes the influence of the environment and can be time-dependent.
For a time-independent Lindbladian \(\mathcal{L}_{0}\), a formal solution becomes:
\[\hat{\boldsymbol{\rho}}(t)=e^{\mathcal{L}_{0}t}\hat{\boldsymbol{\rho}}(0) \tag{36}\]
with the initial conditions \(\hat{\boldsymbol{\rho}}(0)\).
When the Liouvillian can be partitioned into a time-dependent and a time-independent part, \(\mathcal{L}=\mathcal{L}_{0}+\mathcal{L}_{t}\), a formal solution of Eq. (35) can be written as an integral equation:
\[\hat{\boldsymbol{\rho}}(t)=e^{\mathcal{L}_{0}t}\hat{\boldsymbol{\rho}}(0)+ \int_{0}^{t}e^{\mathcal{L}_{0}(t-\tau)}\mathcal{L}_{t}\hat{\boldsymbol{\rho}} (\tau)d\tau \tag{37}\]
Eq. (37) will form the basis for the numerical approximation.
In typical control problems, \(\mathcal{L}\) varies considerably with time. Therefore, the total evolution is practically broken into finite time steps \(\Delta t\). One can then concatenate the propagators and obtain the total evolution from \(t=0\) to \(t=T\) by
\[\overrightarrow{\rho}(T)\approx\prod_{j=1}^{N_{t}}\mathcal{G}_{j}(\Delta t) \overrightarrow{\rho}(0) \tag{38}\]
where \(\mathcal{G}_{j}(\Delta t)\) is the propagator from \(t\) to \(t+\Delta t\), with \(t=j\Delta t\).
A direct approximation assumes that \(\mathcal{L}_{t}\) is time independent within a time step, then
\[\mathcal{G}_{j}\approx e^{\mathcal{L}_{j}\Delta t} \tag{39}\]
where \(\mathcal{L}_{j}=\mathcal{L}(t+\Delta t/2)\). Sampling \(\mathcal{L}\) in the middle of the time step leads to second-order accuracy in \(\Delta t\).
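As a small illustration of this piecewise-constant scheme, the sketch below propagates a vectorized single-qubit density matrix with midpoint-sampled matrix exponentials and shows the expected second-order behavior when the step size is halved; the drive, the dephasing rate, and the initial state are arbitrary illustrative choices, not those of the gates studied above.

```python
# Sketch of Eqs. (38)-(39): concatenation of exp(L_j*dt) with L_j sampled at
# the midpoint of each step, for an illustrative driven, dephasing qubit.
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def L_of_t(t, gamma=0.05):
    H = 0.5 * sz + np.sin(2.0 * t) * sx                     # illustrative drive
    unitary = -1j * (np.kron(H, I2) - np.kron(I2, H.T))     # row-major vectorization
    dephasing = gamma * (np.kron(sz, sz.conj()) - np.eye(4))
    return unitary + dephasing

def propagate(rho0, T, nt):
    """Apply G_j = exp(L_j*dt) with L_j = L(t + dt/2) for each step (Eq. 39)."""
    dt = T / nt
    v = rho0.flatten()
    for j in range(nt):
        v = expm(dt * L_of_t((j + 0.5) * dt)) @ v
    return v

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)
T = 2.0
ref = propagate(rho0, T, 4096)                              # fine reference
for nt in (16, 32, 64):
    err = np.linalg.norm(propagate(rho0, T, nt) - ref)
    print(nt, err)        # error drops by roughly a factor of four per halving
```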
In what follows, we formulate the density operator as a vector \(\overrightarrow{\rho}\) and the Liouvillian superoperator \(\mathcal{L}\) as a matrix ( see above App. A [31]).
A numerical method to solve Eq. (39) is based on expanding the exponential, or any analytic function, in a polynomial series in the matrix \(\mathcal{L}_{j}\):
\[\overrightarrow{\rho}(t+\Delta t)\approx\sum_{n=0}^{K-1}a_{n}(t+\Delta t)P_{n} \left(\mathcal{L}_{j}\right)\overrightarrow{\rho}(t) \tag{40}\]
where \(P_{n}(\mathcal{L}_{j})\) is a polynomial of degree \(n\), and \(a_{n}(t+\Delta t)\) is the corresponding expansion coefficient in the interval \(t,t+\Delta t\). This requires choosing the set of expansion polynomials \(P_{n}(\mathcal{L}_{j})\) and the corresponding coefficients \(a_{n}\)[42].
The expansion (40) has to be accurate in the eigenvalue domain of \(\mathcal{L}_{j}\) so that the form (40) is a useful representation of \(\mathcal{G}_{j}\). Successive matrix-vector multiplications can compute the series of polynomials in Eq. (40). This scales as \(O\left(N^{2}\right)\), and the computational effort can be reduced even further: for sparse superoperators, the matrix-vector operation can be replaced by an equivalent semi-linear scaling, \(\sim O(N)\)[43].
An immediate question is how to choose the set of expansion polynomials \(P_{n}(x)\). As a rule of thumb, we search for a polynomial basis that converges the fastest. An orthogonal set of polynomials is the first step towards fast convergence [40].
An efficient implementation can be done recursively. The Chebyshev polynomials and the Newton interpolation polynomials are two orthogonal expansion series for \(P_{n}(\mathcal{L}_{j})\). When the Hamiltonian is non-Hermitian, the eigenvalue domain becomes complex, and the Chebyshev approach is no longer appropriate. Then, the Newtonian or Arnoldi approach should be used instead [40, 44, 45].
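A minimal sketch of such a Krylov approximation is given below: the action of \(\exp(\Delta t\,\mathcal{L}_{j})\) on a vector is approximated in a \(K\)-dimensional Arnoldi space and compared with a dense matrix exponential. The test matrix is a random non-Hermitian stand-in, not a Liouvillian from this work.

```python
# Sketch: Arnoldi (Krylov) approximation of exp(dt*L) @ v for a non-Hermitian L.
import numpy as np
from scipy.linalg import expm

def arnoldi_expm_action(L, v, dt, K):
    """Approximate exp(dt*L) @ v in a K-dimensional Krylov space."""
    n = v.size
    Q = np.zeros((n, K + 1), dtype=complex)
    Hk = np.zeros((K + 1, K), dtype=complex)
    beta = np.linalg.norm(v)
    Q[:, 0] = v / beta
    m = K
    for j in range(K):
        w = L @ Q[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            Hk[i, j] = np.vdot(Q[:, i], w)
            w = w - Hk[i, j] * Q[:, i]
        Hk[j + 1, j] = np.linalg.norm(w)
        if Hk[j + 1, j] < 1e-14:               # invariant subspace found
            m = j + 1
            break
        Q[:, j + 1] = w / Hk[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * Q[:, :m] @ (expm(dt * Hk[:m, :m]) @ e1)

rng = np.random.default_rng(0)
n, dt = 64, 0.1
L = -1j * rng.standard_normal((n, n)) - 0.05 * np.eye(n)   # non-Hermitian test matrix
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
exact = expm(dt * L) @ v
for K in (4, 8, 12, 16):
    approx = arnoldi_expm_action(L, v, dt, K)
    err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
    print(K, err)                              # error decreases rapidly with K
```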
Note that in Eq. (40), only the coefficients \(a_{n}(t)\) are time-dependent. The solution at intermediate time points can be obtained by calculating the coefficients for intermediate points with negligible additional computational effort.
Quantum control of gates requires exceptionally high accuracy. The convergence rate of Eq. (38) with a piecewise-constant \(\mathcal{L}_{j}\) is slow, leading to extensive numerical effort. To obtain faster convergence, we must consider the problem of time ordering within the time step \(\Delta t\). To overcome the problem of time ordering, we combine the polynomial solution of Eq. (36) and the formal solution of the integral equation (37). In Eq. (37), the free propagator appears both as a free term and in the integrand. The solution of the integral equation requires an iterative approach since \(\hat{\boldsymbol{\rho}}(\tau)\) also appears in the integrand. The iteration is carried out by extrapolating the solution from one time step to the next, \(t\) to \(t+dt\). The integral in the formal solution Eq. (37) is reformulated as an inhomogeneous source term:
\[\frac{d\overrightarrow{\rho}(t)}{dt}=\mathcal{L}_{j}\overrightarrow{\rho}(t) +\overrightarrow{\mathbf{s}}(t) \tag{41}\]
The source term will represent the time-dependent/nonlinear part of the dynamics. Treating Eq. (41) as an inhomogeneous ODE will give rise to a formal solution to the time-dependent problem.
We can write the solution of Eq. (41) as:
\[\begin{split}\overrightarrow{\rho}(t+\Delta t)&= \tilde{\boldsymbol{\mathcal{G}}}_{j}(t)\overrightarrow{\rho}(t)+\int_{t}^{t+ \Delta t}\tilde{\boldsymbol{\mathcal{G}}}_{j}(t-\tau)\overrightarrow{ \mathbf{s}}(\tau)d\tau\\ &=\exp\left(\mathcal{L}_{j}t\right)\overrightarrow{\rho}+\\ &\int_{t}^{t+\Delta t}\exp\left[\mathcal{L}_{j}(t-\tau)\right] \overrightarrow{\mathbf{s}}(\tau)d\tau\\ &=\exp\left(\mathcal{L}_{j}t\right)\overrightarrow{\rho}_{0}+\\ &\exp\left(\mathcal{L}_{j}t\right)\int_{t}^{t+\Delta t}\exp \left(-\mathcal{L}_{j}\tau\right)\overrightarrow{\mathbf{s}}(\tau)d\tau\end{split} \tag{42}\]
where \(\tilde{\boldsymbol{\mathcal{G}}}_{j}\) is the time-independent propagator obtained by the vectorization procedure of Appendix A.
The source term is expanded as a polynomial in time so that the integral can be solved analytically.
\[\overrightarrow{\mathbf{s}}(t)=\sum_{n=0}^{M-1}\frac{t^{n}}{n!}\overrightarrow {\mathbf{s}}_{n} \tag{43}\]
This expansion allows us to solve formally the integral in Eq. (42)
\[\int e^{at}\,\frac{t^{m}}{m!}\,dt=e^{at}\sum_{n=1}^{m+1}(-1)^{n-1}\frac{t^{\,m+1-n}}{a^{n}\,(m+1-n)!}.\]
The problem is now shifted to obtaining the expansion coefficients \(\overrightarrow{\mathbf{s}}_{n}\). This task is obtained by approximating \(\overrightarrow{\mathbf{s}}(t)\) by an orthogonal polynomial in the time interval. We chose a Chebychev expansion
\[\overrightarrow{\mathbf{s}}(t)\approx\sum_{n=0}^{M-1}\overrightarrow{ \mathbf{b}}_{n}T_{n}(t) \tag{44}\]
where the coefficients \(\overrightarrow{\mathbf{b}}_{n}\) are computed by a scalar product of \(T_{n}(t)\) with \(\overrightarrow{\mathbf{s}}(t)\); the coefficients are approximated using Chebyshev sampling points in the time interval \(\Delta t\).
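As a small illustration of this step, the sketch below fits a scalar, illustrative source term on one time step with numpy's Chebyshev utilities; the step is mapped to the reference interval \([-1,1]\) and \(M=7\) sampling points are used, mirroring the value of \(M\) reported below for the Hadamard gate. The true source term is operator-valued and depends on \(\overrightarrow{\rho}(t)\), cf. Eq. (51).

```python
# Sketch of the Chebyshev fit of Eq. (44) on a single time step for a scalar,
# illustrative source term (the actual source is operator-valued).
import numpy as np
from numpy.polynomial import chebyshev as C

t0, dt, M = 0.0, 0.5, 7

def source(tau):                                  # illustrative time dependence
    return np.sin(3.0 * tau) * np.exp(-tau)

pts = C.chebpts1(M)                               # Chebyshev points in [-1, 1]
tau = t0 + 0.5 * dt * (pts + 1.0)                 # mapped to [t0, t0 + dt]
coef = C.chebfit(pts, source(tau), M - 1)         # coefficients b_n of Eq. (44)

# evaluate the interpolant on a fine grid and check the approximation error
fine = np.linspace(t0, t0 + dt, 201)
approx = C.chebval(2.0 * (fine - t0) / dt - 1.0, coef)
print(np.max(np.abs(approx - source(fine))))      # already tiny for M = 7
```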
The coefficients \(\overrightarrow{\mathbf{s}}_{n}\) of Eq. (43) are calculated by relating this polynomial to the Chebyshev expansion Eq. (44). This source term is inserted into the integral Eq. (42), leading to a numerical approximation of the solution of the time-dependent Liouville equation (TDLE). The addition of the source term into the dynamics gives rise to an analytical solution for the last term in Eq. (42), presented here on the RHS of Eq. (45)
\[J_{m+1}(\mathcal{L}_{j},t)\equiv\int_{t}^{t+\Delta t}\exp(-\mathcal{L}_{j} \tau)\tau^{m}d\tau,\quad m=0,1,\ldots \tag{45}\]
with the recursion relations:
\[\begin{split} J_{m}(\mathcal{L}_{j},t)=-\frac{\exp(-\mathcal{L}_{ j}t)t^{m-1}}{\mathcal{L}_{j}}+\frac{m-1}{\mathcal{L}_{j}}J_{m-1}(\mathcal{L}_{j}, t),\\ m=2,3,\ldots\end{split} \tag{46}\]
where
\[J_{1}(\mathcal{L}_{j},t)\equiv\int_{t}^{t+\Delta t}\exp(-\mathcal{L}_{j}\tau) d\tau=\frac{1-\exp(-\mathcal{L}_{j}t)}{\mathcal{L}_{j}}. \tag{47}\]
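These relations are easily verified numerically. The sketch below checks the closed form (47) and the recursion (46) against direct quadrature for an arbitrary non-Hermitian \(4\times 4\) test matrix, assuming the integrals run over the local time of the step, i.e. \(J_{m}=\int_{0}^{t}\exp(-\mathcal{L}_{j}\tau)\,\tau^{m-1}\,d\tau\), which is the reading consistent with Eq. (47).

```python
# Sketch: numerical check of Eqs. (46)-(47) for a random non-Hermitian matrix,
# assuming the local-time reading J_m = int_0^t exp(-L*tau) tau^(m-1) d tau.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

rng = np.random.default_rng(2)
n, t = 4, 0.7
L = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
Linv = np.linalg.inv(L)

tau = np.linspace(0.0, t, 3001)
Etau = np.array([expm(-L * s) for s in tau])            # exp(-L*tau) on a fine grid

def J_quad(m):
    return trapezoid(Etau * tau[:, None, None] ** (m - 1), tau, axis=0)

J = {1: Linv @ (np.eye(n) - expm(-L * t))}              # closed form, Eq. (47)
for m in range(2, 5):                                   # recursion, Eq. (46)
    J[m] = -(t ** (m - 1)) * Linv @ expm(-L * t) + (m - 1) * Linv @ J[m - 1]

for m in range(1, 5):
    print(m, np.max(np.abs(J[m] - J_quad(m))))          # agreement up to quadrature error
```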
Plugging Eq. (43) into this formulation leads to the following:
\[\begin{split}\exp(\mathcal{L}_{j},t)\sum_{n=0}^{M-1}\frac{1}{n!} \int_{0}^{t}\exp(-\mathcal{L}_{j}\tau)t^{n}d\tau s_{n}=\\ \exp(\mathcal{L}_{j}t)\sum_{n=0}^{M-1}\frac{1}{n!}J_{n+1}( \mathcal{L}_{j},t)s_{n}=\sum_{n=0}^{M-1}f_{n+1}(\mathcal{L}_{j},t)s_{n}\end{split} \tag{48}\]
In Eq. (35), the Liouvillian was split into an explicitly time-dependent and an approximately time-independent part; this partition is used to re-write the TDLE
\[\frac{d\overrightarrow{\rho}(t)}{dt}=\mathcal{L}_{j}\overrightarrow{\rho}( t)+\overrightarrow{\mathbf{s}}(\overrightarrow{\rho}(t),t) \tag{49}\]
which is a reformulation of Eq. (41). The same analysis leads to:
\[\begin{split}\overrightarrow{\rho}(t,t+\Delta t)=\exp(\mathcal{L}_{ j}t)\overrightarrow{\rho}(t)+\\ \exp(\mathcal{L}_{j}t)\int_{t}^{t+\Delta t}\exp(-\mathcal{L}_{j} \tau)\overrightarrow{\mathbf{s}}(\overrightarrow{\rho}(\tau),\tau)d\tau\end{split} \tag{50}\]
Now, we can use these formulations to approximate Eq. (50):
\[\begin{split}\overrightarrow{\rho}(t,t+\Delta t)\approx P_{M}( \mathcal{L}_{j},(t,t+\Delta t))\overrightarrow{S}(t,t+\Delta t)_{M}+\\ \sum_{n=0}^{M-1}\frac{t^{n}}{n!}\overrightarrow{S}(t,t+\Delta t )_{n}\end{split} \tag{51}\]
\(P_{M}(\mathcal{L}_{j},(t,t+\Delta t))\overrightarrow{S}(t,t+\Delta t)_{M}\) is approximated by the Arnoldi method (the eigenvalue spectrum of \(\mathcal{L}\) is distributed on the complex plane), where
\[\overrightarrow{S(t)_{M}}\equiv\overrightarrow{\mathbf{s}}(t)+\mathcal{L}_{ t}\overrightarrow{\rho(t)}\]
are computed by expanding it in time in the same form as (43). We have used here the fact that Eq. (51) and \(\overrightarrow{\mathbf{s}}_{j}\) include a dependence on \(\overrightarrow{\rho}(t)\). It would seem we are back to the same problem; however, it can be overcome through a process of iteration and refinement.
First, we guess a solution \(\overrightarrow{\rho}_{g}(t)\) within a time step \(\Delta t\) and use it in Eq. (51) to obtain a new approximate solution. This procedure is continued until the solution converges to the desired accuracy. The initial guess is extrapolated from the previous time step to accelerate convergence.
Three numerical parameters determine the precision of the propagation and the convergence rate:
* The size of the time step \(\Delta t\).
* Number of Chebyshev sampling points in each time step \(M\).
* The size of the Krylov space \(K\), which corresponds to the basis of the Arnoldi algorithm. It is important to note that this parameter is limited by \(\mathrm{Dim}\{\mathcal{L}\}-1\).
Each of these is adjustable by the user to best fit their needs. For example, in the Hadamard transformation system, we have used the following parameters: \(\Delta t=0.1\), \(M=7\), and \(K=3\). With these parameters, we obtained an accuracy of \(10^{-8}\) for the propagator, two orders of magnitude beyond the infidelity of the target transformation. For the entangling gate, we adjusted the parameters to a higher resolution, matching \(10^{-8}\) for the infidelity of the target transformation (\(M=K=9\)).
## Appendix C The Impact of Imperfect Time-keeping on Quantum Control
|
2309.11985 | Super-localised wave function approximation of Bose-Einstein condensates | This paper presents a novel spatial discretisation method for the reliable
and efficient simulation of Bose-Einstein condensates modelled by the
Gross-Pitaevskii equation and the corresponding nonlinear eigenvector problem.
The method combines the high-accuracy properties of numerical homogenisation
methods with a novel super-localisation approach for the calculation of the
basis functions. A rigorous numerical analysis demonstrates superconvergence of
the approach compared to classical polynomial and multiscale finite element
methods, even in low regularity regimes. Numerical tests reveal the method's
competitiveness with spectral methods, particularly in capturing critical
physical effects in extreme conditions, such as vortex lattice formation in
fast-rotating potential traps. The method's potential is further highlighted
through a dynamic simulation of a phase transition from Mott insulator to
Bose-Einstein condensate, emphasising its capability for reliable exploration
of physical phenomena. | Daniel Peterseim, Johan Wärnegård, Christoph Zimmer | 2023-09-21T11:51:09Z | http://arxiv.org/abs/2309.11985v1 | # Super-localised wave function approximation of Bose-Einstein condensates+
###### Abstract
This paper presents a novel spatial discretisation method for the reliable and efficient simulation of Bose-Einstein condensates modelled by the Gross-Pitaevskii equation and the corresponding nonlinear eigenvector problem. The method combines the high-accuracy properties of numerical homogenisation methods with a novel super-localisation approach for the calculation of the basis functions. A rigorous numerical analysis demonstrates superconvergence of the approach compared to classical polynomial and multiscale finite element methods, even in low regularity regimes. Numerical tests reveal the method's competitiveness with spectral methods, particularly in capturing critical physical effects in extreme conditions, such as vortex lattice formation in fast-rotating potential traps. The method's potential is further highlighted through a dynamic simulation of a phase transition from Mott insulator to Bose-Einstein condensate, emphasising its capability for reliable exploration of physical phenomena.
## 1 Introduction
In this paper we consider the calculation of the ground states and dynamics of Bose-Einstein condensates (BECs). Mathematically, this corresponds to first computing the wave function of the condensate as the solution of a constrained nonlinear minimisation problem representing either ground or excited states. The dynamical evolution of these eigenstates under external perturbation or excitation of the system is then modelled by a nonlinear time-dependent PDE. Among the many applications, we note in particular the keen interest of the physics community in ultracold atoms forming BECs and, more recently, the study of quasi-particles in semiconductor physics, such as exciton-polaritons, which behave as BECs under certain conditions [22]. It is noted that the numerical treatment of the latter example may require methods that are robust to low regularity of the wave function, e.g. due to the presence of Gaussian noise [43]. A distinctive feature of the novel methodology of this work is
its higher order convergence results under very low regularity assumptions and at low computational cost. Numerical methods for calculating ground states and dynamics of BECs have attracted much interest in both the physics and computational mathematics communities. It is therefore difficult to give an exhaustive list of previous work. Categorised by spatial discretisation, the most popular approach has arguably been that of Fourier-based or spectral methods [14, 17, 10, 12], but finite difference methods (FDM) [25, 47, 14, 16, 3], adaptive finite element methods (FEM) [26] and even combinations of Fourier methods and FEM [19] have also been investigated. Overviews and meaningful comparisons of these approaches have been made in [15, 9, 18]. In general, these considerations have found that spectral methods are the most efficient for simple geometries and smooth potentials, although certain situations such as a rapidly varying optical lattice potential have been found to favour an FDM over a spectral approach [47]. Recently, advanced problem-adapted FEMs, which are both fast and, unlike Fourier-based methods, perform well in cases of low regularity, have been proposed to solve both the time-dependent cubic nonlinear Schrodinger equation and the associated Gross-Pitaevskii nonlinear eigenvector problem [35, 38, 28, 36]. The advantages of these problem-adapted finite element methods over classical polynomial-based approaches derive mainly from two ideas. First, the full flexibility of finite elements is exploited to adapt their shape functions to a partial differential operator (e.g. a suitable linearisation of the full nonlinear operator) associated with the particular problem. This adaptation, in the spirit of numerical homogenisation by localised orthogonal decomposition [46, 44, 6], preserves the favourable optimal convergence rates known for smooth solutions to problems with low regularity due to the presence of rough potentials. Second, instances of the wave function's density are replaced by their \(L^{2}\) projection onto this adapted finite element space. This quadrature-like simplification dramatically speeds up the assembly of nonlinear terms, which are often the complexity bottleneck in practice.
The present work not only provides a mathematically rigorous foundation for this second step, but also substantially advances this demonstrably promising approach. On the theoretical side, we prove that the optimal convergence rates can be recovered without increasing the computational complexity when computing the minimum energy in a low regularity regime. In particular, our proof of the convergence of \(\mathcal{O}(H^{6})\) holds for potentials in \(L^{2+\sigma}(\mathcal{D})\), where \(\sigma\) is non-negative in 1d, 2d and positive in 3d. Here \(H\) denotes the mesh size, which can be coarse in the sense that it need not resolve features of the solution such as kinks and fast variations. Previously, optimal convergence rates of \(\mathcal{O}(H^{6})\) were known only for the much more expensive exact evaluation of the nonlinearity in finite element space [36].
On the practical side, it is shown that the construction of the problem-adapted FE space can be significantly improved by using the recent super-localisation approach of [34]. From a computational point of view, the complexity of the approach depends strongly on the localisation, i.e. the decay, of the operator-adapted basis functions. By using the super-localised basis representation of the wave function, speed-ups in 2d of up to two orders of magnitude are achieved compared to the earlier work in [28]. This huge speed-up is complemented by other practical improvements. The operator-adapted space involves a two-level discretisation. In our implementation, a virtual mesh with a classical \(\mathbb{P}^{3}\) finite element space is used to represent the basis functions. This is in contrast to previous work having used localised orthogonal decomposition to compute a basis and a \(\mathbb{P}^{1}\) representation of the basis functions. The minimisation and time-stepping algorithms are state of the art and an implementation in Julia is provided.
Overall, our novel superlocalised wave-function approximation combines robustness to low regularity with competitive speed and geometric flexibility. This is in contrast to the competing approach of Fourier-based methods, whose efficiency and optimality depend strongly
on the smoothness of the potential and the wave solution. To highlight this difference, several comparisons with the popular GPELab [10, 12] are presented. For a fast rotating large BEC, i.e. with strong nonlinearity, we also compare with the highly optimised high performance computing package BEC2HPC [29]. These comparisons show that the performance of our method is already competitive with spectral approaches for smooth problems. In the presence of rough potentials, our approach is of unprecedented efficiency in terms of accuracy per computation time.
Outline: A short mathematical introduction to the equations is given in section 2, our spatial discretisation is subsequently introduced in section 3, optimal convergence rates of a modified energy functional are proved for this spatial discretisation in section 4. Numerical examples demonstrating these rates are given in section 5. A combined minimisation and time-dependent problem is solved in 3d in section 6 and some details of our implementation and complements to our proof are found in the Appendix.
## 2 Mathematical modeling of BECs
BECs provide a way to study quantum physics on much larger scales than, say, individual atoms. A BEC is formed when a dilute gas of bosons is cooled to near absolute zero. Dilute means that essentially only pairwise interactions occur, so that crystal formation is avoided. Mathematically, before condensation, the gas is described by the high-dimensional wave function \(\Psi(x_{1},\ldots,x_{N},t)\in L^{2}(\mathbb{R}^{3N},\mathbb{C})\), whose time evolution is subject to the linear Schrodinger equation, where \(N\) is the number of bosons. Below a certain threshold temperature, however, most of the bosons condense into the same quantum state, whereby the wave function of the gas becomes well described by a single complex-valued wave function \(u=u(x,t)\), which is governed by the cubic nonlinear Schrodinger equation (also called the Gross-Pitaevskii equation in the present context) in only 3+1-dimensional space.
In experiments with BECs, it is common to first create a BEC under the influence of a trapping potential, then perturb the BEC and study its dynamics. Mathematically, this experimental setup translates into two problems of different flavor, namely a nonlinear constrained minimisation or a nonlinear eigenvector problem, to compute a steady state that will serve as the initial state for the dynamical simulation in a perturbed configuration.
More specifically, we will consider the problem of minimising energy
\[E(v)=\int_{\mathbb{R}^{3}}\frac{1}{2}|\nabla v|^{2}+V|v|^{2}+v\overline{\left( \mathbf{\Omega}\cdot(\mathbf{x}\times-\mathrm{i}\nabla)\right)}v+\frac{\beta }{2}|v|^{4}\,\mathrm{d}x \tag{1}\]
within an appropriate class of wave functions \(\mathbb{S}\subset L^{2}(\mathbb{R}^{3})=L^{2}(\mathbb{R}^{3},\mathbb{C})\) subject to a unit mass constraint. In this formulation, \(\beta\) is proportional to the number of Bosons and their interaction strength (scattering length), \(V\) is a trapping potential, \(\mathrm{i}=\sqrt{-1}\) is the imaginary unit, \(\overline{v}\) denotes the complex conjugate of \(v\) and we consider the possibility of a rotating BEC whose rotation, i.e. direction and angular velocity, is given by \(\mathbf{\Omega}\in\mathbb{R}^{3}\). Note that the mathematical formulation is in the rotating frame of reference. Without loss of generality, we can assume that \(\mathbf{\Omega}=(0,0,\Omega)\) for some scalar parameter \(\Omega\geq 0\).
While the mathematical problem is typically phrased in full space \(\mathbb{R}^{3}\), practical computations are restricted to some sufficiently large but bounded domain \(\mathcal{D}\subset\mathbb{R}^{d}\) (for \(d=1,2,3\)) which is assumed convex with a polyhedral boundary. In fact, \(\mathcal{D}\) need seldom be large since imposing only \(\lim_{|x|\to\infty}V(x)=\infty\) on admissible potentials leads to the conclusion that any minimiser must decay exponentially fast [15, Thm. 2.5] and for harmonic trapping potentials the decay is in fact similar to that of a Gaussian. On \(\mathcal{D}\), the Sobolev space of complex-valued, weakly differentiable functions with \(L^{2}\)-integrable partial derivatives is denoted as usual by
\(H^{1}(\mathcal{D}):=H^{1}(\mathcal{D},\mathbb{C})\) and its subspace of functions with zero trace on \(\partial\mathcal{D}\) by \(H^{1}_{0}(\mathcal{D})\). The class of admissible wave functions for the minimisation problem is therefore given by
\[\mathbb{S}\coloneqq\{u\in H^{1}_{0}(\mathcal{D}):\|u\|=1\},\]
where \(\|\bullet\|\) denotes the \(L^{2}(\mathcal{D})\) norm. The minimisation problem then becomes
\[\min_{v\in\mathbb{S}}\int_{\mathcal{D}}\tfrac{1}{2}|\nabla v|^{2}+V|v|^{2}+v \Omega\overline{L_{z}v}+\frac{\beta}{2}|v|^{4}\,\mathrm{d}x, \tag{2}\]
where \(L_{z}=-\mathrm{i}(x\partial_{y}-y\partial_{x})\) is the z-component of the angular momentum. The corresponding Euler-Lagrange equation is
\[\lambda^{0}u^{0}=-\tfrac{1}{2}\triangle u^{0}+Vu^{0}+\Omega L_{z}u^{0}+\beta|u^{0}|^{2}u^{0}, \tag{3}\]
where equality is to be understood in a sufficiently weak sense. This is an eigenvalue problem for a nonlinear partial differential operator, which in numerical linear algebra and scientific computing is called a nonlinear eigenvector problem.
For \(\Omega=0\), the problem of finding the normalised eigenpair \((\lambda^{0},u^{0})\in\mathbb{R}\times\mathbb{S}\) satisfying (3) with minimal \(\lambda^{0}\) and finding the (global) minimiser of (2) are in fact equivalent [23]. However, this does not hold in the case \(\Omega>0\) and indeed it is possible to find \((u^{0},\lambda^{0})\) and \((u_{\mathrm{gs}},\lambda_{\mathrm{gs}})\) satisfying Eq. (3) such that \(E(u_{\mathrm{gs}})<E(u^{0})\) but \(\lambda^{0}<\lambda_{\mathrm{gs}}\). A striking such example is given in Section 5.4. We note that in the absence of a rotational term there is a unique, real and non-negative (positive in the interior) minimiser, the argument is classical and can, for example, be found in [23, Appendix]. However, if \(\Omega>0\), the additional assumption that there is an \(\epsilon>0\) such that
\[V(x)-\frac{1+\epsilon}{4}\Omega^{2}|x|^{2}\geq 0,\]
is required for \(E\) to be positive, coercive and weakly lower semi-continuous, thus guaranteeing existence of a minimal energy [2]. In this setting uniqueness is much harder to establish. If \(\Omega\) is less than some critical value, uniqueness can be recovered up to a complex shift and, in addition, up to rotation if \(V\) is rotationally invariant. Interestingly, for a certain critical rotational speed, uniqueness can ultimately be lost in terms of the density \(|u|^{2}\) even up to rotation [20].
Once the condensate has formed under the influence of the trapping potential \(V\), one may perturb it by, e. g., changing the potential to \(\hat{V}\). The subsequent dynamics are governed by the Schrodinger equation:
\[\langle\mathrm{i}\partial_{t}u,v\rangle=\tfrac{1}{2}\langle E^{\prime}(u),v \rangle\quad\forall v\in H^{1}_{0}(\mathcal{D}),\]
where \(E^{\prime}(u)\) denotes the Frechet derivative of \(E\). The \(L^{2}(\mathcal{D})\) inner product is denoted \(\langle v,w\rangle:=\int_{\mathcal{D}}v(x)\overline{w(x)}\,\mathrm{d}x\). We restrict our attention to studying the dynamics without external rotation, accordingly the following time-dependent problem is considered: Given \(u(x,0)=u_{0}(x)\in H^{2}(\mathcal{D})\cap H^{1}_{0}(\mathcal{D})\), find \(u\in C([0,T],H^{1}_{0}(\mathcal{D}))\) and \(\partial_{t}u\in C([0,T],H^{-1}(\mathcal{D}))\) such that
\[\langle\mathrm{i}\partial_{t}u,v\rangle_{H^{-1},H^{1}_{0}}=\tfrac{1}{2} \langle\nabla u,\nabla v\rangle+\langle\hat{V}u+\beta|u|^{2}u,v\rangle \tag{4}\]
for all \(v\in H^{1}_{0}(\mathcal{D})\) and \(t\in[0,T]\). The assumption that \(u_{0}\in H^{2}(\mathcal{D})\cap H^{1}_{0}(\mathcal{D})\) may seem restrictive but is consistent with \(u_{0}\) being an energy minimiser in the sense previously described. This time-dependent problem is locally well-posed in the sense that, on some time
interval that may depend on \(\|u_{0}\|_{H^{1}}\), there exists a unique solution depending continuously on the initial datum [24, Theorem 3.3.9, Corollary 3.3.11], where we point out that the corollary enters into effect due to the regularity of the initial value, i.e., in the terms of the cited reference, \(g(u)|_{t=0}=Vu_{0}+\beta|u_{0}|^{2}u_{0}\in L^{2}(\mathcal{D})\) provided \(V\in L^{2}(\mathcal{D})\). By testing the time-dependent Eq. (4) with \(v=u\) and considering the imaginary part we find that the mass is conserved,
\[\|u(t)\|=\|u_{0}\|.\]
In a similar fashion, testing with \(v=\partial_{t}u\) and considering the real part, formally leads to the conclusion that energy is conserved, i.e.,
\[E(u(t))=E(u_{0}).\]
Though, a priori, this last argument assumes \(\partial_{t}u\in L^{2}(\mathcal{D})\), it does in fact hold in our setting as soon as \(u_{0}\in H^{1}_{0}(\mathcal{D})\)[24, Theorem 3.3.9]. These two conservation laws are the only ones known to hold in all dimensions \(d\leq 3\) and in the presence of potential terms. There are, however, more conservation laws known in less general settings.
## 3 Spatial discretisation
In this section we discuss the spatial discretisation for the nonlinear minimisation problem (2) as well as for the time-dependent problem (4). As a starting point we introduce a quasi-uniform simplicial mesh on the convex and polyhedral domain \(\mathcal{D}\). The simplicial subdivision is denoted \(\mathcal{T}_{H}\) so that \(\overline{\mathcal{D}}=\bigcup_{T\in\mathcal{T}_{H}}T\), and the parameter \(H\) denotes the mesh size. For an efficient implementation we will use a Cartesian grid to define the simplicial subdivision; however, the method and its numerical analysis are not restricted to such a structured grid. The details of the specific mesh are therefore found in the Appendix. Given \(k\in\mathbb{N}\) and a mesh \(\mathcal{T}_{H}\), the finite element space \(\mathbb{P}^{k}_{H}\subset H^{1}(\mathcal{D})\) is defined by
\[\mathbb{P}^{k}_{H}\coloneqq\mathbb{P}^{k}_{H}(\mathcal{T}_{H}) \coloneqq\big{\{}v_{H}\in C(\overline{\mathcal{D}})\,\big{|}\,v_{H}|_{T}\text{ is a polynomial of degree }\leq k\text{ for all }T\in\mathcal{T}_{H}\big{\}}.\]
For the subspace of \(\mathbb{P}^{k}_{H}\)-functions vanishing at a subset of boundary edges or faces \(\Gamma\subset\partial\mathcal{D}\), we write \(\mathbb{P}^{k}_{H,\Gamma}\).
### Ideal OD-spaces
A natural goal in spatial discretisation is to achieve high accuracy at low resolution and low computational cost, especially for problems with multiscale or more general features of low regularity. Consider the abstract setting of a variational equation that, given \(f\in H^{2}(\mathcal{D})\cap H^{1}_{0}(\mathcal{D})\), seeks \(u\in H^{1}_{0}(\mathcal{D})\) such that
\[a(u,v)=\langle f,v\rangle \tag{5}\]
holds for all \(v\in H^{1}_{0}(\mathcal{D})\). For expository purposes we also write \(\langle\mathcal{A}u,v\rangle=a(u,v)\) and let \(\mathcal{A}^{-1}\) denote the solution operator \(\mathcal{A}^{-1}\colon L^{2}(\mathcal{D})\mapsto H^{1}_{0}(\mathcal{D})\). Not surprisingly, the universal approximation space \(\mathbb{P}^{1}_{H}\) generally lacks the desired properties just described. In contrast, a general formal way to obtain these properties is to consider the problem-adapted space
\[\mathcal{V}_{\text{\tiny OD}}\coloneqq\mathcal{A}^{-1}\mathbb{P}^{1}_{H}.\]
To demonstrate the universal approximation properties of this space, consider the candidate approximation \(u_{H}=\mathcal{A}^{-1}P_{H}f\), where \(P_{H}\) denotes the \(L^{2}(\mathcal{D})\) projection onto \(\mathbb{P}^{1}_{H}\). For \(u_{H}\), the equality
\[a(u_{H},v)=a(\mathcal{A}^{-1}P_{H}f,v)=\langle P_{H}f,v\rangle \tag{6}\]
holds for all \(v\in H^{1}_{0}(\mathcal{D})\). Subtracting (6) from (5), choosing \(v=u-u_{H}\), and assuming the coercivity of \(a(\cdot,\cdot)\) leads to
\[c\|u-u_{H}\|_{H^{1}(\mathcal{D})}^{2}\leq a(u-u_{H},u-u_{H}) =\langle f-P_{H}f,u-u_{H}\rangle\] \[=\langle f-P_{H}f,u-u_{H}-P_{H}(u-u_{H})\rangle\] \[\leq CH^{2}\|f\|_{H^{2}(\mathcal{D})}H\|u-u_{H}\|_{H^{1}(\mathcal{ D})}. \tag{7}\]
This then implies, via Cea's lemma, the error estimate
\[\min_{v\in\mathcal{A}^{-1}\mathbb{P}^{1}_{H}}\|u-v\|_{H^{1}( \mathcal{D})}\lesssim\|u-u_{H}\|_{H^{1}(\mathcal{D})}\lesssim H^{3}\|f\|_{H^ {2}(\mathcal{D})}. \tag{8}\]
Note that the error estimate does not depend on the possible oscillatory nature of \(a\), nor on its regularity (other than its coercivity). To derive \(L^{2}\) estimates, we use the following characterisation of the operator-adapted space. The space \(\mathcal{V}_{\textsc{OD}}=\mathcal{A}^{-1}\mathbb{P}^{1}_{H}\) equals the \(a\)-orthogonal complement of the kernel of \(P_{H}\) in \(H^{1}_{0}(\mathcal{D})\), i.e.,
\[\mathcal{A}^{-1}\mathbb{P}^{1}_{H}=\ker(P_{H})^{\perp_{a}}. \tag{9}\]
A proof can be found in [6, Rem. 3.6 & 3.7]. Thus we also have the ideal splitting \(H^{1}_{0}(\mathcal{D})=\mathcal{A}^{-1}\mathbb{P}^{1}_{H}\oplus\ker(P_{H})\). For this reason, the space \(\mathcal{V}_{\textsc{OD}}=\mathcal{A}^{-1}\mathbb{P}^{1}_{H}\) is also called OD space, where OD is short for Orthogonal Decomposition. With Eq. (9), it is straightforward to derive \(L^{2}\) error estimates for the solution of the variational equation in the OD space, denoted \(u_{\textsc{OD}}\). Since \(u-u_{\textsc{OD}}\in\ker(P_{H})\), there holds
\[\|u-u_{\textsc{OD}}\|=\|u-u_{\textsc{OD}}-P_{H}(u-u_{\textsc{OD}})\|\leq CH\| u-u_{\textsc{OD}}\|_{H^{1}}\lesssim H^{4}. \tag{10}\]
Now the question arises, what is \(\mathcal{A}\) in our context? A priori, \(\mathcal{A}\) can be any linear operator associated with the equations (3) and (4). For this cubic nonlinear Schrödinger equation, the choice \(\mathcal{A}=-\triangle+V_{d}\), where \(V_{d}\) is the low-regularity part of the potential, has proven sufficient both theoretically and numerically, at least in the absence of rotational terms. Thus, for sufficiently smooth \(V\), \(\mathcal{A}=-\triangle\) is a good choice. Our numerical experiments show that this is also true for rotating BECs. As will become clear, the more precise statement is that \(V_{d}\) is any part of the potential with regularity less than \(H^{2}(\mathcal{D})\). As an aside, we note that it is also possible to define higher-order OD spaces, see [45].
### Super-localised wave function approximation
As an approximation of a basis of the OD-space \(\mathcal{V}_{\textsc{OD}}\), we use the so-called super-localised orthogonal decomposition (SLOD) [34]. This means that, similar to the Wannier functions [49, 50], our approximation space is represented by problem-adapted, local responses to linear operators associated with the Eq. (3). To represent these responses, we use classical finite element spaces. Since localisation of the basis functions is key, we define the local spaces \(\mathbb{P}^{k}_{H}(\omega)\) and \(\mathbb{P}^{k}_{H,\Gamma}(\omega)\) for any open subset \(\omega\subset\mathcal{D}\) whose triangulation is a subset of \(\mathcal{T}_{H}\). More specifically, given a node \(z\in\mathcal{N}_{H}\), we define the \(\ell\)th-order patch \(\omega_{\ell}(z)\) for a given \(\ell=0,1,2,\ldots\) as
\[\omega_{\ell}(z)\coloneqq\bigcup_{\begin{subarray}{c}T\in\mathcal{T}_{H},\\ T\subset B_{H(\ell+1)}(z)\end{subarray}}T,\]
where \(B_{r}(z)\) denotes the ball with radius \(r\) around \(z\) with respect to the \(\infty\)-norm in \(\mathbb{R}^{d}\). On a uniform one-dimensional mesh, for instance, \(\omega_{\ell}(z)\) is simply the interval \([z-(\ell+1)H,\,z+(\ell+1)H]\) intersected with \(\overline{\mathcal{D}}\). We will use this notation to illustrate the SLOD for one-dimensional domains first. The ideas are then easily transferred to higher dimensions.
#### 3.2.1 Perfectly localised basis functions in one dimension
Consider the differential operator \(\mathcal{A}=-\partial_{xx}+V_{d}\) defined on a finite interval and with homogeneous Dirichlet boundary conditions. Here, \(V_{d}\) is the non-regular part of the potential \(V\), such as a highly oscillatory or discontinuous part. For a single hat function \(\Lambda_{z}\in\mathbb{P}^{1}_{H}\) centered at node \(z\), the response or wave, for lack of a better word, of \(\Lambda_{z}\) to the inverse operator \(\mathcal{A}^{-1}\) is generally globally supported, cf. Figure 1. The main idea behind SLOD is to use scaled wave responses associated with neighboring nodes as destructive interference, such that the tails of the waves cancel each other in such a way that their linear combination is locally supported. These linearly combined waves then form the basis of our approximation spaces. Since in one dimension there is only one direction in which the initial wave must be canceled away from the central node, two additional waves - one to the left and one to the right - (or one if \(z\in\partial\mathcal{D}\)) are sufficient to obtain a non-vanishing, fully localised basis function, e.g. for \(\mathcal{A}=-\partial_{xx}\) we have that \(\mathcal{A}^{-1}(2\Lambda_{z}-(\Lambda_{z+1}+\Lambda_{z-1}))\) has local support, cf. Figure 1. The support of the resulting basis function is contained in a patch of order \(1\) around the node \(z\), i.e. \(\omega_{1}(z)\). Note that the number of waves needed for localisation is independent of the choice of \(V_{d}\).
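To make the destructive-interference mechanism concrete, the following Julia sketch (an ad-hoc illustration with arbitrarily chosen mesh parameters, not taken from the reference implementation) computes the response of \(2\Lambda_{z}-\Lambda_{z-1}-\Lambda_{z+1}\) under \(\mathcal{A}^{-1}\) for \(\mathcal{A}=-\partial_{xx}\) on the unit interval, represents the response on a fine \(\mathbb{P}^{1}\) grid, and checks that it vanishes outside the order-\(1\) patch \(\omega_{1}(z)\).

```julia
using LinearAlgebra

# Coarse mesh with n interior nodes (size H); each coarse cell is refined into m fine cells.
n, m = 9, 20
H = 1/(n + 1); h = H/m
Nf = (n + 1)*m - 1                         # number of interior fine-grid nodes
x  = h .* (1:Nf)                           # fine-grid nodes in (0,1)

# P1 hat function on the coarse mesh centred at coarse node j
hat(j, s) = max(0.0, 1.0 - abs(s - j*H)/H)

z = 5                                      # central coarse node
f = 2 .* hat.(z, x) .- hat.(z-1, x) .- hat.(z+1, x)

# P1 finite element matrices on the fine mesh (homogeneous Dirichlet b.c.)
K = SymTridiagonal(fill(2/h, Nf), fill(-1/h, Nf-1))    # stiffness matrix for -u''
M = SymTridiagonal(fill(2h/3, Nf), fill(h/6, Nf-1))    # consistent mass matrix

u = K \ (M*f)                              # response of 2Λ_z - Λ_{z-1} - Λ_{z+1} under the inverse operator

# The response vanishes (up to roundoff) outside ω₁(z) = [(z-2)H, (z+2)H]
outside = abs.(x .- z*H) .> 2H + 1e-12
println(maximum(abs.(u[outside])))         # close to machine precision
println(maximum(abs.(u)))                  # nonzero inside the patch
```

Up to scaling, the computed response is the cubic B-spline associated with the coarse mesh, which foreshadows the choice of representation space discussed below.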
To represent the basis functions, one typically uses a finite element space \(\mathbb{P}^{k}_{h,\partial\mathcal{D}}\) with more degrees of freedom due to a higher polynomial degree, \(k\geq 1\), or due to a finer underlying mesh \(h\leq H\). A higher polynomial degree can capture the smoothness of the responses, while a finer mesh can capture information on smaller scales, e.g. due to a highly oscillatory potential \(V_{d}\). However, the number of basis functions of our discretisation space is independent of its representation and is only determined by the underlying space of the right-hand sides on which \(\mathcal{A}^{-1}\) operates, i.e. in our case \(\mathbb{P}^{1}_{H}\).
In higher dimensions, where there are infinitely many directions in which to cancel, it is still an open question whether perfect localisation is possible [30]. However, we will show below that quasi-locality in the sense of extremely fast decay to zero is always possible.
Figure 1: Response of a hat-function (left) and localised basis function (right) for the Laplace operator in one dimension. The dashed line illustrates the scaled right-hand side.
#### 3.2.2 Super-localisation in dimension two and three
Before generalising to higher dimensions, let us formalize the one-dimensional problem; since we know that there exists a normalised response \(\varphi_{\textsc{OD}}\) that is zero outside of \(\omega_{1}\coloneqq\omega_{1}(z)\), it becomes possible to compute \(\varphi_{\textsc{OD}}\) locally. For this purpose we introduce the operator \(\mathcal{A}\) restricted to \(H^{1}_{0}(\omega_{1})\), which we denote \(\mathcal{A}|_{\omega_{1}}\), and search for the normalised right side \(p^{*}\in\mathbb{P}^{1}_{H,\partial\omega_{1}\setminus\partial\mathcal{D}}( \omega_{1})\), such that \(\varphi_{\textsc{OD},1}^{*}=\mathcal{A}|_{\omega_{1}}^{-1}p^{*}\) minimises the conormal derivative \(-\partial_{x}\varphi_{\textsc{OD},1}^{*}\) at the local patch boundary \(\partial\omega_{1}\setminus\partial\mathcal{D}\). Note that \(-\partial_{x}\varphi_{\textsc{OD},1}^{*}=0\) holds at \(\partial\omega_{1}\setminus\partial\mathcal{D}\) by counting the degrees of freedom, and thus we have \(\varphi_{\textsc{OD},1}^{*}=\pm\varphi_{\textsc{OD}}\) if we extend \(\varphi_{\textsc{OD},1}^{*}\) by zero outside of \(\omega_{1}\).
Similarly in two and three dimensions, we fix \(\ell\geq 1\) and consider local problems around each node \(z\) on corresponding patches \(\omega_{\ell}\coloneqq\omega_{\ell}(z)\). Given the restriction \(\mathcal{A}|_{\omega_{\ell}}\colon H^{1}_{0}(\omega_{\ell})\to H^{-1}(\omega_{ \ell})\) of \(\mathcal{A}\) and the local approximation (extended by zero) \(\varphi_{\textsc{OD},\ell}=\mathcal{A}|_{\omega_{\ell}}^{-1}p\) of \(\varphi_{\textsc{OD}}=\mathcal{A}^{-1}p\), for a right-hand side \(p\in\mathbb{P}^{1}_{H,\partial\omega_{\ell}\setminus\partial\mathcal{D}}( \omega_{\ell})\), one can show that
\[\|\varphi_{\textsc{OD},\ell}-\varphi_{\textsc{OD}}\|_{a}=\sup_{v\in H^{1}_{0}(\mathcal{D})\setminus\{0\}}\frac{a(\varphi_{\textsc{OD},\ell}-\varphi_{\textsc{OD}},v)}{\|v\|_{a}}=\sup_{v\in H^{1}_{0}(\mathcal{D})\setminus\{0\}}\frac{1}{\|v\|_{a}}\int_{\partial\omega_{\ell}\setminus\partial\mathcal{D}}-(\nabla\varphi_{\textsc{OD},\ell})\cdot n\,v\,\mathrm{d}S \tag{11}\]
holds, see [34, p. 5 f.]. Therefore, to minimise the localisation error, we compute
\[p^{*}=\operatorname*{argmin}_{\begin{subarray}{c}p\in\mathbb{P}^{1}_{H,\partial\omega_{\ell}\setminus\partial\mathcal{D}}(\omega_{\ell})\\ \text{s.t.}\ \|p\|_{L^{2}(\omega_{\ell})}=1\end{subarray}}\big\|{-}\nabla(\mathcal{A}|_{\omega_{\ell}}^{-1}p)\cdot n\big\|_{L^{2}(\partial\omega_{\ell}\setminus\partial\mathcal{D})}. \tag{12}\]
Note that, in the original paper [34], the authors search for a \(p\) that is (almost) \(L^{2}\)-orthogonal to the space of \(\mathcal{A}\)-harmonic functions on \(\omega_{\ell}\). This leads to a singular value decomposition of a large matrix in order to capture the behaviour of the \(\mathcal{A}\)-harmonic functions. Our approach (12), on the other hand, requires only the solution of a generalised eigenvalue problem of size \(\dim\mathbb{P}^{1}_{H,\partial\omega_{\ell}\setminus\partial\mathcal{D}}(\omega_{\ell})\), while the localised response \(\varphi_{\textsc{OD},\ell}^{*}\) of \(p^{*}\) exhibits a super-exponentially decaying localisation error with respect to \(\ell\), which is observed numerically and justified under a spectral geometric conjecture in [34, Sec. 7 f.], see Figure 2. If the patch \(\omega_{\ell}(z)\) intersects the boundary \(\partial\mathcal{D}\) for the node \(z\), then we consider a second minimisation problem by introducing an artificial trapping potential to find an optimal linear combination of \(\mathcal{A}|_{\omega_{\ell}}^{-1}p_{i}^{*}\), \(i=1,\ldots,n\), which localises around the current node \(z\). Here, the \(p_{i}^{*}\) denote the first \(n\) pairwise \(L^{2}\)-orthogonal minimisers. The linear combination is again denoted by \(p^{*}\).
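To make the size of the local problems explicit, one possible algebraic reformulation of (12) is the following: choosing a basis \(\{q_{j}\}_{j=1}^{N}\) of \(\mathbb{P}^{1}_{H,\partial\omega_{\ell}\setminus\partial\mathcal{D}}(\omega_{\ell})\) and writing \(p=\sum_{j}c_{j}q_{j}\), the constrained minimisation becomes the minimisation of the Rayleigh quotient \(c^{\mathsf{T}}Bc/c^{\mathsf{T}}Gc\) with

\[B_{ij}=\int_{\partial\omega_{\ell}\setminus\partial\mathcal{D}}\big(\nabla(\mathcal{A}|_{\omega_{\ell}}^{-1}q_{i})\cdot n\big)\big(\nabla(\mathcal{A}|_{\omega_{\ell}}^{-1}q_{j})\cdot n\big)\,\mathrm{d}S,\qquad G_{ij}=\int_{\omega_{\ell}}q_{i}q_{j}\,\mathrm{d}x,\]

so that the coefficient vector of \(p^{*}\) is an eigenvector associated with the smallest eigenvalue of the generalised eigenvalue problem \(Bc=\mu Gc\), whose dimension is \(N=\dim\mathbb{P}^{1}_{H,\partial\omega_{\ell}\setminus\partial\mathcal{D}}(\omega_{\ell})\).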
The basis of the SLOD space \(\mathcal{V}_{\textsc{OD},\ell}\) is then given by the functions \(\varphi_{\textsc{OD},\ell}=\mathcal{A}|_{\omega_{\ell}}^{-1}p^{*}\) associated with each node \(z\in\mathcal{N}_{H}\). To represent this basis we use the space \(\mathbb{P}^{3}_{h,\partial\mathcal{D}}\) with a spatial mesh width \(h\leq H\). There are two main motivations for this choice of representation of the SLOD basis functions. First, in 1d the canonical SLOD basis functions, i.e., those for \(\mathcal{A}=-\partial_{xx}\), coincide with cubic B-splines and are thus exactly represented by piecewise cubic polynomials on the same mesh. In higher dimensions it no longer holds that \(\mathbb{P}^{3}_{H}\)-functions exactly represent the SLOD basis functions. However, for sufficiently smooth problems there is still an indication that, although the representation is not exact, no order of accuracy is lost when approximating the canonical SLOD basis functions by \(\mathbb{P}^{3}_{H}\)-functions on the same mesh as the \(\mathbb{P}^{1}_{H}\)-functions. Namely, considering the NEVP Eq. (3), the eigenvalue is expected to converge, for sufficiently smooth problems, with \(\mathcal{O}(H^{6})\) in the \(\mathbb{P}^{3}_{H}\)-space (see [40] for a precise statement in arbitrarily high \(\mathbb{P}^{k}_{H}\)-spaces and [23] for \(k\leq 2\)). Therefore, the best approximation in the underlying \(\mathbb{P}^{3}_{H}\)-space allows for optimal convergence in the canonical OD space based on \(\mathbb{P}^{1}_{H}\)-functions. In cases of lower regularity, e.g., when the potential is included in the construction of the space, a refined mesh and its \(\mathbb{P}^{3}_{h}\)-space must be used to represent the SLOD basis functions.
## 4 A modified energy minimisation problem
We now turn to the analysis of a slightly modified energy minimisation problem in the spaces introduced in Section 3, with special attention to potentials of low regularity. The modification is introduced as a means to speed up the computations; the details of the speed-up are outlined in Appendix A.2. Given the ideal OD-space \(\mathcal{V}_{\textsc{OD}}\), we introduce the modified energy functional,
\[\tilde{E}(u) =\int\frac{1}{2}|\nabla u|^{2}+V|u|^{2}+\frac{\beta}{2}P_{\textsc {\tiny OD}}(|u|^{2})|u|^{2}\,\mathrm{d}x\] \[=\int\frac{1}{2}|\nabla u|^{2}+V|u|^{2}+\frac{\beta}{2}P_{\textsc {\tiny OD}}(|u|^{2})^{2}\,\mathrm{d}x, \tag{13}\]
where \(P_{\textsc{OD}}\colon L^{2}(\mathcal{D})\to\mathcal{V}_{\textsc{OD}}\) denotes the \(L^{2}\)-projection; the second equality in (13) holds because \(P_{\textsc{OD}}\) is \(L^{2}\)-orthogonal, so that \(\int P_{\textsc{OD}}(|u|^{2})\,|u|^{2}\,\mathrm{d}x=\int P_{\textsc{OD}}(|u|^{2})^{2}\,\mathrm{d}x\). With this, we consider minimising with respect to \(\tilde{E}\), but evaluating \(E\) at the minimiser of \(\tilde{E}\), i.e.,
\[E\bigg{(}\underset{v\in\mathbb{S}\cap\mathcal{V}_{\textsc{\tiny OD}}}{\mathrm{ argmin}}\ \tilde{E}(v)\bigg{)}\]
In this section, we prove optimal convergence rates of this energy, without rotation, under the weak assumption that \(V\in L^{2+\sigma}(\mathcal{D})\), where \(\sigma=0\) in 1d and 2d, but \(\sigma>0\) in 3d. By optimal we mean \(\mathcal{O}(H^{6})\) in terms of the minimum energy and \(\mathcal{O}(H^{3})\) as measured in the \(a\)-norm or the equivalent \(H^{1}\)-norm. In contrast, and rather remarkably, we show that in low-regularity regimes the modified energy converges only as \(\mathcal{O}(H^{4})\). For ease of presentation we introduce the following notation:
\[a(u,v)=\int\frac{1}{2}\nabla u\nabla v+Vuv\,\mathrm{d}x,\qquad\|u\|_{a}^{2}=a(u,u),\] \[\tilde{u}_{\textsc{OD}}^{0}=\underset{v\in\mathbb{S}\cap\mathcal{V}_{\textsc{OD}}}{\mathrm{argmin}}\ \tilde{E}(v),\qquad u_{\textsc{OD}}^{0}=\underset{v\in\mathbb{S}\cap\mathcal{V}_{\textsc{OD}}}{\mathrm{argmin}}\ E(v),\qquad u^{0}=\underset{v\in\mathbb{S}\cap H^{1}_{0}(\mathcal{D})}{\mathrm{argmin}}\ E(v).\]
The operator \(\mathcal{A}\colon H^{1}_{0}(\mathcal{D})\to H^{-1}(\mathcal{D})\) is defined with respect to the bilinear form \(a\). For notational brevity we shall prove the convergence in the OD space \(\mathcal{V}_{\textsc{\tiny OD}}=\mathcal{A}^{-1}\mathbb{P}^{1}_{H}\) using
the inner product \(a\). However, it is emphasised that it is sufficient to include only the low regularity part of the potential \(V\) in the construction of the OD space. That is, whenever the potential splits into two contributions \(V=V_{\mathrm{s}}+V_{\mathrm{d}}\), \(V_{\mathrm{s}}\) being at least in \(H^{2}(\mathcal{D})\), \(V_{\mathrm{d}}\) being in \(L^{2+\sigma}(\mathcal{D})\), optimal convergence rates are obtained in the OD space defined by the inner product \(a_{\mathrm{d}}(u,v)=\frac{1}{2}(\nabla u,\nabla v)+(V_{\mathrm{d}}u,v)\) and its associated operator \(\mathcal{A}_{\mathrm{d}}\), see Remark 4.12. This can also be of computational importance if \(V_{\mathrm{d}}\) is periodic but \(V_{\mathrm{s}}\) is not. Similarly, whenever \(V\) is smooth, the canonical OD space, i.e., \((-\triangle)^{-1}\mathbb{P}^{1}_{H}\), achieves an optimal order of convergence.
### Error estimates for OD spaces
The high-level idea behind our proof is based on the observation that the first variation of the energy at \(\tilde{u}^{0}_{\textsc{OD}}\), in the direction of \(u^{0}_{\textsc{OD}}-\tilde{u}^{0}_{\textsc{OD}}\), behaves like \(E^{\prime}(\tilde{u}^{0}_{\textsc{OD}})[u^{0}_{\textsc{OD}}-\tilde{u}^{0}_{\textsc{OD}}]\sim H^{3}\|u^{0}_{\textsc{OD}}-\tilde{u}^{0}_{\textsc{OD}}\|_{a}\), while at the minimiser we have \(|E^{\prime}(u^{0}_{\textsc{OD}})[u^{0}_{\textsc{OD}}-\tilde{u}^{0}_{\textsc{OD}}]|=\mathcal{O}(\|\tilde{u}^{0}_{\textsc{OD}}-u^{0}_{\textsc{OD}}\|_{a}^{2})\). A clever use of the properties of the second variation then allows us to conclude that \(\|u^{0}_{\textsc{OD}}-\tilde{u}^{0}_{\textsc{OD}}\|_{a}\lesssim H^{3}\) and, subsequently, that \(E(\tilde{u}^{0}_{\textsc{OD}})=E(u^{0}_{\textsc{OD}})+\mathcal{O}(\|u^{0}_{\textsc{OD}}-\tilde{u}^{0}_{\textsc{OD}}\|_{a}^{2})\).
We begin by recalling some known results about the minimiser of \(E\), \(u^{0}_{\textsc{OD}}\).
**Theorem 4.1** (Error estimates for \(u^{0}_{\textsc{OD}}\)).: _Let \(u^{0}_{\textsc{OD}}\in\mathcal{V}_{\textsc{OD}}\) be an energy minimiser of \(E\) with \((u^{0}_{\textsc{OD}},u^{0})\geq 0\). If \(H\) is small enough, then_
\[\|u^{0}_{\textsc{OD}}-u^{0}\|_{H^{1}(\mathcal{D})}\lesssim H^{3},\qquad|E(u^{ 0}_{\textsc{OD}})-E(u^{0})|\lesssim H^{6},\qquad|\lambda^{0}_{\textsc{OD}}- \lambda^{0}|\lesssim H^{3},\]
_where the eigenvalue \(\lambda^{0}_{\textsc{OD}}\) is given by \(2E(u^{0}_{\textsc{OD}})+\frac{\beta}{2}\|u^{0}_{\textsc{OD}}\|_{L^{4}(\mathcal{ D})}^{4}\). The constants only depend on the data and the mesh regularity of the triangulation._
Proof.: The estimates of the \(H^{1}\) and energy errors are proved in [36, Sec. 4]. Consequently, we have by [23, Th. 1], that
\[|\lambda^{0}_{\textsc{OD}}-\lambda^{0}|\lesssim\|u^{0}_{\textsc{OD}}-u^{0}\| _{H^{1}}^{2}+\|u^{0}_{\textsc{OD}}-u^{0}\|_{L^{2}}\leq\|u^{0}_{\textsc{OD}}-u^ {0}\|_{H^{1}}^{2}+\|u^{0}_{\textsc{OD}}-u^{0}\|_{H^{1}}\lesssim H^{3}.\qed\]
**Remark 4.2**.: _For \(V\in L^{\infty}(\mathcal{D})\), the estimate of the error of the eigenvalues can be improved to sixth order [36, Prop. 4.6]._
The next step is to investigate the smoothness of functions in \(\mathcal{V}_{\textsc{OD}}\) in general, and of \(u^{0}_{\textsc{OD}}\) in particular. For this, we need that the \(a\)-projection to \(\mathcal{V}_{\textsc{OD}}\) is \(L^{2}(\mathcal{D})\)-stable.
**Lemma 4.3** (\(H^{2}(\mathcal{D})\)-regularity of \(\mathcal{V}_{\textsc{OD}}\)).: _If \(V\in L^{2+\sigma}(\mathcal{D})\) is satisfied with \(\sigma=0\) for \(d=1,2\) and \(\sigma>0\) for \(d=3\), then \(\mathcal{V}_{\textsc{OD}}\subset H^{1}_{0}(\mathcal{D})\cap H^{2}(\mathcal{D})\) holds._
Proof.: Let \(v_{\textsc{OD}}\in\mathcal{V}_{\textsc{OD}}\) be arbitrary. By definition, there exists a \(p\in\mathbb{P}^{1}_{H}\subset H^{1}(\mathcal{D})\) such that \(-\frac{1}{2}\triangle v_{\textsc{OD}}+Vv_{\textsc{OD}}=p\) in the weak sense in \(H^{1}_{0}(\mathcal{D})\). In particular, \(v_{\textsc{OD}}\in H^{1}_{0}(\mathcal{D})\hookrightarrow L^{6}(\mathcal{D})\) holds and, hence, \(-\frac{1}{2}\triangle v_{\textsc{OD}}=p-Vv_{\textsc{OD}}\eqqcolon g\in L^{r}(\mathcal{D})\) with \(r=3/2+9\sigma/(16+2\sigma)\) from Hölder's inequality. Notably, \(r>d/2\) holds, which by [48, Sec. 7, Th. 1.5] implies that the solution \(v_{\textsc{OD}}\) is bounded almost everywhere in \(\mathcal{D}\). Therefore, \(Vv_{\textsc{OD}}\) and, hence, the right-hand side \(g\) are \(L^{2}(\mathcal{D})\)-functions. This implies \(v_{\textsc{OD}}\in H^{2}(\mathcal{D})\) by standard smoothness results, see, e.g., [33, Th. 9.1.22].
**Remark 4.4**.: _With the cited references in the proof of Lemma 4.3, we have_
\[\|v_{\textsc{OD}}\|_{H^{2}}\lesssim\|\tfrac{1}{2}\triangle v_{\textsc{OD}}\| \leq\|p\|+\|V\|\,\|v_{\textsc{OD}}\|_{L^{\infty}}\lesssim\|p\|+\|V\|(\|p-Vv_{ \textsc{OD}}\|_{L^{r}}+\|v_{\textsc{OD}}\|_{H^{1}})\lesssim\|p\|\]
_with the notation of the above proof._
**Lemma 4.5** (\(L^{2}(\mathcal{D})\)-stability of \(A_{\mbox{\tiny{\sc OD}}}\)).: _The \(a\)-projection \(A_{\mbox{\tiny{\sc OD}}}\) from \(H^{1}_{0}(\mathcal{D})\) to \(\mathcal{V}_{\mbox{\tiny{\sc OD}}}\) is \(L^{2}\)-stable, i.e., \(\|A_{\mbox{\tiny{\sc OD}}}v\|_{L^{2}(\mathcal{D})}\lesssim\|v\|_{L^{2}( \mathcal{D})}\) for every \(v\in H^{1}_{0}(\mathcal{D})\), independent of \(H\)._
Proof.: Let \(P_{H}\) be the \(L^{2}\)-projection onto \(\mathbb{P}^{1}_{H}\). Furthermore, we define the corrector \(C\colon H^{1}_{0}(\mathcal{D})\ni u\to w\in\mathcal{W}\coloneqq\ker P_{H} \cap H^{1}_{0}(\mathcal{D})\) via the (partial) solution \((w,\mu)\in H^{1}_{0}(\mathcal{D})\times\mathbb{P}^{1}_{H}\) of the saddle-point problem
\[a(w,\hat{v})+\int_{\mathcal{D}}\nu P_{H}w-\mu P_{H}\hat{v}\,\mathrm{d}x=a(u,\hat{v})\]
for every \(\hat{v}\in H^{1}_{0}(\mathcal{D})\) and \(\nu\in\mathbb{P}^{1}_{H}\). Then it is well-known that \(\mathcal{V}_{\mbox{\tiny{\sc OD}}}=(\operatorname{id}-C)H^{1}_{0}(\mathcal{D})\) holds and \(C\) is a projection onto \(\mathcal{W}\), as well as that \(\mathcal{V}_{\mbox{\tiny{\sc OD}}}\) and \(\mathcal{W}\) are \(a\)-orthogonal complements in \(H^{1}_{0}(\mathcal{D})\), see [6, Sec. 3.2] and (9). In particular, this implies
\[A_{\mbox{\tiny{\sc OD}}}v=(\operatorname{id}-C)v=(\operatorname{id}-C)P_{H}v+ (\operatorname{id}-C)(\operatorname{id}-P_{H})v=(\operatorname{id}-C)P_{H}v\]
for every \(v\in H^{1}_{0}(\mathcal{D})\). Here, in the last equality, we used that \((\operatorname{id}-P_{H})v\) is an element of \(\mathcal{W}\). Finally, we have
\[\|A_{\mbox{\tiny{\sc OD}}}v\|=\|(\operatorname{id}-C)P_{H}v\| \leq\|P_{H}v\|+\|(\operatorname{id}-P_{H})CP_{H}v\|\] \[\lesssim\|v\|+H\|CP_{H}v\|_{H^{1}}\] \[\lesssim\|v\|+H\|P_{H}v\|_{H^{1}}\lesssim\|v\|\]
by the inverse estimate, see, e.g., [21, Ch. II.6.8].
**Lemma 4.6** (\(H^{2}(\mathcal{D})\)-boundedness of \(u^{0}_{\mbox{\tiny{\sc OD}}}\)).: _The \(H^{2}(\mathcal{D})\)-norm of an energy minimiser \(u^{0}_{\mbox{\tiny{\sc OD}}}\) in the OD-space \(\mathcal{V}_{\mbox{\tiny{\sc OD}}}\) is bounded independently of \(H\)._
Proof.: By the density of \(H^{1}_{0}(\mathcal{D})\) in \(L^{2}(\mathcal{D})\), \(H^{1}_{0}(\mathcal{D})\hookrightarrow L^{6}(\mathcal{D})\), Lemma 4.5, and the definition of the eigenvalue \(\lambda^{0}_{\textsc{OD}}\), we have the estimate
\[\|-\triangle u^{0}_{\mbox{\tiny{\sc OD}}}+Vu^{0}_{\mbox{\tiny{ \sc OD}}}\| =\sup_{v\in H^{1}_{0}(\mathcal{D})\setminus\{0\}}\frac{1}{\|v\|}a(u^{0 }_{\mbox{\tiny{\sc OD}}},v)\] \[=\sup_{v\in H^{1}_{0}(\mathcal{D})\setminus\{0\}}\frac{1}{\|v\|}a (u^{0}_{\mbox{\tiny{\sc OD}}},A_{\mbox{\tiny{\sc OD}}}v)\] \[\leq\sup_{v\in H^{1}_{0}(\mathcal{D})\setminus\{0\}}\frac{1}{\|v \|}\|\lambda^{0}_{\mbox{\tiny{\sc OD}}}u^{0}_{\mbox{\tiny{\sc OD}}}-2\beta|u^{ 0}_{\mbox{\tiny{\sc OD}}}|^{2}u^{0}_{\mbox{\tiny{\sc OD}}}\|\,\|A_{\mbox{\tiny{ \sc OD}}}v\|\] \[\lesssim|\lambda^{0}_{\mbox{\tiny{\sc OD}}}|+2\beta\|u^{0}_{\mbox{ \tiny{\sc OD}}}\|^{3}_{H^{1}}\] \[\leq 5E(u^{0}_{\mbox{\tiny{\sc OD}}})+4\sqrt{2}\beta E^{3/2}(u^{0}_{ \mbox{\tiny{\sc OD}}})\]
with a constant only depending on \(\mathcal{D}\) and the shape regularity of the triangulation. Since \(u^{0}_{\textsc{OD}}\) is the minimiser of \(E\) for \(L^{2}\)-normalised functions in \(\mathcal{V}_{\textsc{OD}}\), \(E(u^{0}_{\textsc{OD}})\) on the right-hand side can be replaced by \(E(v_{\textsc{OD}})\) for any \(v_{\textsc{OD}}\in\mathcal{V}_{\textsc{OD}}\) with \(\|v_{\textsc{OD}}\|=1\). Hence, \(-\triangle u^{0}_{\textsc{OD}}+Vu^{0}_{\textsc{OD}}\) is bounded in \(L^{2}\). The \(H^{2}\)-bound then follows along the lines of Lemma 4.3 and Remark 4.4.
Naturally, we will need to estimate the effect of replacing \(|u_{\mbox{\tiny{\sc OD}}}|^{2}\) with \(P_{\mbox{\tiny{\sc OD}}}(|u_{\mbox{\tiny{\sc OD}}}|^{2})\).
**Lemma 4.7**.: _Let \(V\in L^{2}(\mathcal{D})\), then the estimates_
\[\|\,|u|^{2}-P_{\mbox{\tiny{\sc OD}}}(|u|^{2})\|_{L^{2}(\mathcal{D})} \lesssim H^{2}\|u\|^{2}_{H^{2}(\mathcal{D})}, \tag{14a}\] \[\|u\eta-P_{\mbox{\tiny{\sc OD}}}(u\eta)\|_{L^{2}(\mathcal{D})} \lesssim H\|u\|_{H^{2}(\mathcal{D})}\|\eta\|_{a} \tag{14b}\]
_hold for every \(u\in H^{1}_{0}(\mathcal{D})\cap H^{2}(\mathcal{D})\) and \(\eta\in H^{1}_{0}(\mathcal{D})\) with constants only depending on the dimension \(d\), the domain \(\mathcal{D}\), the shape regularity of the triangulation, and the potential \(V\)._
Proof.: Let us first estimate (14a) and then prove (14b).
_Estimate_ (14a): To ease notation, we introduce the density \(\rho=|u|^{2}\). To estimate how well \(P_{\textsc{od}}(\rho)\) approximates \(\rho\), consider first the \(a\)-projection of \(\rho\) onto \(\mathcal{V}_{\textsc{od}}\), i.e., \(A_{\textsc{od}}\rho\). We shall now use the properties of the OD-space \(\mathcal{V}_{\textsc{od}}\) to demonstrate the bound
\[\|\rho-A_{\textsc{od}}\rho\|_{L^{2}}\lesssim H^{2}\|f\|_{L^{2}}, \tag{15}\]
where \(f\) denotes \(-\frac{1}{2}\triangle\rho+V\rho\). Using the coercivity of \(a(\cdot,\cdot)\) and the fact that \(\rho-A_{\textsc{od}}\rho\) is in \(\ker(P_{H})\), we find:
\[\|\rho-A_{\textsc{od}}\rho\|_{H^{1}}^{2} \lesssim a(\rho-A_{\textsc{od}}\rho,\rho-A_{\textsc{od}}\rho)= \langle f,\rho-A_{\textsc{od}}\rho\rangle\] \[=\langle f,\rho-A_{\textsc{od}}\rho-P_{H}(\rho-A_{\textsc{od}} \rho)\rangle\] \[\lesssim\|f\|\;H\|\rho-A_{\textsc{od}}\rho\|_{H^{1}},\]
wherefore \(\|\rho-A_{\textsc{od}}(\rho)\|_{H^{1}}\lesssim H\|f\|\). For the \(L^{2}\)-estimate we notice,
\[\|\rho-A_{\textsc{od}}\rho\|=\|\rho-A_{\textsc{od}}\rho-P_{H}(\rho-A_{\textsc{ od}}\rho)\|\lesssim H\|\rho-A_{\textsc{od}}\rho\|_{H^{1}},\]
wherefore \(\|\rho-A_{\textsc{od}}\rho\|\lesssim H^{2}\|f\|\). As for the regularity of the right-hand side, it is left to show that
\[f=-\frac{1}{2}\triangle\rho+V\rho=-(u\triangle u+(\nabla u)^{2})+Vu^{2}\]
is an element of \(L^{2}(\mathcal{D})\). Using the Sobolev embeddings \(H^{1}(\mathcal{D})\hookrightarrow L^{6}(\mathcal{D})\) and \(H^{2}(\mathcal{D})\hookrightarrow L^{\infty}(\mathcal{D})\) for \(d\leq 3\), we have \(\|\nabla u\|_{L^{4}}+\|u\|_{L^{\infty}}\lesssim\|u\|_{H^{2}}\). This yields the bound
\[\|f\|\leq\|u\|_{L^{\infty}}\|u\|_{H^{2}}+\|\nabla u\|_{L^{4}}^{2}+\|V\|\,\|u\|_{L^{\infty}}^{2}\lesssim(1+\|V\|)\|u\|_{H^{2}}^{2}.\]
In sum, this implies the stated estimate \(\|\rho-P_{\textsc{od}}(\rho)\|\leq\|\rho-A_{\textsc{od}}(\rho)\|\lesssim H^{2 }\|u\|_{H^{2}}^{2}\).
_Estimate_ (14b): For the error \(\|P_{\textsc{od}}(u\eta)-u\eta\|\), we observe
\[\|P_{\textsc{OD}}(u\eta)-u\eta\|^{2}\] \[\leq\|A_{\textsc{OD}}(u\eta)-u\eta\|^{2}\] \[\leq\|A_{\textsc{OD}}(u\eta)-u\eta-P_{H}(A_{\textsc{OD}}(u\eta)-u\eta)\|^{2}\] \[\lesssim H^{2}\|A_{\textsc{OD}}(u\eta)-u\eta\|_{H^{1}}^{2}\leq H^{2}\|A_{\textsc{OD}}(u\eta)-u\eta\|_{a}^{2}\leq H^{2}\|u\eta\|_{a}^{2}\] \[=H^{2}\big(\langle\eta^{2},\nabla u\cdot\nabla u\rangle+2\langle\eta u\nabla u,\nabla\eta\rangle+\langle u^{2}\nabla\eta,\nabla\eta\rangle+\langle u^{2}V\eta,\eta\rangle\big)\] \[\leq H^{2}\big(\|\eta\|_{L^{4}}^{2}\|\nabla u\|_{L^{4}}^{2}+2\|u\|_{L^{\infty}}\|\nabla u\|_{L^{4}}\|\eta\|_{L^{4}}\|\eta\|_{H^{1}}+\|u\|_{L^{\infty}}^{2}\|\eta\|_{H^{1}}^{2}+\|u\|_{L^{\infty}}^{2}\|V\|\,\|\eta\|_{L^{4}}^{2}\big)\] \[\lesssim H^{2}\|u\|_{H^{2}}^{2}\|\eta\|_{H^{1}}^{2}\leq H^{2}\|u\|_{H^{2}}^{2}\|\eta\|_{a}^{2}. \tag{16}\]
Here, we used Sobolev embeddings similar to the ones in the proof of (14a).
We note that, a priori, Lemma 4.6 does not translate to an \(H^{2}(\mathcal{D})\)-bound of \(\tilde{u}^{0}_{\textsc{OD}}\) since \(\|P_{\textsc{OD}}(|\tilde{u}^{0}_{\textsc{OD}}|^{2})\tilde{u}^{0}_{\textsc{OD}}\|\) need not be bounded by \(\|\tilde{u}^{0}_{\textsc{OD}}\|_{L^{6}}^{3}\). It is, however, possible to obtain a bound by complementing the argument of Lemma 4.6.
**Lemma 4.8** (\(H^{2}(\mathcal{D})\)-boundedness of \(\tilde{u}^{0}_{\textsc{od}}\)).: _Let \(\tilde{u}^{0}_{\textsc{od}}\) be a minimiser of \(\tilde{E}\). Then \(\|\tilde{u}^{0}_{\textsc{od}}\|_{H^{2}(\mathcal{D})}\leq C\) holds with a constant \(C\) independent of \(H\)._
Proof.: For the sake of readability, we refer to Appendix B for a proof.
Before proving the optimal order of convergence as measured in energy, eigenvalue and \(a\)-norm, some sub-optimal estimates are needed.
**Lemma 4.9** (Crude convergence estimate I).: _Let \(u^{0}_{{\textsc{OD}}},\tilde{u}^{0}_{{\textsc{OD}}}\in\mathcal{V}_{{\textsc{OD}}}\) be energy minimisers of \(E\) and \(\tilde{E}\), respectively, with \((u^{0}_{{\textsc{OD}}},u^{0})\geq 0\) and \((\tilde{u}^{0}_{{\textsc{OD}}},u^{0})\geq 0\). If \(H\) is small enough, then_
\[\|u^{0}_{{\textsc{OD}}}-\tilde{u}^{0}_{{\textsc{OD}}}\|_{a}\lesssim H^{2}, \tag{17}\]
_where the constant only depends on the data and the mesh regularity of the triangulation._
Proof.: By Theorem 4.1 and the triangle inequality, it is enough to prove \(\|\tilde{u}^{0}_{{\textsc{OD}}}-u^{0}\|_{H^{1}}\lesssim H^{2}\). For this, we use
\[\|\tilde{u}^{0}_{{\textsc{OD}}}-u^{0}\|^{2}_{H^{1}}\lesssim E(\tilde{u}^{0}_{ {\textsc{OD}}})-E(u^{0}), \tag{18}\]
which follows the lines of [23, p. 96 f.].
To estimate the difference of the energies, we notice that for every \(v\in H^{2}\cap H^{1}_{0}\), we have
\[\tilde{E}(v)=E(v)-\frac{\beta}{2}\big\|\,|v|^{2}-P_{\textsc{OD}}(|v|^{2})\big\|^{2}. \tag{19}\]
On one hand, this implies \(\tilde{E}(v)\leq E(v)\) and, hence, the order
\[\tilde{E}(\tilde{u}^{0}_{{\textsc{OD}}})\leq\tilde{E}(u^{0}_{{\textsc{OD}}}) \leq E(u^{0}_{{\textsc{OD}}})\leq E(\tilde{u}^{0}_{{\textsc{OD}}}). \tag{20}\]
With estimate (14a) and Lemma 4.8, on the other hand, the bound
\[E(\tilde{u}^{0}_{\textsc{OD}})-E(u^{0}_{\textsc{OD}})\leq E(\tilde{u}^{0}_{\textsc{OD}})-\tilde{E}(\tilde{u}^{0}_{\textsc{OD}})=\frac{\beta}{2}\big\|\,|\tilde{u}^{0}_{\textsc{OD}}|^{2}-P_{\textsc{OD}}(|\tilde{u}^{0}_{\textsc{OD}}|^{2})\big\|^{2}\lesssim H^{4}\|\tilde{u}^{0}_{\textsc{OD}}\|^{4}_{H^{2}}\lesssim H^{4} \tag{21}\]
holds. In summary, estimates (18), (21), and Theorem 4.1 prove
\[\|\tilde{u}^{0}_{{\textsc{OD}}}-u^{0}\|^{2}_{H^{1}}\lesssim E(\tilde{u}^{0}_{ {\textsc{OD}}})-E(u^{0}_{{\textsc{OD}}})+E(u^{0}_{{\textsc{OD}}})-E(u^{0}) \lesssim H^{4}+H^{6}.\qed\]
**Lemma 4.10** (Crude convergence estimate II).: _Let \(\tilde{\lambda}^{0}_{{\textsc{OD}}}\coloneqq 2\tilde{E}(\tilde{u}^{0}_{{ \textsc{OD}}})+\frac{\beta}{2}\|P_{{\textsc{OD}}}(|\tilde{u}^{0}_{{\textsc{OD }}}|^{2})\|^{2}\) with \(\tilde{u}^{0}_{{\textsc{OD}}}\) from Lemma 4.9 and \(\lambda^{0}_{{\textsc{OD}}}\) be defined as in Theorem 4.1, then the estimate_
\[|\lambda^{0}_{{\textsc{OD}}}-\tilde{\lambda}^{0}_{{\textsc{OD}}}|\lesssim H^{2}\]
_holds if \(H\) is small enough. The constant only depends on the data and the mesh regularity of the triangulation._
Proof.: By the definition of the eigenvalues, the difference is given by
\[|\lambda^{0}_{{\textsc{OD}}}-\tilde{\lambda}^{0}_{{\textsc{OD}}}|\leq 2|E(u^{0}_{ {\textsc{OD}}})-\tilde{E}(\tilde{u}^{0}_{{\textsc{OD}}})|+\frac{\beta}{2} \big{|}\,\int_{\mathcal{D}}|u^{0}_{{\textsc{OD}}}|^{4}-P_{{\textsc{OD}}}(| \tilde{u}^{0}_{{\textsc{OD}}}|^{2})|\tilde{u}^{0}_{{\textsc{OD}}}|^{2}\, \mathrm{d}x\,\big{|}.\]
The difference of the energies can be estimated similarly to (21) by \(CH^{4}\). For the second difference, we observe that
\[\big|\int_{\mathcal{D}}|u^{0}_{\textsc{OD}}|^{4}-P_{\textsc{OD}}(|\tilde{u}^{0}_{\textsc{OD}}|^{2})|\tilde{u}^{0}_{\textsc{OD}}|^{2}\,\mathrm{d}x\,\big|\] \[=\big|\int_{\mathcal{D}}(|u^{0}_{\textsc{OD}}|^{2}-|\tilde{u}^{0}_{\textsc{OD}}|^{2})|u^{0}_{\textsc{OD}}|^{2}+(|u^{0}_{\textsc{OD}}|^{2}-P_{\textsc{OD}}|u^{0}_{\textsc{OD}}|^{2})|\tilde{u}^{0}_{\textsc{OD}}|^{2}+P_{\textsc{OD}}(|u^{0}_{\textsc{OD}}|^{2}-|\tilde{u}^{0}_{\textsc{OD}}|^{2})|\tilde{u}^{0}_{\textsc{OD}}|^{2}\,\mathrm{d}x\,\big|\] \[\leq\big\|\,|u^{0}_{\textsc{OD}}|^{2}-|\tilde{u}^{0}_{\textsc{OD}}|^{2}\,\big\|\,\big(\|u^{0}_{\textsc{OD}}\|^{2}_{L^{4}}+\|\tilde{u}^{0}_{\textsc{OD}}\|^{2}_{L^{4}}\big)+\big\|\,|u^{0}_{\textsc{OD}}|^{2}-P_{\textsc{OD}}(|u^{0}_{\textsc{OD}}|^{2})\big\|\,\|\tilde{u}^{0}_{\textsc{OD}}\|^{2}_{L^{4}}\] \[\leq\|u^{0}_{\textsc{OD}}-\tilde{u}^{0}_{\textsc{OD}}\|_{L^{4}}\big(\|u^{0}_{\textsc{OD}}\|^{2}_{L^{4}}+\|\tilde{u}^{0}_{\textsc{OD}}\|^{2}_{L^{4}}\big)^{2}+\big\|\,|u^{0}_{\textsc{OD}}|^{2}-P_{\textsc{OD}}(|u^{0}_{\textsc{OD}}|^{2})\big\|\,\|\tilde{u}^{0}_{\textsc{OD}}\|^{2}_{L^{4}}\] \[\lesssim H^{2}+H^{4}\]
holds by \(H^{1}(\mathcal{D})\hookrightarrow L^{4}(\mathcal{D})\) and Lemmas 4.7 and 4.9. This finishes the proof.
We are now ready to prove the optimal rate of convergence of \(\tilde{u}^{0}_{\textsc{{OD}}}\); cf. Theorem 4.1.
**Theorem 4.11** (Optimal convergence rate for \(\tilde{u}^{0}_{\textsc{{OD}}}\)).: _Given the setting of Lemmas 4.9 and 4.10 and let \(H\) be small enough. Then the energy, the \(H^{1}\)-norm, and the eigenvalue of the modified problem converge with optimal order, namely,_
\[\|\tilde{u}^{0}_{\textsc{{OD}}}-u^{0}\|_{H^{1}}\lesssim H^{3},\qquad|E(\tilde{ u}^{0}_{\textsc{{OD}}})-E(u^{0})|\lesssim H^{6},\qquad|\tilde{\lambda}^{0}_{ \textsc{{OD}}}-\lambda^{0}|\lesssim H^{3}.\]
Proof.: The main idea behind the proof is to show that
\[\eta=\tilde{u}^{0}_{\textsc{{OD}}}-u^{0}_{\textsc{{OD}}}\in\mathcal{V}_{ \textsc{{OD}}}\]
is of \(\mathcal{O}(H^{3})\) with respect to the \(H^{1}\)-norm (or in the equivalent norm induced by \(a\)). We note that, by the convergence of \(u^{0}_{\textsc{{OD}}}\) and \(\tilde{u}^{0}_{\textsc{{OD}}}\) to the \(L^{2}\)-normalised \(u^{0}\), cf. Theorem 4.1 and Lemma 4.9, \((\tilde{u}^{0}_{\textsc{{OD}}},u^{0}_{\textsc{{OD}}})\geq 0\) holds for \(H\) small enough.
For the proof, we consider the variation of \(E\) and \(\tilde{E}\) at \(u^{0}_{\textsc{{OD}}}\) and \(\tilde{u}^{0}_{\textsc{{OD}}}\), in the direction of \(\eta\). Observe that, since both are normalised in \(L^{2}\), we have that
\[\|\eta\|^{2}=\|\tilde{u}^{0}_{\textsc{{OD}}}\|^{2}-2(\tilde{u}^{0}_{\textsc{{ OD}}},u^{0}_{\textsc{{OD}}})+\|u^{0}_{\textsc{{OD}}}\|^{2}=2\big{[}1-(\tilde{u}^{0}_{ \textsc{{OD}}},u^{0}_{\textsc{{OD}}})\big{]}.\]
This allows us to write the first variations as
\[\tilde{E}^{\prime}(\tilde{u}^{0}_{\textsc{{OD}}})[\eta] =\tilde{\lambda}^{0}_{\textsc{{OD}}}(\tilde{u}^{0}_{\textsc{{OD}} },\eta)=\tilde{\lambda}^{0}_{\textsc{{OD}}}(1-(\tilde{u}^{0}_{\textsc{{OD}}},u^{0}_{\textsc{{OD}}}))=\quad\tfrac{1}{2}\tilde{\lambda}^{0}_{\textsc{{OD}} }\|\eta\|^{2}, \tag{22a}\] \[E^{\prime}(u^{0}_{\textsc{{OD}}})[\eta] =\lambda^{0}_{\textsc{{OD}}}(u^{0}_{\textsc{{OD}}},\eta)=\lambda ^{0}_{\textsc{{OD}}}((\tilde{u}^{0}_{\textsc{{OD}}},u^{0}_{\textsc{{OD}}})-1) =-\tfrac{1}{2}\lambda^{0}_{\textsc{{OD}}}\|\eta\|^{2}. \tag{22b}\]
On the other hand, we have by a simple expansion
\[E^{\prime}(\tilde{u}^{0}_{\textsc{{OD}}})[\eta]=E^{\prime}(u^{0}_{\textsc{{ OD}}})[\eta]+E^{\prime\prime}(u^{0}_{\textsc{{OD}}})[\eta,\eta]+R(u^{0}_{ \textsc{{OD}}},\eta)\]
with the remainder
\[R(u^{0}_{\textsc{{OD}}},\eta)=2\beta\int_{\mathcal{D}}3u^{0}_{\textsc{{OD}}} \eta^{3}+\eta^{4}\,\mathrm{d}x.\]
By a rearrangement, we have
\[\underbrace{\langle(E^{\prime\prime}(u^{0}_{\textsc{OD}})-\lambda^{0}_{\textsc{OD}})\eta,\eta\rangle}_{\eqqcolon D_{1}}=E^{\prime\prime}(u^{0}_{\textsc{OD}})[\eta,\eta]+2E^{\prime}(u^{0}_{\textsc{OD}})[\eta]\] \[=E^{\prime}(\tilde{u}^{0}_{\textsc{OD}})[\eta]+E^{\prime}(u^{0}_{\textsc{OD}})[\eta]-R(u^{0}_{\textsc{OD}},\eta)\] \[=\underbrace{E^{\prime}(\tilde{u}^{0}_{\textsc{OD}})[\eta]-\tilde{E}^{\prime}(\tilde{u}^{0}_{\textsc{OD}})[\eta]}_{\eqqcolon D_{2}}+\underbrace{\tilde{E}^{\prime}(\tilde{u}^{0}_{\textsc{OD}})[\eta]+E^{\prime}(u^{0}_{\textsc{OD}})[\eta]}_{\eqqcolon D_{3}}-\underbrace{R(u^{0}_{\textsc{OD}},\eta)}_{\eqqcolon D_{4}}.\]
In the following, we estimate the terms \(D_{i}\), \(i=1,\ldots,4\).
For the left-hand side, i.e., term \(D_{1}\), we note that \(\gamma\|\eta\|_{H^{1}}^{2}\leq\langle(E^{\prime\prime}(u^{0})-\lambda^{0})\eta,\eta\rangle\) holds for the continuous minimiser \(u^{0}\) and a constant \(\gamma>0\); see [23, Lem. 1]. On the other hand, the estimate
\[\big{|}\langle(E^{\prime\prime}(u^{0})-\lambda^{0})\eta,\eta\rangle -\langle(E^{\prime\prime}(u^{0}_{\textsc{{OD}}})-\lambda^{0}_{\textsc{{OD}}}) \eta,\eta\rangle\big{|}\] \[\leq 6\beta\int_{\mathcal{D}}\big{|}\,|u^{0}|^{2}-|u^{0}_{\textsc{{OD }}}|^{2}\,|\eta^{2}\,\mathrm{d}x+|\lambda^{0}-\lambda^{0}_{\textsc{{OD}}}|\| \eta\|^{2}\] \[\leq 6\beta\|u^{0}_{\textsc{{OD}}}+u^{0}\|_{L^{6}}\|u^{0}_{\textsc{ {OD}}}-u^{0}\|_{L^{2}}\|\eta\|_{L^{6}}^{2}+|\lambda^{0}-\lambda^{0}_{\textsc{{ OD}}}|\|\eta\|^{2}\] \[\lesssim(1+H^{3})H^{3}\|\eta\|_{H^{1}}^{2}\]
holds by Theorem 4.1. Therefore, for \(H\) small enough, the left-hand side is bounded by
\[D_{1}=\langle(E^{\prime\prime}(u^{0}_{\textsc{\tiny OD}})-\lambda^{0}_{\textsc{ \tiny OD}})\eta,\eta\rangle\geq\tfrac{2}{3}\gamma\|\eta\|_{H^{1}}^{2}.\]
The second term \(D_{2}\) is estimated using Lemma 4.7, i.e.,
\[|D_{2}|=|E^{\prime}(\tilde{u}^{0}_{\textsc{OD}})[\eta]-\tilde{E}^{\prime}(\tilde{u}^{0}_{\textsc{OD}})[\eta]|=2\beta\Big|\int_{\mathcal{D}}(P_{\textsc{OD}}(|\tilde{u}^{0}_{\textsc{OD}}|^{2})-|\tilde{u}^{0}_{\textsc{OD}}|^{2})(P_{\textsc{OD}}(\tilde{u}^{0}_{\textsc{OD}}\eta)-\tilde{u}^{0}_{\textsc{OD}}\eta)\,\mathrm{d}x\Big|\] \[\leq 2\beta\|P_{\textsc{OD}}(|\tilde{u}^{0}_{\textsc{OD}}|^{2})-|\tilde{u}^{0}_{\textsc{OD}}|^{2}\|\,\|P_{\textsc{OD}}(\tilde{u}^{0}_{\textsc{OD}}\eta)-\tilde{u}^{0}_{\textsc{OD}}\eta\|\] \[\lesssim H^{3}\|\tilde{u}^{0}_{\textsc{OD}}\|_{H^{2}}^{3}\|\eta\|_{H^{1}}.\]
By Lemma 4.10 and Eq. (22), the third term is bounded by
\[|D_{3}|=|\tilde{E}^{\prime}(\tilde{u}^{0}_{\textsc{OD}})[\eta]+E^{\prime}(u^{0}_{\textsc{OD}})[\eta]|=\tfrac{1}{2}|\lambda^{0}_{\textsc{OD}}-\tilde{\lambda}^{0}_{\textsc{OD}}|\,\|\eta\|^{2}\leq CH^{2}\|\eta\|^{2}\leq CH^{2}\|\eta\|^{2}_{H^{1}}\leq\tfrac{1}{3}\gamma\|\eta\|^{2}_{H^{1}}\]
for \(H\) small enough. The last term can be estimated, using Lemma 4.9, by
\[|D_{4}|=|R(u^{0}_{\textsc{\tiny OD}},\eta)|\lesssim\|u^{0}_{\textsc{\tiny OD} }\|\,\|\eta\|^{3}_{L^{6}}+\|\eta\|^{4}_{L^{4}}\lesssim(H^{4}+H^{6})\|\eta\|_{ H^{1}}.\]
Putting the estimates of \(D_{i}\), \(i=1,\dots,4\), together and dividing by \(\|\eta\|_{H^{1}}\), we conclude for small enough \(H\), that
\[\tfrac{\gamma}{3}\|\eta\|_{H^{1}}\leq\frac{D_{1}-|D_{3}|}{\|\eta\|_{H^{1}}}\leq\frac{|D_{2}|+|D_{4}|}{\|\eta\|_{H^{1}}}\lesssim H^{3}+H^{4}+H^{6}.\]
The assertion on the energy then follows by Theorem 4.1, the triangle inequality, and
\[|E(\tilde{u}^{0}_{\textsc{\tiny OD}})-E(u^{0}_{\textsc{\tiny OD }})| \leq|E^{\prime}(u^{0}_{\textsc{\tiny OD}})[\eta]|+\tfrac{1}{2}|E^{ \prime\prime}(u^{0}_{\textsc{\tiny OD}})[\eta,\eta]|+\frac{\beta}{2}\int_{ \mathcal{D}}4|u^{0}_{\textsc{\tiny OD}}\eta^{3}|+\eta^{4}\,\mathrm{d}x\] \[\lesssim\lambda_{\textsc{\tiny OD}}\|\eta\|^{2}+\|\eta\|^{2}_{a}+ \|\eta\|^{2}_{H^{1}_{0}}\sum_{k=0}^{2}\|u^{0}_{\textsc{\tiny OD}}\|^{2-k}_{H^ {1}_{0}}\|\eta\|^{k}_{H^{1}_{0}}=\mathcal{O}(H^{6}).\]
Finally, with the new estimate of \(\|\eta\|_{H^{1}}\), Theorem 4.1 and the steps in Lemma 4.10 imply the third order convergence of \(|\tilde{\lambda}^{0}_{\textsc{\tiny OD}}-\lambda^{0}|\).
**Remark 4.12**.: _The statement of Theorem 4.11 is still valid if we split \(V\) into \(V=V_{s}+V_{d}\) with \(V_{d}\in L^{2+\sigma}(\mathcal{D})\) and \(V_{s}\in H^{2}(\mathcal{D})\). Here, the bilinear form \(a\) is defined with \(V_{d}\) only, instead of the whole potential \(V\). Note that this implies_
\[-\tfrac{1}{2}\triangle u^{0}+V_{d}u^{0}=\lambda^{0}u^{0}-\beta|u^{0}|^{2}u^{0 }-V_{s}u^{0}\in H^{2}(\mathcal{D})\cap H^{1}_{0}(\mathcal{D})\]
_and therefore \(\|u^{0}_{\textsc{OD}}-u^{0}\|_{H^{1}(\mathcal{D})}\lesssim H^{3}\) by the steps of [36, Prop. 4.2]. The remaining proof then follows the one with the complete bilinear form \(a\), where one has to account for the additional term \(\|V_{s}u^{0}_{\textsc{OD}}\|\lesssim\|V_{s}\|_{L^{\infty}(\mathcal{D})}\) in Lemma 4.6 and similarly in Lemma 4.8, as well as use \(|E^{\prime\prime}(u^{0}_{\textsc{OD}})[\eta,\eta]|\lesssim\|\eta\|^{2}_{a}+\|V_{s}\|_{L^{\infty}(\mathcal{D})}\|\eta\|^{2}_{L^{2}(\mathcal{D})}\) for the bound of \(|E(\tilde{u}^{0}_{\textsc{OD}})-E(u^{0}_{\textsc{OD}})|\) in the proof of Theorem 4.11._
**Corollary 4.13**.: _Given the setting of Theorem 4.11, the modified minimal energy is guaranteed to converge with order \(\mathcal{O}(H^{4})\), i.e., \(|\tilde{E}(\tilde{u}^{0}_{\textsc{\tiny OD}})-E(u^{0})|\lesssim H^{4}\)._
Proof.: The assertion follows immediately by Lemmas 4.7, 4.8, Theorem 4.11, and
\[|\tilde{E}(\tilde{u}^{0}_{\textsc{OD}})-E(\tilde{u}^{0}_{\textsc{OD}})|=\frac{\beta}{2}\Big|\int\big(P_{\textsc{OD}}(|\tilde{u}^{0}_{\textsc{OD}}|^{2})-|\tilde{u}^{0}_{\textsc{OD}}|^{2}\big)|\tilde{u}^{0}_{\textsc{OD}}|^{2}\,\mathrm{d}x\Big|=\frac{\beta}{2}\int\big(P_{\textsc{OD}}(|\tilde{u}^{0}_{\textsc{OD}}|^{2})-|\tilde{u}^{0}_{\textsc{OD}}|^{2}\big)^{2}\,\mathrm{d}x\lesssim H^{4}.\]
**Remark 4.14**.: _If the potential and the domain are sufficiently smooth, the modified minimum energy is guaranteed to converge with the optimal rate of order \(\mathcal{O}(H^{6})\); furthermore, \(|\tilde{E}(\tilde{u}^{0}_{\textsc{OD}})-E(\tilde{u}^{0}_{\textsc{OD}})|\lesssim H^{8}\) holds. This can be proved following the lines of Lemma 4.7, where, given sufficient smoothness, Eq. (15) can be replaced by the optimal estimate (10)._
**Remark 4.15**.: _By the convergence of \(\tilde{u}^{0}_{\textsc{OD}}\) and by the inequality \(\tilde{E}(u^{0})\leq E(u^{0})\), see (19), we have the ordering_
\[\tilde{E}(\tilde{u}^{0}_{\textsc{OD}})\leq E(u^{0})\leq E(\tilde{u}^{0}_{ \textsc{OD}})\]
_for small enough \(H\). In particular, \(E(\tilde{u}^{0}_{\textsc{OD}})-\tilde{E}(\tilde{u}^{0}_{\textsc{OD}})\) is a simple estimator for the approximation error of the energy and therefore also of \(\|\tilde{u}^{0}_{\textsc{OD}}-u^{0}\|_{a}^{2}\)._
### Error estimates for SLOD spaces
In the previous subsection we analysed the errors \(u^{0}_{\textsc{OD}}-u^{0}\) and \(\tilde{u}^{0}_{\textsc{OD}}-u^{0}\) in the ideal OD space. Since it is not known whether a perfectly localised basis exists for this space, the way we compute the basis in practice truncates the globally supported, but rapidly decaying, basis functions. One may well ask how this truncation affects the error bounds. In this section, we provide a qualitative answer to this question. Note that the discrete function space has three discretisation parameters: the patch size \(\ell\) (truncation parameter), the mesh width \(H\) of the piecewise linear right-hand sides, and the width \(h\) of the mesh used to discretise the responses. In the following, we make the simplification that \(h=0\) holds. For \(h>0\) the steps are equivalent; one only has to use the triangle inequality to introduce the representation error.
The SLOD space with \(h=0\) is denoted \(\mathcal{V}_{\textsc{OD},\ell}\). Let \(p_{z}^{*}\) be the minimiser of the localisation error problem (12) with respect to the node \(z\). The associated SLOD function is \(\varphi_{\textsc{OD},\ell,z}\) and \(\varphi_{\textsc{OD},z}=\mathcal{A}^{-1}p_{z}^{*}\) is the associated ideal OD function. Furthermore, we make the typical assumption for a SLOD discretisation that \(\{p_{z}^{*}\}_{z\in\mathcal{N}_{H}}\) forms a Riesz basis of \(\mathbb{P}_{H}^{1}\), i.e. there exists a \(C_{\mathrm{RB}}>0\) depending on \(H,\ell\) such that
\[C_{\mathrm{RB}}^{-1}\sum_{z\in\mathcal{N}_{H}}c_{z}^{2}\leq\Big{\|}\sum_{z\in \mathcal{N}_{H}}c_{z}p_{z}^{*}\Big{\|}_{L^{2}(\mathcal{D})}^{2}\leq C_{ \mathrm{RB}}\sum_{z\in\mathcal{N}_{H}}c_{z}^{2}\]
holds for every \(\{c_{z}\}_{z\in\mathcal{N}_{H}}\), cf. [34, Ass. 5.2]. By this assumption, the ideal OD minimiser \(u^{0}_{\textsc{OD}}\) of \(E\) can be written as \(\sum_{z\in\mathcal{N}}c_{z}^{0}\varphi_{\textsc{OD},z}\). We define \(v^{0}_{\textsc{OD},\ell}\coloneqq\sum_{z\in\mathcal{N}}c_{z}^{0}\varphi_{ \textsc{OD},\ell,z}\in\mathcal{V}_{\textsc{OD},\ell}\). Furthermore, the minimiser of \(E\) in SLOD space is denoted by \(u^{0}_{\textsc{OD},\ell}\). Then, by the convergence of \(u^{0}_{\textsc{OD}}\) and [23, Lem. 1 & Eq. (32)], we have the estimate
\[\begin{split} E(u^{0}_{\textsc{OD},\ell})-E(u^{0})\leq& \,E(v^{0}_{\textsc{OD},\ell})-E(u^{0})\\ \lesssim&\,\|u^{0}-v^{0}_{\textsc{OD},\ell}\|_{a}^{ 2}+\int_{\mathcal{D}}\big{(}(u^{0})^{2}-(v^{0}_{\textsc{OD},\ell})^{2}\big{)} ^{2}\,\mathrm{d}x\\ \lesssim&\,\big{(}1+\|u^{0}+v^{0}_{\textsc{OD},\ell }\|_{a}^{2}\big{)}\|u^{0}-v^{0}_{\textsc{OD},\ell}\|_{a}^{2}\\ \lesssim&\,\big{(}1+\|u^{0}_{\textsc{OD}}-v^{0}_{ \textsc{OD},\ell}\|_{a}^{2}\big{)}\big{(}\|u^{0}-u^{0}_{\textsc{OD}}\|_{a}^{ 2}+\|u^{0}_{\textsc{OD}}-v^{0}_{\textsc{OD},\ell}\|_{a}^{2}\big{)}\\ \lesssim&\,\big{(}1+H^{-1}\sigma^{2}(H,\ell)\big{)} \big{(}H^{6}+H^{-1}\sigma^{2}(H,\ell)\big{)}.\end{split} \tag{23}\]
Here, \(\sigma(H,\ell)\) is the maximum of the localisation errors over all nodes \(z\in\mathcal{N}_{H}\). In particular, we used
\[\|u^{0}_{\textsc{OD}}-v^{0}_{\textsc{OD},\ell}\|_{a}\lesssim\bigg{(}\frac{C_ {\mathrm{RB}}(\ell+1)^{d-1}\big{(}1+\sqrt{d}(\ell+1)H\big{)}}{H}\bigg{)}^{ \nicefrac{{1}}{{2}}}\sigma(H,\ell)\,\|u^{0}_{\textsc{OD}}\|_{H^{2}(\mathcal{D})}\]
in the last step, which follows by
\[\|e\|_{L^{2}(\partial B_{r}(z))}^{2}\leq\frac{1+\sqrt{1+dr^{2}}}{r}\|e\|_{H^{1}(B_ {r}(z))}^{2}\leq\Big{(}\frac{2}{r}+\sqrt{d}\Big{)}\|e\|_{H^{1}(B_{r}(z))}^{2},\]
cf. [32, p. 41], and by the lines of [34, Th. 6.1].
As for the effect of truncation on the modified minimiser, we expect it to be proportional to \(\sigma\) as measured in the \(\|\bullet\|_{a}\) norm. However, we can only support this statement heuristically. The projection \(P_{\textsc{{\tiny OD}}}\) in the definition of \(\tilde{E}\) must be replaced by the \(L^{2}\)-projection \(P_{\textsc{{\tiny OD}},\ell}\) onto \(\mathcal{V}_{\textsc{{\tiny OD}},\ell}\). Let us denote the minimiser of \(\tilde{E}\) in \(\mathcal{V}_{\textsc{{\tiny OD}},\ell}\) by \(\tilde{u}_{\textsc{{\tiny OD}},\ell}^{0}\). Analogous to \(v_{\textsc{{\tiny OD}},\ell}^{0}\), we define \(\tilde{v}_{\textsc{{\tiny OD}},\ell}^{0}\) by the coefficients of \(\tilde{u}_{\textsc{{\tiny OD}}}^{0}\). Under the assumption that \(E(\tilde{u}_{\textsc{{\tiny OD}},\ell}^{0})\leq E(\tilde{v}_{\textsc{{\tiny OD }},\ell}^{0})\), we have \(E(\tilde{u}_{\textsc{{\tiny OD}},\ell}^{0})-E(u^{0})\leq E(\tilde{v}_{\textsc{ {\tiny OD}},\ell}^{0})-E(u^{0})\), which can be estimated similarly to \(E(v_{\textsc{{\tiny OD}},\ell}^{0})-E(u^{0})\) in (23), thus supporting the claim. However, if \(E(\tilde{u}_{\textsc{{\tiny OD}},\ell}^{0})\leq E(\tilde{v}_{\textsc{{\tiny OD }},\ell}^{0})\) can not be asserted then an extra term according to
\[E(\tilde{u}_{\textsc{{\tiny OD}},\ell}^{0})-E(u^{0})\] \[\leq E(\tilde{u}_{\textsc{{\tiny OD}},\ell}^{0})-\tilde{E}_{\ell}( \tilde{u}_{\textsc{{\tiny OD}},\ell}^{0})+\tilde{E}_{\ell}(\tilde{v}_{\textsc{ {\tiny OD}},\ell}^{0})-E(\tilde{v}_{\textsc{{\tiny OD}},\ell}^{0})+E(\tilde{v} _{\textsc{{\tiny OD}},\ell}^{0})-E(u^{0})\] \[=\tfrac{\beta}{2}\big{\|}\,|\tilde{u}_{\textsc{{\tiny OD}},\ell}^ {0}|^{2}-P_{\textsc{{\tiny OD}},\ell}|\tilde{u}_{\textsc{{\tiny OD}},\ell}^{0} |^{2}\big{\|}_{L^{2}(\mathcal{D})}^{2}-\tfrac{\beta}{2}\big{\|}\,|\tilde{v}_{ \textsc{{\tiny OD}},\ell}^{0}|^{2}-P_{\textsc{{\tiny OD}},\ell}|\tilde{v}_{ \textsc{{\tiny OD}},\ell}^{0}|^{2}\big{\|}_{L^{2}(\mathcal{D})}^{2}+E(\tilde{v} _{\textsc{{\tiny OD}},\ell}^{0})-E(u^{0}),\]
would have to be estimated.
### Minimisation algorithm
It remains to choose an appropriate algorithm for computing the minimiser. There have been numerous proposals for minimisation algorithms to compute the ground states of BECs: From early work on energy-diminishing backward Euler centered finite difference (BEFD) methods and explicit time-splitting sine-spectral (TSSP) methods [4, 18] to the energy-adapted Riemannian gradient descent method with global convergence property for non-rotating BECs [37, 7], inverse iteration methods [41, 5, 51] with local quadratic convergence, Krylov preconditioned BESP [11], Preconditioned Gradient Descent [13] for rapidly rotating BECs, and Riemannian Newton methods [8]. To compute the minimiser, we consider the combined method of energy-adapted Riemannian gradient descent [37, 8] and inverse iteration, called the J-method, first introduced in the finite difference setting [41], then extended to and analysed in the finite element setting [5]. The J-method allows both selective approximation of excited states and cubic convergence in a local neighborhood of a minimum, similar to inverse iteration for linear eigenvalue problems. In addition, the gradient descent approach is energy-decreasing and is guaranteed to converge to the unique global minimiser for non-rotational cases.
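To illustrate the basic descent-and-renormalise idea behind these methods, the following Julia sketch implements a generic normalised, preconditioned gradient descent for a one-dimensional finite-difference discretisation of the energy. It is an ad-hoc illustration only; in particular, it is not the energy-adapted Riemannian/J-method combination used for the experiments below, and the discretisation, preconditioner, and parameters are chosen arbitrarily.

```julia
using LinearAlgebra

# Minimal 1d finite-difference sketch of a normalised, preconditioned gradient
# descent for E(u) = ∫ ½|u'|² + V|u|² + (β/2)|u|⁴ dx subject to ‖u‖_{L²} = 1.
# Not the SLOD/J-method solver used for the experiments; all choices are ad hoc.
function ground_state(; L = 8.0, n = 400, β = 50.0, τ = 0.4, maxit = 2000)
    h = 2L / (n + 1)
    x = -L .+ h .* (1:n)                      # interior grid points, u(±L) = 0
    V = 0.5 .* x .^ 2                         # harmonic trapping potential
    K = SymTridiagonal(fill(2/h^2, n), fill(-1/h^2, n))   # discrete -d²/dx²
    A = 0.5 .* Matrix(K) + Diagonal(V)        # linear part ½(-Δ) + V
    normalise(u) = u ./ sqrt(h * dot(u, u))
    u = normalise(exp.(-x .^ 2 ./ 2))         # Gaussian initial guess
    F = cholesky(Symmetric(A))                # fixed preconditioner (descent metric)
    λ = 0.0
    for it in 1:maxit
        Hu = A * u .+ β .* u .^ 3             # Gross-Pitaevskii operator applied to u
        λ  = h * dot(u, Hu)                   # Rayleigh quotient (approximate eigenvalue)
        r  = Hu .- λ .* u                     # projected L²-gradient direction
        norm(r) < 1e-10 && break
        u  = normalise(u .- τ .* (F \ r))     # damped, preconditioned step + renormalisation
    end
    E = h * (0.5 * dot(u, K * u) + dot(V, u .^ 2) + 0.5β * sum(u .^ 4))
    return u, E, λ
end

u, E, λ = ground_state()
println("E ≈ ", E, ",  λ ≈ ", λ)
```

In the actual computations, the same descent-and-renormalise structure is applied in the SLOD space, and the iteration is switched to the J-method once the residual is small.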
## 5 Numerical experiments for stationary states
In this section we present several numerical examples to illustrate the previous results, as well as comparisons with GPELab [10], the classical LOD approach [28], and BEC2HPC [29]. All CPU times were measured on a laptop with an 11th generation Intel Core i7-1165G7 @ 2.80GHz \(\times\) 8 processor and 64GB of RAM running Julia version 1.7.2 (2022-02-06). Note that the current implementation is strictly sequential. The code is available at [https://github.com/JWAER/SLOD_BEC](https://github.com/JWAER/SLOD_BEC).
### Smooth academic example in \(2\)d
We begin with a smooth test case from [28] which will allow us to compare our SLOD with the previous LOD implementation. The problem reads,
\[\min_{u\in\mathbb{S}}\int_{\mathcal{D}}\frac{1}{2}|\nabla u|^{2}+V|u|^{2}+\frac{ \beta}{2}|u|^{4}\,\mathrm{d}x,\]
with
\[V(x,y)=\frac{1}{2}(x^{2}+y^{2})+4e^{-x^{2}/2}+4e^{-y^{2}/2},\quad\beta=50, \quad\text{and}\quad\mathcal{D}=(-6,6)^{2}.\]
Since the potential is smooth, the canonical OD-space is used, i.e., the basis functions span the space \((-\triangle)^{-1}\mathbb{P}^{1}_{H}(\mathcal{T}_{H})\). Figure 3a plots the error in the minimal energy versus the mesh size for the SLOD with \(h=H\) and \(h=H/2\), for the LOD results from [28], and for GPELab, for which an equivalent mesh size is taken as one over the number of modes in each direction. The minimal energy is computed to \(10\)-digit accuracy to be \(E^{0}=7.082310561\), which differs slightly from the minimal energy of the spectral solution with periodic boundary conditions, \(E^{0}=7.082310558\). As illustrated in Fig. 3a, no order of accuracy is initially lost by representing the SLOD basis by \(\mathbb{P}^{3}_{H}\)-functions on the same mesh. However, the final data point shows that, as the solution near the boundary starts to come into play, a finer representation of the basis functions becomes necessary. Strikingly, the observed order of convergence is \(7\), as opposed to the estimated \(6\). We find that setting the truncation parameter \(\ell=2\) is sufficient to push the error to \(10\)-digit accuracy. Although not illustrated, it should be mentioned that the choice \(\ell=1\) is good enough to achieve \(5\)-digit accuracy, after which the error stagnates unless \(\ell\) is increased. This is in accordance with the estimate in Section 4.2 and the computational results in Fig. 2. It is clear from Fig. 3b, in which accuracy versus CPU time is plotted, that the improvement over the LOD method amounts to several orders of magnitude. Moreover, even for this analytical test case, the SLOD rivals the spectral method, as both methods almost instantaneously yield \(8\)-digit accuracy. Finally, we report that our minimisation algorithm converged in \(8\)-\(9\) iterations when switching to the J-method at a residual lower than \(0.1\); the corresponding number of iterations for the pure Sobolev gradient descent was reported in [28] to be around \(20\).
### Discontinuous potential in \(2\)d
Next we consider a potential with multiple discontinuities. Naturally, in such cases the difference between the SLOD and a spectral method becomes very pronounced. Not only does this experiment illustrate this difference, it also poses a difficult challenge for other methods to reach the same accuracy at comparable CPU-time costs. The problem reads,
\[\min_{u\in\mathbb{S}}\int_{\mathcal{D}}\frac{1}{2}|\nabla u|^{2}+(V_{\mathrm{s }}+V_{\mathrm{d}})|u|^{2}+\frac{\beta}{2}|u|^{4}\,\mathrm{d}x,\]
there now being an additional discontinuous contribution to the potential, namely
\[V_{\mathrm{s}}(x,y)=\frac{1}{2}(x^{2}+y^{2}),\quad V_{\mathrm{d}}(x,y)=\Big\lfloor 5+2\sin\Big(\frac{\pi x}{3}\Big)\sin\Big(\frac{\pi y}{3}\Big)\Big\rfloor,\quad\text{and}\quad\beta=100,\]
where \(\lfloor\bullet\rfloor\) denotes the integer part. The inner product used to define the OD-space is chosen in accordance with Remark 4.12 to be
\[a_{\mathrm{d}}(u,v)=\int_{\mathcal{D}}\frac{1}{2}\nabla u\cdot\nabla v+V_{ \mathrm{d}}uv\,\mathrm{d}x.\]
For this example, we consistently use a refined mesh of size \(h=H/3\) with \(\mathbb{P}_{h}^{3}\) functions to represent the SLOD-basis functions and truncation parameter \(\ell=2\). The discontinuous potential is integrated to machine precision using adaptive quadrature around jumps. A very fine reference solution is used to compute the minimal energy to be, to 10-digit precision, 8.30472428538.
As illustrated in Fig. 4a, in which the error in energy versus the mesh size is plotted, the spectral method converges with only linear order, since the Fourier modes of the discontinuous potential decay only linearly, whereas the SLOD method does indeed converge with the perfect sixth order predicted by Theorem 4.11. We also note how \(\tilde{E}\), i.e., the strategy in [28], converges with \(\mathcal{O}(H^{5})\). Similarly to the previous example, around 10 iterations were required when switching to the inverse iteration at a residual of 0.1. In sum, we are able to compute the minimal energy to 8-digit accuracy in less than 1000 s of total computational time.
### A challenging example with fourth order convergence rate in \(\tilde{E}\)
This additional experiment illustrates that the convergence of \(\mathcal{O}(H^{4})\) of \(\tilde{E}\) in Corollary 4.13 is sharp. We change the discontinuous potential of the previous subsection to
\[V_{\mathrm{d}}(x,y)=2(\mathbb{1}_{x>0}+\mathbb{1}_{y>0}),\]
using the indicator function \(\mathbb{1}_{(\cdot)}\), but keep all other parameters unchanged. Again, the inner product
\[a_{\mathrm{d}}(u,v)=\int_{\mathcal{D}}\frac{1}{2}\nabla u\cdot \nabla v+V_{\mathrm{d}}uv\,\mathrm{d}x,\]
is used. For this example we set \(h=H/2\), but observe that \(h=H\) also yielded optimal convergence rates until the last two data points (provided the mesh matched the discontinuities). In Fig. 5a we see that \(E(\tilde{u}_{\mathrm{OD}}^{0})\) converges slightly faster than \(\mathcal{O}(H^{6})\), whereas the convergence of \(\tilde{E}(\tilde{u}_{\mathrm{OD}}^{0})\) is closer to fourth order. For the finest mesh size, the difference in accuracy is more than one order of magnitude. Since the potential is easy to integrate and \(h=H/2\) suffices, the CPU times for this example are roughly a tenth of those in the previous example, cf. Fig. 5b.
Figure 3: Error in minimal energy versus mesh size and error in minimal energy versus CPU time for the SLOD, LOD and GPELab when solving the problem in section 5.1.
Figure 4: Accuracy versus mesh size and accuracy versus CPU times for the SLOD and GPELab in presence of the discontinuous potential of section 5.2
Figure 5: Accuracy versus mesh size and accuracy versus CPU times for the SLOD and GPELab in presence of the discontinuous potential of section 5.3.
### Fast rotation in \(2\)d
We consider the challenging rotational experiment put forward in [5], which reads
\[\min_{v\in\mathbb{S}}\int_{\mathcal{D}}|\nabla v|^{2}+V|v|^{2}+\Omega\,v\overline{L_{z}v}+\frac{\beta}{2}|v|^{4}\,\mathrm{d}x, \tag{24}\]
where \(\beta=1000\), \(\Omega=0.85\), \(V(x,y)=\frac{1}{2}(x^{2}+y^{2})\), and \(\mathcal{D}=(-10,10)^{2}\). When the residual of the nonlinear eigenvalue problem is less than \(3\cdot 10^{-3}\), we switch from the Sobolev gradient descent to the inverse iteration of the J-method. We recall that the Sobolev gradient method, which in each iteration performs a line search to find the optimal step size, has a tendency to get stuck in local minima. To circumvent this issue, the algorithm is initialised pointwise on the fine grid as \(((1-\Omega)\xi_{1}+\Omega(x+\mathrm{i}y)\xi_{2})e^{-(x^{2}+y^{2})}\), where \(\xi_{1}\) and \(\xi_{2}\) are, for each grid point, independent random numbers uniformly distributed in \([0,1]\). This initial guess is then projected onto \(\mathcal{V}_{\mathrm{OD}}\) in the \(L^{2}\)-sense. As a result, the necessary number of iterations can vary greatly, cf. Table 1. On some occasions this approach converged to an excited state close to the presumptive ground state. These excited states are illustrated in Fig. 6b through 6d and were not found in [5]. It is interesting to note that the third excited state has the lowest eigenvalue and possesses the most symmetry, as it is, pointwise, invariant under rotation by \(2\pi/8\) radians. Note that we do not check whether the stationary points truly are local minima, as opposed to, e.g., saddle points. This could be done using the constrained high-index saddle dynamics method recently proposed in [51]. Curiously, we find a higher than expected order of convergence with respect to the mesh size \(H\), namely \(\mathcal{O}(H^{7})\) instead of the expected \(\mathcal{O}(H^{6})\), cf. Table 1. The two inbuilt initial values of GPELab, Gaussian and Thomas-Fermi, converged to the energy levels \(12.03756641\) and \(10.76768160\), respectively, using the Backward Euler sine pseudospectral (BESP) method with step size \(10^{-2}\). The improved BEC2HPC high-performance spectral solver for rapidly rotating large BECs converged after \(2737\) iterations in \(5\) min to \(E=10.727491588\) and \(\lambda=15.56205302\) on a \(128\times 128\) grid. On a \(256\times 256\) grid it converged in \(905\) iterations and \(10\) min to \(E=10.71847990\) and \(\lambda=15.60414509\). As can be read off Table 1, our method converged to the presumptive ground state already on the coarse \(60\times 60\) grid in a few minutes. In around \(5\) min of computation, the method computed the ground-state energy to \(5\)-digit accuracy on an \(80\times 80\) grid. However, the accuracy in terms of energy of the \(256\times 256\) spectral solution is approximately \(10^{-8}\); achieving such accuracy with our method required around \(1.5\) h.
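As a post-processing remark, the reference value quoted in the caption of Table 1 can be recovered from the two finest levels by standard Richardson extrapolation; the short Julia sketch below (an assumed post-processing step, using the observed rate \(p=7\)) reproduces \(E^{0}\approx 10.71847995\).

```julia
# Post-processing sketch (not part of the solver): Richardson extrapolation of
# the two finest energies in Table 1, assuming the observed rate p = 7.
H = 20 ./ [140, 160]                # two finest mesh sizes
E = [10.7184811, 10.7184804]        # corresponding minimal energies (Table 1)
p = 7
Eref = E[2] + (E[2] - E[1]) / ((H[1]/H[2])^p - 1)
println("extrapolated reference energy ≈ ", Eref)    # ≈ 10.71847995
```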
### Harmonic potentials in \(2\)d and \(3\)d
We recall that in the previous work on the method of LOD for solving the GPE [28], \(\mathcal{O}(H^{9})\) convergence was observed in \(2\)d when computing the non-rotating ground state subject to a harmonic potential. In the light of the extra convergence observed in the previous example
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(H\) & \(E\) & EOC & \(\lambda\) & N\({}^{\mathrm{o}}\) it & T \({}_{\mathrm{CPU}}\) \\ \hline \(20/60\) & \(10.7191233\) & \(7.66\) & \(15.602838\) & \(369\) & \(147\) \\ \(20/80\) & \(10.7185510\) & \(7.50\) & \(15.604177\) & \(267\) & \(218\) \\ \(20/100\) & \(10.7184932\) & \(7.23\) & \(15.604165\) & \(912\) & \(1692\) \\ \(20/120\) & \(10.7184835\) & \(7.02\) & \(15.604152\) & \(365\) & \(923\) \\ \(20/140\) & \(10.7184811\) & \(7.00\) & \(15.604148\) & \(690\) & \(2381\) \\ \(20/160\) & \(10.7184804\) & & \(15.604146\) & \(992\) & \(4464\) \\ \hline \end{tabular}
\end{table}
Table 1: Minimal energy and corresponding eigenvalue for different mesh sizes as well as the number of iterations and total CPU-times. Using Richardson extrapolation on the SLOD results, we compute a reference value \(E^{0}=10.71847995\).
Figure 6: Ground state and excited states of the minimisation problem Eq. (24)
we consider the problem of minimising the energy in the presence of a purely harmonic potential, i.e.,
\[\min_{v\in\mathcal{S}}\int\frac{1}{2}|\nabla v|^{2}+V|v|^{2}+\frac{\beta}{2}|v|^ {4}\,\mathrm{d}x,\]
with \(\beta=50\), \(V(x,y)=\frac{1}{2}(x^{2}+y^{2})\). The computational domain is set to \((-10,10)^{2}\) and \((-5,5)^{3}\) in 2d and 3d, respectively. As described in [28], the stationary problem can be reduced to a 1d problem, which, when solved to 14-digit accuracy, yields \(E^{0}=2.896031852200792\) in 2d and \(E^{0}=2.3734292669786\) in 3d [28]. Just as in the previous example, our method converges with 7th order in both 2d and 3d, see Fig. 7. This is lower than the 9th and 12th order observed in [28]. The difference in convergence can be attributed to the difference in coarse \(\mathbb{P}^{1}_{H}\)-spaces. More precisely, the mesh used in this paper consists of isosceles triangles and the type of tetrahedron illustrated in Fig. 9b, whereas [28] used equilateral triangles and nearly regular tetrahedra.
## 6 Simulation of the dynamics
This section aims to demonstrate the usefulness of the SLOD space beyond ground state calculations. Its extreme efficiency will be demonstrated in a combined ground state and dynamics problem in a physically relevant parameter regime in 3d. The code is available at [https://github.com/JAER/SLOD_BEC](https://github.com/JAER/SLOD_BEC).
### Temporal discretisation
To integrate in time, we use the continuous Galerkin-in-time approach introduced by Karakashian and Makridakis [42] and further analysed, with the addition of potential terms and rotation, in [27]. The method is energy conservative and superconvergent at the time nodes, where the order of convergence is \(\mathcal{O}(\tau^{2q})\) with the polynomial degree \(q\) in time and the time step size \(\tau\). The benefit of energy conservation in the discrete setting by choosing an appropriate time integrator has been observed in, for example, [39, 38]. Apart from the arbitrarily high
Figure 7: Accuracy of minimal energy versus mesh size in 2d and 3d in the case of a purely harmonic potential.
order of convergence, the method remains energy conservative, albeit in a modified sense, when replacing \(|u|^{2}\) by its projection onto the adapted operator, thus making it very fast in our setting. Moreover, the modified sense in which it is energy conservative has been rigorously proved to be only \(\mathcal{O}(H^{8})\) away from the true energy in certain situations [38]. The exact implementation is described, by way of example, in [28], so we will limit this section to a self-contained and high-level outline of the ideas and properties of the approach. In an abstract setting, the approach seeks the best approximation in the space
\[\{v\in C([0,T],H_{0}^{1}(\mathbb{C},\mathcal{D})):v|_{t\in I_{n}}\in\mathcal{V }_{\textsc{OD}}\otimes\mathbb{P}_{q}(I_{n})\},\]
where \(\mathbb{P}_{q}(I_{n})\) denotes the space of polynomials of order \(q\) on the time slab \(I_{n}=(t_{n},t_{n+1}]\). The solution can then be recursively defined on each time slab through
\[\int_{I_{n}}\langle{\rm i}\partial_{t}u^{n}_{\textsc{OD}},v\rangle -a(u^{n}_{\textsc{OD}},v)-\beta\langle P_{\textsc{OD}}(|u^{n}_{\textsc{OD}}|^{ 2})u^{n}_{\textsc{OD}},v\rangle dt=0\quad\forall\,v\in\mathcal{V}_{\textsc{ OD}}\otimes\mathbb{P}_{q-1}(I_{n}), \tag{25}\] \[\lim_{t\downarrow t_{n}}u^{n}_{\textsc{OD}}(t)=u^{n-1}_{\textsc{ OD}}(t_{n}),\]
with \(u^{0}_{\textsc{OD}}(0)=P_{\textsc{OD}}(u^{0})\). We point out the consistent replacement of \(|u^{n}_{\textsc{OD}}|^{2}\) with \(P_{\textsc{OD}}(|u^{n}_{\textsc{OD}}|^{2})\). Since \(\partial_{t}u^{n}_{\textsc{OD}}\in\mathcal{V}_{\textsc{OD}}\otimes\mathbb{P} _{q-1}(I_{n})\), we may select \(v=\partial_{t}u^{n}_{\textsc{OD}}\) to deduce energy-conservation in the sense that,
\[\tilde{E}(u^{n}_{\textsc{OD}}):=\int_{\mathcal{D}}a(u^{n}_{\textsc{OD}},u^{n}_{\textsc{OD}})+\frac{\beta}{2}P_{\textsc{OD}}(|u^{n}_{\textsc{OD}}|^{2})|u^{n}_{\textsc{OD}}|^{2}dx=\tilde{E}(u^{0}_{\textsc{OD}}). \tag{26}\]
Although Eq. (25) is posed in a space of dimension \((q+1)\dim(\mathcal{V}_{\textsc{OD}})\), the linear part of the system of equations can be decoupled for each time slice, so that only \(q\) systems of equations of size \(\dim(\mathcal{V}_{\textsc{OD}})\) need to be solved [42]. A fixed-point iteration is then used to solve the nonlinear system at each time step.
### Optical lattice in \(3\)d
Inspired by the famous experiment of Greiner et al. [31], we study the dynamics of a BEC released from an optical lattice. The initial state is computed as an energy minimiser subject to an optical lattice and a harmonic trapping potential. For \(t>0\) the optical lattice is switched off, but the trapping potential is slightly increased to limit the computational domain. The exact parameters are given as follows:
\[E_{\textsc{gs}}=\min_{v\in\mathbb{S}}\int_{\mathcal{D}}\frac{1}{2}|\nabla v|^{ 2}+(V_{s}+V_{o})v^{2}+\frac{\beta}{2}|v|^{4}dx,\]
\[V_{s}(x,y,z)=x^{2}+y^{2}+z^{2},\ V_{o}(x,y,z)=100\sin^{2}(\pi x)\sin^{2}(\pi y )\sin^{2}(\pi z),\ \beta=100,\,\mathcal{D}=(-6,6)^{3}.\]
For the dynamics we set \(V(x,y,z)=2(x^{2}+y^{2}+z^{2})\) but keep everything else the same and compute to \(T=1\) with a time step size of \(\tau=1/128\) and a fourth order method, i.e. \(q=2\). The grid size is \(H=0.25\). The minimum energy is calculated to be \(E_{\textsc{gs}}(\tilde{u}^{0}_{\textsc{OD}})=74.585793\) (the modified energy is \(\tilde{E}_{\textsc{gs}}(\tilde{u}^{0}_{\textsc{OD}})=74.488309\)). At \(t=0\), the condensate is in a fine lattice structure that decays faster than exponentially away from the origin, cf. Fig. 8a, with the conserved energy of eq. (26) being \(\tilde{E}(u_{\textsc{OD}}(t))=39.848597\) (due to removal of the optical lattice). This pattern quickly dissipates and at \(t=0.4\) no clear structure is visible, cf. Fig. 8b (note the different scales). At \(t=0.8\) a macroscopic lattice structure appears, cf. Fig. 8c, which subsequently hits the confining potential wall and starts to collapse at \(t=1.0\), see
Fig. 8d. The dynamics match those of the physical experiment [31]. A well-known heuristic explanation of the observed macroscopic pattern is that it is approximately that of the Fourier transform of the Thomas-Fermi approximation of the initial value. In other words, at some later time, the momentum distribution of the initial value is observed. However, the exact influence of the nonlinearity remains an open question.
The computation of the SLOD basis functions (one for the interior and all those with support near the boundary) took about \(1\) h, and all precomputations were done in about \(1.7\) h. The minimiser was then computed in less than \(1\) h and the time-dependent problem was solved in about \(11\) h, averaging about \(300\) s per time step.
|
2309.10443 | Rethinking Imitation-based Planner for Autonomous Driving | In recent years, imitation-based driving planners have reported considerable
success. However, due to the absence of a standardized benchmark, the
effectiveness of various designs remains unclear. The newly released nuPlan
addresses this issue by offering a large-scale real-world dataset and a
standardized closed-loop benchmark for equitable comparisons. Utilizing this
platform, we conduct a comprehensive study on two fundamental yet underexplored
aspects of imitation-based planners: the essential features for ego planning
and the effective data augmentation techniques to reduce compounding errors.
Furthermore, we highlight an imitation gap that has been overlooked by current
learning systems. Finally, integrating our findings, we propose a strong
baseline model-PlanTF. Our results demonstrate that a well-designed, purely
imitation-based planner can achieve highly competitive performance compared to
state-of-the-art methods involving hand-crafted rules and exhibit superior
generalization capabilities in long-tail cases. Our models and benchmarks are
publicly available. Project website https://jchengai.github.io/planTF. | Jie Cheng, Yingbing Chen, Xiaodong Mei, Bowen Yang, Bo Li, Ming Liu | 2023-09-19T09:04:10Z | http://arxiv.org/abs/2309.10443v1 | # Rethinking Imitation-based Planner for Autonomous Driving
###### Abstract
In recent years, imitation-based driving planners have reported considerable success. However, due to the absence of a standardized benchmark, the effectiveness of various designs remains unclear. The newly released nuPlan addresses this issue by offering a large-scale real-world dataset and a standardized closed-loop benchmark for equitable comparisons. Utilizing this platform, we conduct a comprehensive study on two fundamental yet underexplored aspects of imitation-based planners: the essential features for ego planning and the effective data augmentation techniques to reduce compounding errors. Furthermore, we highlight an imitation gap that has been overlooked by current learning systems. Finally, integrating our findings, we propose a strong baseline model--PlanTF. Our results demonstrate that a well-designed, purely imitation-based planner can achieve highly competitive performance compared to state-of-the-art methods involving hand-crafted rules and exhibit superior generalization capabilities in long-tail cases. Our models and benchmarks are publicly available. Project website [https://jchengai.github.io/planTF](https://jchengai.github.io/planTF).
## I Introduction
Learning-based planners are considered a potentially scalable solution for autonomous driving, supplanting traditional rule-based planners [1, 2, 3]. This has sparked significant research interest in recent years. In particular, imitation-based planners [4, 5, 6, 7, 8, 9, 10, 11, 12] are reported to achieve notable success in simulations and real-world scenarios. Nevertheless, these planners are predominantly trained and evaluated in diverse custom conditions (_e.g._ varying datasets, metrics, and simulation setups) owing to the absence of a standardized benchmark. Consequently, it becomes challenging to compare and summarize effective design choices for constructing practical learning-based systems.
Recently, the release of the large-scale nuPlan [13] dataset, alongside a standardized simulation benchmark, has provided a new opportunity for advancing learned motion planners. Enabled by this fresh benchmark, we conduct in-depth investigations on several common and critical yet not fully studied design choices of the learning-based planner, aiming to provide constructive suggestions for future research. This paper concentrates on two overarching and fundamental facets of the imitation-based planner: the requisite ego features for planning and the efficacious techniques of data augmentation.
The majority of imitation-based planning models [5, 6, 7, 8, 9, 10, 11] follow the success of prediction models and inherently incorporate the past trajectory of the autonomous vehicle (AV) as an input feature, though imitation learning (IL) has frequently been noted for its tendency to acquire shortcuts from historical observations [4, 14, 15, 16]. Our research reaffirms that the past motion of the AV leads to significant closed-loop performance degradation. The planner achieves better performance when it uses only the AV's present state. Surprisingly, it attains even better closed-loop performance using only the AV's current pose (position and heading). This implies that additional kinematic attributes typically deemed crucial for planning, such as velocity, acceleration, and steering, lead to a performance decline. To gain deeper insights into this phenomenon, we perform a sensitivity analysis to assess the impact of the AV's states on the resulting trajectory. Our experiments reveal that the planner can learn to exploit shortcuts from its kinematic states, even when past motion data is absent. To mitigate this challenge, we introduce a straightforward yet highly effective attention-based state dropout encoder, enabling planners that utilize kinematic states to achieve the best overall performance.
Imitation learning is also known to have compounding errors [17]. Perturbation-based augmentations [4, 5, 6] are a commonly employed strategy to instruct the planner on recovering from deviations. We conduct comprehensive experiments exploring various augmentation techniques, including history perturbation, state perturbation, and future correction. Additionally, we demonstrate the indispensability of proper normalization for the effectiveness of augmentation. Furthermore, we identify an ignored imitation gap within current learning frameworks and illustrate its potential impact.
Finally, by combining our findings, we provide a pure learning-based baseline model that demonstrates strong performance against state-of-the-art competitors on our standardized nuPlan benchmark. Our contributions are summarized as follows:
1. We perform an in-depth investigation on necessary features for ego planning, yielding counter-intuitive results contrary to mainstream practices. Furthermore, we introduced an effective attention-based state dropout encoder that attains the highest overall performance.
2. We conducted a comprehensive array of experiments involving various augmentation techniques, thereby elucidating an effective strategy to mitigate compounding errors. Additionally, we identified an overlooked imitation gap in current learning frameworks.
3. By combining our findings, we provide an open baseline model with strong performance. All our code, benchmarks, and models will be publicly released, as a reference for future research.
## II Related Work
**Imitation-based planners** are highly favored among learning-based planners due to their ease of convergence and typical scalability with data. They can be categorized into two distinct groups based on their input types:
1) _End-to-end_. End-to-end (E2E) methods [18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] directly produce future trajectories using raw sensor inputs. Leveraging the closed-loop CARLA benchmark [30] and the collaborative efforts of the open-source community, E2E methods have achieved remarkable advancements within a short span of time: evolving from initial basic CNN-based approaches (LBC [23], CILRS [22]) to encompass multi-modal fusion (Transfuser [26], NEAT [25], MMFN [27], Interfuser [26], ThinkTwice [29]), as well as incorporating integrated perception and planning strategies (LAV [24], ST-P3 [19], VAD [21]). However, due to limitations posed by the simulated environment, these methods typically function at low vehicle speeds, and the behavior of the simulated traffic agents lacks realism and diversity. Emerging and intriguing research, such as data-driven traffic simulation [31, 32] and realistic sensor emulation [33, 34], holds the potential to mitigate these issues.
2) _Mid-to-mid_. These approaches [35, 6, 10, 11, 12, 4, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] utilize post-perception outcomes as input and can directly learn from recorded real-world data. Chauffernet [4] introduces the synthesis of perturbed trajectories to mitigate covariate shift, a practice that becomes common in subsequent studies. [5] further augment the training data with on-policy rollouts. Several works have demonstrated the capability to operate real vehicles (SafetyNet [6], UrbanDriver [7], SafePathNet [9]). Many include a post-optimizer (DIPP [11], GameFormer [10], hotplan [35], pegasus [37]) to enhance the planner's robustness. All the abovementioned methods except hotplan use AV's history motion. Our study focuses on this category and provides an in-depth investigation of several critical design choices based on standardized data and benchmarks.
**Beyond imitation.** Another line of research aims to overcome the inherent limitations [14, 15, 16] of pure imitation learning (IL), such as utilizing environmental losses [4, 5], integrating IL with reinforcement learning [38, 39, 40], and incorporating adversarial training, also known as closed-loop training [41, 42, 43]. Our work shows that the pure IL-based planner has not reached its limit and can be significantly improved with appropriate design.
## III Rethink Imitation-based Planner
We consider the task of urban navigation employing a learned planner, trained by imitating the expert trajectory from the dataset. At each planning iteration, the planner receives various inputs, such as tracking data of surrounding objects up to a 2-second historical window, the current and past kinematic states of the ego vehicle, information about traffic lights, high-definition (HD) maps, speed limits, and the designated route. The planner is tasked with generating a trajectory for the subsequent 8 seconds. It is essential to note that, unless otherwise stated, we employ the unaltered trajectory output from the planner in this paper. We intentionally avoid incorporating performance-enhancing techniques, such as rule-based emergency stops or post-optimization, to assess the planner's inherent performance.
**nuPlan**[13] is a large-scale closed-loop ML-based planning benchmark for autonomous vehicles. The dataset encompasses 1300 hours of recorded driving data collected from four urban centers, segmented into 75 scenario types using automated labeling tools.
**Simulation.** We use nuPlan's closed-loop simulator as our simulation environment. Each simulation entails a 15-second rollout at a rate of 10 Hz. It employs an LQR controller for trajectory tracking, while the control commands are utilized to update the state of the autonomous vehicle through an internal kinematic bicycle model. The behavior of background traffic varies based on the simulation mode, which can be non-reactive (log-replay) or reactive.
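As a rough illustration of how such control commands propagate the ego state, a minimal kinematic bicycle update is sketched below; the wheelbase, time step and state layout are illustrative assumptions and differ in detail from nuPlan's internal model.

```
import math
from dataclasses import dataclass

@dataclass
class EgoState:
    x: float
    y: float
    yaw: float
    v: float
    steer: float

def bicycle_step(s: EgoState, accel: float, steer_rate: float,
                 dt: float = 0.1, wheelbase: float = 3.1) -> EgoState:
    # propagate the pose with the current velocity and steering angle,
    # then integrate the control commands (acceleration, steering rate)
    x = s.x + s.v * math.cos(s.yaw) * dt
    y = s.y + s.v * math.sin(s.yaw) * dt
    yaw = s.yaw + s.v * math.tan(s.steer) / wheelbase * dt
    v = s.v + accel * dt
    steer = s.steer + steer_rate * dt
    return EgoState(x, y, yaw, v, steer)
```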
**Metrics.** We employ the official evaluation metrics provided by nuPlan, which include the open-loop score (OLS), non-reactive closed-loop score (NR-CLS), and reactive closed-loop score (R-CLS). R-CLS and NR-CLS share identical calculation methodologies, differing only in that R-CLS incorporates background traffic control via an Intelligent Driver Model (IDM) [44] during simulations. The closed-loop score is a comprehensive composite score, achieved through a weighted combination of factors such as traffic rule adherence, human driving resemblance, vehicle dynamics, goal attainment, and other metrics specific to the scenario. The score scales from 0 to 100. For a detailed description and calculation of the metrics, please refer to [45].
**Baseline.** As a baseline, we have adapted the motion-forecasting backbone model from our prior work [46] to address the planning task. Figure 1 provides a concise overview of the baseline model. Despite its simplicity, the architecture primarily comprises multiple Transformer encoders [47], demonstrating significant modeling capacity. We direct interested readers to the code base for details.
**Benchmark.** For all experiments, we standardize the data split for training and evaluation. For the training phase, we utilize all 75 scenario types in the nuPlan training set, limiting the total number of scenarios to 1M frames. For the
Fig. 1: A brief overview of our baseline model. Agents, map, and ego features are separately encoded and then concatenated; the concatenated features are subsequently processed by a stack of Transformer encoder layers. The baseline model jointly predicts traffic agents and plans for the ego vehicle at the scene level.
evaluation phase, we employ 14 scenario types specified by the nuPlan Planning Challenge, each comprising 20 scenarios. We examine two scenario selection schemes: (1) **Test14-random**: scenarios are randomly sampled from each type and fixed after selection, and (2) **Test14-hard**: in order to investigate the planner's performance on long-tail scenarios, we execute 100 scenarios of each type using a state-of-the-art rule-based planner (PDM-Closed [48]), subsequently selecting the 20 least-performing scenarios of each type. Example scenarios can be found on the project page. As the online leaderboard submission is closed, all evaluations are conducted on the nuPlan public test set.
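The Test14-hard selection can be summarised by the following sketch, where `sample_scenarios` and `run_closed_loop` are hypothetical placeholders for the nuPlan scenario-filtering and simulation utilities.

```
def build_test14_hard(scenario_types, sample_scenarios, run_closed_loop,
                      n_candidates=100, n_keep=20):
    # For each scenario type: simulate a strong rule-based planner on the
    # candidate scenarios and keep the ones with the lowest closed-loop score.
    benchmark = {}
    for stype in scenario_types:
        candidates = sample_scenarios(stype, n_candidates)
        scored = [(run_closed_loop(s, planner="PDM-Closed"), s) for s in candidates]
        scored.sort(key=lambda pair: pair[0])        # ascending closed-loop score
        benchmark[stype] = [s for _, s in scored[:n_keep]]
    return benchmark
```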
### _Input feature makes a difference_
This section aims to address the following questions: (1) _Is historical motion data essential for planning?_ (2) _If not, do all current states of autonomous vehicles contribute to improving the planner's performance?_ To address these inquiries, we conducted an investigation involving two sets of variants derived from our baseline model. The results on Test14-random and Test14-hard benchmarks are presented in Table I. Among the two historical variants, one shares its history encoder with other traffic agents, while the other employs a distinct history encoder for the ego vehicle's past motion. In the case of state-only models, we scrutinized several pivotal state variables essential for conventional planners, encompassing vehicle pose, velocity, acceleration, and steering angle. Based on the experimental results, we have the following findings:
**History is not necessary.** While models incorporating historical motion data exhibit superior off-policy evaluation performance (OLS), they manifest significantly poorer performance in closed-loop metrics compared to state-based models. This phenomenon may be attributed to the well-established "copycat" problem [16] or learning shortcuts [49], wherein the planner relies on extrapolation from historical data without a comprehensive grasp of the underlying causal factors. Furthermore, the advantage in open-loop performance of historical models diminishes rapidly as the number of states increases in state-only models. Therefore, we conclude that historical motion is not necessary for planning models.
**Shortcut learning in kinematic states.** Kinematic states, such as velocity and acceleration, serve as vital initial boundary conditions for ensuring safety and comfort in trajectory planning. Nevertheless, we are surprised to find that the _state3_ model, which exclusively relies on the autonomous vehicle's (AV) pose (comprising position and heading), significantly outperforms other models incorporating kinematic states in terms of CLS metrics. To gain deeper insights into this phenomenon, we study a left-turn case of the _state6_ model. As depicted in Figure 1(a), the model generates an undesired off-road trajectory when adjusting the AV's steering
\begin{table}
\begin{tabular}{c l|c c c|c c c} \hline \hline \multicolumn{2}{c}{Models} & \multicolumn{3}{c}{Test14-random} & \multicolumn{3}{c}{Test14-hard} \\ \hline Input feature & Variants & OLS & NR-CLS & R-CLS & OLS & NR-CLS & R-CLS \\ \hline w/ history & shared encoder & 90.20 & 56.50 & 56.28 & **88.25** & 48.60 & 51.32 \\ & separate encoder & **90.28** & 61.02 & 59.85 & 86.77 & 51.98 & 49.34 \\ \hline & state3 (x, y, yaw) & 81.13 & **85.99** & **79.38** & 71.43 & 68.44 & **63.14** \\ & state4 (+vel.) & 86.42 & 81.32 & 75.75 & 82.30 & 68.15 & 62.51 \\ w/o history & state5 (+acc.) & 87.71 & 81.76 & 74.51 & 84.54 & **68.67** & 54.91 \\ & state6 (+steer) & 88.45 & 83.32 & 77.52 & 85.93 & 65.15 & 55.99 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Results of different input features. For models with history input, “shared” and “separate” encoders indicate whether both the agent and ego vehicles utilize a shared history encoder or distinct ones. For models without history input, \(+\) refers to the inclusion of an additional state variable compared to the preceding model in the table. Higher values indicate better performance for all metrics, with the best metric highlighted in **bold**.
Fig. 3: Illustration of the attention-based state dropout encoder.
Fig. 2: The left side shows the planning trajectory of the _state6_ model when adjusting the AV's steering angle from 0.15 to 0.5 rad. The right side illustrates the magnitude of the gradient of the trajectory endpoint's position with respect to the AV's kinematic states.
angle from 0.15 (blue) to 0.5 (red) rad. We hypothesize that the model still learns false correlations from the kinematic states even without the presence of past observations.
**State dropout encoder.** To test our assumption, we propose an attention-based state dropout encoder (SDE), as shown in Figure 3. Each state variable undergoes individual embedding through a linear layer before being combined with positional encoding. A learnable query aggregates state embeddings through a cross-attention module. During training, each embedded state token (except position and heading) is dropped with a certain probability. The encoder compels the model to unveil the root causes of behaviors by imposing partial constraints on its access to auxiliary information. Meanwhile, the model can enhance its planning capabilities when kinematic attributes are accessible. We implement the state dropout encoder in the _state5_ and _state6_ models, and the results are depicted in Table II. The results indicate that the utilization of SDE significantly enhances the closed-loop performance of the models. Importantly, when compared to _state3_, _state5_ and _state6_ models augmented with SDE exhibit not only improved closed-loop scores but also substantially higher open-loop scores, providing compelling evidence for the efficacy of SDE. We point out that the _state3_ model is fundamentally ambiguous as it loses all kinematic information (supported by its poor OLS performance). Figure 2(a)(b) displays the comparative planning results of the _state6_ model, while Figure 2(c)(d) presents comparative results for the magnitude of the gradient of the endpoint's position \((X_{T},Y_{T})\) w.r.t. the initial kinematic states \(s_{0}\). The results demonstrate that the model employing SDE is less sensitive to variations in kinematic states, resulting in more resilient planning outcomes.
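A minimal PyTorch-style sketch of such an encoder is given below; the embedding width, the number of attention heads, and the choice to zero out dropped tokens (rather than masking them from the attention) are illustrative assumptions, not the released implementation.

```
import torch
import torch.nn as nn

class StateDropoutEncoder(nn.Module):
    """Embed each scalar state, randomly drop non-pose tokens during training,
    and aggregate the remaining tokens with a learnable cross-attention query."""

    def __init__(self, n_states: int = 6, dim: int = 128, p_drop: float = 0.75):
        super().__init__()
        self.embeds = nn.ModuleList([nn.Linear(1, dim) for _ in range(n_states)])
        self.pos_enc = nn.Parameter(torch.randn(n_states, dim) * 0.02)
        self.query = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.p_drop = p_drop

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        # states: (B, n_states), ordered as (x, y, yaw, vel, acc, steer)
        tokens = torch.stack(
            [emb(states[:, i:i + 1]) for i, emb in enumerate(self.embeds)], dim=1
        ) + self.pos_enc                                    # (B, n_states, dim)
        if self.training:
            keep = (torch.rand(tokens.shape[0], tokens.shape[1], 1,
                               device=tokens.device) > self.p_drop).float()
            keep[:, :3] = 1.0                               # never drop x, y, yaw
            tokens = tokens * keep
        q = self.query.expand(tokens.shape[0], -1, -1)
        out, _ = self.attn(q, tokens, tokens)               # (B, 1, dim)
        return out.squeeze(1)
```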
### _Data augmentation and normalization_
Data augmentation is a common practice for IL-based models to learn how to recover from deviations. In this section, we conduct comprehensive experiments on different data augmentation techniques, aiming to explore effective strategies to mitigate compounding errors. Different augmentation strategies are displayed in Figure 4. In Figure 4(a), an example driving scenario is depicted, with all coordinates normalized relative to the autonomous vehicle's center. In Figure 4(b), randomly sampled noise (perturbation) is added to the AV's current state, and its history states are smoothed accordingly. In Figure 4(c), it is demonstrated that the scenario's coordinates are re-normalized with respect to the autonomous vehicle's center after perturbation. Figure 4(d) showcases the generation of a rectified future trajectory through nonlinear optimization. Essentially, both strategies depicted in Figure 4(b) and 4(d) serve the common objective
of guiding the vehicle back to the expert trajectory.
Table III displays the outcomes of experiments conducted with various augmentation strategies on four model variants. Based on these results, the following findings emerge: (1) In the case of the _history_ and _state5_ models, none of the data augmentations exhibit substantial enhancements. We postulate that the primary challenge faced by these two models is the issue of causal confusion, _i.e_. extrapolation from either historical or kinematic states. (2) For the _state3_ and _state6+SDE_ models, perturbation is of great importance, but it only works with proper normalization. For example, the _state3_ model's NR-CLS score increases from 71.86 to 85.99 with perturbation and re-normalization, much higher than with perturbation alone (74.28). This implies that it is important to keep the training and testing data distributions close. (3) Providing a corrected guiding future trajectory does not have a positive effect. One possible reason is that the manually generated trajectory does not align with the expert's trajectory distribution. Directly using the expert trajectory as supervision is a more effective choice as it keeps the original distribution, and small deviations can be easily fixed by the tracker.
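A sketch of the perturb-then-re-normalise augmentation (Figures 4(b)-(c)) is shown below; the noise scales are illustrative, and the original expert future is kept as the supervision target as described above.

```
import numpy as np

def perturb_and_renormalize(ego_pose, scene_xy, expert_future_xy, rng):
    # ego_pose = (x, y, yaw); scene_xy and expert_future_xy are (N, 2) arrays
    # of world coordinates. Noise scales below are illustrative choices.
    dx, dy = rng.normal(0.0, 0.5, size=2)        # position noise in metres
    dyaw = rng.normal(0.0, 0.1)                  # heading noise in radians
    new_pose = np.array([ego_pose[0] + dx, ego_pose[1] + dy, ego_pose[2] + dyaw])

    # Re-normalise: express all coordinates in the perturbed ego frame, while
    # keeping the unmodified expert future as the regression target.
    c, s = np.cos(new_pose[2]), np.sin(new_pose[2])
    rot = np.array([[c, s], [-s, c]])            # world -> ego rotation
    to_ego = lambda pts: (np.asarray(pts) - new_pose[:2]) @ rot.T
    return new_pose, to_ego(scene_xy), to_ego(expert_future_xy)
```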
### _The hidden imitation gap_
**The imitation gap.** Within the most popular imitation learning frameworks, models imitate the logged expert's footprints from the dataset. We argue that this learning framework gives rise to a concealed gap in imitation, potentially leading to notable performance degradation. As illustrated in Figure 5, the recorded trajectory, commonly known as the expert trajectory, serves as the ground truth during the training of the imitation-based planner. The generated imitated trajectory is subsequently processed by the downstream tracker and the underlying system dynamics, yielding the final trajectory of the AV. Nevertheless, owing to the lack of knowledge about the tracker and dynamics during training, the actual trajectory may substantially deviate from the recorded trajectory, even when imitation is flawless. This assertion finds support in the experimental findings presented in Table IV. Notably, the NR-CLS of the _Log-replay_ + _LQR_ method on Test14-hard exhibits a significant decrease of \(5.65\) in comparison to perfect tracking.
**RL Adapter.** One possible solution is to directly imitate the control command rather than the trajectory points. Nevertheless, this approach is heavily reliant on the specific vehicle model, making it less generalizable and interpretable than the trajectory-based method. An alternative approach involves incorporating a differentiable kinematic model into the trajectory decoder [5, 51]. However, the kinematic model is often oversimplified to ensure differentiability. To tackle this challenge, we introduce a reinforcement learning-based trajectory adapter (RL Adapter) designed to bridge this gap. The RL Adapter transforms the imitated trajectory into the relevant control commands while accounting for the underlying dynamics. The benefits are two-fold. First, it can adapt to various vehicle models without retraining the planner. Second, it imposes no constraints on the vehicle model and remains compatible with non-differentiable vehicle models (_e.g_. a high-fidelity real-vehicle dynamics model). The training process of the adapter is displayed in Figure 5 and the rewards are shown in Table V. We use PPO [52] for policy optimization and the training finishes in 80K steps with a learning rate of 1e-3. As depicted in Table IV, the RL Adapter performs similarly to perfect tracking, highlighting its capacity to bridge the imitation gap. We note that it could be integrated into the training process of the planner and leave this as future work.
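A per-step reward assembled from the terms of Table V might look as follows; interpreting the indicator arguments as the ego acceleration, jerk and yaw rate is an assumption based on the term names.

```
import numpy as np

def adapter_reward(p_xy, p_ref, u, du, lon_acc, jerk, yaw_rate):
    # p_xy/p_ref: current and reference positions; u = (accel, steer_rate);
    # du: change of action between steps. Weights and thresholds follow Table V.
    r = np.exp(-15.0 * np.linalg.norm(np.asarray(p_xy) - np.asarray(p_ref)))
    r += -0.01 * float(np.sum(np.square(u)))     # action magnitude
    r += -0.1 * float(np.sum(np.square(du)))     # action rate
    r += -1.0 * float(lon_acc > 2.4)             # longitudinal acceleration limit
    r += -1.0 * float(abs(jerk) > 4.0)           # jerk limit
    r += -0.5 * float(abs(yaw_rate) > 0.95)      # yaw-rate limit
    return r
```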
## IV Comparison to State of the Art
**Implementation details.** Integrating our findings, we propose a fully learning-based baseline planning model called
\begin{table}
\begin{tabular}{l c c} \hline \hline Reward Term & Expression & Weight \\ \hline Position Tracking & \(e^{-15||\mathbf{p}_{xy}-\mathbf{p}_{xy}^{*}||}\) & 1.0 \\ Action & \(||\mathbf{u}||^{2}\) & -0.01 \\ Action Rate & \(||\dot{\mathbf{u}}||^{2}\) & -0.1 \\ Lon. Acc. limit & \(\mathbbm{1}(\dot{v}>2.4)\) & -1 \\ Jerk limit & \(\mathbbm{1}(||\ddot{v}||>4.0)\) & -1 \\ Yaw rate & \(\mathbbm{1}(||\dot{\theta}||>0.95)\) & -0.5 \\ \hline \hline \end{tabular}
\end{table} TABLE V: The reward terms and expressions of the RL adapter. The action \(\mathbf{u}\) contains acceleration and steering rate. \(v\) and \(\theta\) refer to the longitudinal velocity and heading angle of the AV.
\begin{table}
\begin{tabular}{l|c c|c c} \hline \hline \multicolumn{5}{c}{Test14-random} & \multicolumn{3}{c}{Test14-hard} \\ \hline Log-replay + & NR-CLS & R-CLS & NR-CLS & R-CLS \\ \hline Perfect tracking & 96.63 & 77.38 & 91.61 & 71.34 \\ \hline LQR & 94.03 & 75.86 & 85.96 & 68.80 \\ RL Adapter & 96.3 & 77.13 & 91.65 & 71.62 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Experimental results of the log-replay planner (perfect imitation) with different trackers on Test14-random and Test14-hard benchmarks. LQR is the default tracker used by the nuPlan benchmark and the RL adapter is our proposed method to address the imitation gap.
Fig. 5: Illustration of the imitation gap and the proposed RL adapter.
Planning Transformer (**PlanTF**). Specifically, we employ the _state6_ model, incorporating a state attention dropout encoder with a dropout rate of 0.75. During training, we apply state perturbation with a probability of 0.5. The model is trained using a batch size of 128 and a weight decay of 1e-4 for 25 epochs. The initial learning rate is set to 1e-3, decaying to zero in a cosine manner.
**Methods.** We compare PlanTF's performance with several state-of-the-art planners. (1) **RasterModel** is a CNN-based planner provided in [13]. (2) **UrbanDriver**[7] is a vectorized planner based on PointNet-based polyline encoders and Transformer. Here we use its open-loop re-implementation and history perturbation is employed during training. (3) **GameFormer**[10] is a DETR-like interactive prediction and planning framework based on the level-k game, which incorporates a post-optimizer to generate the final trajectory. (4) **PDM*[48]** is the winning solution of the 2023 nuPlan Planning Challenge. **PDM-Closed** is a purely rule-based approach that ensembles the IDM [44] with different hyper-parameters. **PDM-Hybrid** is a variant of PDM-closed that adds an offset predictor to improve its open-loop prediction performance. **PDM-Open** is the pure learning component without the IDM-based planner. Results are reproduced using their publicly available code and trained on our standard 1M data split.
**Results.** Table VI presents comparative results for the Test14-random and Test14-hard benchmarks. First, the proposed PlanTF significantly outperforms all other pure imitation-based methods across all metrics, particularly in terms of closed-loop performance. It is also the only learning-based method that surpasses the widely recognized IDM, highlighting the importance of proper design in IL. Second, when compared to rule-based and hybrid methods, PlanTF delivers outstanding OLS while maintaining highly competitive CLS, without the need for any tricky hand-crafted rules or strategies. Notably, our approach achieves the highest NR-CLS on the Test14-hard benchmark, indicating that although rule-based methods perform well in ordinary scenarios (Test14-random), they struggle to generalize in long-tail situations (Test14-hard). In contrast, PlanTF demonstrates stronger generalization capabilities.
## V Conclusion
In this study, we systematically examine several crucial design aspects of imitation-based planners by utilizing the standardized nuPlan benchmark. Our findings reveal that catastrophic shortcut learning generally occurs for input features, such as historical motions and single-frame kinematic states. This leads to the unexpected outcome that planning solely based on the AV's current position results in superior closed-loop performance. To mitigate this issue, we introduce a straightforward attention-based state dropout encoder (SDE) that effectively addresses the shortcut learning problem. With the implementation of SDE, the _state6_ model achieves the best overall performance. Data augmentation is another significant factor in imitation-based planners. Our results demonstrate that perturbation is vital for reducing compounding errors, but only effective with appropriate feature normalization. Furthermore, we observe that the original expert trajectory remains a reliable training ground truth, even when subjected to perturbation. In addition to these findings, we identify a neglected imitation gap caused by the model's lack of awareness of the underlying system dynamics, which considerably impacts the planner's performance. To rectify this issue, we propose a reinforcement learning-based adapter. By incorporating our findings, the proposed purely learning-based baseline model, PlanTF, demonstrates impressive performance compared to state-of-the-art approaches and is on par with methods that employ intricate rule-based strategies or post-optimization. This highlights the importance of proper design choices for imitation learning-based planners.
**Limitation and future work.** Despite pushing the boundaries of pure imitation-based planners, our method is constrained by the fundamental mismatch between _open-loop
\begin{table}
\begin{tabular}{l l|c c c|c c c|c} \hline \hline \multicolumn{2}{c}{Planners} & \multicolumn{4}{c}{Test14-random} & \multicolumn{4}{c}{Test14-hard} \\ \hline Type & Method & OLS & NR-CLS & R-CLS & OLS & NR-CLS & R-CLS & Time(ms) \\ \hline Expert & Log-replay & 100.0 & 94.03 & 75.86 & 100.0 & 85.96 & 68.80 & - \\ \hline \multirow{2}{*}{Rule-based} & IDM [44] & 34.15 & 70.39 & 72.42 & 20.07 & 56.16 & 62.26 & 32 \\ & PDM-Closed [48] & 46.32 & 90.05 & **91.64** & 26.43 & 65.07 & 75.18 & 140 \\ \hline \multirow{2}{*}{Hybrid\({}^{\dagger}\)} & GameFormer [10] & 79.35 & 80.80 & 79.31 & 75.27 & 66.59 & 68.83 & 443 \\ & PDM-Hybrid [48] & 82.21 & **90.20** & 91.56 & 73.81 & 65.95 & **75.79** & 152 \\ \hline \multirow{4}{*}{Learning-based} & RasterModel [13] & 62.93 & 69.66 & 67.54 & 52.4 & 49.47 & 52.16 & 82 \\ & UrbanDriver [7] & 82.44 & 63.27 & 61.02 & 76.9 & 51.54 & 49.07 & 124 \\ \cline{1-1} & GC-PGP [50] & 77.33 & 55.99 & 51.39 & 73.78 & 43.22 & 39.63 & 160 \\ \cline{1-1} & PDM-Open [48] & 84.14 & 52.80 & 57.23 & 79.06 & 33.51 & 35.83 & 101 \\ \cline{1-1} & PlanTF (Ours) & **87.07** & 86.48 & 80.59 & **83.32** & **72.68** & 61.7 & 155 \\ \hline \hline \end{tabular}
\end{table} TABLE VI: Comparison with state-of-the-arts. The runtime includes feature extraction and model inference based on Python code. \({}^{\dagger}\)_indicates these methods’ final output trajectory relies on rule-based strategies or post-optimization._
training_ and _closed-loop testing_. Incorporating closed-loop information and system dynamics into the training process constitutes our future research direction.
## Appendix
**Additional results on Val14 benchmark.** We present the comparative results (Table VII) on the **Val14**[48] benchmark. **Val14** contains 1180 scenarios from 14 scenario types.
**Ablation on the state dropout rate.** Table VIII shows the ablation study on different dropout rates of the _state6_+_SDE_ model.
|
2309.09112 | Rewriting History: Repurposing Domain-Specific CGRAs | Coarse-grained reconfigurable arrays (CGRAs) are domain-specific devices
promising both the flexibility of FPGAs and the performance of ASICs. However,
with restricted domains comes a danger: designing chips that cannot accelerate
enough current and future software to justify the hardware cost. We introduce
FlexC, the first flexible CGRA compiler, which allows CGRAs to be adapted to
operations they do not natively support.
FlexC uses dataflow rewriting, replacing unsupported regions of code with
equivalent operations that are supported by the CGRA. We use equality
saturation, a technique enabling efficient exploration of a large space of
rewrite rules, to effectively search through the program-space for supported
programs. We applied FlexC to over 2,000 loop kernels, compiling to four
different research CGRAs and 300 generated CGRAs and demonstrate a 2.2$\times$
increase in the number of loop kernels accelerated leading to 3$\times$ speedup
compared to an Arm A5 CPU on kernels that would otherwise be unsupported by the
accelerator. | Jackson Woodruff, Thomas Koehler, Alexander Brauckmann, Chris Cummins, Sam Ainsworth, Michael F. P. O'Boyle | 2023-09-16T23:58:55Z | http://arxiv.org/abs/2309.09112v1 | # Rewriting History: Repurposing Domain-Specific CGRAs
###### Abstract
Coarse-grained reconfigurable arrays (CGRAs) are domain-specific devices promising both the flexibility of FPGAs and the performance of ASICs. However, with restricted domains comes a danger: designing chips that cannot accelerate enough current and future software to justify the hardware cost.
We introduce _FlexC_, the first flexible CGRA compiler, which allows CGRAs to be adapted to operations they do not natively support. FlexC uses dataflow rewriting, replacing unsupported regions of code with equivalent operations that are supported by the CGRA. We use equality saturation, a technique enabling efficient exploration of a large space of rewrite rules, to effectively search through the program-space for supported programs.
We applied FlexC to over 2,000 loop kernels, compiling to four different research CGRAs and 300 generated CGRAs and demonstrate a 2.2\(\times\) increase in the number of loop kernels accelerated leading to 3\(\times\) speedup compared to an Arm A5 CPU on kernels that would otherwise be unsupported by the accelerator.
## I Introduction
Specialized hardware has demonstrated truly significant performance gains over general-purpose processors [1], yet despite its potential [2, 3], it faces real challenges to wider adoption [4]. The fundamental reason is that programming such accelerators is difficult [5], often requiring modification of the underlying algorithms [4]. Users are often reluctant to modify their algorithms [6], raising frequency-of-use [7, 8, 9] and cost [10] as concerns.
Heterogeneous Coarse-Grained Reconfigurable Architectures (CGRAs) [11] are a class of architectures that promise to solve this problem [7]. CGRAs can achieve near-ASIC level performance [12] and provide enough flexibility to run a wider class of code [7]. Heterogeneous CGRAs use processing elements specialized to various degrees [13]. While specialization makes hardware more efficient [14, 15, 16], hardware specialization also introduces limitations on the software [17, 18, 19].
Despite aiming at flexibility, heterogeneous CGRAs are hard to use beyond the scope they were designed for. They age poorly as software evolves [20] and falls out of the scope of the narrowly designed hardware: the _domain-restriction problem_.
This problem is highlighted by existing state-of-the-art CGRA compilers such as OpenCGRA [21] which frequently fail to generate code for the specialized hardware. If code contains even a single operation that is unsupported by a particular hardware, existing techniques simply cannot accelerate it, restricting CGRAs to an overly narrow software domain. This domain-restriction poses a significant challenge and is not well understood [22]. What we need is a new approach that _automatically transforms_ user programs to fit heterogeneous CGRAs, expanding the domain of supported software without user effort.
We introduce _FlexC_, the first _flexible_ CGRA compiler that addresses the domain-restriction problem. FlexC uses a set of rewrite rules that translate unsupported operations into supported ones. This compilation strategy requires a non-trivial application of rewrites in an attempt to find a valid transformation to an expression the CGRA supports, leading to a large search space. To explore this space efficiently, FlexC uses a powerful technique called equality saturation [23, 24]. CGRA compilation presents a number of unique challenges to equality saturation, including, crucially, transformation encoding and cost modelling. Overcoming these challenges enables us to efficiently explore large spaces.
In summary, we contribute:
* FlexC, the first _flexible_ CGRA toolchain designed to support operationally-specialized CGRAs, increasing the number of loops that can run on a particular CGRA by a factor of 2.2\(\times\);
* a compiler designed to integrate domain-specific rewrite rules, and four sets of rewrite rules demonstrating the effective translation of code to run on CGRAs designed for different domains;
* the first large-scale benchmark suite for CGRA compilers, with more than 2,000 loops from five different projects1;
* an evaluation of these tools, demonstrating the importance of non-linear exploration techniques like equality saturation in finding working compilation sequences for real-world heterogeneous CGRAs.
## II Motivation
Heterogeneous CGRAs have the potential for better power efficiency and lower area utilization than their homogeneous counterparts [27, 14]. However, introducing this heterogeneity introduces significant compilation challenges.
### _The Software Domain-Restriction Problem_
The cost of the specialized hardware has to be justified by enough use [7] and demand [10]. To illustrate this problem, consider the loop shown in Figure 1 from the FFMpeg library. We wish to compile this code to the CCA-like accelerator adapted from DSAGen [26] shown on the left of the figure. Unfortunately, the loop contains multiplication and subtraction operators which are not supported by the CGRA. Currently, no compiler technique is able to generate code that is executable on this accelerator. Our approach is able to rewrite the program into the form shown at the bottom of the figure. It no longer uses subtraction or multiplication, but instead uses additions and shifts, which are supported. The loop can now be executed on the CCA.
### _Limits of existing compilers_
To overcome the domain-restriction problem, we need compilers to rewrite software that uses operations not natively supported by the hardware, but existing approaches fail.
_Standard compiler flows._ Compiler frameworks, such as GCC, LLVM and MLIR, use canonicalization passes to transform IRs into a predictable and efficient form. Canonicalization is implemented with a set of simple, fixed rewrite rules that are applied greedily. For heterogeneous CGRAs, however, canonicalization does not solve the domain-restriction problem, as the rewritten expressions may not be supported by the target hardware.
_The Limits of Greedy Dataflow Rewriting._ For successful rewriting, we need to add rewrite rules that translate unsupported operations into supported ones. However, we have to determine whether and in which order to apply rewrites without searching a combinatorially large number of options. Greedy rewriting is an efficient approach, but Figure 2 highlights three problems that can cause greedy rewriters to get stuck. In Figure 2(a), greedily applying the first available rules \(r1,r6,r3\) to expression \(e_{1}\) leads to the resulting expression \(e_{10}\), which is less performant than the optimal expression \(e_{4}\). In Figure 2(b), greedily applying the first available rule leads to a cycle between \(e_{1}\) and \(e_{3}\), never reaching the solution \(e_{4}\). Finally, in Figure 2(c), the greedy rewriter gets stuck in a local minimum, \(e_{2}\), due to the cost of applying further local rewrites.
Figure 1 demonstrates these limitations in real code. To convert a - b into a + -1 ^ b + 1, the rewriter must traverse the a + (- b) state, which is no better than a - b. As there is no immediate improvement in cost, a greedy scheme would not apply such a rewrite. Equality saturation, however, applies this rewrite, leading to transformed code that is executable on the accelerator.
### _Our approach_
FlexC automatically adapts the software, replacing unsupported operations via dataflow rewriting. FlexC adaptively chooses between traditional greedy rewriting techniques and equality saturation [23, 28]. This overcomes challenges with traditional canonicalization and greedy techniques while retaining fast execution where possible. Equality saturation uses a graph data structure, called an e-graph, to record semantically equivalent programs while space-efficiently representing their different syntactic program variations. Rewrites are directly applied to this graph, rapidly growing the set of equivalent program variations. Rewrites are either applied until satura
Fig. 1: An example from the FFMpeg [25] library, which is part of our benchmark suite. FlexC rewrites the loop to run within the context of the CCA-like accelerator adapted from DSAGen [26]. Equality saturation is critical in this example to enable the conversion of a - b into a + -1 ^ b + 1, as the rewriter must traverse the a + (-b) state, which is no better than a - b. This is an example of the cost-trap problem (Figure 2c).
tion is reached and no rewrites can be applied any more, or until a pre-defined rewrite goal is reached [29].
Our results confirm that equality saturation enables FlexC to compile more software to the CGRA.
## III System Overview
FlexC is implemented in OpenCGRA [21], a CGRA compiler intended to target heterogeneous CGRAs. Given an input DFG, FlexC explores sequences of rewrites to eliminate the operations that are not supported by a specialized architecture from the DFG. After rewriting the DFG, FlexC uses OpenCGRA to target the hardware.
Figure 3 shows how FlexC compiles software for a CGRA. In a traditional CGRA compiler, a Data-Flow Graph (DFG) is used to generate a CGRA configuration. If the DFG does not match the target CGRA precisely, the code generation fails.
FlexC adds a rewrite system, using a set of rewrite rules dictated by the context and a cost function based on the target CGRA. After selecting the optimal graph according to the cost model (the most likely DFG to be compilable to the underlying CGRA), FlexC uses a traditional CGRA compiler to generate the final mapping.
FlexC can be applied in conjunction with any CGRA compiler -- provided that appropriate rulesets using the right instructions can be supplied. We provide FlexC under a liberal license to allow this2.
Footnote 2: Released upon publication
### _Graph Rewriting_
FlexC translates programs to domain-specific CGRAs by generating a large set of equivalent code loops and finding a suitable match if one exists. This section formally defines our inputs: a dataflow graph representing a loop, a set of rewrite rules, and a CGRA specification and our rewriting strategy.
**Definition 1**: _A data flow graph \(D\) is a finite set of nodes \(N\) corresponding to operations \(op(n_{1},...,n_{m})\), where \(op\) is an operation symbol and \(n_{i}\in N\) are children operands._
\(D\) must be a _directed acyclic graph_, meaning that a function \(id:N\rightarrow|N|\) should exist such that:
\[\forall n=op(n_{1},...,n_{m})\in N.\ \forall i.\ id(n_{i})<id(n)\]
While OpenCGRA uses Control Data Flow Graphs (CDFGs), and thus can handle branches and loops, we do not attempt to rewrite across control-flow boundaries. Instead, we break all control flow before rewriting, and restore control flow after rewriting.
**Definition 2**: _A rewrite rule \(R\) is of the form \(l\Rightarrow r\), where \(l\) and \(r\) are patterns. A pattern \(P\) is a tuple \((N_{P},O_{P})\), where \(N_{P}\) is a data flow graph that may contain variable nodes on top of operation nodes, and \(O_{P}\subseteq N_{P}\) is a list of output nodes._
\(R\) can be applied to \(D\) when \(l\) has a match \((M_{l},\sigma)\) in \(D\), where \(M_{l}\) is a list of nodes from \(D\) matching \(O_{l}\), and \(\sigma\) maps variables to matching nodes from \(D\). To produce the list of nodes \(M_{r}\) that should replace the \(M_{l}\) nodes in \(D\), the variables are substituted in \(r\), written as \(r[\sigma]=(N^{\prime}_{r},M_{r})\).
A rewrite rule must be _semantics-preserving_. This means that \(\forall(M_{l},\sigma).\ M_{l}=M_{r}\), which depends on the element-wise application of a given semantic equality. The meaning of equality in this case depends on the rules provided. We will see in section IV that this may be true equality, fuzzy equality (e.g. with floating-point manipulation rules) or even weaker definitions of equality (e.g. with the stochastic computing rules of section IV-B3).
**Definition 3**: _In a CGRA, we have an array of processing elements, PEs (\(PE_{i}\)), each of which supports a particular set
Fig. 3: FlexC system overview. A Data-Flow Graph (DFG), set of Rewrite Rules and a CGRA specification are input. FlexC first applies a hybrid-rewrite strategy and selects the most suitable candidate to pass to the CGRA compiler, which generates the configuration.
Fig. 2: Applying rewrite rules with a greedy rewriter results in dead-ends that equality saturation avoids.
of operations (\(op(n_{1},...,n_{m})\)), \(\mathit{Supported}_{i}\). We generate this set from the CGRA's specification.
Given a particular DFG \(D\), with nodes \(N\), there may be some subset of nodes \(\mathit{Unsupported}(N)\) that have operations without hardware support _anywhere_ on the CGRA. We wish to find a sequence of rewrite rules that we can apply to the DFG to produce \(D^{\prime}\) with nodes \(N^{\prime}\) such that \(\mathit{Unsupported}(N^{\prime})=\{\}\), as otherwise it will be impossible to schedule that particular code onto the CGRA. We thus define the set of operations a particular full CGRA can support as:
\[\mathit{ops}=\bigcup_{i}\mathit{Supported}_{i}\]
#### Iii-B1 Rewriting Goal
The compiler takes a dataflow graph (DFG) \(D\) as input. Numerous existing techniques attempt to find a valid mapping [30], but in heterogeneous CGRAs, the operations in the DFG may not be in the supported set for _any_ node.
The goal of a rewriting algorithm \(A(D,Rs,\mathit{ops})\) is to return \(D^{\prime}\), obtained by rewriting \(D\) using the set of rules \(Rs\), such that \(D^{\prime}\) only uses operation symbols from \(\mathit{ops}\).
We further define a cost function \(C(D,\mathit{ops})\) to minimize:
\[\sum_{op(...)\in N}1\ \mathrm{if}\ op\in\mathit{ops}\ \mathrm{else}\ 10^{6}\]
This incentivizes smaller programs by assigning a cost of \(1\) to available operations, while heavily penalizing unavailable operations with a cost of \(10^{6}\).
With the assumption that \(|N|<10^{6}\), rewriting successfully eliminates all unavailable operations if \(C(D^{\prime},\mathit{ops})<10^{6}\), and fails to do so if \(C(D^{\prime},\mathit{ops})\geq 10^{6}\).
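For reference, the cost function reduces to a one-liner; here the DFG is abstracted to the multiset of its operation symbols, whereas Listing 1 applies the same function to a full DFG object.

```
def C(dfg_ops, supported):
    # 1 per supported operation, 10^6 per unsupported one
    return sum(1 if op in supported else 10**6 for op in dfg_ops)

print(C(["mul", "add", "shl", "add"], {"add", "shl"}))  # 1000003: the mul is unsupported
```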
#### Iii-B2 Greedy Rewriting
Listing 1 shows our greedy rewriter. Greedy rewriting is the most straightforward rewriting approach; it runs quickly but often gets stuck in local minima.
On each greedy iteration, we iterate over every rewrite rule to find matches (lines 6 to 8). If applying a rewrite for a given match leads to a cost reduction, we proceed with the rewritten program and forget about the previous program (lines 9 to 11). The local_minima variable keeps track of whether a fixed point was reached, which is the termination condition (line 3).
#### Iii-B3 Equality Saturation
Listing 2 shows our algorithm for rewriting via equality saturation. Equality saturation [23] is a more sophisticated rewriting approach; it avoids getting stuck in local minima but can be slow to execute. We leverage both the state-of-the-art Rust egg library [24] and existing work extending equality saturation to graph rewriting [31].
First, we initialize an e-graph data structure that compactly represents a space of equivalent programs by sharing equivalent sub-terms as much as possible (line 2). Then, we run the explorative phase of equality saturation using our set of rewrite rules, iteratively exploring possible rewrites in a breadth-first manner and growing the e-graph (line 3).
```
1   def greedy(d, rs, ops):
2       local_minima = False
3       while not local_minima:
4           local_minima = True
5
6           for r in rs:
7               matches = find_matches(d, r)
8               for m in matches:
9                   d2 = apply_match(d, m)
10                  if C(d2, ops) < C(d, ops):
11                      d = d2
12                      local_minima = False
13                      break
14
15      return d
```
Listing 1: Greedy rewriting algorithm
As visible in line 9, the explorative phase terminates when all possible rewrites have been explored (a fixed point, called saturation, is reached), or when another stopping criterion is reached (e.g. a timeout). On each explorative iteration, all rewrite-rule matches are collected (line 12) and applied in a non-destructive way, adding new equalities into the e-graph (line 14).
Finally, we extract the best program from the e-graph according to our cost function using Linear Programming (line 4).
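A minimal sketch of this procedure is given below, mirroring the pseudocode style of Listing 1 and numbered so that the line references above line up; the helper names (`init_egraph`, `find_matches`, `extract_best`) are placeholders for the egg-based implementation, and `extract_best` stands for the ILP-based extraction.

```
1   def eqsat(d, rs, ops):
2       egraph = init_egraph(d)
3       explore(egraph, rs)
4       return extract_best(egraph, ops)
5
6
7   def explore(egraph, rs):
8       iters = 0
9       while not egraph.saturated and iters < 30:   # or another stopping criterion
10          iters += 1
11          # collect all matches first, then apply them together
12          matches = [(r, m) for r in rs
13                     for m in find_matches(egraph, r)]
14          egraph.apply_nondestructive(matches)
15          egraph.rebuild()
```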
#### Iii-B4 Hybrid Rewriting
FlexC uses hybrid rewriting (Listing 3), which combines the strengths of both strategies. In hybrid rewriting, we first apply the fast greedy rewriter. If the greedy rewriter does not find a suitable candidate, FlexC falls back to the more expensive, but more likely to succeed, equality saturation.
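A minimal sketch of this fallback strategy is given below; greedy is the routine from Listing 1, C is the cost function defined above, and equality_saturation is a stand-in name for the equality-saturation rewriter.

```
def hybrid(d, rs, ops):
    # Try the cheap greedy rewriter first.
    d2 = greedy(d, rs, ops)
    if C(d2, ops) < 10**6:
        return d2  # greedy already eliminated all unavailable operations
    # Otherwise fall back to the slower, more thorough search.
    return equality_saturation(d, rs, ops)
```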
## IV Rewrite Rules
FlexC is a platform that can target any domain-specific CGRA, and can integrate domain-specific rules to work alongside traditional rules. Equality-saturation enables this flexibility by using the same rule exploration algorithms regardless of changes in the ruleset. We explore several different rulesets: some rules are always correct, while other rulesets may only be useful in certain domains, such as the stochastic-computing rewrite rules (section IV-B3).
In a traditional application of rewrite rules, compilers look to perform strength reduction [32], replacing more complex operations with simpler ones -- this is typically achieved by canonicalizing towards the simplest way of representing an expression. In a traditional compiler, a rule is typically formatted as:
\[\text{Complex Operation}\rightarrow\text{Simpler Operation}\]
A typical rewrite system produces a series of independent rewrites,
\[e_{1}\underset{\text{rule}_{1}}{\implies}\cdots\underset{\text{rule}_{N-1}}{\implies}e_{N}\]
to produce the best suited expression. The rules are written in such a way that they chain together, as they are in existing compilers. We stop rewriting when no more rules are applicable.
However, when compiling for a CGRA, replacing simpler operations with _more complex_ ones can be beneficial if they enable an entire region of code to be run on faster, fixed-function hardware.
As a result, for some sequence of rules
\[e_{1}\underset{\text{rule}_{1}}{\implies}\cdots\underset{\text{rule}_{i-1}}{\implies}e_{i}\implies\cdots\underset{\text{rule}_{N-1}}{\implies}e_{N}\]
some intermediate \(e_{i}\) may be the best choice of expression, and further, rule application can occur bidirectionally. Rather than strength reduction, which implies a linear sequence of operations that become strictly simpler, the process for compiling for a CGRA is instead _rewrite exploration_.
### _Core Integer Rules_
We use a set of strength-reduction and canonicalization rules representative of those in a typical compiler. An example is:
\[\text{x * -1 => -x}\]
On the left-hand side of this rule, we require a multiplication operation, and on the right-hand side, we require a negation operation. For most compilers, the right-hand side is (almost) always the better choice, so most rewriters only apply these rules forwards.
FlexC applies this rule in both directions, as some CGRAs may have multiplication-supporting PEs and other CGRAs may have negation-supporting PEs. We refer to this universally applicable ruleset as the _integer ruleset_. Some examples are shown in Table I.
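One simple way to picture this is to keep each rule as a pattern pair and instantiate it in both orientations; the sketch below is illustrative only and does not reflect FlexC's actual egg-based rule representation.

```
# Each rule is an (lhs, rhs) pattern pair, e.g. ("x * -1", "-x").
integer_rules = [
    ("x * -1", "-x"),
    ("x - y",  "x + (-y)"),
    ("x >> y", "x / (1 << y)"),
]

def bidirectional(rules):
    # Emit every rule in both orientations, so the rewriter can also trade
    # "simpler" operations for "more complex" ones when the hardware demands it.
    return rules + [(rhs, lhs) for (lhs, rhs) in rules]
```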
### _Domain-Specific Rules_
The core integer ruleset represents a baseline of rules widely applicable to all CGRAs. However, the constrained nature of heterogeneous CGRAs, which may feature highly specialized operations, means that domain-specific relaxations of correctness typically result in better targetability. FlexC can use custom rewrite rules as input, tailored for a given accelerator.
#### Iv-B1 Floating-Point Rules
Floating-point rewrite rules are rarely bit-for-bit correct. Compilers typically use flags to allow for different levels of correctness guarantees, enabling floating-point transformations only when the programmer is willing to forgo accuracy.
When compiling floating-point operations to CGRAs, FlexC uses these rules by default (they can be turned off). This enables more rewrites at the cost of losing bit-correctness. Examples of rules enabled by this assumption are shown in Table II.
#### Iv-B2 Boolean Logical Operations
Logical operations such as AND (&) and OR (|) take on two different meanings: they specify bitwise operations on entire words at a time, and they specify boolean operators (where any non-zero result is true). With a compiler flag provided by the programmer to indicate that these are equivalent, we can add more rewrite rules.
For example, as boolean operations, AND can be rewritten using multiplication nodes, increasing the space of programs that a CGRA without logical operator support can be used for. We supply a set of rewrite operations that assume logical operations are equivalent to boolean operations. Some examples of rewrite rules in this set are shown in Table III.
#### Iv-B3 Stochastic Computing
Stochastic computing is a computing paradigm aimed at achieving better energy efficiency than traditional computing by trading off accuracy [33]. In particular, stochastic computing allows multipliers to be replaced by logical AND operators, and add operations to be replaced by muxes [34], in contexts where an exact result is not needed; this allows the use of simpler accelerators. Table IV shows an example ruleset.
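To make this trade-off concrete, the toy example below (generic stochastic-computing background, not part of FlexC) encodes a value in \([0,1]\) as the probability that a bit in a random stream is 1, so that a bitwise AND of two independent streams approximates their product.

```
import random

def encode(value, n=10000):
    # Unipolar stochastic encoding: a value in [0, 1] becomes a random bitstream.
    return [1 if random.random() < value else 0 for _ in range(n)]

def decode(bits):
    return sum(bits) / len(bits)

a, b = encode(0.5), encode(0.4)
product = [x & y for x, y in zip(a, b)]  # AND approximates multiplication
print(decode(product))                   # close to 0.5 * 0.4 = 0.2
```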
\begin{table}
\begin{tabular}{c c} x - y & <-> x + (-y) \\ x >> y & <-> x / (1 << y) \\ x and y & <-> not ((not x) or (not y)) \\ \end{tabular}
\end{table} TABLE I: Some example rewrite rules that can be used to change the operations an expression requires.
\begin{table}
\end{table} TABLE III: Rewrite rules under the assumption that binary logical operators are boolean operators.
\begin{table}
\end{table} TABLE IV: Example rewrite rule for stochastic computing
## V Experimental Setup
We implement FlexC on top of OpenCGRA, which is written in C++. We use the egg Rust library [24] to implement our rewriters. For equality saturation, we use an iteration limit of 10 with a node limit of 100,000 to prevent the e-graphs from growing too large. FlexC is integrated into the LLVM framework and is invoked via the opt tool, taking LLVM IR as input.
FlexC relies on OpenCGRA to find the loop to accelerate. OpenCGRA looks for the first loop in each provided function. We implement the architecture specification in JSON, adding a mapping from each PE to the sets of operations it will be able to support.
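We do not reproduce the exact schema here, but conceptually the specification amounts to a per-PE list of supported operations, along the lines of the illustrative fragment below (PE and operation names are invented for the example; the real FlexC/OpenCGRA schema may differ).

```
import json

# Illustrative only: the real FlexC/OpenCGRA schema may differ.
cgra_spec = {
    "rows": 4,
    "cols": 4,
    "pes": {
        "pe_0_0": ["add", "sub", "shl"],
        "pe_0_1": ["add", "cmp"],
        "pe_1_0": ["fmul", "fadd"],
        # ... one entry per PE, listing the operations it supports
    },
}
print(json.dumps(cgra_spec, indent=2))
```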
### _Benchmarks_
We have collected a benchmark suite of 2188 real-world open-source code loops, drawn from projects in the multimedia, compression, and simulation domains shown in Table V. Typically, CGRA compilers are evaluated on benchmark suites of a few tens of loops, which do not capture the wide spectrum of loops that programmers write and are easy to hand-optimize [39]. Our benchmark suite captures a wide range of loops, without the overheads of running whole programs [40]. These loops allow us to demonstrate that FlexC works on a wide range of architectures and programs.
We extract loops suitable for CGRA scheduling from the projects shown in Table V. Each extracted loop is an innermost loop, has no internal branches or function calls, and contains at least one array access. These properties ensure our benchmarks compile to many different CGRAs using a variety of compilation techniques.
We build a custom Clang-based tool that identifies loop structures and detects required type definitions. Clang is run using the build-system rules for each project. Each loop is placed into a function skeleton so it can be compiled.
### _Alternative approaches_
We compare FlexC against three alternative approaches: OpenCGRA [21], the LLVM [41] rewriter, and our own greedy rewriter. OpenCGRA is the default scheme, which simply maps operations to functional units without any rewriting; LLVM's rewriter is disabled in this configuration to enable a clean comparison. The LLVM rewriter employs the rewrite rules within the LLVM compiler infrastructure, which are intended to canonicalize the program for a typical CGRA architecture. The greedy rewriter is FlexC without the equality-saturation fallback.
### _Existing Domain-Specific Accelerators_
We evaluate domain-specific architectures from three prior works. We consider one domain-specific CGRA work (REVAMP [14]), one more general domain-specific _accelerator_ work (DSAGen [26]), and one stochastic computing CGRA (SC-CGRA [42]).
#### V-C1 DSAGen
DSAGen [26] is a framework for generating domain-specific architectures. These architectures share many properties with CGRAs in that they expose architectural details to the compiler and present coarse-grained reconfigurable blocks. We make minor modifications3 to the architectures shown in Figure 4(b) and 4(c) in [26] so they can be represented within OpenCGRA.
Footnote 3: OpenCGRA requires more routing to be present between compute elements, so the architectures we use are more flexible than in DSAGen. OpenCGRA also does not support architectural features like distribution trees, which we have omitted. We further add the nodes required by OpenCGRA to support loop pipelining (an add and an integer compare).
#### V-C2 Revamp
REVAMP [14], a framework for generating domain-specific CGRAs, provides an example of a CGRA for heterogeneous compute optimization, with nodes for addition, subtraction, multiplication, and some logic operations implemented within a 6x6 CGRA.
#### V-C3 Sc-Cgra
SC-CGRA [42] is a stochastic-computing-based CGRA. Typical exact multipliers are replaced with approximate multipliers, and similarly for adders, within a 4x4 CGRA. We implement this in OpenCGRA, providing approximate adders/multipliers instead of exact ones4.
Footnote 4: The authors discuss different accuracies of adder/multiplier, but do not state the number of each used, so we use a simple assignment of one multiplier and one adder per node. We also omit node-fusing, as we use OpenCGRA to target this accelerator. The operators other than the multipliers and adders are not specified completely. For this evaluation, we assume each node has logical operations, and arithmetic operations simpler than multiplication. To enable OpenCGRA to compile some things on its own, we add one exact adder, which is required for induction variables in almost all loops.
## VI Results
We evaluate FlexC against traditional heterogeneous-CGRA compilers, showing that it improves the number of benchmarks that can be compiled to heterogeneous CGRAs by 2.2\(\times\), and demonstrate that, despite making the computation more complex, the rewrite rules do not introduce slow-downs, achieving geomean speedups of 3\(\times\).
### _Existing Domain-Specific Accelerators: Compilation Rate_
We apply FlexC to four accelerators presented in section V-C, comparing to three other rewriting strategies. Figure 4 shows that FlexC increases the number of loops that these CGRAs can support by a factor of 2.2\(\times\). Figure 5 gives details split by benchmark suite for each accelerator.
#### Vi-A1 DSAGen
Figure 4 shows that using FlexC increases the number of loops that can be supported on the CCA and Maeri architectures by a factor of 2.2\(\times\) and 1.6\(\times\) respectively. Maeri does particularly well on LivermoreC (figure 5), especially once equality saturation is used, because of the workload's heavy use of floating-point operations, though it is less suited to Bzip2 than CCA because of Maeri's lack of boolean arithmetic.
\begin{table}
\begin{tabular}{l l r} \hline \hline Domain & Project & Samples \\ \hline Compression & Bzip2 [35] & 13 \\ Multimedia & FFmpeg [25] & 1852 \\ & FreeImage [36] & 223 \\ Scientific Computing & DarkNet [37] & 77 \\ & LivermoreC [38] & 26 \\ \hline Sum & & 2188 \\ \hline \hline \end{tabular}
\end{table} TABLE V: Quantities of unique loops in benchmark suite.
LLVM performs well on the CCA architecture: it has a more comprehensive set of rewrite rules than have been implemented in FlexC, and its canonicalization rules happen to be appropriately targeted for this architecture. Nevertheless, FlexC outperforms it due to more comprehensive exploration of the rewrite space.
This case study, on non-CGRA architectures, reveals the generality of FlexC: while we do not claim that this comprehensively demonstrates that our rewriter can compile to different architectures (as we still rely on the OpenCGRA backend), it does demonstrate that FlexC may be applicable to more diverse computation models than CGRAs.
#### V-A2 Revamp
We implement REVAMP in the OpenCGRA framework and compile each of our benchmarks to it (fig. 4). FlexC increases the number of loops that can be supported on this CGRA by 15%, consistently across different workloads (fig. 5).
This increase is small because REVAMP's example already supports almost all the required operations for non-floating point code. We will see in other examples that FlexC becomes more important as the domain becomes more restricted.
#### V-A3 Sc-Cgra
Figure 4 shows FlexC increases the number of loops that can be supported by a factor of 5.2\(\times\).
This case study demonstrates FlexC is not only relevant within heterogeneous fabrics: if a homogeneous CGRA lacks operations that compilers typically assume to be available, FlexC's methods may still be necessary to generate working code. Bzip2 in particular (fig. 5) more than doubles the amount of targeted code once FlexC's equality saturation is used, compared to greedy-only, because otherwise it gets stuck in local minima and fails to explore the space enough to find a match.
### _Compilation Rate: Architectures Specialized for Loops_
We demonstrate that the rewriting technique used by FlexC is applicable to many different specialized CGRAs within a varied design space.
For each of 300 randomly selected loops in our benchmark suite, we first build a heterogeneous CGRA designed for that loop in particular. We then run FlexC across the other loops in the benchmark suite and measure which loops can and cannot be compiled. Figure 6 shows what fraction of loops can be compiled, distinguishing between loops that are in the same suite (and so are often more likely to share the same class of operation) and loops from different domains.
FlexC improves the applicability of the accelerators, both within the domain they were designed for by a factor of 2.3\(\times\), and between domains by a factor of 2.9\(\times\), demonstrating the applicability of FlexC to many different types of heterogeneous CGRA. In some cases, a typical accelerator for a loop in one benchmark will actually do better on the other workloads (e.g., for freeimage and ffmpeg). This is because freeimage and ffmpeg are highly diverse, and so an accelerator designed for one loop is less likely to match others in the same diverse benchmark.
Figure 7 shows FlexC supports CGRAs across a wide range of specializations, from very specialized CGRAs with only a few operators to complex heterogeneous CGRAs. For architectures with fewer operations, equality saturation is more important, as there are fewer paths to a valid rewrite.
#### V-A1 Speedups
This section demonstrates that rewriting code in ways that at first glance appear inefficient can result in speedups by enabling accelerator utilization. Compiling to CGRA implementations typically improves performance _and_ reduces power usage; we consider speedup in this evaluation. In line with other CGRA work, we consider speedup in the case that loops execute large numbers of iterations, so one-time overheads like offloading costs for loosely coupled accelerators are ignored.
We compare two systems with similar specifications. For a CGRA system, we take architectural parameters for ADRES [43], a 6x6 CGRA which we clock at 200 MHz. We use the initiation interval to obtain performance estimates for the CGRA. To obtain a realistic CPU baseline, we execute the loops on an Arm A5 running at 500 MHz using an Analog Devices SC-589EZKit development board [44] and the methodology for generating inputs from ExeBench [45]. Speedups are shown in figure 8, showing a geomean performance improvement of 3\(\times\), demonstrating that FlexC's rewrite rules are not only effective in enabling targeting of CGRAs, but also in achieving speedup on them.
### _Existing Domain-Specific Accelerators: End-to-End Evaluation_
We demonstrate that FlexC also performs well on well-known and computationally important kernels. To do this, we
Fig. 4: We consider four different architectures, adapted from DSAGen [26] (CCA and Maeri), REVAMP [14] and SC-CGRA [42]. All architectures use the integer and floating-point rulesets, and SC-CGRA uses the stochastic ruleset. The architectures are specialized to different degrees: the more specialized architectures, CCA and Maeri, benefit from FlexC more than the more generic architecture from REVAMP.
take the OpenCGRA benchmark suite [21], along with the LivermoreC benchmark suite previously explored. We use the same setup as in section VI-B1.
The results are shown in Figure 9. Compared to running on an Arm Cortex-A5, FlexC achieves a speedup of 2.0\(\times\) across all applications. This compares to the LLVM rewriter, which is only able to extract a 1.5\(\times\) performance increase across all applications.
### _Using Different Rulesets_
FlexC provides a generic rewriting framework that can be applied to many different rulesets. These rulesets may be flagged by the programmer as valid for particular loops, or valid for a particular program.
We inspect four different rulesets here (covered in more detail in Section IV-A): an integer ruleset, consisting of rules that may always be applied; a floating-point ruleset, consisting of rules that may be applied under assumptions such as -ffast-math; a logical-operations-as-boolean-operations ruleset that provides greater flexibility for rewrites involving logical operators; and a stochastic-computing ruleset that enables typical stochastic-computing transformations. These secondary rulesets
Fig. 5: Results for each accelerator pairing by benchmark suite. Equality saturation often dramatically improves coverage for particular workload-accelerator combinations (e.g. bzip2 on CCA and SC-CGRA, and LivermoreC on Maeri), where otherwise the accelerator would appear entirely unsuitable. In these cases, the accelerator has the right class of operator for the tasks required (logical operators for bzip2 and floating-point operators for LivermoreC) but the code still requires transformation to fit the individual available operations.
Fig. 6: Using accelerators designed for individual loops in each benchmark suite, how much code in the same suite (red) and other suites (blue) can be compiled to these accelerators. FlexC increase the compilation rate by a factor of 2.3\(\times\) in the same suite and 2.9\(\times\) between suites.
Fig. 7: How the number of different operations in a CGRA influences compilation rate. FlexC performs consistently across many levels of generality, from very specialized accelerators with few operators to much more generic accelerators with many different operators. Equality saturation is most important for more specialized architectures.
can be activated by the programmer using a flag. Figure 10 shows how these different rulesets provide different compilation performance. Rulesets are run in combination with the int ruleset as it contains many _enabling_ rewrites for the specialized rewrites in the other rulesets. We can see that determining which rulesets are useful is architecture-specific. For example, Maeri benefits a lot from the logic-as-boolean ruleset, as it does not have logic operators, while CCA benefits from the stochastic rules as it does not have multipliers.
### _Most Frequently Applied Rewrite Rules_
Part of the power of FlexC is that the rewrite rules that need to be applied vary by architecture. By using equality saturation, FlexC is able to use one standard set of rules for all architectures and apply the relevant rules in each case. Table VI shows the most frequently applied rules for the CCA and Maeri architectures (when compiled using the integer and floating point rulesets): two architectures with nearly disjoint sets of operators.
### _Compile Time_
A challenge with Equality Saturation is in keeping the search-space manageably sized, as e-graphs can grow rapidly, causing excessive compile times and resource usage [29, 46]. We avoid these issues in FlexC by limiting the number of explorative iterations, still finding good solutions in many cases.
Figure 11 shows the time taken by FlexC to rewrite and schedule the DFG. We use a cutoff time of 300 s to avoid exploring the rewrite space fruitlessly -- we can see that the rate of successful compilations drops off rapidly after 60 s, followed first by a large number of early terminations without successful scheduling (most likely due to reaching saturation,
Fig. 8: Speedup achieved by rewriting applications to run on a low-power CGRA vs running on a comparable low-power CPU. We achieve 3\(\times\) geomean over running on a CPU.
Fig. 10: Comparing how different sets of rewrite rules improve the code coverage of an accelerator. All rulesets are run with the int ruleset. The stochastic computing rules are only applied to SC-CGRA as they require specialized hardware support not available in other accelerators.
Fig. 9: Speedups using the OpenCGRA benchmark suite and the Livermore C benchmark suite, comparing various CGRA architectures to an Arm Cortex-A5. Benchmarks that were unsupported by any architecture/compiler pairs have been omitted5. The top figure shows the speedup achieved using the LLVM rewriter to target each CGRA, and the bottom figure shows speedup via FlexC.
iteration or node limit), then by stagnation in progress for infeasibly large search spaces. These compile times are fast enough that more exhaustive CGRA schedulers will be able to incorporate this strategy within the existing order of magnitude of compilation time. For example Beidas and Anderson report ILP compile times with a geomean of 60 s [47].
We can also see the effect of using a greedy rewriter as a preliminary step here. In 10% of cases, FlexC is able to rely on the greedy rewriter and find a compiling loop rapidly. We can further see that when FlexC uses equality saturation, it is more successful early on in the exploration.
## VII Related Work
### _Existing CGRAs_
Research on CGRAs has been extensive [11, 30]. Older CGRAs [43, 48, 49] tend to provide a homogeneous grid of PEs with a programmable interconnect. However, the design space of CGRAs is immense, including heterogeneous on-chip networks [50, 51, 52, 53, 54], decouplings of memory and compute [55, 26], unifying memories [56], and various techniques to specialize PEs [57, 58, 59, 60, 28, 27, 61, 62]. Toolchains to enable development [63, 64, 65, 66, 67, 68, 69, 70] and aid design [14, 71, 72] mean that it is relatively easy to design and build a domain-specific CGRA.
#### Vii-C1 Domain-Specific CGRAs
Domain-specific CGRAs exist for neural networks [73, 74, 75, 76], scientific kernels [77, 78, 62, 16, 79, 80], approximate computing [81, 42], stencil computations [82], HPC [52], multimedia [83, 84, 85] and streaming applications [12].
#### Vii-C2 Industrial CGRAs
Xilinx's ACAP provides a CGRA-like model of computation [86] and uses an MLIR-based toolchain [87]. Samsung have designed the SLP-URP [88] for low-power medical use-cases. Smaller companies such as Wave Computing [89], SambaNova Systems [74], and Recore Systems [90] are also involved in CGRA design. Large-scale research projects producing real hardware [91, 92] also use CGRAs.
### _Compiling for CGRAs_
Numerous authors address compiling branches [93, 94, 95, 96, 97, 98], nested loops [99, 96, 100], scheduling of large loops [101, 102, 103, 104, 105] and irregular memory accesses [98]. Compilation time is relevant in many fields [106, 107] and has been addressed both with faster algorithms [106] and hardware acceleration [108, 109].
CGRA scheduling can be done with binary decision diagrams [47], the polyhedral model [52, 110], SAT solvers [111, 112] and ILP models [113, 114, 115, 116]. Heuristic approaches can use information from failed placements [117, 118], rewrite rules to simplify routing [119], sharing information between placement and routing phases [120] and integration of hardware features within the compiler model [121, 122]. Machine learning can automate many approaches [123, 124, 125, 105, 126].
DSLs can be used to simplify the compilation processes, enabling greater parallelism [127, 128, 129, 130, 131, 132] but do not totally eliminate compiler challenges [133]. API interfaces can eliminate compiler challenges, but also eliminate the flexibility of CGRAs [134, 135].
### _Compiling for Hardware Accelerators_
Compiling for API-programmable accelerators has been explored using equality saturation [136] and program synthesis [17]. Equality saturation has also been used to optimize tensor programs for tensor accelerators [137]. Similarly, rewrite rules have been externalized to enable programs to run efficiently on hardware accelerators [138]. While these approaches could target CGRAs behind API interfaces, they do not support compiling to CGRAs directly -- and so lose flexibility. Idiomatic compilation [139] has been used to target closely-related spatial accelerators [140].
Fig. 11: Time to schedule code on a CGRA using FlexC and OpenCGRA. We cut-off rewriting at 300 s to avoid excessive exploration. After 60 s, the compilation rate is very low, so FlexC is not missing many compilations at longer timeouts.
\begin{table}
\begin{tabular}{|c|c|} \hline & CCA \\ \hline
1 & x * 2 -> x << 1 \\
2 & x * 4 -> x << 2 \\
3 & x * 1 -> x \\
4 & -x (Floating Point) -> x + 2’32 (Int) \\ \hline \hline & Maeri \\ \hline
1 & x * 1 -> x \\
2 & x << 1 -> x * 2 \\
3 & x << y -> mul(x, load(csel(y > 32, 33, y))) \\
4 & x - y -> x + -y \\ \hline \hline & REVAMP \\ \hline
1 & x * 1 -> x \\
2 & -x (Floating Point) -> x + 2’32 (Int) \\
3 & x / 2 -> x >> 1 \\
4 & x / 8 -> x >> 3 \\ \hline \hline & SC-CGRA \\ \hline
1 & x * y -> x \& y \\
2 & x + y -> ISC(x, y) \\
3 & x * 1 -> x \\
4 & -x (Floating Point) -> x + 2’32 (Int) \\ \hline \end{tabular}
\end{table} TABLE VI: The most commonly applied rules for each architecture. We omit LLVM-specific rewrites for SC-CGRA. As the CCA and Maeri provide nearly disjoint operators, they are examples of the need for rewrites to apply bidirectionally.
### _Equality Saturation_
Equality saturation [23, 24] has been used for a range of tasks, including: optimization and translation validation of Java bytecode and LLVM programs [141], improving accuracy of floating point expressions [142], synthesizing structured CAD models [143], optimizing linear algebra expressions [144], tensor graph superoptimization [31], vectorization for digital signal processors [145], optimizing integer multiplication on FPGAs [146], hardware datapath optimization [147].
The DSLs-to-Accelerators (D2A) methodology proposed in [137, 136] uses equality saturation for the optimization and hardware mapping of DSLs. This paper aligns with the _flexible matching_ idea from the D2A methodology, but considers mapping arbitrary C code to CGRAs to address the CGRA domain-restriction problem, and evaluates the difference between equality saturation and greedy rewriting.
While equality saturation is a powerful rewriting technique that addresses limitations of greedy rewriting, scaling to long rewrite sequences is limited as the e-graph grows quickly. The Pulsing Caviar mechanism [46] was evaluated on arithmetic expressions to balance exploration and exploitation, and compared to greedy rewriting for this purpose. Sketch-guiding [29] is another recently proposed semi-automated technique to improve scaling of equality saturation.
## VIII Conclusion
We introduce FlexC, a compiler for domain-specific CGRAs that addresses the domain-restriction problem, where CGRAs that have been designed for a particular domain are hard to apply to software outside that domain. FlexC uses equality saturation to rewrite software from different domains so that it can run on hardware not designed for it. FlexC increases the number of loops that can be supported by a factor of 2.2\(\times\) over existing CGRA compilers and enables acceleration of loops, leading to a geomean speedup of 2.1\(\times\).
FlexC demonstrates the potential of rewriting software to match novel hardware: the techniques developed here are applicable to other kinds of accelerators with programmable networks. We present the first study that characterizes how different decisions surrounding heterogeneity affect the fraction of code supported by an accelerator, showing that the more specialized an accelerator is, the more important FlexC is. FlexC opens up new development possibilities by promising that even if software requirements change in a heartbeat, accelerators with a large sunk cost can still be applied.
|
2309.16032 | Learning Dissipative Neural Dynamical Systems | Consider an unknown nonlinear dynamical system that is known to be
dissipative. The objective of this paper is to learn a neural dynamical model
that approximates this system, while preserving the dissipativity property in
the model. In general, imposing dissipativity constraints during neural network
training is a hard problem for which no known techniques exist. In this work,
we address the problem of learning a dissipative neural dynamical system model
in two stages. First, we learn an unconstrained neural dynamical model that
closely approximates the system dynamics. Next, we derive sufficient conditions
to perturb the weights of the neural dynamical model to ensure dissipativity,
followed by perturbation of the biases to retain the fit of the model to the
trajectories of the nonlinear system. We show that these two perturbation
problems can be solved independently to obtain a neural dynamical model that is
guaranteed to be dissipative while closely approximating the nonlinear system. | Yuezhu Xu, S. Sivaranjani | 2023-09-27T21:25:26Z | http://arxiv.org/abs/2309.16032v2 | # Learning Dissipative Neural Dynamical Systems
###### Abstract
Consider an unknown nonlinear dynamical system that is known to be dissipative. The objective of this paper is to learn a neural dynamical model that approximates this system, while preserving the dissipativity property in the model. In general, imposing dissipativity constraints during neural network training is a hard problem for which no known techniques exist. In this work, we address the problem of learning a dissipative neural dynamical system model in two stages. First, we learn an unconstrained neural dynamical model that closely approximates the system dynamics. Next, we derive sufficient conditions to perturb the weights of the neural dynamical model to ensure dissipativity, followed by perturbation of the biases to retain the fit of the model to the trajectories of the nonlinear system. We show that these two perturbation problems can be solved independently to obtain a neural dynamical model that is guaranteed to be dissipative while closely approximating the nonlinear system.
## I Introduction
The identification of dynamical system models for control, in both linear and nonlinear settings, is a long-studied problem [1]. Typically, nonlinear systems have been modeled using approximate linear models [2] or linear parameter varying models [3], and more recently, as high-dimensional linear approximations using Koopman operator models [4, 5] for the purposes of analysis and control design. Deep learning-based dynamical system models, such as neural ordinary differential equations (neural ODEs) [6] and physics-informed neural networks [7] to capture the dynamical behavior of nonlinear systems have also recently gained attention.
When identifying models for control, it is typically not sufficient to simply obtain a model that approximates the dynamical behavior of the system. Rather, we would ideally like to preserve essential system properties such as stability in the identified models. One such control-relevant system property that is particularly useful is dissipativity [8, 9], which provides a general framework to guarantee several crucial properties like \(\mathcal{L}_{2}\) stability, passivity, conicity, and sector-boundedness. Dissipativity has been widely exploited for scalable, distributed, and compositional control synthesis in networked systems [10]-[13], and has found applications in several domains, including but not limited to, electromechanical systems [14] robotics [15], power grids [16, 17], and process control [18].
In this paper, we consider the problem of learning a neural dynamical system model for an unknown nonlinear system that is known a priori to possess a dissipativity property. We focus our attention on neural dynamical systems for the following reason. Neural networks are universal function approximators [19]; therefore, neural dynamical models can capture nonlinear dynamical behavior well beyond the 'local' region in the vicinity of the equilibrium that is captured by linear models, allowing us to expand the validity and usefulness of our control designs. However, there are limited guarantees on control-relevant properties such as stability, robustness, or dissipativity that can be obtained using such learning-based models.
While identification of stable models has been studied for several decades, system identification approaches that preserve system dissipativity and passivity properties have only been investigated in the context of linear systems (see [20] for a comprehensive survey), linear approximations for nonlinear systems [21, 22], and Koopman operator models [23, 24]. Learning stable neural ordinary differential equation (ODE) models has been achieved through neural Lyapunov functions or Lyapunov constraints (see [25] for a compilation of works addressing this topic). There is also some recent work on learning dissipative neural dynamics limited to specific port-Hamiltonian network structures; further, these models only apply when the system inputs are constant [26]. Dissipativity verification for neural dynamical systems is also typically confined to special cases such as \(\mathcal{L}_{2}\) stability for autonomous (open-loop) systems [27]. The problem of _learning provably dissipative deep neural dynamical models_ for general nonlinear systems, especially in the closed-loop setting, remains an open problem. The key challenge lies in imposing matrix inequality constraints, such as those required to guarantee dissipativity, during deep neural network training; this is a hard problem with no known solution.
In this work, we address the particular problem of learning a dissipative neural dynamical model for a nonlinear system that is known to satisfy an incremental dissipativity property. We propose a two-stage solution to address this problem. First, we train an unconstrained feedforward deep neural ODE model using input-output trajectories from the nonlinear system. Next, we derive sufficient conditions on the weights of the neural network to guarantee incremental dissipativity of the learned model, and pose an optimization problem to minimally perturb the weights to enforce these conditions. Finally, we adjust the biases of the model, as necessary, to retain the fit of the dissipative neural dynamical model to the true system dynamics.
The key contributions of this work are as follows. First, we derive sufficient conditions to guarantee incremental dissipativity of deep neural dynamical models. Second, we propose an algorithm where dissipativity can be imposed by
perturbation of the weights alone, allowing us to independently tune the biases to retain the fit of the model to the true system dynamics without losing our dissipativity guarantee.
This paper is organized as follows. In Section II, we formulate the identification problem that will be addressed in this paper. We then present a two-stage approach to solve this problem in Section III. We demonstrate the approach through simulation on a Duffing oscillator system in Section IV and discuss directions for future work in Section V. The proofs of all results are presented in the Appendix.
_Notation:_ We denote the sets of real numbers, positive real numbers including zero, and \(n\)-dimensional real vectors by \(\mathbb{R}\), \(\mathbb{R}_{+}\) and \(\mathbb{R}^{n}\) respectively. Define \(\mathbb{Z}_{N}=\{1,\ldots,N\}\), where \(N\) is a natural number excluding zero. Given a matrix \(A\in\mathbb{R}^{m\times n}\), \(A^{T}\in\mathbb{R}^{n\times m}\) represents its transpose. A symmetric positive definite matrix \(P\in\mathbb{R}^{n\times n}\) is represented as \(P>0\) (and as \(P\geq 0\), if it is positive semi-definite). Similarly, a symmetric negative definite matrix \(P\in\mathbb{R}^{n\times n}\) is represented as \(P<0\) (and as \(P\leq 0\), if it is negative semi-definite). The standard identity matrix is denoted by \(\mathbf{I}\), with dimensions clear from the context. Given two vectors \(x,y\in\mathbb{R}^{n}\), we define the operator \(\delta(x,y)=y-x\).
## II Problem Formulation
Consider an unknown nonlinear time-invariant system
\[\dot{x}(t) =h_{1}(x(t),u(t))\] \[y(t) =h_{2}(x(t),u(t)), \tag{1}\]
where \(h_{1}:\mathcal{X}\times\mathcal{U}\rightarrow\mathcal{X}\subset\mathbb{R}^{n}\) and \(h_{2}:\mathcal{X}\times\mathcal{U}\rightarrow\mathcal{Y}\subset\mathbb{R}^{r}\), and \(x(t)\in\mathcal{X}\subset\mathbb{R}^{n}\), \(u(t)\in\mathcal{U}\subset\mathbb{R}^{m}\), and \(y(t)\in\mathcal{U}\subset\mathbb{R}^{p}\) are the state vector, input vector, and output vector at time \(t\in\mathbb{R}_{+}\) respectively. Here, we assume that \(h_{2}(x(t),u(t))=x(t)\) for all \(t\in\mathbb{R}_{+}\). The input of the system evolves as
\[\dot{u}(t)=g(x(t),u(t)), \tag{2}\]
allowing us to consider closed-loop identification in our framework with any time-invariant control input. We further stack the state and input to define \(z(t)\triangleq\begin{bmatrix}x^{T}(t)&u^{T}(t)\end{bmatrix}^{T}=\begin{bmatrix}y^{T}(t)&u^{T}(t)\end{bmatrix}^{T},\) and rewrite (1)-(2) as
\[\dot{z}(t)=\begin{bmatrix}h_{1}(x(t),u(t))\\ g(x(t),u(t))\end{bmatrix}\triangleq f(z(t)). \tag{3}\]
We assume that the nonlinear system (1) is incrementally dissipative, with the notion defined as follows.
**Definition 1**: _The nonlinear system (1) is said to be \((Q,S,R)\)-incrementally dissipative or incrementally dissipative in short, if for all output pairs \(y_{1}(t),y_{2}(t)\in\mathcal{Y}\) and input pairs \(u_{1}(t),u_{2}(t)\in\mathcal{U}\), for all \(t\in\mathbb{R}_{+}\), we have_
\[\begin{bmatrix}\Delta y(t)\\ \Delta u(t)\end{bmatrix}^{T}\begin{bmatrix}Q&S\\ S^{T}&R\end{bmatrix}\begin{bmatrix}\Delta y(t)\\ \Delta u(t)\end{bmatrix}\geq 0, \tag{4}\]
_where \(\Delta y(t)=\delta(y_{1}(t),y_{2}(t))\) and \(\Delta u(t)=\delta(u_{1}(t),u_{2}(t))\). For the remainder of the paper, we omit the dependence of all quantities on time \(t\) for simplicity of notation._
We are interested in the problem of identifying a model for (1) that preserves its dissipativity properties, since classical \(QSR\)-dissipativity (and its incremental version in (4)) can be used to guarantee a variety of useful input-output properties (or their incremental versions) through appropriate choices of the \(Q\), \(S\), and \(R\) matrices, such as the following [28] (see the sketch after this list):
1. \(\mathcal{L}_{2}\) stability: \(Q=-\frac{1}{\gamma}I,S=0,R=\gamma I\), where \(\gamma>0\) is the \(\mathcal{L}_{2}\) gain of the system;
2. Passivity: \(Q=0,S=\frac{1}{2}I,R=0\);
3. Strict Passivity: \(Q=-\epsilon I,S=\frac{1}{2}I,R=-\delta I\), where \(\epsilon>0\) and \(\delta>0\);
4. Conicity: \(Q=-I,S=cI,R=(r^{2}-c^{2})I\), where \(c\in\mathbb{R}\) and \(r>0\).
5. Sector-boundedness: \(Q=-I\), \(S=(a+b)I\) and \(R=-abI\), where \(a,b\in\mathbb{R}\).
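For reference, the first three of these parameter choices can be instantiated directly; the NumPy sketch below simply constructs the corresponding matrices and is not part of the identification procedure.

```
import numpy as np

def l2_gain_qsr(n_y, n_u, gamma):
    # L2 stability with gain gamma > 0.
    return -np.eye(n_y) / gamma, np.zeros((n_y, n_u)), gamma * np.eye(n_u)

def passivity_qsr(n):
    # Passivity (square systems, n_y = n_u = n).
    return np.zeros((n, n)), 0.5 * np.eye(n), np.zeros((n, n))

def strict_passivity_qsr(n, eps, delta):
    # Strict passivity with eps > 0 and delta > 0.
    return -eps * np.eye(n), 0.5 * np.eye(n), -delta * np.eye(n)
```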
Our main objective is to learn a neural dynamical system that closely approximates the behavior of the closed loop dynamics (3) while preserving the incremental dissipativity of the unknown nonlinear system. Formally, the problem is formulated as identifying a _neural dynamical system_
\[\dot{z}=\bar{f}(z) \tag{5}\]
satisfying
\[\Delta z^{T}\begin{bmatrix}Q&S\\ S^{T}&R\end{bmatrix}\Delta z\geq 0, \tag{6}\]
where \(\Delta z=\begin{bmatrix}\Delta y\\ \Delta u\end{bmatrix}\), and \(\bar{f}\) is a feed-forward fully-connected neural network with layers \(\mathbf{L}_{i},i\in\{1,2,\ldots,l\}\), whose mapping is defined as
\[\mathbf{L}_{i}:\;\;z^{i}=\phi(v_{i})\quad\forall i\in\mathbb{Z}_{l} \tag{7}\] \[\bar{f}(z)=z^{l},\]
where \(v_{i}=W_{i}z^{i-1}+b_{i}\), and \(\phi:\mathbb{R}\rightarrow\mathbb{R}\) is a nonlinear _activation function_ that acts element-wise on its argument. The last layer \(\mathbf{L}_{l}\) is termed the _output layer_ of the network. Our goal is to then learn the appropriate weights \(W_{i},i\in\mathbb{Z}_{l}\) and biases \(b_{i},i\in\mathbb{Z}_{l}\) that ensure that the neural dynamical system (5) closely approximates the nonlinear system (3), while guaranteeing that it is incrementally dissipative in the sense of Definition 1. Note that identifying a neural dynamical system (5) that closely approximates (3) does not automatically guarantee that it is incrementally dissipative.
## III Learning Dissipative Neural Dynamics
### _Identifying Neural Dynamics with No Constraints_
Given \(d\) system trajectories with \(N\) data points each, denoted by \(\{(\hat{y}_{ij},\hat{u}_{ij})\},i\in\mathbb{Z}_{d},j\in\mathbb{Z}_{N}\) on time interval \(t\in[0,T]\), \(T\in\mathbb{R}_{+}\) capturing the behavior of the nonlinear system (3) in a region of interest, we create a training dataset formatted as \(M\) collections \(\{(y_{j},u_{j})^{(i)}\},i\in\mathbb{Z}_{M},j\in\mathbb{Z}_{N}\), where each collection comprises consecutive data points sampled from any of the system trajectories starting at a randomly selected time point. A standard neural ODE training algorithm such as [6] can be used to identify a neural dynamical model comprising a feed-forward fully-connected neural network \(\bar{f}\) with parameters \(\bar{\theta}=(\bar{W}_{i},\bar{b}_{i}),i\in\mathbb{Z}_{l}\), termed here the _baseline model_, and defined as
\[\dot{z}=\bar{f}(z(t),\theta)\quad t\in[0,T], \tag{8}\]
that approximates the dynamical behavior of (3).
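A minimal sketch of this unconstrained training step, using the torchdiffeq implementation of [6], is shown below; the network width, activation, optimizer settings, and batching of the \(M\) collections are placeholders rather than the exact configuration used later in the case study.

```
import torch
from torchdiffeq import odeint  # neural ODE solver from [6]

class NeuralODE(torch.nn.Module):
    def __init__(self, dim, hidden=16):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, hidden), torch.nn.Tanh(),
            torch.nn.Linear(hidden, dim))

    def forward(self, t, z):
        return self.net(z)  # time-invariant dynamics: z_dot = f(z)

model = NeuralODE(dim=4)  # dimension of the stacked variable z, chosen for illustration
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(z_batch, t):
    # z_batch: (N, batch, dim) observed trajectory samples, t: (N,) sample times
    opt.zero_grad()
    pred = odeint(model, z_batch[0], t)  # integrate forward from the first sample
    loss = torch.mean((pred - z_batch) ** 2)
    loss.backward()
    opt.step()
    return loss.item()
```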
As discussed earlier, there is no guarantee that the identified neural dynamical system (8) is incrementally dissipative in the sense of Definition 1 even if the unknown nonlinear system (1) is known to be dissipative. One approach to obtain a dissipative neural dynamical model is to constrain the neural network parameters \(\theta\) during training. However, typical neural ODE learning algorithms cannot directly handle constraints during training. Further, guaranteeing dissipativity properties such as (4) on the trained model requires imposing matrix inequality constraints on the training of neural ODE models; this is a complex problem for which no known algorithms exist. To address this issue, we propose an algorithm to perturb the parameters of the the baseline model post-training to guarantee incremental dissipativity, while retaining the fit of the learned model to the system dynamical behavior.
### _Dissipativity of Neural Dynamical Systems_
We begin by deriving a matrix inequality condition on the neural network weights that is sufficient to guarantee incremental dissipativity of the model. We will take advantage of a slope-restrictedness property on the activation function defined as follows.
**Assumption 1**: _For the neural network described in (7), the activation function \(\phi\) is slope-restricted in \([\alpha,\beta]\), where \(\alpha<\beta\), that is, \(\forall v_{a},v_{b}\in\mathbb{R}^{n}\), we have element-wise_
\[\alpha(v_{b}-v_{a})\leq\phi(v_{b})-\phi(v_{a})\leq\beta(v_{b}-v_{a}), \tag{9}\]
_or equivalently, we have_
\[\begin{bmatrix}v_{b}-v_{a}\\ \phi(v_{b})-\phi(v_{a})\end{bmatrix}^{T}\begin{bmatrix}pI&-mI\\ -mI&I\end{bmatrix}\begin{bmatrix}v_{b}-v_{a}\\ \phi(v_{b})-\phi(v_{a})\end{bmatrix}^{T}\leq 0, \tag{10}\]
_where \(p=\alpha\beta\) and \(m=\frac{\alpha+\beta}{2}\)._
Slope-restrictedness is satisfied by most widely-used activation functions. For example, for the ReLU, sigmoid, tanh, exponential linear functions, \(\alpha=0\) and \(\beta=1\). For the leaky ReLU function, \(\phi(x)=\max(ax,x)\), with \(a>0\), \(\alpha=\min(a,1)\) and \(\beta=\max(a,1)\)[29, Proposition 2].
We can now derive the following condition from the slope-restrictedness of the activation function in Assumption 1.
**Lemma 1**: _For the neural network (7), if there exist \(\lambda_{i}\in\mathbb{R}_{+},i\in\mathbb{Z}_{l}\) and \(\lambda\in\mathbb{R}_{+}\) satisfying (11), with \(P_{11}\), \(P_{22}\) being symmetric matrices, and \(P_{12}^{T}=P_{21}\), then_
\[\begin{bmatrix}\Delta z^{0}\\ \Delta z^{l}\end{bmatrix}^{T}\begin{bmatrix}P_{11}&P_{12}\\ P_{21}&P_{22}\end{bmatrix}\begin{bmatrix}\Delta z^{0}\\ \Delta z^{l}\end{bmatrix}\geq 0 \tag{12}\]
_holds where \(\Delta z^{0}=\delta(z_{1}^{0},z_{2}^{0})\) and \(\Delta z^{l}=\delta(z_{1}^{l},z_{2}^{l})\), where \((z_{1}^{0},z_{1}^{l})\) and \((z_{2}^{0},z_{2}^{l})\) are input-output pairs for the neural network defined in (7)._
Finally, we are ready to derive a sufficient condition for incremental dissipativity of the neural dynamics (5).
**Theorem 1**: _If there exist \(P_{11}=\begin{bmatrix}Q&S\\ S^{T}&R\end{bmatrix}\), \(P_{12}=P_{21}=0\), and \(P_{22}<0\) satisfying (11), then the neural dynamical system (5) is \((QSR)\)-incrementally dissipative in the sense of Definition 1, that is, it satisfies (4)._
The proofs of Lemma 1 and Theorem 1 are presented in the Appendix.
### _Algorithm to Learn Dissipative Neural Dynamics_
We now present the complete algorithm to learn a dissipative neural dynamical model that approximates the unknown nonlinear system (1), summarized in Fig. 1. We first train the baseline model \(\bar{f}\) with parameters \(\bar{\theta}\) satisfying (8) with no constraints as described in Section III-A. Then, we perturb the weights trained in order to enforce incremental dissipativity. We would ideally like to minimize the dissipativity-enforcing weight perturbation, in order to maintain the closeness of the learned model to the behavior of the nonlinear system. We formulate the following optimization problem to realize this step:
\[\begin{split}\hat{W}=\operatorname*{arg\,min}_{W_{1},W_{2},\ldots,W_{l}}&\sum_{i=1}^{l}\|W_{i}-\bar{W}_{i}\|_{2}^{2}\\ \text{s.t.}& M_{L}\geq 0,\quad\lambda_{i}\geq 0\quad i\in \mathbb{Z}_{l},\end{split} \tag{13}\]
where \(M_{L}\) is defined in (11) and \(P_{11},P_{12},P_{21},P_{22}\) are chosen following Theorem 1.
**Remark 1:** Note that enforcing dissipativity in our model only requires constraints on weights, and the dissipativity property still holds even if the biases are changed. This is due to the fact that the incremental dissipativity property in (4) is with respect to the difference in the inputs and the biases cancel out when we derive the sufficient condition for (12) in Lemma 1 (see proof in the Appendix).
The last step is to further adjust the biases to compensate for any loss of fit to the original nonlinear system due to the perturbation in the weights of the trained model. We sample the system trajectory data again to avoid over-fitting by not using the same training data as in the first step. Then, we freeze the weights \(\hat{W}_{i}\) and train only the biases using the new sampled data. The training yields biases \(\tilde{b}_{i},i\in\mathbb{Z}_{l}\). The final model \(\tilde{f}\) has biases \(\tilde{b}_{i}\) and weights \(\tilde{W}_{i}=\hat{W}_{i}\), \(i\in\mathbb{Z}_{l}\). We summarize this procedure in Algorithm 1.
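In a PyTorch-style implementation, this bias-only retraining amounts to freezing the weight tensors of the perturbed model and exposing only the bias parameters to the optimizer; a minimal sketch, reusing the `model` object from the earlier training sketch:

```
import torch

# Freeze the weights (which carry the dissipativity guarantee) and
# retrain only the biases on the freshly sampled dataset D_2.
for name, param in model.named_parameters():
    param.requires_grad = name.endswith("bias")

bias_params = [p for p in model.parameters() if p.requires_grad]
opt = torch.optim.Adam(bias_params, lr=1e-3)
# ...then run the same training loop as before, now updating only the biases.
```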
Fig. 1: Approach to learn dissipative neural dynamics
\[M_{L}=\begin{bmatrix}P_{11}+\lambda_{1}\lambda pW_{1}^{T}W_{1}&-\lambda_{1}\lambda mW_{1}^{T}&0&...&P_{12}\\ -\lambda_{1}\lambda mW_{1}&\lambda_{1}\lambda I+\lambda_{2}\lambda pW_{2}^{T}W_{2}&-\lambda_{2}\lambda mW_{2}^{T}&...&0\\ 0&-\lambda_{2}\lambda mW_{2}&\lambda_{2}\lambda I+\lambda_{3}\lambda pW_{3}^{T}W_{3}&...&0\\ &&&\vdots&\\ 0&...&0&\lambda_{l-1}\lambda I+\lambda_{l}\lambda pW_{l}^{T}W_{l}&-\lambda_{l}\lambda mW_{l}^{T}\\ P_{21}&0&...&-\lambda_{l}\lambda mW_{l}&P_{22}+\lambda_{l}\lambda I\end{bmatrix}\geq 0 \tag{11}\]
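As a numerical sanity check (separate from solving (13) itself), \(M_{L}\) can be assembled for fixed weights and multipliers and tested for positive semidefiniteness; the sketch below assumes a two-layer network (\(l=2\)) and uses \(p\) and \(m\) from Assumption 1.

```
import numpy as np

def assemble_ML(W1, W2, P11, P12, P22, lam1, lam2, lam, p, m):
    # Assemble M_L from (11) for l = 2 (illustrative only).
    n0, n1, n2 = W1.shape[1], W1.shape[0], W2.shape[0]
    M = np.zeros((n0 + n1 + n2, n0 + n1 + n2))
    M[:n0, :n0] = P11 + lam1 * lam * p * W1.T @ W1
    M[:n0, n0:n0 + n1] = -lam1 * lam * m * W1.T
    M[:n0, n0 + n1:] = P12
    M[n0:n0 + n1, n0:n0 + n1] = lam1 * lam * np.eye(n1) + lam2 * lam * p * W2.T @ W2
    M[n0:n0 + n1, n0 + n1:] = -lam2 * lam * m * W2.T
    M[n0 + n1:, n0 + n1:] = P22 + lam2 * lam * np.eye(n2)
    # Mirror the upper-triangular blocks to obtain the symmetric matrix.
    return np.triu(M) + np.triu(M, 1).T

def is_psd(M, tol=1e-9):
    return np.min(np.linalg.eigvalsh(M)) >= -tol
```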
```
Input Two different data sets \(\mathcal{D}_{1}\) and \(\mathcal{D}_{2}\), each comprising \(M\) collections \(\{(y,u)^{(i)}\}_{i=1}^{M}\) on fixed-length time intervals Output Incrementally dissipative neural dynamical model (5), parameters \((\tilde{W}_{i},\tilde{b}_{i})\), \(i\)\(\in\)\(\mathbb{Z}_{l}\)
1: Using dataset \(\mathcal{D}_{1}\), use the procedure in [6] to train a neural ODE model with no constraints for the system described in (8). Obtain a baseline model \(\bar{f}\) with weights \(\bar{W}_{i}\) and biases \(\bar{b}_{i},i\in\mathbb{Z}_{l}\).
2: Solve problem (13) with \(M_{L}\) defined as in (11) and \(P_{11},P_{12},P_{21},P_{22}\) chosen according to Theorem 1. Obtain model \(\hat{f}\) with weights \(\hat{W}_{i}\) and biases \(\bar{b}_{i},i\in\mathbb{Z}_{l}\).
3: Set weights \(\tilde{W}_{i}\leftarrow\hat{W}_{i}\).
4: Using dataset \(\mathcal{D}_{2}\), retrain only the biases in (8) and obtain \(\tilde{b}_{i}\).
5:return Neural dynamical system parameters \((\tilde{W}_{i},\tilde{b}_{i}),i\in\mathbb{Z}_{l}\).
```
**Algorithm 1** Dissipative System Identification
## IV Case Study
We provide a numerical example on a second-order Duffing oscillator to illustrate the proposed learning approach.
_Second-order Duffing Oscillator_[30, Example 23]: The nonlinear dynamics of the Duffing oscillator is given by:
\[\begin{cases}\dot{x}_{1}(t)=x_{2}(t)\\ \dot{x}_{2}(t)=-ax_{2}(t)-(b+cx_{1}^{2}(t))x_{1}(t)+u(t),\end{cases} \tag{14}\]
where \(x_{1}\) and \(x_{2}\) are the state variables, \(u\) is the control input, and \(a,b\) and \(c\) are parameters, chosen as \(a=1,b=1,c=1\). From Figure 2, we observe that the system trajectory displays nonlinearity, even close to the equilibrium, making neural dynamical models an attractive candidate to capture the dynamics. Further, this system is known to be incrementally dissipative, which is a property that we would like to capture in the learned model. We implement Algorithm 1 to learn a dissipative neural dynamical model in three steps.
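For reference, ground-truth trajectories of (14) can be generated by direct numerical integration; a minimal SciPy sketch with the parameter values above is given below (the input signal here is an arbitrary example, not the one used for training).

```
import numpy as np
from scipy.integrate import solve_ivp

a, b, c = 1.0, 1.0, 1.0

def duffing(t, x, u):
    x1, x2 = x
    return [x2, -a * x2 - (b + c * x1**2) * x1 + u(t)]

u = lambda t: 0.3 * np.exp(-0.2 * t) * np.sin(np.pi * t)  # example input
sol = solve_ivp(duffing, (0.0, 10.0), [0.1, 0.0], args=(u,),
                t_eval=np.linspace(0.0, 10.0, 1000))
```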
_Learning a baseline model:_ We begin by learning a baseline neural ODE model without any constraints. We pick inputs with \(\dot{u}(t)=(0.6e^{-0.2t}cos(\pi t)-3\pi e^{-0.2t}sin(\pi t))\mathbf{1}(t)\), where \(\mathbf{1}(t)\) is an indicator function that takes a value of 1 when \(t\) is non-negative and 0 otherwise. We utilize the algorithm in [6] to obtain our baseline model. We pad the input with a dummy variable, set to zero at all times, to make the augmented input variable have the same dimension as the state variable. We use a feed-forward fully-connected neural network with one hidden layer of 16 neurons. The weights from the input layer to the hidden layer, and the hidden layer to the output layer are denoted by \(W_{1}\) and \(W_{2}\) respectively. Note that the output layer of the neural network does not have an activation function (that is, \(\alpha=\beta=1\) for the output layer). For data generation, we first simulate three trajectories with randomly assigned initial conditions and inputs starting from \(0\), \(0.1\) and \(0.2\) respectively. For each trajectory, we obtain 10000 evenly distributed data points. Then, we form 100 data collections by randomly selecting time intervals, each containing 6000 consecutive data points. We add Gaussian noise \(n\sim\mathbf{N}(0,0.01)\) to the states and input to emulate noisy sensor data often encountered in practical applications. Figure 2 shows our baseline model, which closely approximates the ground truth.
_Weight perturbation to enforce dissipativity:_ Despite the nonlinear system (14) being incrementally dissipative, the baseline model fails to preserve this property. Therefore, in the second step, we solve the optimization problem described in (13) to obtain dissipative neural dynamical model. Particularly, we impose the property of incremental passivity by setting \(S=0.5\mathbf{I}\), and choosing \(R=r\mathbf{I}\), and \(Q=q\mathbf{I}\), with \(r\) and \(q\) being negative optimization variables. Additionally, \(\lambda_{1}\) and \(\lambda_{2}\) are treated as optimization variables and constrained to be positive. We choose the negative definite matrix \(P_{22}=-0.01\mathbf{I}\). Using YALMIP/Penlab [31][32] to solve (13), we obtain a dissipative neural dynamical system with \(R=-9.9168\times 10^{-6}\mathbf{I}\), \(Q=-9.9564\times 10^{-6}\mathbf{I}\), \(\lambda_{1}=10.5945\), and \(\lambda_{2}=17.9737\). The 2-norm of the perturbation on the flattened weight variables (a 128-dimensional vector) is 1.4443. The dissipative model obtained after weight perturbation is tested on the same trajectory and compared with the baseline model in Figure 3. We observe that we manage to impose dissipativity through just a small perturbation.
_Bias Adjustment:_ Despite the perturbation being small, it
Fig. 2: Trajectories of the baseline (unconstrained) neural ODE model and the ground truth for a test input.
may still drive the model away from the ground truth to some extent. This is due to the fact that the neural ODE is nonlinear, and small parametric changes may still lead to non-negligible output deviations that accumulate when the ODE is integrated to obtain system trajectories. Therefore, in the last step, we freeze the weights (which were designed to guarantee dissipativity), and adjust only the biases to compensate for any loss of fit to the ground truth. Note that the biases can be trained independently while maintaining dissipativity guarantees as discussed in Remark 1. We collect training data in a similar manner as the first step, but pick starting points for generating the three trajectories using a different random seed. We purposely do this to avoid overfitting. After bias adjustment, we demonstrate that our final model closely matches the ground truth (Figure 4), while guaranteeing incremental dissipativity.
## V Conclusion and Future Work
In this paper, we present an approach to learn neural dynamical models for nonlinear systems that preserve its dissipativity properties. Our approach involves first learning a baseline neural ODE model, followed by a minimal perturbation to enforce dissipativity, while retaining model fit. Future directions of interest include compositional approaches for weight adjustments to decrease computational cost, and design of the \(Q\), \(S\), and \(R\) matrices to strengthen closed-loop dissipativity guarantees.
## VI Appendix
We state the proofs of all the results in the paper here. An important tool is the lossless S-procedure for two quadratic forms, stated below.
**Lemma 2** (S-Procedure): _For two symmetric matrices \(F_{0}=F_{0}^{T}\) and \(F_{1}=F_{1}^{T}\), if there exists \(\lambda\in\mathbb{R}^{+}\) such that \(F_{0}\geq\lambda F_{1}\), then \(z^{T}F_{1}z\geq 0,\forall z\) implies \(z^{T}F_{0}z\geq 0,\forall z\). Additionally, if there exists a vector \(z_{0}\) such that \(z_{0}^{T}F_{1}z_{0}>0\), then the converse holds, that is, \(z^{T}F_{0}z\geq 0,\forall z\) implies \(z^{T}F_{1}z\geq 0,\forall z\)._
Now we are ready to prove Lemma 1 and Theorem 1.
_Proof of Lemma 1_: For each layer \(\mathbf{L}_{i}\), \(i\in\mathbb{Z}_{l}\), define \(\Delta z^{i}=\delta(z_{a}^{i},z_{b}^{i})\), where \(z_{a}^{i}\) and \(z_{b}^{i}\), with \(z_{a}^{i}\neq z_{b}^{i}\), are the values at layer \(\mathbf{L}_{i}\) corresponding to two different inputs to the neural network. Similarly, define \(\Delta v_{i}=\delta(v_{a}^{i},v_{b}^{i})\), where \(v_{a}^{i}\) and \(v_{b}^{i}\) are the linear transformations of \(z_{a}^{i-1}\) and \(z_{b}^{i-1}\), defined as \(v_{a}^{i}=W_{i}z_{a}^{i-1}+b_{i}\) and \(v_{b}^{i}=W_{i}z_{b}^{i-1}+b_{i}\). From (10), for any layer \(\mathbf{L}_{i}\), \(i\in\mathbb{Z}_{l}\), we have
\[\begin{bmatrix}\Delta v^{i}\\ \Delta\phi(v^{i})\end{bmatrix}^{T}\begin{bmatrix}pI&-mI\\ -mI&I\end{bmatrix}\begin{bmatrix}\Delta v^{i}\\ \Delta\phi(v^{i})\end{bmatrix}\leq 0 \tag{15}\]
Notice that \(\Delta v^{i}=(W_{i}z_{b}^{i-1}+b_{i})-(W_{i}z_{a}^{i-1}+b_{i})=W_{i}\Delta z^{ i-1}\) and \(\Delta\phi(v^{i})=\Delta z^{i}\). We can rewrite
\[\begin{bmatrix}\Delta v^{i}\\ \Delta\phi(v^{i})\end{bmatrix}=\begin{bmatrix}W_{i}&0\\ 0&I\end{bmatrix}\begin{bmatrix}\Delta z^{i-1}\\ \Delta z^{i}\end{bmatrix}.\]
Substituting in (15), we have for any \(\lambda_{i}\in\mathbb{R}_{+}\)
\[-\begin{bmatrix}\Delta z^{i-1}\\ \Delta z^{i}\end{bmatrix}^{T}\begin{bmatrix}\lambda_{i}pW_{i}^{T}W_{i}&- \lambda_{i}mW_{i}^{T}\\ -\lambda_{i}mW_{i}&\lambda_{i}I\end{bmatrix}\begin{bmatrix}\Delta z^{i-1}\\ \Delta z^{i}\end{bmatrix}\geq 0 \tag{16}\]
Stacking the inequalities for all layers in a diagonal manner, we have
\[\begin{bmatrix}\Delta z^{0}\\ \vdots\\ \Delta z^{l}\end{bmatrix}^{T}(-S_{T})\begin{bmatrix}\Delta z^{0}\\ \vdots\\ \Delta z^{l}\end{bmatrix}\geq 0 \tag{17}\]
where \(S_{T}\) is defined in (18). Using the S-procedure in Lemma 2, under mild conditions (discussed shortly), if there exists a non-negative \(\lambda\in\mathbb{R}\) such that
\[\begin{bmatrix}P_{11}&0&...&0&P_{12}\\ 0&&...&&0\\ &&\vdots&&\\ 0&&...&&0\\ P_{21}&0&...&0&P_{22}\end{bmatrix}\geq-\lambda S_{T}, \tag{19}\]
we have
\[\begin{bmatrix}\Delta z^{0}\\...\\ \Delta z^{l}\end{bmatrix}^{T}\begin{bmatrix}P_{11}&0&...&0&P_{12}\\ 0&&...&&0\\ &&\vdots&&\\ 0&&...&&0\\ P_{21}&0&...&0&P_{22}\end{bmatrix}\begin{bmatrix}\Delta z^{0}\\...\\ \Delta z^{l}\end{bmatrix}\geq 0,\]
Fig. 4: Trajectories of the final dissipative neural ODE model after bias adjustment and the ground truth.
Fig. 3: Trajectories of the unconstrained neural ODE model and the dissipative model obtained after weight perturbation.
implying (12). Notice that the condition in (19) is exactly \(M_{L}\). As mentioned earlier, according to Lemma 2, we require a mild condition, namely the existence of \(\Delta z^{i}\), \(i\in\mathbb{Z}_{l}\), such that (17) holds strictly. As \(\alpha<\beta\) and the inputs are different, there exist some \(\Delta z^{0},\Delta z^{1}\) such that (16) is strict. Then with any \(\Delta z^{i}\), \(i\in\{2,...,l\}\), (17) holds strictly.
_Proof of Theorem 1_: With \(P_{11}=\begin{bmatrix}Q&S\\ S^{T}&R\end{bmatrix}\), \(P_{12}=P_{21}=0\), and \(P_{22}<0\), we can write (12) as
\[\left(\Delta z^{0}\right)^{T}\begin{bmatrix}Q&S\\ S^{T}&R\end{bmatrix}\Delta z^{0}+(\Delta z^{l})^{T}P_{22}\Delta z^{l}\geq 0 \tag{20}\]
Note that \(P_{22}\) is negative definite, which means \((\Delta z^{l})^{T}P_{22}\Delta z^{l}<0\). Therefore, the first term in (20) is larger than 0. The conclusion directly follows with the fact that \(\Delta z^{0}=\left[\Delta y^{T}(t),\Delta u^{T}(t)\right]^{T}\).
|
2302.14557 | GRAN: Ghost Residual Attention Network for Single Image Super Resolution | Recently, many works have designed wider and deeper networks to achieve
higher image super-resolution performance. Despite their outstanding
performance, they still suffer from high computational resources, preventing
them from directly applying to embedded devices. To reduce the computation
resources and maintain performance, we propose a novel Ghost Residual Attention
Network (GRAN) for efficient super-resolution. This paper introduces Ghost
Residual Attention Block (GRAB) groups to overcome the drawbacks of the
standard convolutional operation, i.e., redundancy of the intermediate feature.
GRAB consists of the Ghost Module and Channel and Spatial Attention Module
(CSAM) to alleviate the generation of redundant features. Specifically, Ghost
Module can reveal information underlying intrinsic features by employing linear
operations to replace the standard convolutions. Reducing redundant features by
the Ghost Module, our model decreases memory and computing resource
requirements in the network. The CSAM pays more comprehensive attention to
where and what the feature extraction is, which is critical to recovering the
image details. Experiments conducted on the benchmark datasets demonstrate the
superior performance of our method in both qualitative and quantitative.
Compared to the baseline models, we achieve higher performance with lower
computational resources, whose parameters and FLOPs have decreased by more than
ten times. | Axi Niu, Pei Wang, Yu Zhu, Jinqiu Sun, Qingsen Yan, Yanning Zhang | 2023-02-28T13:26:24Z | http://arxiv.org/abs/2302.14557v2 | # GRAN: Ghost Residual Attention Network
###### Abstract
Recently, many works have designed wider and deeper networks to achieve higher image super-resolution performance. Despite their outstanding performance, they still suffer from high computational resources, preventing them from directly applying to embedded devices. To reduce the computation resources and maintain performance, we propose a novel Ghost Residual Attention Network (GRAN) for efficient super-resolution. This paper introduces Ghost Residual Attention Block (GRAB) groups to overcome the drawbacks of the standard convolutional operation, _i.e._, redundancy of the intermediate feature. GRAB consists of the Ghost Module and Channel and Spatial Attention Module (CSAM) to alleviate the generation of redundant features. Specifically, Ghost Module can reveal information underlying intrinsic features by employing linear operations to replace the standard convolutions. Reducing redundant features by the Ghost Module, our model decreases memory and computing resource requirements in the network. The CSAM pays more comprehensive attention to where and what the feature extraction is, which is critical to recovering the image details. Experiments conducted on the benchmark datasets demonstrate the superior performance of our method in both qualitative and quantitative. Compared to the baseline models,
we achieve higher performance with lower computational resources, whose parameters and FLOPs have decreased by more than _ten times_.
**Keywords:** Single Image Super-resolution, Ghost Residual Attention Block, Channel and Spatial Attention Module, Ghost Technology.
## 1 Introduction
Single Image Super-Resolution (SISR) is a classic ill-posed inverse problem. SISR aims to obtain a high-resolution (HR) image containing rich details and textures from a low-resolution (LR) image using a super-resolution method. It addresses the reduced spatial resolution and the loss of high-frequency scene detail that occur during image acquisition with traditional imaging equipment. Traditional super-resolution research has achieved many notable results, ranging from simple interpolation to statistical methods [1], self-similar image patch learning [2], neighborhood embedding [3], sparse coding [4], image patch regression [5], random forests [6], and Bayesian approaches [7]. However, SISR remains a challenging, ill-posed task because one specific LR image can correspond to multiple HR images [8]. Super-resolution is essentially a process of recovering high-frequency information from low-frequency information.
With the wide application of deep learning in various fields [10; 11; 12; 13; 14; 15], neural-network-based methods have gradually become the mainstream solution to image super-resolution; they solve the SISR problem by implicitly learning the complex nonlinear LR-to-HR mapping from numerous LR-HR image pairs [16]. SRCNN [17] was the first method to apply deep learning to the SISR problem. Compared with many
Figure 1: An example of redundant features from the image ’baby’ in Set5 obtained by RCAN [9]. There are many similar feature pairs in the feature space.
traditional SISR methods based on machine learning, the simple structure of the SRCNN model shows remarkable performance on image super-resolution problems. Subsequently, a large number of CNN-based models were proposed to obtain more accurate SISR results, using different techniques to improve the quality of the reconstructed image: residual network designs [18; 19; 20; 21; 22]; generative adversarial networks [23]; neural architecture search [24; 25]; various attention mechanisms [26]; and other technologies [27; 28]. With these architectural improvements, the field has indeed made rich progress.
However, the increasingly complex structures of these algorithms impose higher hardware requirements, which prevents them from being widely deployed and from further improving performance. Redundant features (see Figure 1) contribute little additional benefit to image super-resolution, yet as the depth and width of the network increase, they consume a large share of the resources. In this paper, we propose a novel Ghost Residual Attention Network (GRAN), which reduces redundant features and decreases hardware requirements while maintaining the network's performance. More specifically, we study (1) the design of a lightweight block, the Ghost Residual Attention Block (GRAB), and (2) the design of a better-performing feature extraction module, the Channel and Spatial Attention Module (CSAM). In summary, our main contributions are as follows:
* We propose an image super-resolution algorithm, which can make the network structure lighter while ensuring the algorithm's effectiveness.
* We design a Ghost Residual Attention Block by applying ghost technology, which employs a series of linear operations to replace the traditional convolutions. This operation can significantly reduce redundant features, reducing the network's demand for memory and computing resources.
* We redesign a Channel and Spatial Attention Module (CSAM) based on an attention mechanism to comprehensively extract high-frequency features conducive to image reconstruction from space and dimensions.
* Quantitative experiments show that our method not only has a performance advantage, but its numbers of parameters and computations are also significantly lower than those of state-of-the-art methods.
The rest of the paper is organized as follows: section 2 briefly concludes the related work in the area, followed by the proposed lightweight model GRAN in section 3, the experiments and analysis in section 4, and finally, the conclusion in section 5.
## 2 Related Works
With the development of deep learning, countless excellent networks for image-processing tasks have been proposed. Along with it, many SISR technologies combined with deep learning have emerged. Here, we discuss CNN-based methods, lightweight networks, and works related to attention mechanisms.
### CNN-based methods
Since neural networks were introduced into the super-resolution field, image super-resolution models have mostly been divided into three parts [29; 30; 31], as shown in Figure 2: 1) the feature extraction part, which obtains the shallow features of the image by a convolution operation; 2) the nonlinear mapping part, which processes the shallow features to obtain the information required for reconstruction using various network structure designs; and 3) the upscale part, which reconstructs the image using a dedicated upscaling technique.
Among them, most researchers focused on improving the second part of the network to improve the algorithm's performance. SRCNN [17] was the first deep-learning work for SISR. There were three convolution layers in SRCNN, corresponding to the three steps: feature extraction, nonlinear mapping, and restoration. After SRCNN, VDSR [32] applied a very deep neural network combined with residual learning to perform super-resolution, which greatly accelerated the model's training process and better preserved details in the final output. EDSR [29] removed some unnecessary modules in the residual structure and set up a multi-scale model, which performed well on each single-scale super-resolution task. RCAN [9] proposed a new residual dense network with RIR (Residual In Residual) technology, using the hierarchical features of all convolutional layers to improve the algorithm's performance. MCAN [33] introduced a SISR network called Matrix Channel Attention Network by constructing a matrix set of multi-connected channel attention blocks (MCAB), which achieved fine performance using fewer multiplications and parameters. [34] proposes a face image super-resolution reconstruction method based on combined representation learning, adopting deep residual and deep neural networks as generators and discriminators. MLRN [31] proposed a multi-scale fusion architecture to address the problem that existing SISR methods could not make full use of the feature information in the network's middle layers. [35] proposes inheriting the merits of functional-interpolation and dictionary-based SR techniques to produce more discriminative and noise-free high-resolution (HR) images from captured noisy LR probe images, which is suitable for real-world low-resolution face recognition. [36] proposes a novel SR network that fuses internal and external features (SRNIF), which can make full use of the internal features of the image and recover the detailed texture of the input LR.
The advent of these algorithms has greatly advanced the field of super-resolution. However, redundant features are a common characteristic of these successful CNNs, yet they are rarely considered when designing network structures. Furthermore, computing redundant features wastes computational resources.
### Lightweight Network
The complexity of network structures and their demanding computing requirements have led to various network lightweighting techniques: pruning [37; 38; 39], sparse representation [40; 41], and knowledge distillation [42]. [38] observed that CNNs usually have significant redundancy between different filters and feature channels, and proposed to reduce the computational cost of CNNs by pruning filters. [39] applied channel pruning to trained models, which reduced the cumulative error and is suitable for various network structures. [41] further targeted removing useless channels for easier acceleration in practice via sparse representation. [43] and [44] utilize larger models to teach smaller ones, an approach known as knowledge distillation, which improves the performance of the smaller models.
The performance of these methods usually depends on the given pre-trained model; improvements to the basic operations and architectures of these models can enhance them further.
### Attention-based Networks
Attention mechanisms have been widely used in recent computer vision tasks, such as image classification [45], image captioning [46], and image recognition [47]; they are an effective way to bias processing toward the most relevant parts of the input. [48] introduced an attention mechanism that makes the entire network attend to both global and local information. [49] combined spatial and channel attention mechanisms and achieved excellent performance in classification and detection. SAN [47] applied a self-attention mechanism, which greatly reduced the parameters and computations and significantly improved classification accuracy on the ImageNet dataset compared with the classic convolutional network ResNet.
With the popularity of attention mechanisms, more and more attention-based methods have been proposed to improve SR performance. For example, [9] proposed the residual channel attention network (RCAN) by introducing a channel attention mechanism into a modified residual block for image SR. This channel attention mechanism uses global average pooling to extract channel statistics, known as first-order statistics.
Figure 2: The basic structure of the networks for super-resolution.
## 3 Methodology
This section focuses on the proposed Ghost Residual Attention Network (GRAN), an end-to-end image super-resolution network. First, we introduce the overall architecture of the network in Section 3.1, and then the main modules of the network in detail in Section 3.2. Finally, we analyze the complexity of the network in Section 3.3.
### Network Structure
The proposed GRAN architecture is shown in Figure 3. GRAN can also be decomposed into three parts: a shallow feature extraction module, a nonlinear mapping module, and an up-scale module. At the beginning of the model, we use a single convolution layer as the shallow feature extraction module. Our nonlinear mapping module then contains multiple Ghost Residual Attention Blocks, with Ghost Residual Groups at its input and output. Finally, our up-scale module combines the outputs of the shallow feature extraction module and the nonlinear mapping module to produce the super-resolution result. Formally, we define the input and output images of our GRAN as \(I_{LR}\) and \(I_{SR}\), respectively. Our GRAN reconstructs an accurate SR image \(I_{SR}\) directly from the LR image \(I_{LR}\):
\[I_{SR}=H_{up}(H_{map}(H_{extract}(I_{LR})))=H_{GRAN}(I_{LR}), \tag{1}\]
where \(H_{extract}\), \(H_{map}\), and \(H_{up}\) represent the feature extraction module, nonlinear mapping module, and up-scale module, respectively, and \(H_{GRAN}\) denotes the entire network. Here, \(H_{map}\) is the module composed of our GRGs, each of which consists of many Ghost Residual Attention Blocks (GRAB); i.e., the proposed Ghost Residual Attention Block is the backbone of our method. In the following, we give details of how it works.
Figure 3: The overview of our GRAN super-resolution network architecture, stacked from multiple GRABs. Each short skip connection divides them into multiple GRAB groups, which can avoid information loss.
### Ghost Residual Attention Block (GRAB)
Inspired by GhostNet [50], we replace the convolution operation in the traditional network structure with a series of linear operations named the Ghost Module, as shown on the left of Figure 4. A traditional convolutional neural network directly performs convolution operations on the features of each layer, whereas the Ghost Module proposed by [50] uses a series of linear operations on the feature layer, which can introduce more features while reducing the amount of computation. Here, we use a simplified version of the Ghost Module to make the network structure lighter and simpler; we introduce it in detail later.
As shown in Figure 3, our network is stacked from multiple GRAB groups. The features in the \(G\)-th GRAB group are calculated via:
\[F_{G}=B_{n}(B_{n-1}(\dots B_{1}(F_{i})\dots)),\quad i=0,\dots,N \tag{2}\]
where \(F_{0}\) is the shallow feature obtained from \(I_{LR}\) with a 3\(\times\)3 convolution, which is used as the input of the first GRAB; \(B_{n}\) denotes the \(n\)-th GRAB and \(F_{i}\) (for \(i>0\)) denotes its output.
Following the RIR structure [9], we divide the GRABs into multiple groups and connect them through short skip connections (SSC) and long skip connections (LSC), which allows more low-frequency information to be used and lets information flow better. The purpose is to further deepen the network while keeping it easy to train [9]. The feature of each GRAB group is calculated via:
\[F_{G}=W_{S}F_{G-1}+F_{B}, \tag{3}\]
where \(F_{G}\) denotes the output of the \(G\)-th GRAB group, and \(W_{S}\) represents the weight set of the SSC, which allows the main parts of the network to learn more residual information so that abundant low-frequency information is more easily exploited during training. The following equation can
Figure 4: The overview of GRAB. The left part is the module to remove redundant features, whose convolution operation is replaced by the Ghost module. On the right is the Channel and Spatial Attention Module (CSAM) module, which is obtained by the sequential combination of channel attention and spatial attention.
express the reconstruction process of the entire network:
\[I_{SR}=H_{up}(F_{0}+W_{L}F_{G}), \tag{4}\]
Similarly, \(W_{L}\) represents the weight set of LSC. In this paper, for the specific framework, we set the number of Ghost Residual Groups (GRG) as 10. Each GRG consists of 20 Ghost Residual Attention Blocks (GRAB).
#### 3.2.1 Ghost Module
First, we introduce the Ghost Module (shown in Figure 5) applied in this article and explain why it can reduce the number of network parameters. Ghost technology was first proposed in [50], where the authors use a variety of linear operations to replace traditional convolutions, generate efficient CNNs, and verify the method's effectiveness on ImageNet classification and object detection. Here, we use the Ghost idea to build the Ghost Module and combine it with an attention mechanism to obtain a lightweight, optimized super-resolution network structure.
Generally speaking, the original convolution operation in a traditional network used to generate \(n\) feature maps can be formulated as:
\[Y=X\otimes f+b, \tag{5}\]
where \(X\) and \(Y\) represent the input and output of the convolutional layer, respectively, \(f\) denotes the convolution filters, \(b\) the bias, \(\otimes\) the convolution operation, and \(n\) the number of convolution kernels.
While the Ghost Module used in our work can be formulated as follows:
\[Y^{\prime}=X\otimes f^{\prime}, \tag{6}\] \[Y=ID(Y^{{}^{\prime}})+\Psi_{1}(Y^{{}^{\prime}})+\Psi_{2}(Y^{{}^ {\prime}}),\]
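As an illustration of Eq. (6), a minimal PyTorch sketch of such a Ghost Module is given below. Two assumptions are made that the text above does not fix explicitly: the '+' in the second line of Eq. (6) is read as channel concatenation (consistent with the parameter count derived later in Section 3.3), and the kernel sizes of \(\Psi_{1}\) and \(\Psi_{2}\) follow the experimental settings (\(d=3\) and \(d=5\)).

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Sketch of Eq. (6): a cheap primary convolution followed by linear
    operations Psi_1 and Psi_2. The '+' in Eq. (6) is interpreted here as
    channel concatenation (three branches of N/3 channels each), which
    matches the parameter count in Eq. (14); kernel sizes 3 and 5 for
    Psi_1 and Psi_2 are assumptions taken from the experimental settings."""

    def __init__(self, in_channels, out_channels):
        super().__init__()
        branch = out_channels // 3
        self.primary = nn.Conv2d(in_channels, branch, kernel_size=1)     # priCONV
        self.psi1 = nn.Conv2d(branch, branch, kernel_size=3, padding=1)  # Psi_1
        self.psi2 = nn.Conv2d(branch, branch, kernel_size=5, padding=2)  # Psi_2

    def forward(self, x):
        y_intrinsic = self.primary(x)        # Y' in Eq. (6); identity branch
        y_ghost1 = self.psi1(y_intrinsic)
        y_ghost2 = self.psi2(y_intrinsic)
        return torch.cat([y_intrinsic, y_ghost1, y_ghost2], dim=1)

ghost = GhostModule(64, 66)                  # 66 output channels, divisible by 3
out = ghost(torch.randn(1, 64, 48, 48))      # -> (1, 66, 48, 48)
```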
#### 3.2.2 Channel and Spatial Attention Module (CSAM)
Inspired by the spatial transformer proposed in STN [51] and the channel attention in SENet [45], we design the CSAM structure, an attention module that combines spatial and channel attention. Given the non-Ghost feature \(F\), CSAM sequentially performs channel attention and spatial attention
Figure 5: The overview of Ghost Module
as illustrated in Figure 6. The overall attention process can be summarized as follows:
\[F_{ghost}^{\prime}=M_{c}\odot F, \tag{7}\]
\[F_{ghost}^{\prime\prime}=M_{s}\odot F_{ghost}^{\prime},\]
where \(M_{c}\) and \(M_{s}\) denote the 1D channel attention map and the 2D spatial attention map, respectively, and \(\odot\) indicates element-wise multiplication. \(F_{ghost}^{\prime\prime}\) is the final refined output. Figure 6 depicts the computation process of each attention map. [52] proposed a combined structure of spatial and channel attention and applied it to image classification and object detection. We reconsider the regional characteristics of the spatial attention mechanism and the global characteristics of the channel attention mechanism and propose to apply this structure to image super-resolution. The details of each attention module are described below:
**Channel attention module.** This module focuses on selecting useful features and assigning a weight to each feature map; it thus helps the model know _what_ to look for [45]. Each layer of a convolutional neural network has many convolution kernels, each corresponding to a feature map. In the channel dimension, a different weight is learned for each feature so that it can play its proper role. Compared with the spatial attention mechanism, channel attention allocates resources across channels.
**Spatial attention module.** This module can be understood as guiding the neural network _where_ to look. Through the attention mechanism, the network transforms the spatial information of the original image into another space while retaining the key information. This kind of module is used in many existing methods [51; 53]. A network trained with a spatial attention mechanism can locate the image regions that deserve more attention. At the same time, this mechanism can perform rotation, scaling, and other transformations, so that the important information in part of the picture can be extracted through the transformation.
Figure 6: The overview of our CSAM
In addition, our CSAM structure applies channel attention followed by spatial attention. This ordering was validated in [52], which showed that the channel-first order is slightly better than the spatial-first order.
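Since the text fixes only the channel-first ordering, the sketch below fills in the internal details of CSAM with common CBAM/SENet-style choices (global average pooling with a reduction ratio for the channel map, pooled channel statistics and a 7×7 convolution for the spatial map); these internals are assumptions, not the exact CSAM implementation.

```python
import torch
import torch.nn as nn

class CSAM(nn.Module):
    """CBAM/SENet-style sketch of Eq. (7): channel attention M_c followed by
    spatial attention M_s. Pooling choices and the reduction ratio are
    assumptions; only the channel-first ordering is fixed by the text."""

    def __init__(self, channels, reduction=16):
        super().__init__()
        self.channel_gate = nn.Sequential(        # produces M_c
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(        # produces M_s
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, f):
        f = f * self.channel_gate(f)                       # F' = M_c (.) F
        pooled = torch.cat([f.mean(dim=1, keepdim=True),
                            f.amax(dim=1, keepdim=True)], dim=1)
        return f * self.spatial_gate(pooled)               # F'' = M_s (.) F'

refined = CSAM(64)(torch.randn(1, 64, 48, 48))
```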
### Analysis on Complexity
Compared with RCAN, our GRAN achieves higher performance. Meanwhile, our network is more efficient. In this section, we analyze our network structure in terms of both FLOPs and parameters. Since the computational cost mainly comes from the Ghost Residual Attention Block (GRAB) in our method and the Residual Channel Attention Block (RCAB) in RCAN, we compare the computational cost of these two blocks in detail.
**FLOPs calculation.** The main difference between our GRAB and RCAB is that we replace the convolution layer with the Ghost Module; in addition, following the channel attention module we introduce a spatial attention module, which brings a negligible increase in computation. Here we mainly compute the cost caused by the convolution and channel attention layers.
For Ghost CNN, the computation process is as follows. First, perform a traditional convolution operation where the convolution kernel is \(f^{\prime}\in R^{c\times 1\times 1\times m}\) on the original feature map \(X\) to obtain \(m\) intrinsic features \(Y^{\prime}\) (Equation 6), \(m\leq n\). Next, generate \(q\) features for each feature \({y_{i}}^{\prime}\) in \(Y^{\prime}\) by a series of linear operations:
\[{y_{i,j}}={\Psi_{i}}({y_{i}}^{\prime}),\forall i=1,...,m,j=1,...,q, \tag{8}\]
in this way, each \({y_{i}}^{\prime}\in{Y_{i}}^{\prime}\) can get one or more feature maps \(\{{y_{i,j}}\}_{j=1}^{q}\) by \(\Psi_{i}\), the linear operation for generating the \(j\)-th ghost feature map \({y_{i,j}}\).
Finally, we obtain \(n=m\cdot q\) feature maps \(Y=[y_{11},y_{12},\cdots,y_{mq}]\) as the output of the Ghost Module. In practice, we use the identity as \(\Psi_{1}\) to perform an identity mapping of the original feature maps, so we have \(m\cdot(q-1)=\frac{n}{q}\cdot(q-1)\) linear operations. Thus, the FLOPs of the Ghost CNN are \(m\cdot h^{\prime}\cdot w^{\prime}\cdot c\cdot k\cdot k+(q-1)\cdot m\cdot h^{\prime}\cdot w^{\prime}\cdot d\cdot d\), while the FLOPs of a standard CNN are simply \(n\cdot h^{\prime}\cdot w^{\prime}\cdot c\cdot k\cdot k\). Since the channel attention and spatial attention modules bring much lower computational cost, we simply ignore them. We then obtain the speed-up ratio \(r_{q}\) as:
\[\begin{array}{l}r_{q}=\frac{n\cdot h^{\prime}\cdot w^{\prime}\cdot c\cdot k \cdot k}{m\cdot h^{\prime}\cdot w^{\prime}\cdot c\cdot k\cdot k+(q-1)\cdot m \cdot h^{\prime}\cdot w^{\prime}\cdot d\cdot d}\\ =\frac{q\cdot c\cdot k\cdot k}{c\cdot k\cdot k+(q-1)\cdot d\cdot d}\end{array} \tag{9}\]
Since we set \(d\approx k\) and \(c\gg q\), we have:
\[r_{q}\approx\frac{q\cdot c}{c+(q-1)}\approx q. \tag{10}\]
This is the main source of the speed-up. In addition, because CSAM is a lightweight module that can be seamlessly integrated into any CNN architecture, it has almost no impact on efficiency or computational cost.
**Number of parameters.** We mainly analyze the number of parameters used in our Ghost Module and the ordinary convolution layers and ignore the
effect of channel attention and spatial attention module because of their simplicity. As introduced in the previous section, the formulation for the original convolution operation can be described as Equation 5.
Assuming that the size of the convolution kernel is \(k\times k\), the input channel is \(M\), the output channel is \(N\), and the number of parameters required for a traditional convolution is :
\[Sum_{conv}=k\times k\times M\times N. \tag{11}\]
Here, we ignore the biases, whose number equals the number of output channels.
The ghost module used to replace the convolution operation is expressed as:
\[X^{{}^{\prime}}=priCONV(X), \tag{12}\]
\[Y=Identity(X^{{}^{\prime}})+\Psi_{1}(X^{{}^{\prime}})+\Psi_{2}(X^{{}^{\prime} }), \tag{13}\]
where \(priCONV\) represents the traditional \(1\times 1\) convolution. Denote by \(k_{1}\times k_{1}\) and \(k_{2}\times k_{2}\) the kernel sizes of \(\Psi_{1}\) and \(\Psi_{2}\), respectively. For the same \(M\) and \(N\), the number of parameters of the Ghost Module is:
\[Sum_{ghost} =(1\times 1\times M\times\frac{N}{3})+(k_{1}\times k_{1}\times\frac{ N}{3}\times\frac{N}{3}) \tag{14}\] \[+(k_{2}\times k_{2}\times\frac{N}{3}\times\frac{N}{3})\] \[=\frac{N}{3}(M+k_{1}\times k_{1}\times\frac{N}{3}+k_{2}\times k_{ 2}\times\frac{N}{3})\] \[\ll Sum_{conv},\quad if(k>k_{1},k>k_{2})\]
In most network structures, the values of \(M\) and \(N\) are relatively large, such as 128, 256, 512, or even 1024. If \(k\) is larger than both \(k_{1}\) and \(k_{2}\) (set to 1 and 3, respectively, in our experiments), \(Sum_{ghost}\) will be much smaller than \(Sum_{conv}\).
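As a quick numerical sanity check of Eq. (11) versus Eq. (14), the snippet below plugs in illustrative values (\(M=N=256\), \(k=3\), \(k_{1}=1\), \(k_{2}=3\)); the layer sizes are examples only, not the actual GRAN configuration.

```python
# Illustrative parameter counts for Eq. (11) vs. Eq. (14);
# M, N, k, k1, k2 are example values, not the actual GRAN layer sizes.
M = N = 256
k, k1, k2 = 3, 1, 3

sum_conv = k * k * M * N                                    # Eq. (11)
sum_ghost = (1 * 1 * M * N / 3                              # primary 1x1 conv
             + k1 * k1 * (N / 3) ** 2                       # Psi_1
             + k2 * k2 * (N / 3) ** 2)                      # Psi_2

print(sum_conv, round(sum_ghost), round(sum_conv / sum_ghost, 1))
# 589824 94663 6.2  -> roughly a 6x parameter reduction for this example layer
```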
## 4 Experiment Results
### Implement details
**Datasets and Evaluation Metrics.** Our GRAN is designed to super-resolve LR images, so we use the DIV2K [54] as the model's training set, which contains 800 2K high-resolution images for training and 100 images for both validation and testing. For testing, we make comparisons across four scaling tasks (\(\times 2\), \(\times 3\), \(\times 4\), \(\times 8\)) on datasets: Set5 [55], Set14 [56], B100 [57], Urban100 [58], Manga109 [59]. The evaluation metrics we used are PSNR and SSIM on the Y channel in the YCbCr color space. In addition, we also experiment on real-world LR images (from RealSR [60] dataset) and compare the results qualitatively. Ultimately, we perform some ablation studies to investigate the influence of every component of our GRAB.
**Training Settings.** Before training the network, we first augment the 800 training images by 90\({}^{\circ}\), 180\({}^{\circ}\), and 270\({}^{\circ}\) rotations and horizontal flips. For training, we set the batch size to 12 with 48\(\times\)48 patches extracted as inputs, and we apply Adam (\(\beta_{1}=0.9\), \(\beta_{2}=0.999\), \(\epsilon=10^{-8}\)) to train our model. The initial learning rate is set to \(1\times 10^{-4}\) and halved every \(2\times 10^{5}\) steps of back-propagation. We use three linear functions in place of the original convolution, namely the identity, \(\Psi_{1}\) (\(d=3\)), and \(\Psi_{2}\) (\(d=5\)). In addition, we implement our models in the PyTorch framework on a GeForce GTX 1080 Ti GPU.
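A minimal sketch of this optimizer and learning-rate schedule in PyTorch is shown below; the placeholder model and the L1 loss are assumptions for illustration, not the full GRAN training code.

```python
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)    # placeholder for the GRAN model

optimizer = torch.optim.Adam(model.parameters(),
                             lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
# Halve the learning rate every 2e5 back-propagation steps.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=200_000, gamma=0.5)

lr_patch = torch.randn(12, 3, 48, 48)           # a batch of 12 LR patches of size 48x48
target = torch.randn(12, 3, 48, 48)             # placeholder targets
for step in range(3):                            # a few illustrative steps
    loss = torch.nn.functional.l1_loss(model(lr_patch), target)   # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```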
### Comparisons with State-of-the-art Algorithms
We compare our method with 11 state-of-the-art SISR methods: SRCNN [17], VDSR [32], CARN [61], MSRN [30], RCAN [9], SISR-CA-OA [62], MLRN [31], MCAN [33], DBPN [63], HRAN [64], and ARCAN [65]. These are representative, outstanding methods published since 2016, whose network structures have become increasingly complex in pursuit of higher performance.
**Quantitative comparison on PSNR/SSIM.** The quantitative results of different methods on the benchmark datasets are provided in Table 1, from which we can see that our proposed method achieves strong performance among the current state of the art. We conducted extensive experiments on the benchmark datasets for \(\times 2\), \(\times 3\), \(\times 4\), and \(\times 8\) SR, and use red, green, and blue to mark the first-, second-, and third-place results. For fairness, all tests were run on a 1080 Ti GPU. Some methods have no public code, so their results are taken from the original articles.
As shown in Table 1, the bicubic method simply interpolates the image directly, so its results are poor. SRCNN is the first method to apply neural networks to super-resolution. VDSR, MSRN, MLRN, SISR-CA-OA, MCAN, DBPN, HRAN, ARCAN, and RCAN are all excellent super-resolution algorithms from 2017 onward. They have made different improvements by designing deeper structures, new network units, feature fusion modes, or connections between network layers, but they all ignore the impact of redundant features. Therefore, even if network performance is slightly improved, the parameter and computation counts remain shortcomings. In addition, HRAN obtains a high-performing super-resolution network by searching the network structure, which demands even more computing resources and time.
Motivated by this observation about redundant features, we propose our lightweight model GRAN, which uses Ghost technology in place of traditional convolutions to reduce the generation of redundant features, thereby reducing the number of parameters and computations of the model. In addition, we add an attention mechanism (CSAM) to our model to fully utilize the spatial and channel information of the image for super-resolution.
Across various datasets and reconstruction scales, although some individual indicator values obtained by our method are slightly lower than those of the best-performing algorithms RCAN and HRAN, our parameter count has an obvious advantage, as shown in Table 2. We have made extensive comparisons on the benchmark datasets for PSNR and SSIM at the \(\times 2\), \(\times 3\), \(\times 4\), and \(\times 8\) scales, with red, green, and blue marking the first, second, and third places, respectively. Overall, our method compares favorably.
**Qualitative visual comparison.** We also show visual comparison results in Figure 7. The leftmost column is the LR input, and the columns on the right, apart from the HR (ground truth), are the SR outputs of several compared algorithms, corresponding to the magnified red box in the LR image. The results show that our method has a certain degree of
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Scale} & \multicolumn{2}{c|}{Set5} & \multicolumn{2}{c|}{Set14} & \multicolumn{2}{c|}{B100} & \multicolumn{2}{c|}{Urban100} & \multicolumn{2}{c}{Manga109} \\ \cline{3-13} & & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM & PSNR & SSIM \\ \hline Bicubic & \(\times\)2 & 33.66 & 0.9299 & 30.24 & 0.8688 & 29.56 & 0.8431 & 26.88 & 0.8403 & 30.80 & 0.9339 \\ SRCNN & \(\times\)2 & 36.66 & 0.9542 & 32.45 & 0.9067 & 31.36 & 0.8879 & 29.50 & 0.8946 & 35.60 & 0.9663 \\ VDR & \(\times\)2 & 37.53 & 0.9590 & 33.05 & 0.9130 & 31.90 & 0.8960 & 30.77 & 0.9140 & 37.22 & 0.9750 \\ CARN & \(\times\)2 & 37.72 & 0.9590 & 33.52 & 0.9166 & 32.09 & 0.8978 & 31.92 & 0.9256 & 38.36 & 0.9761 \\ MSRN & \(\times\)2 & 38.07 & 0.9065 & 37.4 & 0.9170 & 32.22 & 0.9013 & 32.22 & 0.936 & 38.82 & 0.9868 \\ MLRN & \(\times\)2 & 37.90 & 0.9601 & 33.49 & 0.9140 & 32.11 & 0.8899 & 31.87 & 0.9290 & \\ SBRCA-OA & \(\times\)2 & 37.97 & 0.9065 & 33.42 & 0.9158 & 32.15 & 0.8993 & 31.57 & 0.9226 & 38.38 & /0.9755 \\ MCAN & \(\times\)2 & 37.91 & 0.9597 & 33.69 & 0.9183 & 32.18 & 0.8994 & 32.46 & 0.9303 & \\ DBPN & \(\times\)2 & 38.08 & 0.9601 & **38.35** & 0.9190 & 32.27 & 0.9001 & 32.92 & 0.9310 & 39.28 & 0.9770 \\ HRAN & \(\times\)2 & **38.21** & **0.9613** & **33.85** & **0.9200** & **23.39** & **0.9016** & **32.65** & **0.9357** & **39.11** & **0.9780** \\ ARCAN & \(\times\)2 & 38.01 & 0.9055 & 33.54 & 0.9173 & 32.15 & 0.8992 & 32.13 & 0.9276 & 38.70 & 0.9750 \\ RCAN & \(\times\)2 & **38.27** & **0.9614** & **34.12** & **0.9216** & **32.41** & **0.9027** & **33.34** & **0.9384** & **39.44** & **0.9786** \\ GRAN(ours) & \(\times\)2 & **38.16** & **0.9654** & **33.85** & **0.9238** & **32.35** & **0.9042** & **32.46** & **0.9388** & **39.16** & **0.9815** \\ \hline Bicubic & \(\times\)3 & 30.39 & 0.8682 & 27.65 & 0.7742 & 27.27 & 27.7835 & 24.46 & 0.7349 & 26.95 & 0.8556 \\ SRCNN & \(\times\)3 & 32.75 & 0.9090 & 29.30 & 0.8215 & 28.41 & 0.7863 & 26.24 & 0.7989 & 30.48 & 0.9117 \\ VDSR & \(\times\)3 & 33.67 & 0.9210 & 29.78 & 0.8320 & 28.83 & 0.7970 & 27.14 & 0.8290 & 32.01 & 0.9340 \\ CARN & \(\times\)3 & 34.29 & 0.9542 & 0.9542 & 0.8407 & 29.06 & 0.8034 & 28.06 & 0.8493 & 33.50 & 0.9360 \\ MSRN & \(\times\)3 & 34.38 & 0.9262 & 30.34 & 0.8395 & 29.08 & 0.8041 & 28.08 & 0.8554 & 33.44 & 0.9427 \\ MLRN & \(\times\)3 & 34.18 & 0.9254 & 30.22 & 0.8390 & 29.01 & 0.8033 & 27.88 & 0.8469 & 0.8403 & 32.92 & 0.9391 \\ SBRCA-OA & \(\times\)3 & 34.425 & 0.9271 & 30.43 & 0.8433 & 29.14 & 0.8060 & 28.47 & 0.8580 & & \\ HRAN & \(\times\)3 & **34.69** & 0.9292 & **0.544** & **0.8464** & **29.25** & **0.8089** & **28.76** & **0.8645** & **34.08** & **0.9479** \\ ARCAN & \(\times\)3 & 34.36 & **0.9542** & 30.30 & **0.8412** & 29.07 & 0.8045 & 24.18 & 0.8514 & 33.50 & 0.9439 \\ RCAN & \(\times\)3 & **34.74** & **0.9299** & **0.3615** & **0.8482** & **29.32** & **0.8119** & **29.09** & **0.8702** & **34.44** & **0.9499** \\ GRAN(ours) & \(\times\)3 & **34.58** & **0.9302** & **30.53** & **0.8445** & **29.29** & **0.8070** & **28.31** & **0.8623** & **33.96** & **0.9476** \\ \hline Bicubic & \(\times\)4 & 28.42 & 0.8104 & 26.00 & 0.7027 & 25.96 & 0.6675 & 23.14 & 0.6577 & 24.89 & 0.7866 \\ SRCNN & \(\times\)4 & 30.48 & 0.6828 & 27.05 & 0.7613 & 26.90 & 0.7101 & 24.52 & 0.7221 & 27.58 & 0.8555 \\ VDRR & \(\times\)4 & 31.35 & 0.8830 & 28.02 & 0.7680 & 27.29 & 0.7251 & 25.18 & 0.740 & 28.83 & 0.8870 \\ CARN & \(\times\)4 & 32.13 & 0.8937 & 28.60 & 0.7806 
& 27.58 & 0.7349 & 26.07 & 0.7837 & 30.47 & 0.9084 \\ MSRN & \(\times\)4 & 32.07 & 0.8903 & 28.60 & 0.775 & 27.52 & 0.7273 & 26.04 & 0.7896 & 30.17 & 0.9034 \\ MLRN & \(\times\)4 & 31.92 & 0.8911 & 28.43 & 0.7748 & 27.49 & 0.7334 & 25.78 & 0.7763 & & \\ SISRCA-OA & \(\times\)4 & 31.88 & 0.8900 & 28.31 & 0.7740 & 27.45 & 0.7303 & 25.56 & 0.7670 & 29.
advancement in reconstructing image information. This improvement can be clearly seen in the enlarged patches, especially in terms of details: for example, 'Image004' and 'Image076' from Urban100 show that our method has an obvious advantage in restoring lines.
**Evaluation on real data.** We further conduct experiments on real-world LR images to demonstrate the effectiveness of our method. We adopt the new RealSR [60] dataset, which was used in the NTIRE 2019 competition [66]. This dataset contains raw images captured by DSLR cameras. Multiple images of the same scene were captured with different focal lengths; images taken with longer focal lengths contain finer details and can be considered HR counterparts of the images taken with shorter focal lengths. Although RealSR provides images at different scales, it is very hard to obtain image pairs that are perfectly aligned, because of complicated misalignment between images and changes in the imaging system introduced by adjusting the focal length. As a result, we only consider a visual comparison of SR results. We use images captured with a 28 mm focal length as LR inputs, and Figure 8 shows the visual comparison of 2\(\times\) upsampling on the RealSR dataset, from which we can see that our method achieves good visual quality.
### Ablation Study
To analyze the contributions of the proposed GRAB and CSAM to our lightweight model GRAN, we compare our method with several ablated versions. Our GRAN consists of multiple GRAB groups, and each GRAB consists of a Ghost Module and the Channel and Spatial Attention Module. Therefore, our ablation study validates each part of GRAB, i.e., the Ghost Module, channel attention, and spatial attention. Here we
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline & RCAN & MSRN & DBPN & HRAN & Ours \\ \cline{2-5} Param & 22.89M & 5.67M & 10.04M & 8.01M & **2.24M** \\ \cline{2-5} FLOPs & 29.96G (52.30G) & 97.3G & 1438.26G & 170G & **4.95G** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Parameters and FLOPs comparison
\begin{table}
\begin{tabular}{c|c|c|c|c|c} \hline \hline & ab\_1 & ab\_2 & ab\_3 & ab\_4 & ab\_5 \\ \hline Standard Conv & \(\surd\) & \(\times\) & \(\times\) & \(\times\) & \(\times\) \\ Ghost Module & \(\times\) & \(\surd\) & \(\surd\) & \(\surd\) & \(\surd\) \\ Channel Attention & \(\surd\) & \(\surd\) & \(\surd\) & \(\times\) & \(\times\) \\ Spatial Attention & \(\times\) & \(\times\) & \(\surd\) & \(\surd\) & \(\times\) \\ Param & 22.89M & 2.21M & 2.24M & 2.07M & 2.04M \\ FLOPs & 52.30G & 4.88G & 4.95G & 4.95G & 4.88G \\ avg\_PSNR & 27.77 & 27.54 & 27.84 & 27.47 & 27.06 \\ avg\_SSIM & 0.7436 & 0.7146 & 0.7276 & 0.7214 & 0.7114 \\ \hline \hline \end{tabular}
\end{table}
Table 3: The results of ablation studies on B100(x4). The ‘ \(\times\) ’ in this table denotes the corresponding operation is not used and the ‘ \(\surd\) ’ denotes the corresponding operation is used.
Figure 8: Experimental results on real data.
Figure 7: Visual comparison with bicubic degradation model.
name the original convolution operation in the traditional network mentioned in Section 3.2.1 as standard convolution. As shown in Table 3, we designed 5 versions of the experimental setup for ablation:
**ab1** First, we adopt the standard convolution operation and the channel attention mechanism to build the basic block. This is equivalent to the basic block in RCAN [9], i.e., RCAB.
**ab2** To compare the effect of standard convolution versus the Ghost Module on the number of parameters, the amount of computation, and the model's performance, we build the basic block from the Ghost Module and the channel attention mechanism. The results in Table 3 reveal that the Ghost Module significantly reduces the number of parameters and computations of the model compared with RCAN [9].
**ab3** We adopt the Ghost Module and the Channel and Spatial Attention Module (CSAM) to build the basic block, i.e., the GRAB used to construct our GRAN. Compared with **ab2**, **ab3** further shows that our designed CSAM contributes to improved performance.
**ab4** In addition, to verify the influence of the Channel and Spatial Attention Module on the number of parameters, the amount of computation, and the performance of the model, we remove the channel attention.
**ab5** Finally, we further remove the spatial attention on the basis of **ab4**. Compared with **ab3**, when only channel attention, only spatial attention, or neither is used, the model parameters are approximately equal, but both PSNR and SSIM decrease significantly.
The above ablation studies confirm the effectiveness of the proposed GRAB and CSAM. That is, the Ghost Module based on linear operations indeed reduces the complexity of the network compared with traditional convolution, and the attention mechanism that mixes spatial and channel information also helps reconstruct higher-quality images.
## 5 Conclusion
In this paper, we propose an efficient and lightweight network named GRAN for SISR. It consists of multiple GRAB groups combining Ghost technology with the Channel and Spatial Attention Module to reduce redundant features and decrease memory and computing requirements. Experiments conducted on the benchmarks show the superior performance of our method both qualitatively and quantitatively. We achieve high performance with lower computational resources, with more than ten times fewer parameters than previous SoTA methods. Currently, we handle low-resolution images well; however, the proposed method may fail in more realistic scenarios, such as blur and noise in low-resolution images. Therefore, we will focus on solving the super-resolution problem under complex degradations in the future.
## Declarations
* Funding This work was funded in part by the Project of the National Natural Science Foundation of China under Grant 61901384 and 61871328, the Natural Science Basic Research Program of Shaanxi under Grant 2021JCW-03, as well as the Joint Funds of the National Natural Science Foundation of China under Grant U19B2037.
* Conflict of interest/Competing interests (check journal-specific guidelines for which heading to use) Not applicable
* Ethics approval
* Consent to participate
* Consent for publication
* Availability of data and materials The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
* Code availability After the paper is accepted, the code will be open-sourced.
* Authors' contributions Conceptualization, A.N, and Q.Y.; methodology, A.N.; software, A.N, P.W.; validation, A.N., P.W.; formal analysis, A.N.; investigation, P.W.; resources, Y.Z.; data curation, J.S.; writing--original draft preparation, A.N.; writing--review and editing, A.N., P.W., Y.Z, S.J.; visualization, P.W.; supervision, S.J.; project administration, A.N., Y.Z. All authors have read and agreed to the published version of the manuscript.
|
2309.12254 | Variational Quantum Harmonizer: Generating Chord Progressions and Other
Sonification Methods with the VQE Algorithm | This work investigates a case study of using physical-based sonification of
Quadratic Unconstrained Binary Optimization (QUBO) problems, optimized by the
Variational Quantum Eigensolver (VQE) algorithm. The VQE approximates the
solution of the problem by using an iterative loop between the quantum computer
and a classical optimization routine. This work explores the intermediary
statevectors found in each VQE iteration as the means of sonifying the
optimization process itself. The implementation was realised in the form of a
musical interface prototype named Variational Quantum Harmonizer (VQH),
providing potential design strategies for musical applications, focusing on
chords, chord progressions, and arpeggios. The VQH can be used both to enhance
data visualization or to create artistic pieces. The methodology is also
relevant in terms of how an artist would gain intuition towards achieving a
desired musical sound by carefully designing QUBO cost functions. Flexible
mapping strategies could supply a broad portfolio of sounds for QUBO and
quantum-inspired musical compositions, as demonstrated in a case study
composition, "Dependent Origination" by Peter Thomas and Paulo Itaborai. | Paulo Vitor Itaboraí, Tim Schwägerl, María Aguado Yáñez, Arianna Crippa, Karl Jansen, Eduardo Reck Miranda, Peter Thomas | 2023-09-21T16:58:35Z | http://arxiv.org/abs/2309.12254v1 | Variational Quantum Harmonizer: Generating Chord Progressions and Other Sonification Methods with the VQE Algorithm
###### Abstract
This work investigates a case study of using physical-based sonification of Quadratic Unconstrained Binary Optimization (QUBO) problems, optimized by the Variational Quantum Eigensolver (VQE) algorithm. The VQE approximates the solution of the problem by using an iterative loop between the quantum computer and a classical optimization routine. This work explores the intermediary statevectors found in each VQE iteration as the means of sonifying the optimization process itself. The implementation was realised in the form of a musical interface prototype named Variational Quantum Harmonizer (VQH), providing potential design strategies for musical applications, focusing on chords, chord progressions, and arpeggios. The VQH can be used both to enhance data visualization and to create artistic pieces. The methodology is also relevant in terms of how an artist would gain intuition towards achieving a desired musical sound by carefully designing QUBO cost functions. Flexible mapping strategies could supply a broad portfolio of sounds for QUBO and quantum-inspired musical compositions, as demonstrated in a case study composition, "Dependent Origination" by Peter Thomas and Paulo Itaborai.
## 1 Introduction
There are several attempts to define what a Musical Instrument is. When considering acoustic instruments from a more physical standpoint, Fletcher and Rossing state the following.
_"In most instruments, sound production depends upon the collective behavior of several vibrators, which may be weakly or strongly coupled together. This coupling, along with nonlinear feedback, may cause the instrument as a whole to behave as a complex vibrating system, even though the individual elements are relatively simple vibrators."_[1]
The interesting point being made here is that complex behaviour can arise from simple coupled systems, which, when applied to sound, can provide musical expressivity.
Beyond acoustic media and mechanical vibrating systems, electronic and digital technologies have also become musical instruments. In fact, the flexibility of current music programming languages (such as SuperCollider[2], Pure Data[3], Max/MSP[4], etc.) allows the adventurous to design and build hundreds of new musical interfaces. In programming environments, performers can build, reform, and play musical interfaces in the time span of a performance [5]. Regarding this new musical interfacing ecosystem, Miranda and Wanderley [6] arrive at a concise representation model to visualise the fundamental constituting elements of a generic Digital Musical Instrument (Fig. 1).
According to this representation, a typical Digital Musical Instrument consists of three parts. First, there is a Control Interface through which the performer interacts with the instrument and expresses musical ideas. Second, there is a mapping process, in which the interface parameters are transformed and combined into control parameters. Finally, the control parameters drive a synthesis stage in which sound is generated and amplified.
It is possible to extrapolate this definition to include quantum technologies, allowing the hybridisation of quantum and classical media at the controlling and mapping layers of musical interfaces [7]. It is arguable that the introduction of emerging technologies in music and artistic experimentation has often led to new forms of musical expression [8]. Similarly, Quantum Computing can shed new light on quantum-inspired art and quantum aesthetics [7][9]. Also, music has historically contributed to scientific investigation [10] - and it should be no different with
Figure 1: Constituting elements of a digital musical instrument [6]
quantum computing.
Although quantum processing units deal with quantum information and apply quantum logical operations (quantum gates), they are strongly reliant on classical computation. For example, Noisy Intermediate Scale Quantum Computers (NISQ)[11] are currently available on cloud services, and the respective instructions and algorithms are written in and controlled from classical computers. Additionally, the results from the quantum computation are also analysed and post-processed in classical computers.
In this work, we introduce the Variational Quantum Harmonizer (VQH) musical interface. It enables composition in terms of a Quadratic Unconstrained Binary Optimization (QUBO) problem and utilises the Variational Quantum Eigensolver (VQE) algorithm to solve the QUBO. The paper is structured as follows: In section 2 we explain QUBO and how to solve it using the VQE algorithm. In this context, we introduce quantum circuits and classical optimization routines, the main constituents of the hybrid quantum-classical VQE algorithm. In section 3, we illustrate how to encode chords in the solution of QUBO and define a sonification of the VQE's optimization process. In section 4, we elaborate on the connection between QUBO and the mathematically equivalent Ising Hamiltonian. This enables an alternative approach to composing with VQH and the use of the Ising model to extend QUBO. In section 5, we explain the VQH's user interface and in section 6 alternative mapping strategies for sonification are presented. The possibility of using VQH in live performances is assessed in section 7. We conclude and propose ways to extend VQH in section 8. We assume that the reader has basic knowledge of quantum computing, especially the gate-based model of computations.
## 2 The Variational Quantum Harmonizer algorithm
The _Variational Quantum Harmonizer_ prototype is designed as a sonification-based musical instrument. Typically, the control interface of this class of instrument contains a dataset or database obtained from a scientific experiment or a simulation. The nature of the data is often related to a poetic, aesthetic or artistic motivation of the instrument designer. This data is customarily transformed directly into a control signal, which could be used to modulate a synthesis parameter of choice or to generate symbolic musical notation. However, the mapping stage could also use data from different origins and contain multiple layers of smaller mappings and data processing before it becomes a control signal; this usually increases the complexity and expressivity of the musical gestures. In simple terms, the VQH control interface has two main layers. It does not contain a readily available dataset, as is usually the case. Instead, the user _designs the experiment itself_, with the aim of achieving an approximate musical result (e.g., chords, chord progressions, and other compositional gestures). The problem is then solved with a variational method using a hybrid quantum-classical algorithm, creating a respective dataset. Finally, the dataset is sonified.
Composing with the VQH works as follows. First, the user chooses the coefficients of a square matrix that defines a QUBO problem (Sec. 2.1); QUBO is a common formulation for a variety of optimization problems, from logistics to machine learning. Then, the VQH sonifies the process of solving the QUBO. One could, for example, construct a QUBO such that its solution leads to a simple chord. To solve the QUBO on a gate-based quantum computer using the Variational Quantum Eigensolver (VQE), it is transformed into a mathematically equivalent Ising Hamiltonian. Details on VQE and on the transformation from QUBO to an Ising Hamiltonian are found in Sec. 2.2. Then, VQE optimizes the parameters of a quantum circuit to approximate the ground state of the Hamiltonian and therewith the QUBO problem, using an iterative loop between a quantum computer and a classical optimization routine. The results obtained during the quantum algorithmic process are analyzed, post-processed, and finally used as sonification data to drive various synthesis parameters, as specified by the artist.
### Quadratic Unconstrained Binary Optimization
The Quadratic Unconstrained Binary Optimization problem can be defined in terms of a cost function (eq. 1) that contains linear and quadratic terms. Individual agents (\(n_{i}\in\{0,1\}\)) both make an individual contribution to the system (linear coefficients \(a_{i}\)) and interfere with other agents (quadratic coefficients \(b_{ij}\)). Solving this problem means finding (or approximating) a configuration that _minimizes_ the cost function.
\[Q(n)=\sum_{i}^{N}a_{i}n_{i}+\sum_{i,j}b_{i,j}n_{i}n_{j};\quad n\in\{0,1\}^{N} \tag{1}\]
Currently, quantum computing approaches are being investigated for solving QUBO, for example, in Quantum Machine Learning, using the VQE algorithm [12].
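To make eq. 1 concrete, the snippet below evaluates a small QUBO cost function and minimizes it by brute force; the coefficients are arbitrary illustrative values, and exhaustive search is only feasible for such tiny instances.

```python
import numpy as np
from itertools import product

# Arbitrary illustrative QUBO coefficients: linear terms a_i, couplings b_ij.
a = np.array([1.0, -2.0, 0.5])
b = np.array([[0.0, 1.5,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 0.0,  0.0]])

def qubo_cost(n):
    n = np.asarray(n)
    return float(a @ n + n @ b @ n)            # eq. 1

# Brute-force minimization over all 2^N configurations (only viable for tiny N).
best = min(product([0, 1], repeat=len(a)), key=qubo_cost)
print(best, qubo_cost(best))
```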
### The Variational Quantum Eigensolver
VQE is a direct application of the variational method used in quantum mechanics. It was proposed in 2014 by Peruzzo et al. [13] to approximate the ground-state energy of molecules. The expectation value of a Hamiltonian \(H\) for a state \(|\psi\rangle\) may be expressed as a weighted sum of its eigenvalues \(\lambda_{i}\), where \(|\psi_{i}\rangle\) are the corresponding eigenvectors:
\[\langle H\rangle_{\psi}\equiv\langle\psi|H|\psi\rangle=\sum_{i=1}^{N}\lambda_ {i}|\langle\psi_{i}|\psi\rangle|^{2} \tag{2}\]
Since \(|\langle\psi_{i}|\psi\rangle|^{2}\geq 0\), the expectation value will always be at least the minimum eigenvalue:
\[\lambda_{\text{min}}\leq\langle H\rangle_{\psi} \tag{3}\]
By applying a parameterized circuit \(U(\vec{\vartheta})\) to some arbitrary starting state \(|\psi\rangle\), VQE provides an estimate \(\lambda_{\vec{\vartheta}}\) bounding \(\lambda_{\text{min}}\):
\[\lambda_{\text{min}}\leq\lambda_{\vec{\vartheta}}\equiv\langle\psi(\vec{\vartheta})|H|\psi(\vec{\vartheta})\rangle \tag{4}\]
\[\text{with}\quad U(\vec{\vartheta})|\psi\rangle\equiv|\psi(\vec{\vartheta})\rangle\]
The estimate is then iteratively optimized by a classical optimizer updating the parameters \(\vec{\vartheta}\). In practice, the expectation value \(\lambda_{\vec{\vartheta}}\) is approximated as the sample mean of repeatedly preparing \(|\psi(\vec{\vartheta})\rangle\) and measuring \(H\) in this state \(K\) times:
\[\lambda_{\vec{\vartheta}}\approx\frac{1}{K}\sum_{k=1}^{K}\lambda_{\vec{\vartheta},k} \tag{5}\]
When using VQE to solve QUBO, one transforms the function \(Q(n)\) into a computationally equivalent Ising Hamiltonian by mapping the binary variables \(n_{i}\) to Pauli \(Z_{i}\) operators:
\[H(Z)=\sum_{i}^{N}a_{i}^{VQE}Z_{i}+\sum_{i}^{N}\sum_{j<i}^{N}b_{ij}^{VQE}Z_{i}Z _{j} \tag{6}\]
The difference between binary variables and the eigenvalues of the Pauli \(Z_{i}\) operators can be accounted for by transforming the coefficients:
\[n_{i}\Rightarrow Z_{i}=1-2n_{i} \tag{7}\] \[b_{ij}\Rightarrow b_{ij}^{VQE}=b_{ij} \tag{8}\] \[a_{i}\Rightarrow a_{i}^{VQE}=-2a_{i}-\sum_{j\neq i}^{N}b_{ij} \tag{9}\]
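The coefficient mapping of eqs. 7-9 can be written down directly; the sketch below applies it to the small illustrative QUBO from the previous snippet (any coefficient arrays of matching shape would do).

```python
import numpy as np

def qubo_to_ising(a, b):
    """Map QUBO coefficients (a_i, b_ij) to the Ising coefficients of eq. 6,
    following eqs. 7-9."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    b_vqe = b.copy()                                    # eq. 8
    a_vqe = -2.0 * a - (b.sum(axis=1) - np.diag(b))     # eq. 9, sum over j != i
    return a_vqe, b_vqe

# Applied to the illustrative 3-variable QUBO from the previous snippet.
a = np.array([1.0, -2.0, 0.5])
b = np.array([[0.0, 1.5,  0.0],
              [0.0, 0.0, -1.0],
              [0.0, 0.0,  0.0]])
a_vqe, b_vqe = qubo_to_ising(a, b)
print(a_vqe)    # linear Z_i coefficients
print(b_vqe)    # quadratic Z_i Z_j coefficients
```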
### Parameterized Quantum Circuits
As explained in the previous section, a main constituent of the VQE algorithm is a parameterized circuit \(U(\vec{\vartheta})\) that prepares a trial state \(|\psi(\vec{\vartheta})\rangle\). There are two paradigms for designing the circuit: problem-specific circuits and hardware-efficient circuits. While the former encodes knowledge of the problem into the circuit, the latter uses alternating layers of rotation gates and entangling gates. This structure results in shallow circuits that can be executed on NISQ computers. One prominent example is the EfficientSU2 circuit, displayed in figure 2 for 5 qubits. The initial layer of \(R_{y}\) rotations is sufficient to express the solution of QUBO, because it is a computational basis state. However, introducing additional layers of entangling \(CNOT\) gates and \(R_{y}\) gates enables the circuit to explore a larger portion of the Hilbert space. The user can specify the number of layers and the entanglement structure to produce different sounds. The VQH implementation in this work has used EfficientSU2 circuits with two layers (the initial layer and one entanglement layer).
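For illustration, such an ansatz can be built from Qiskit's circuit library as sketched below; the import path assumes a Qiskit release in which `qiskit.circuit.library.EfficientSU2` is available, and `reps=1` reproduces the two-layer structure (initial \(R_{y}\) layer plus one entanglement layer) used here.

```python
from qiskit.circuit.library import EfficientSU2

# Two-layer EfficientSU2 ansatz for 5 qubits, as in Figure 2: an initial layer
# of Ry rotations plus one block of CNOT entanglement followed by Ry rotations.
ansatz = EfficientSU2(5, su2_gates=["ry"], entanglement="linear", reps=1)
print(ansatz.num_parameters)      # 10 trainable angles for 5 qubits
print(ansatz.decompose().draw())
```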
### Classical Optimizers
Another key ingredient of the VQH algorithm is the choice of the classical optimization routine. From a conceptual point of view, one distinguishes between gradient-based and gradient-free optimizers. For our study, however, the most audible effect is due to how the optimizer updates the parameters. For instance, SPSA [14] perturbs all parameters at once, whereas optimizers such as NFT [15] change only one parameter at a time. VQH supports all optimizers in Qiskit [16]. For explanations of how these optimizers work, we refer to Qiskit and the original implementation of many optimizers in Scipy [17].
## 3 Sonifying the VQE algorithm: Chords and Chord progressions
In a previous work conducted by Clemente, Crippa, Jansen and Tüysüz [18], the implemented mapping strategy is to transform quantities computed with a quantum variational approach into audible frequencies. In this work, they considered the Ising model and studied the Hamiltonian ground state with VQE and excited states with the Variational Quantum Deflation (VQD) [19, 20] algorithm. The first approach is to apply the VQD algorithm, compute the energy eigenvalues and convert them to audible frequencies. Using the coupling \(h_{x}\) as a '_time variable_' (see eq. 24 in the concluding section), they followed the behaviour of the corresponding frequencies and played them through an output device. They also proposed to measure the observables during the optimization itself and thus play sounds from variational algorithms. In the last part, they followed again the approach of assigning the role of 'time variable' to the external magnetic field. Frequencies can then be computed from the energy eigenvalues and intensities from the magnetization measured in the corresponding eigenstates.
In contrast, this study adopts a different perspective on the mapping strategy, focusing the attention on the variational method itself, as well as emphasising the quantum states associated with the computed energies. Specifically, our approach incorporates the statevector sonification of each iteration of the optimization process, as detailed in this section. As a result, this method may enable a unique auditory insight into the intricate interplay between different classical optimizers and the way they navigate and reach a QUBO solution when assisted by a quantum algorithm.
For instructive purposes, the mapping strategy will be explained in this section using a simple example.
Figure 2: An EfficientSU2 circuit for 5 qubits. The initial \(R_{y}\) rotations are referred to as the 0-th layer. The 1st layer here introduces linear entanglement using \(CNOT\) gates and additional rotations.
### Qubit words as chords: the QUBO Harp
Imagine a special _harp_ with 12 "binary" strings. Each string is tuned to a specific frequency, e.g. one octave of the chromatic scale. A convention was established where \(0\) represents a resting string - or _silence_ - while \(1\) reciprocally indicates a _sounding note_. As a result, a 12-digit binary word can be used to represent notes, intervals, or more generally, _Chords_ (for simplicity, all cases will be referred to as "chords" for the remainder of this text). For instance, a C major chord could be represented as shown below (Fig. 3).
However, the strings of this figurative harp can also be _damped_ and/or _coupled_ together, leading to some kind of constructive or destructive interference that can be described by the coefficients of eq. 1. A cost function is designed such that a sounding C major chord is a configuration that minimizes the QUBO function \(Q(n)\).
This QUBO then is mapped to an equivalent Hamiltonian as in eq. 6. In other words, we assign \(\ket{0}\) to silence and \(\ket{1}\) to a sounding note, and have \(\ket{Cmaj}\) be the system's _ground state_.
#### 3.1.1 Using the VQE to minimize (and sonify) the QUBO Problem
After designing the operator coefficients, the VQE algorithm can be used to navigate the state space and approximate the solution using a hardware-efficient SU(2) ansatz (Fig. 2). Evidently, in this case the expected result for the minimum energy eigenstate would be \(\ket{Cmaj}\), _as designed_. At first sight, this does not provide any new information apart from a floating point number representing a step-by-step estimation of the expectation value for the ground state energy \(\bra{Cmaj}H\ket{Cmaj}\). However, the intention of this process is to investigate _how_ the result was approached.
Consequently, in our methodology, at each optimization step of the VQE, the current statevector is being sampled, stored, and used as input of the next iteration. In other words, our approach is to keep track of how the quantum state is being transformed as the algorithm looks for an optimal solution.
The sampled state vectors of each VQE iteration (in addition to the corresponding partial estimation results of the expectation value) represent the data used for sonification.
#### 3.1.2 The Statevector sonification
The main idea of this approach is to focus on the sonification of the sampled statevectors, conveying a sounding chord where each qubit in the \(\ket{1}\) state represents a playing note. However, since qubits may be in superposition states, note that a C-major-sounding chord can be achieved with many different quantum states. For example, \(\ket{Cmaj_{1}}\), \(\ket{Cmaj_{2}}\) and \(\ket{Cmaj_{3}}\) below will all _sound_ as a C major to our hypothetical harp.
\[\ket{Cmaj_{1}}=\ket{100010010000}\]

\[\ket{Cmaj_{2}}=\frac{1}{\sqrt{2}}\left[\ket{100010000000}+\ket{000010010000}\right]\]

\[\ket{Cmaj_{3}}=\sqrt{\frac{3}{4}}\ket{100000000000}+\sqrt{\frac{3}{16}}\ket{000010000000}+\sqrt{\frac{1}{16}}\ket{100000010000} \tag{10}\]
As a result, all possible states in which a note is playing need to be somehow grouped together. To achieve this, the Marginal Distribution of each sampled statevector was obtained. The Marginal Distribution refers to the probability of each qubit being in the state \(\ket{1}\) independently of the other qubits. By collecting the coefficients of each state contributing to a specific note being played, we introduce a notion of a note's relative _loudness_ within that chord. As a result, an _additive synthesis_ perspective is achieved and taken as the initial strategy to sonify the 12 coefficients of the marginal distribution.
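A minimal sketch of this marginal-distribution computation in numpy, assuming the statevector is given as a complex array and that qubit 0 corresponds to the least-significant bit of the basis-state index (a convention chosen here for illustration):

```python
import numpy as np

def marginal_distribution(statevector, num_qubits):
    """Probability of each qubit being |1>, independently of the others."""
    probs = np.abs(np.asarray(statevector)) ** 2
    marginals = np.zeros(num_qubits)
    for q in range(num_qubits):
        mask = (np.arange(len(probs)) >> q) & 1   # which basis states have qubit q set
        marginals[q] = probs[mask == 1].sum()
    return marginals
```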
### Example 1: Linear Cmajor Chord
As an initial example, the C major cost function below is designed to use only linear coefficients (eq. 11). The desired notes (C, E, G) contribute to lowering the cost, whereas the remaining notes incur energy penalties when they are played. In our harp analogy, this amounts to _damping_ unwanted notes while favouring the resonance of preferred notes.
\[Q(n_{Cmaj})=-n_{C}-n_{E}-n_{G}+\sum_{k\notin Cmaj}n_{k} \tag{11}\]
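As an illustration, the linear coefficients of eq. 11 could be assembled as follows (a sketch with an assumed note ordering; the actual VQH input files may differ):

```python
import numpy as np

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def linear_chord_qubo(chord=("C", "E", "G")):
    """Linear coefficients of eq. 11: -1 for chord notes, +1 otherwise.
    The quadratic part is zero in this example."""
    a = np.array([-1.0 if note in chord else 1.0 for note in NOTES])
    b = np.zeros((12, 12))
    return a, b
```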
By running the VQE - using the COBYLA optimizer - to solve the problem above, the marginal distribution evolution obtained over 150 iterations is depicted in fig. 4.
It is possible to observe how the COBYLA optimizer handled the minimization process, and how it guided the configuration space explored by the quantum circuit. Most notably, the algorithm "perturbs" each note individually. Using the harp analogy again, it looks like the algorithm is "sweeping" through the strings (or undamping them). This can be heard clearly when an additive synthesis approach is applied to sonify fig. 4. A spectrogram visualization of the sonified result is shown in fig. 5.
### Example 2: I-IV-V-I Linear Chord Progression
Figure 3: A C Major Chord represented in binary

The VQH Implementation also enables the concatenation of QUBOs. By inputting a list of QUBO matrices and a
number of iterations, the VQH will proceed as follows: First, the initial QUBO is solved, as usual. Then, the resulting statevector is used as the initial point for the next QUBO optimization, and so on. In other words, the structure of the system _changes over time_, affecting the optimal solution. The intended result is to create progressions of distinct chords, or create a more (musically) dynamic output.
For instance, expanding from the initial example, consider a I-IV-V-I progression. It can be defined as a set of four QUBOs (eqs. 12-15). Using the same VQE configurations as in Example 1 - using 150 iterations for each chord - leads to results as depicted in Figs. 6 and 7.
\[Q_{1}(n^{I}) =-n_{C}-n_{E}-n_{G}+\sum_{k\notin Cmaj}n_{k} \tag{12}\] \[Q_{2}(n^{IV}) =-n_{F}-n_{A}-n_{C}+\sum_{k\notin Fmaj}n_{k}\] (13) \[Q_{3}(n^{V}) =-n_{G}-n_{B}-n_{D}+\sum_{k\notin Gmaj}n_{k}\] (14) \[Q_{4}(n^{I}) =Q_{1}(n^{I}) \tag{15}\]
Note in fig. 6 how some notes persist through the progression (e.g., the E sounding in the second chord, or the B sounding in the last one), as well as how completely new "_intrusive_" notes appear (e.g., the A# in the last chord). More importantly, compare the first and the last chords. Even though their QUBOs are identical, the starting point is different, leading to a new path in the configuration space, and to potentially encountering barren plateaus.
## 4 QUBOs and Ising Hamiltonians
Equation 6 makes explicit a close structural proximity between the QUBO function and the Ising model. It is possible to take advantage of this proximity to frame the QUBO matrix both as an optimization problem and as a physical system that can exhibit quantum phenomena, bringing quantum-inspired ideas to the VQH context. The _harp_ analogy could be reframed as a _spin lattice_.
### The Ising Harp
Consider a 1D Ising problem, containing 12 spins. Then, a unique musical note is assigned to each spin in the lattice (such as the chromatic scale). By convention, a _spin down_\(\downarrow\) state represents silence, and _spin up_\(\uparrow\) reciprocally represents a _sounding note_ (Fig. 8).
The initial strategy (Examples 1 and 2) was to achieve this desired Hamiltonian focusing on the linear part of the model, as in the QUBO function with only linear coefficients. The energy of a desired playing note is favoured by a specific alignment with an external longitudinal magnetic field in the Z direction, while the energies of the remaining spins are penalized, forcing them to align in the opposite direction (eq. 16).
Figure 4: Marginal Distribution Evolution for Example 1
Figure 5: Visualization of Example 1 sonified with additive synthesis.
Figure 8: Spin configuration of a C Major Chord
Figure 6: Marginal Distribution Evolution for Example 2
Figure 7: Visualization of Example 2 sonified with additive synthesis.
\[H(\sigma)=h_{z}\Big{(}\sum_{k\in Cmaj}\sigma_{k}^{Z}-\sum_{k\notin Cmaj}\sigma_{k}^ {Z}\Big{)} \tag{16}\]
### Example 3: Coupled-Spin Cmajor Chord
A second strategy would consider only the coupling between neighbouring spins, favouring either a ferromagnetic-like (\(\uparrow\uparrow\), \(\downarrow\downarrow\)) or anti-ferromagnetic-like (\(\uparrow\downarrow\), \(\downarrow\uparrow\)) alignment. A QUBO function is sketched using only the quadratic terms (eq. 17). Figure 9 illustrates how the coefficients were designed for this example, taking into account their signs.
\[Q(n^{Cmaj_{Quad}})=\sum_{k,l}b_{kl}n_{k}n_{l} \tag{17}\]
\[b_{kl}=\begin{cases}>0,&k\in Chord;\,l\notin Chord\\ &k\notin Chord;\,l\in Chord\\ <0,&\text{otherwise}\end{cases} \tag{18}\]
Note that the optimization of eq. 17 leads to an (apparently) unexpected solution (Fig. 10). The result seems to have arrived at the _opposite_ of the intended \(C_{maj}\) chord, having all other notes but C, E, and G playing. The designed system (eq. 17) focussed on spin couplings, but did not account for differences in global orientation. As a result, the state \(\ket{antiC_{maj}}=\ket{\downarrow\uparrow\uparrow\uparrow\downarrow\uparrow\uparrow\downarrow\uparrow\uparrow\uparrow\uparrow}\) became a solution to the problem.
In fact, it is possible to verify (using eqs. 6-9) that the originally intended chord is not a solution to the problem, since the QUBO quadratic coefficients also impact the Ising linear coefficients (eq. 9):
\[\begin{split}& H=-\sum_{i}\Big{(}2a_{i}+\sum_{j\neq i}b_{ij}\Big{)}Z_{i}+\sum_{i\neq j}b_{ij}Z_{i}Z_{j}\\ &\bra{\downarrow\uparrow\uparrow\uparrow\downarrow\uparrow\uparrow\downarrow\uparrow\uparrow\uparrow\uparrow}H\ket{\downarrow\uparrow\uparrow\uparrow\downarrow\uparrow\uparrow\downarrow\uparrow\uparrow\uparrow\uparrow}=\\ &=(-12)-12=-24\end{split} \tag{19}\]
### Example 4: Chords in Superposition
In an attempt to correct the last example, the QUBO linear coefficients (\(a_{i}\)) will be re-introduced, with the intention of closing the energy gap between \(\ket{\uparrow\downarrow\downarrow\downarrow\uparrow\downarrow\downarrow\uparrow\downarrow\downarrow\downarrow\downarrow}\) and \(\ket{\downarrow\uparrow\uparrow\uparrow\downarrow\uparrow\uparrow\downarrow\uparrow\uparrow\uparrow\uparrow}\). The strategy was to reinforce the orientation of selected qubits (Fig. 11), aiming to negate the influence of non-zero \(b_{ij}\) coefficients in equation 9. This leads to the QUBO coefficients described in eq. 21.
\[\begin{split}& b_{kl}=\begin{cases}1,&k\in Chord;\,l\notin Chord \\ -1,&\text{otherwise}\end{cases}\\ & a_{k}=-\frac{1}{2}\sum_{k\neq l}b_{kl}\end{split} \tag{21}\]
Now, it can be verified that \(\langle H\rangle_{C_{maj}}=\langle H\rangle_{antiC_{maj}}=-12\), making them equally attractive results. As a consequence, different runs of the VQE optimization for this QUBO might arrive at \(\ket{C_{maj}}\), \(\ket{antiC_{maj}}\), or a mixed state, as shown in Figure 12.
In the first example (Fig. 12a), it is verified that the \(C_{maj}\) chord can be achieved with this QUBO. In Fig. 12b, a different initial condition was given. The VQH tends to converge to "\(antiC_{maj}\)"; however, the presence of the 'G' note creates difficulties in the optimization. Most interestingly, Fig. 12c displays a less common result, where the chord starts approaching a \(C_{maj}\), but then the chord completely _flips_ after a "stubborn" 'C' note remains unplayed.
### Adiabatically Navigated VQE
Figure 9: Spin configuration of a C Major Chord

Figure 10: VQH result for Example 3

Figure 11: Spin configuration of a C Major Chord

To improve the efficacy of the VQE algorithm, particularly when searching for the ground state of a quantum system, recent research has proposed the introduction of the adiabatic theorem for navigating towards the solution. The method proposes to have a time-dependent Hamiltonian that is gradually (and slowly) moving towards an asymptotic equilibrium. For instance, consider a system that starts
at an eigenstate \(\psi_{0}\) of a given Hamiltonian \(H_{0}\). The system evolves in time and is subjected to an adiabatic process, until it reaches a final eigenstate \(\psi_{1}\) of a different Hamiltonian, \(H_{1}\).
Matsuura et al. [21] propose a method for an adiabatically navigated quantum eigensolver, which involves a time-dependent Hamiltonian; here, this idea is realized with the same concatenation approach adopted in Example 2. A simple approximation of an adiabatic process (eq. 22) is used to _smooth_ the transition between different VQE runs. From an initial Hamiltonian \(H_{initial}\), VQE is used to find its eigenstate - which becomes the initial condition for the next VQE. Then, the time \(t\) is updated, leading to a slightly different configuration. Eventually, after \(m\) updates, the system arrives at \(H_{final}\) _adiabatically_.
\[H(t)=(1-t)H_{initial}+tH_{final} \tag{22}\]
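A minimal sketch of this interpolation schedule; `h_initial` and `h_final` are assumed to be given as coefficient arrays or matrices of equal shape:

```python
import numpy as np

def interpolated_hamiltonian(h_initial, h_final, t):
    """Linear interpolation of eq. 22; t runs from 0 to 1 over m updates."""
    return (1.0 - t) * h_initial + t * h_final

def adiabatic_schedule(h_initial, h_final, m):
    """m intermediate problems; each would be solved with VQE, seeding the next."""
    return [interpolated_hamiltonian(h_initial, h_final, t)
            for t in np.linspace(0.0, 1.0, m)]
```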
### Example 5: Adiabatic Chord Transitions
The same principle can be applied to chord progressions. Instead of making sharp perturbations and transitions to different chords (as in Example 2), one could start playing an "Ising Harp" at a \(|C_{maj}\rangle\) configuration, and make an _adiabatic chord transition_ to, say, \(|B7/D\#\rangle\) (Fig. 13). For simplicity, both chords used a linear QUBO function (as in Example 1).
## 5 Using the VQH as a Musical Interface
For a scientifically driven approach to sonification, the above investigation provides more insight into the inner workings of the VQE algorithm, as well as more intuition about how changing specific parameters leads to different results. The next question then arises: how can the VQE be used _musically_?
The current VQH implementation is presented as a CLI software tool implemented in Python, which is used both to run VQE simulations with user-provided specifications and to redirect the results to a synthesis engine (in this case, SuperCollider) for sonification. The user can interface with the system by changing the VQE specifications (e.g., number of iterations, classical optimizer, number of Hamiltonians for chord progressions, etc.) and updating QUBO coefficients. This can be done either by using a text editor, or via a mapped MIDI controller.
### Interfacing with a text editor
Figure 12: Different VQH results for Example 4

Figure 13: Example 5: Adiabatic Chord Progression

The QUBO coefficients are stored in and read from a .csv file in matrix form, as depicted in Fig. 14a. The first row
gives labels for the notes, providing a namespace that is used in the sonification mapping. The diagonal terms correspond to the linear terms \(a_{i}\) of eq. 1, whereas the non-diagonal terms define couplings between notes. A separate .json file contains other relevant VQE configurations, such as the number of iterations.
After updating and saving the changes made in both files, the user can use the runvqe function at the VQH prompt to generate new data, followed by the play function to trigger a sonification.
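A sketch of how such a coefficient file could be parsed, assuming the layout described above (a header row of note labels followed by the 12x12 coefficient matrix); the actual VQH file format may differ in its details:

```python
import numpy as np
import pandas as pd

def load_qubo_csv(path):
    """Read note labels, linear terms a_i (diagonal) and couplings b_ij
    (off-diagonal) from a QUBO .csv file in the assumed layout."""
    table = pd.read_csv(path)
    labels = list(table.columns)
    coeffs = table.to_numpy(dtype=float)
    a = np.diag(coeffs).copy()                 # linear terms a_i
    b = coeffs - np.diag(np.diag(coeffs))      # couplings b_ij
    return labels, a, b
```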
This workflow is especially appealing for _Live Coding_ performances, where artists implement or change software configurations on stage, leading to on-the-fly changes in the music. The artists' screens and text editors are commonly displayed to the audience, who can track the changes being made and how they affect the resulting sounds.
#### 5.1.1 Example 6: Obtaining Musical Variations by Changing the Classical Optimizer
Notice how the music can be radically changed by simply switching between classical optimizers. In Fig. 15, the QUBO from Example 1 was optimized using different techniques. In this work, COBYLA (15a) seems to provide eventual _note sweeps_ and persistent mild attacks on all notes, mimicking a "drum roll". SPSA (15b), on the other hand, takes longer (or struggles) to converge, meaning that it usually onsets and keeps all the notes sounding simultaneously, until penalised notes progressively fade out. In contrast, the NFT algorithm (15c) provides a periodic, rhythmic pattern that can be exploited.
### Using a MIDI interface to Manipulate QUBOs
A possible approach for an interface to control the QUBO matrix through a MIDI device is now presented. The proposal consists of an eight-by-eight grid of buttons divided into four different sections. A simple scheme of the interface is shown in Fig. 16. A device that provides this structure is, for example, the Launchpad X.
Figure 14: Text-Based VQH Interface

Figure 15: Comparison between Optimizers

When a button is pressed, a MIDI note is sent to the main program for further processing. The interface incorporates a visual feedback mechanism, utilizing lights, to inform the user about the information being transmitted and the activated buttons. Each section of the grid serves a different purpose. The top left four-by-four grid is allocated to the diagonal terms of the QUBO matrix, which
correspond to the 12 notes. Each button within this section functions as a toggle, allowing the spin value to vary between 1 and -1. When a note is activated, its corresponding button is pressed, and its value changes from -1 to 1. It is worth noting that the last row of this section is distinct, as the grid size is four by four, while there are only 12 notes. This last row is designated as a discrete fader, and the same approach is employed for the last row of the remaining three sections. These faders are essential for the bottom left section, which is responsible for handling the coupling terms of the second approach of the QUBO matrix. Since there are only 11 coupling terms, whereas there are 12 buttons in this section, the first and last spins of the one-dimensional Ising model can be interconnected to include an additional coupling term. In this section, if a button is held and the user slides through the faders, the value of the coupling term associated with that button can be adjusted. The range and discrete values for the coupling values must be specified.
The top right corner of the grid is utilized for transpositions. Specifically, four buttons are allocated to apply octave and semitone transpositions, allowing both pitch increases and decreases. The remaining eight buttons in this section can be used for implementing other transpositions, such as fourth and fifth transpositions, since these intervals are also generators of the equal temperament's structure. Finally, the bottom right corner is left unassigned, providing room for additional controllers dedicated to other parameters, such as an external magnetic field (see Sec. 8.1).
## 6 Sonification Mapping Strategies
After exploring the VQH interface with potential design strategies for musical applications, it is time to investigate different mapping strategies that can lead to more complex and abstract musical results. As mentioned, this is known as a _mapping problem_ in the sonification literature. The data being sonified, as explained in section 3.1.1, are streams of marginal distribution coefficients with a respective expectation value.
The initial ideas for mapping methods for these data are discussed in this section. The chord progression from Example 2 will be used as a guideline. The synthesizers for each approach mentioned in this work were implemented in SuperCollider.
### Additive Synthesis
There is a natural additive framework that arises when interpreting qubits as individual notes, as discussed in section 3.1. For the 12-qubit examples shown so far, there would be twelve oscillators with well-defined frequencies (such as the chromatic scale). The Marginal distribution coefficients are then used as the amplitudes of each oscillator. Oscillators could contain custom frequencies and waveforms (such as sawtooth, square, or custom samples).
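A rough numpy sketch of this additive mapping, assuming one marginal-distribution frame per iteration and a fixed frequency per qubit; oscillator phases are carried across frames to avoid clicks (this is an illustration, not the SuperCollider implementation used in VQH):

```python
import numpy as np

def additive_render(marginals, freqs, sr=44100, step=0.1):
    """Render a sequence of marginal frames (iterations x 12) as a sum of
    sine oscillators whose amplitudes follow the marginal coefficients."""
    n = int(sr * step)
    t = np.arange(n) / sr
    freqs = np.asarray(freqs, dtype=float)
    phase = np.zeros(len(freqs))
    out = []
    for frame in marginals:
        chunk = np.zeros(n)
        for amp, f, p in zip(frame, freqs, phase):
            chunk += amp * np.sin(2 * np.pi * f * t + p)
        phase = (phase + 2 * np.pi * freqs * step) % (2 * np.pi)
        out.append(chunk)
    return np.concatenate(out) / max(len(freqs), 1)
```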
#### 6.1.1 Inharmonicity Approach
Another additive approach to statevector sonification focuses on modulating frequency instead of amplitude. For example, starting from a 12-note harmonic series, it is possible to define a proportional _shift_ \(c_{n}(t)\) of a given harmonic term (eq. 23). This introduces dynamic inharmonicities that can be explored musically.
\[f_{n}(t)=(n-c_{n}(t))f_{1} \tag{23}\]
### Subtractive Synthesis
To improve the sonification timbre, a possible follow-up approach is to "broaden" the frequency bands of the individual oscillators. A practical solution for this is to invert the synthesis and use a subtractive or filtering approach instead. In simple terms, white noise is generated and then filtered through narrow bandpass filters centered at the desired frequencies. Additionally, the expectation values can be used to control the quality factor of the bandpass filters, in such a way that if the estimated energy decreases, the filter narrows down. In other words, the farther away the VQE is from the exact result, the noisier the sound gets.
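A sketch of this subtractive idea using scipy band-pass filters; the mapping from the energy estimate to the filter bandwidth is an assumption made here for illustration, and the note frequencies are assumed to lie well above the chosen maximum bandwidth:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def subtractive_render(marginals, energies, freqs, sr=44100, step=0.1):
    """Filter white noise through band-pass filters centred on the note
    frequencies; amplitudes follow the marginals and the bandwidth widens
    as the energy estimate moves away from its minimum."""
    n = int(sr * step)
    e_min, e_max = min(energies), max(energies)
    out = []
    for frame, e in zip(marginals, energies):
        noise = np.random.randn(n)
        width = 5.0 + 200.0 * (e - e_min) / (e_max - e_min + 1e-9)  # Hz (assumed mapping)
        chunk = np.zeros(n)
        for amp, f in zip(frame, freqs):
            sos = butter(2, [f - width / 2, f + width / 2],
                         btype="bandpass", fs=sr, output="sos")
            chunk += amp * sosfilt(sos, noise)
        out.append(chunk)
    return np.concatenate(out)
```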
### Arpeggios
Figure 16: Launchpad X MIDI Interface Design

Figure 17: Additive Synthesis Mapping

Arpeggiation is a mapping strategy that can be utilized to enhance the perception of dynamics using a temporal expansion. The marginal distributions are again mapped as
amplitudes. However, at each iteration, the notes are expanded into an arpeggio and sorted by amplitude. In other words, the most intense note is played last in the arpeggio. Furthermore, the expectation values can be assigned as the _expansion rate_ of the arpeggio: the closer to the exact result, the faster and denser the arpeggio will be.
## 7 Using the VQH in live performances and composition
### Dependent Origination: Dynamically changing the sonification mapping on stage
Dependent Origination is a piece composed by Peter Thomas and Paulo Vitor Itaborai. It has been performed twice to date, at the IKLECTIK Arts Lab in London [22] and at ICFO in Barcelona (see the institutional video [23]; the VQH interface appears at 0:12) - using Zen, a live coding application designed and built by Thomas [24]. In this section, we briefly describe the Zen system and outline the aesthetic motivations behind the composition, followed by a more technical explanation of how the VQH was used to generate data for sonification.
Zen is a JavaScript library for expressing multidimensional, musical patterns using single-line expressions, with a particular emphasis on pattern interference. In addition to the language, the Zen ecosystem further comprises an integrated development environment (IDE) - including a code editor, pattern visualiser and synthesis engine (Oto). Each component is built for the web, resulting in a holistic performance tool that requires no installation beyond a modern browser.
Through the textual interface, Zen allows the user to succinctly express causal relationships between musical layers. A composer is able to define discrete patterns, then identify musical or sonic parameters between patterns that should interfere with each other; for example, pitch, amplitude, timbral or spatial parameters. Composing in this way leads to surprising, often unintended results, challenging the common portrayal of inspiration as a form of guiding agent; instead, seeking to stimulate ideas through processes that go beyond the mind, as a means of creating original work; see Fell [25, p. 21] and Magnusson and McLean [26, p. 262] for further exposition upon the value of harnessing processes that go beyond the imagination of the composer.
Dependent Origination is a pillar of Buddhist philosophy that emphasises the interrelation of all phenomena. Everything exists in dependence upon its particular causes and conditions. Everything is impermanent, and no phenomena arise from nothing. This interpenetration of phenomena has poetic echoes in quantum entanglement, a fundamental aspect of quantum mechanics and quantum computing. Entangled particles may affect each other even when separated by long distances, even without mechanical contact. In addition to expressing causal relationships between musical layers within the Zen code, we therefore also sought to explore pattern interference within the design of the VQH interface.
Figure 19: Arpeggio Mapping

Figure 20: Performance of Dependent Origination at IKLECTIK

Zen allows the user to map musical and sonic parameters across a three-dimensional canvas. Eight separate streams can be moved independently around the space, with the parameters of sonic events being determined by their current position in time and space. In an eight-qubit system, we mapped the marginal coefficients to the XY-positions of the separate streams on the Zen canvas. Each stream was assigned to a separate instrument, which included drum samplers and FM and granular synthesizers. Sonic events were triggered using the 8-bit binary string returned at each iteration, assigning one bit to each stream and triggering an event when the value was equal to 1. The resulting rhythms and spatial positioning were used as the basis for improvisation in real-time, by assigning different
sonic parameters to each axis to provide variety, offsetting layers and swapping out the underlying quantum data as the composition progressed.
## 8 Concluding Thoughts
Sonification, as a cross-disciplinary practice, is a fertile ground for exploring and combining different approaches, methodologies and interpretations for the emerging quantum computer field.
Artists, driven by creative expression and aesthetic exploration, approach sonification as a means to evoke emotional responses and convey subjective experiences. They prioritize the artistic _interpretation_ of the data, focusing on the qualities of the resulting soundscapes. Artists often employ metaphorical or abstract representations, manipulating and transforming data to create unique sonic experiences that engage listeners on a visceral and emotive level. Their aim is to provoke thought, inspire contemplation, or challenge established perceptions of the world. In other words, the data is incorporated into a musical instrument.
In contrast, scientists approach sonification with a primary focus on data analysis, comprehension, and scientific discovery. They aim to enhance the understanding of complex data sets by utilizing sound as an additional modality for conveying information. Scientists tend to prioritize fidelity and accuracy in sonification, striving to maintain a direct and transparent mapping between the data and the resulting auditory representation.
This work has attempted to collide both approaches into a resulting Musical Interface, that can both be a tool for enhancing data visualization and creating artistic pieces.
### Future Work Considerations
There are a few paths along which the authors envision that this work could unfold. Initially, the QUBO function is applied to a large set of problems, and researchers might take interest in exploring sonifications of their own particular problems using QUBO and VQH. Secondly, the Ising problem used in this work considered only fields (linear coefficients) pointing in the same direction of the spin lattice (\(Z_{i}\)), which effectively behaves as a classical model. However, quantum behaviour starts to appear when a Transverse Magnetic Field is applied (eq. 24), as in the work of Clemente et al. [18], leading spins to be in states such as \(|\leftarrow\rangle\) and \(|\rightarrow\rangle\), which are inherently quantum. Additionally, the intensity of the field \(h_{x}\) will dictate the magnetization phase of the system. As a result, \(h_{x}\) becomes a potential control parameter for the VQH instrument.
\[H(\sigma)=\sum_{i}^{N}h_{x}\sigma_{i}^{X}+\sum_{i}^{N}\sum_{j<i}^{N}b_{ij} \sigma_{i}^{Z}\sigma_{j}^{Z} \tag{24}\]
Furthermore, the quantum circuit design of the variational form (sec. 2.3) will likely have a significant impact on the optimization process and produce completely different sounds, as consequential as changing the classical optimizers. A thorough sonification comparison between a larger collection of classical optimizers and ansätze could provide important insights.
Additionally, from a mapping perspective, researchers could propose new encoding strategies of the QUBO problem (different from the _one-note-per-qubit_ approach taken), and a myriad of sonification/musification strategies for the data generated.
Finally, the VQH interface itself can be improved (source code available on GitHub at the time of publication [27]) to allow new music expressivity and quantum-computer-assisted composition.
## 9 Acknowledgements
We would like to thank Y. Chai and S. Kuhn for fruitful discussions. We also thank the IKLECTIK Arts Lab for programming the concert in which _Dependent Origination_ was premiered to the general public.
K.J. and A.C. are supported with funds from the Ministry of Science, Research and Culture of the State of Brandenburg within the Centre for Quantum Technologies and Applications (CQTA).
K.J.'s work is funded by the European Union's Horizon Europe Framework Programme (HORIZON) under the ERA Chair scheme with grant agreement no. 101087126.
2309.03359 | Compact Representation of n-th order TGV | Although regularization methods based on derivatives are favored for their robustness and computational simplicity, research exploring higher-order derivatives remains limited. This scarcity can possibly be attributed to the appearance of oscillations in reconstructions when directly generalizing TV-1 to higher orders (3 or more). Addressing this, Bredies et al. introduced a notable approach for generalizing total variation, known as Total Generalized Variation (TGV). This technique introduces a regularization that generates estimates embodying piece-wise polynomial behavior of varying degrees across distinct regions of an image. Importantly, to our current understanding, no sufficiently general algorithm exists for solving TGV regularization for orders beyond 2. This is likely because of two problems: firstly, the problem is complex as TGV regularization is defined as a minimization problem with non-trivial constraints, and secondly, TGV is represented in terms of tensor-fields which is difficult to implement. In this work we tackle the first challenge by giving two simple and implementable representations of n-th order TGV. | Manu Ghulyani, Muthuvel Arigovindan | 2023-09-06T21:02:29Z | http://arxiv.org/abs/2309.03359v1 |

# Compact Representation of \(n^{th}\) order TGV
###### Abstract
Although regularization methods based on derivatives are favored for their robustness and computational simplicity, research exploring higher-order derivatives remains limited. This scarcity can possibly be attributed to the appearance of oscillations in reconstructions when directly generalizing TV-1 to higher orders (3 or more). Addressing this, Bredies et al. introduced a notable approach for generalizing total variation, known as Total Generalized Variation (TGV). This technique introduces a regularization that generates estimates embodying piece-wise polynomial behavior of varying degrees across distinct regions of an image. Importantly, to our current understanding, no sufficiently general algorithm exists for solving TGV regularization for orders beyond 2 (i.e., \(\geq 3\)). This is likely because of two problems: firstly, the problem is complex as TGV regularization is defined as a minimization problem with non-trivial constraints, and secondly, TGV is represented in terms of tensor-fields which is difficult to implement. In this work we tackle the first challenge by giving two simple and implementable representations of \(n^{th}\) order TGV.
Department of Electrical Engg., Indian Institute of Science, Bengaluru-12, Karnataka, India.
[email protected] & [email protected]
## 1 Notation and Preliminaries
1. (Permutation) In this work, \(\pi:\{1,2,3,...,k\}\rightarrow\{1,2,3,...,k\}\) is an (invertible) map called as a permutation. With this definition we can
define a map \(f_{\pi}:\mathbb{R}^{k}\rightarrow\mathbb{R}^{k}\), such that for any \(\mathbf{v}=(v_{1},v_{2},...,v_{k})^{T}\in\mathbb{R}^{k}\), \(f_{\pi}(\mathbf{v})\overset{def}{=}(v_{\pi(1)},v_{\pi(2)},...,v_{\pi(k)})^{T}\). For example, let \(k=3\), and \(\pi\) is defined such that: \(\pi(1)=2,\pi(2)=3\) and \(\pi(3)=1\). Then, for any \(\mathbf{v}=(v_{1},v_{2},v_{3})^{T},f_{\pi}(\mathbf{v})=(v_{\pi(1)},v_{\pi(2)},v_{\pi(3)})^{T}=(v_{2},v_{3},v_{1})^{T}.\) We denote the set of permutations of k letters as \(S_{k}\).
2. (Binary representation) In this work, \(b:\{0,1,2,3,...,2^{k}-1\}\rightarrow\{0,1\}^{k}\) gives the \(k\) letter binary code i.e, \(b\) gives a vector of \(0s\) and ones for any non-negative integer less than \(2^{k}\), and \(b^{-1}\) is its inverse which returns a non-negative integer for any vector in \(\{0,1\}^{k}.\) For example, if \(k=3\), then \(b(5)=[1,0,1]^{T}\) and similarly, \(b^{-1}([0,1,1]^{T})=3\).
3. (symmetric index vector) For any non-negative integer \(j\leq 2^{k}-1\), we can define an index vector \(\mathbf{t}^{j,k}\) as \(\mathbf{t}^{j,k}=[b^{-1}f_{\pi_{1}}b(j),b^{-1}f_{\pi_{2}}b(j),...,b^{-1}f_{\pi_ {kl}}b(j)]\). Here, \(\pi_{1},..,\pi_{k!}\) are the elements of \(S_{k}\) in any fixed order. As an example, consider \(k=2\) and \(j=2\). In this case there are two permutations, \(\pi_{1}\) is the identity map and \(\pi_{2}\) is defined such that: \(\pi_{2}(1)=2\) and \(\pi_{2}(2)=1\). Therefore, \(\mathbf{t}^{2,2}=[2,1]^{T}\).
4. We define a linear operator, \(\Pi^{(k)}:\mathbb{R}^{N\times 2^{k}}\rightarrow\mathbb{R}^{N\times 2^{k}}\). For any \(P\in\mathbb{R}^{N\times 2^{k}}\), the \((i,j)^{th}\) element of \((\Pi^{(k)}(P))\) is given as \[(\Pi^{(k)}(P))_{i,j}\overset{def}{=}\frac{P_{i,\mathbf{t}^{j,k}_{1}}+P_{i,\mathbf{t}^{j,k}_{2}}+...+P_{i,\mathbf{t}^{j,k}_{k!}}}{k!}.\] Here, \(\mathbf{t}^{j,k}=[\mathbf{t}^{j,k}_{1},\mathbf{t}^{j,k}_{2},...,\mathbf{t}^{j,k}_{k!}]^{T}\) is the symmetric index vector as defined above. It can be observed that the \(j^{th}\) column of \((\Pi^{(k)}P)\) is the average, over all permutations \(\pi\in S_{k}\), of the columns of the matrix \(P\) with indices \(b^{-1}f_{\pi}(b(j))\).
5. Consider a scanned image containing \(N\) ordered pixels, then \(\mathbf{D}_{x}\) and \(\mathbf{D}_{y}\) are matrix-equivalent of discrete derivatives in \(x\) and \(y\) directions respectively applied directly on the scanned image. For example, let \(\mathbf{u}\) be an image having \(N\) pixels, then \(\mathbf{D}_{x}\mathbf{u}\) is the derivative image in \(x\) direction, and similarly \(\mathbf{D}_{y}\mathbf{u}\) in y direction. As we consider images in scanned form we have \(\mathbf{u}\in\mathbb{R}^{N}\); therefore, \(\mathbf{D}_{x}\) and \(\mathbf{D}_{y}\) are \(N\times N\) block circulant matrices with circulant blocks (BCCB), and the multiplication of these matrices with any vector in \(\mathbb{R}^{N}\) is implemented by \(2D\) convolution with filters \((-1,1)\) and \((-1,1)^{T}\) respectively.
6. The iterated derivative operator \(\mathcal{D}_{k}:\mathbb{R}^{N\times(k+1)}\rightarrow\mathbb{R}^{N\times(2k+2)}\), for any \((\mathbf{y}_{0},...,\mathbf{y}_{k})\in\mathbb{R}^{N\times(k+1)}\) is defined as, \[\mathcal{D}_{k}([\mathbf{y}_{0},\mathbf{y}_{1},...,\mathbf{y}_{k}])\stackrel{{ def}}{{=}}[\mathbf{D}_{x}\mathbf{y}_{0},\mathbf{D}_{y} \mathbf{y}_{0},\mathbf{D}_{x}\mathbf{y}_{1},\mathbf{D}_{y}\mathbf{y}_{1},..., \mathbf{D}_{x}\mathbf{y}_{k},\mathbf{D}_{y}\mathbf{y}_{k}].\]
7. We need a scaling linear operator (\(\mathcal{A}_{k}\)) to define the compact form of TGV, \(\mathcal{A}_{k}:\mathbb{R}^{N\times(2k+2)}\rightarrow\mathbb{R}^{N\times(k+2)}\), such that, \(\mathcal{A}_{k}(\mathbf{z})\stackrel{{ def}}{{=}}\mathbf{z}\cdot M _{k}\), where \[M_{k}=\left[\begin{array}{cccccc}1&0&0&...&0&0\\ 0&\frac{\sqrt{{}^{k}C_{1}}}{\sqrt{{}^{k+1}C_{1}}}&0&...&0&0\\ 0&\frac{\sqrt{{}^{k}C_{0}}}{\sqrt{{}^{k+1}C_{1}}}&0&...&0&0\\ 0&0&\frac{\sqrt{{}^{k}C_{2}}}{\sqrt{{}^{k+1}C_{2}}}&...&0&0\\ 0&0&\frac{\sqrt{{}^{k}C_{1}}}{\sqrt{{}^{k+1}C_{2}}}&...&0&0\\ \vdots&\vdots&\vdots&...&\vdots&\vdots\\ 0&0&0&...&\frac{\sqrt{{}^{k}C_{k}}}{\sqrt{{}^{k+1}C_{k}}}&0\\ 0&0&0&...&\frac{\sqrt{{}^{k}C_{k-1}}}{\sqrt{{}^{k+1}C_{k}}}&0\\ 0&0&0&...&0&1\end{array}\right].\]
8. Let \(\mathcal{A}\) be a linear operator, then \(\mathcal{N}(\mathcal{A})\) denotes the null space of \(\mathcal{A}\) and \(\mathcal{R}(\mathcal{A})\) denotes the range space of \(\mathcal{A}\).
9. We need the definition of the proximal operator to define the image restoration algorithm. For any lower semi-continuous, convex and closed function \(f:\mathbb{R}^{N}\rightarrow\mathbb{R}\), the proximal of \(f(\cdot)\) is given as: \[prox_{f}(\mathbf{z})\overset{def}{=}\operatorname*{arg\,min}_{\mathbf{x}}f(\mathbf{x})+\frac{1}{2}\|\mathbf{x}-\mathbf{z}\|^{2}.\]
10. The mixed norm (\(\|\cdot\|_{1,2}\)) for any \(\mathbf{M}\in\mathbb{R}^{N\times p}\) is defined as: \[\|\mathbf{M}\|_{1,2}=\sum_{j=1}^{N}(\sum_{i=1}^{p}\mathbf{M}_{j,i}^{2})^{1/2}.\]
11. Proximal of the \(\|\cdot\|_{1,2}\) norm: Consider \(\mathbf{A}\in\mathbb{R}^{M\times N}\); then \[prox_{\|\cdot\|_{1,2}}(\mathbf{A}) =\operatorname*{arg\,min}_{\mathbf{B}}[\|\mathbf{B}\|_{1,2}+\frac{1}{2}\|\mathbf{B}-\mathbf{A}\|^{2}]\] (1) \[=\operatorname*{arg\,min}_{\mathbf{B}}\sum_{i=1}^{M}\big{[}\frac{1}{2}\sum_{j=1}^{N}((\mathbf{B}-\mathbf{A})_{i,j})^{2}+(\sum_{j=1}^{N}\mathbf{B}_{i,j}^{2})^{\frac{1}{2}}\big{]}\] (2) The above optimization is separable in \(i\) (the row index). Hence, the \(i^{th}\) row of the minimizer \(\mathbf{A}^{*}\), which is the same as \(prox_{\|\cdot\|_{1,2}}(\mathbf{A})\), can be given as: \[\mathbf{A}_{i,:}^{*}=prox_{\|\cdot\|_{2}}(\mathbf{A}_{i,:})\] (3) \[=\max(0,1-\frac{1}{\sqrt{\sum_{j}\mathbf{A}_{i,j}^{2}}})\mathbf{A}_{i,:}.\] (4) Therefore, the proximal can be obtained by performing the proximal of the \(l_{2}\) norm on each row, as sketched below.
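A small numpy sketch of this row-wise shrinkage (eq. 4), with an optional weight \(\lambda\) for a scaled norm; names are illustrative:

```python
import numpy as np

def prox_l12(A, lam=1.0):
    """Row-wise proximal operator of lam * ||.||_{1,2}: each row of A is
    shrunk towards zero by lam in Euclidean norm."""
    norms = np.sqrt((A ** 2).sum(axis=1, keepdims=True))
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * A
```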
## 2 Introduction to Total Generalized Variation (TGV)
One of the most important regularizations for image restoration is Total Variation (TV) [1]. TV penalizes the sum, over pixels, of the \(l_{2}\) norm of the gradient of the image. Therefore, the resultant image is piece-wise constant. In our notation,
\[TV^{1}(\mathbf{u})=\|\mathcal{D}_{0}\mathbf{u}\|_{1,2}=\|(\mathbf{D}_{x} \mathbf{u},\mathbf{D}_{y}\mathbf{u})\|_{1,2}.\]
The above concept can be extended to second-order derivatives also as follows:
\[TV^{2}(\mathbf{u})=\|\mathcal{D}_{1}(\mathcal{D}_{0}(\mathbf{u}))\|_{1,2}=\|( \mathbf{D}_{x}\mathbf{D}_{x}\mathbf{u},\mathbf{D}_{y}\mathbf{D}_{x}\mathbf{u},\mathbf{D}_{x}\mathbf{D}_{y}\mathbf{u},\mathbf{D}_{y}\mathbf{D}_{y}\mathbf{ u})\|_{1,2}. \tag{5}\]
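As an illustration, \(TV^{1}\) and \(TV^{2}\) can be evaluated with periodic forward differences, matching the BCCB convention of the notation section (a sketch, not the authors' implementation):

```python
import numpy as np

def dx(u):
    """Forward difference in x with periodic boundary (BCCB convention)."""
    return np.roll(u, -1, axis=1) - u

def dy(u):
    """Forward difference in y with periodic boundary."""
    return np.roll(u, -1, axis=0) - u

def tv1(u):
    return np.sqrt(dx(u) ** 2 + dy(u) ** 2).sum()

def tv2(u):
    gx, gy = dx(u), dy(u)
    return np.sqrt(dx(gx) ** 2 + dy(gx) ** 2 + dx(gy) ** 2 + dy(gy) ** 2).sum()
```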
Analogously, one can define \(TV^{n}\). Research has indicated, particularly for 1-dimensional signals, that employing the aforementioned \(TV^{n}\) regularization yields a solution that takes the form of a linear combination of polynomials with a fixed degree of \(n-1\) [2]. However, images are typically better
characterized by piece-wise smooth polynomials, which might possess varying degrees across different regions, rather than adhering strictly to a fixed polynomial degree of \(n-1\) across the entire image. Consequently, there arises a necessity for a more adaptable and robust extension of the Total Variation (TV) concept.
TGV, an influential contribution by [3], demonstrates how the restored image is represented as a linear combination of polynomials with varying degrees across distinct image sections. In other words, TGV has the capability to generate solutions that manifest as piece-wise polynomials of diverse degrees in different parts of the image. For a thorough mathematical exploration, readers can refer to the details presented in [3].
Original formulations of TGV are rooted in a continuous domain rather than a discrete pixel grid. In this context, the formulation of second-order TGV, denoted as \(TGV^{2}\), can be expressed as follows:
\[TGV^{2}(u)= \tag{6}\] \[\inf_{v_{1},v_{2}\in\mathcal{S}_{\Omega}}\underbrace{\int_{ \Omega}\big{[}\big{(}\frac{\partial u}{\partial x}-v_{1}\big{)}^{2}+\big{(} \frac{\partial u}{\partial y}-v_{2}\big{)}^{2}\big{]}^{1/2}dxdy}_{A}+\] \[\underbrace{\int_{\Omega}\big{[}\big{(}\frac{\partial v_{1}}{ \partial x}\big{)}^{2}+\big{(}\frac{\partial v_{2}}{\partial y}\big{)}^{2}+ \frac{1}{2}\big{(}\frac{\partial v_{1}}{\partial y}+\frac{\partial v_{2}}{ \partial x}\big{)}^{2}\big{]}^{1/2}dxdy}_{B}. \tag{7}\]
In the given equation, observe that component "A" fits the vector field \((v_{1},v_{2})\) to the partial derivatives of \(u\), while component "B" further refines the matched partial derivatives (\(v_{1}\) and \(v_{2}\)) by incorporating a second-order total variation regularization. This pattern can be extended generally by iteratively fitting the \((n-1)\)-th order derivative and then applying regularization using the \(n\)-th order derivative. The original continuous TGV formulation (eq. 3.6 in [3]) is given as:
\[TGV^{n}(g)=\inf_{u_{i}\in\mathbb{S}_{\Omega}^{(i)},u_{0}=g,u_{n}=0}\sum_{i=0} ^{n-1}\|\epsilon u_{i}-u_{i+1}\|. \tag{8}\]
Here, \(\mathbb{S}_{\Omega}^{(i)}\) is the space of symmetric \(i\)-tensor fields on \(\Omega\) having bounded deformation, and \(\epsilon\) is the symmetric derivative. Compared with the formulation given in this paper, \(\epsilon\) is analogous to \(\Pi^{(i)}\circ\mathcal{D}_{2^{i}-1}\). It can be noted that \(\Pi^{(i)}\circ\mathcal{D}_{2^{i}-1}\) varies with \(i\) (the derivative order) while \(\epsilon\) does not, as the original authors have overloaded the operator for various derivative orders, whereas we have defined a different operator for each derivative order (\(i\)). This does not create any difference in the formulation. Also, \(\mathcal{S}_{\Omega}\) is the set of functions on the set \(\Omega\) having bounded deformation [3]. For details, the reader can refer to section 3.
For \(n=2\), the above continuous formulation can be written in discrete form as :
\[TGV^{2}(\mathbf{u})=\inf_{\mathbf{p}\in\mathbb{R}^{N\times 2}}\alpha_{1}\|\mathcal{D}_{0}\mathbf{u}-\mathbf{p}\|_{1,2}+\alpha_{0}\|\Pi^{(2)}\mathcal{D}_{1}\mathbf{p}\|_{1,2}.\]
In the given equation, the parameters \(\alpha_{1}\) and \(\alpha_{0}\) control the regularization. When \(\alpha_{1}\) becomes exceedingly large, the regularization effectively becomes equivalent to \(TV^{2}\). Conversely, when \(\alpha_{0}\) tends towards infinity, the regularization behaves like \(TV^{1}\). This intriguingly means that the Total Generalized Variation (\(TGV\)) approach has the flexibility to emulate both \(TV^{1}\) and \(TV^{2}\) regularization based on different choices of parameters, all while adapting spatially due to the variable \(\mathbf{p}\). This is because, in regions where \(\mathbf{p}=\mathbf{0}\), the regularization takes on the characteristics of \(TV^{1}\), while in regions where \(\mathbf{p}=\mathcal{D}_{0}\mathbf{u}\), the regularization functions as \(TV^{2}\).
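A sketch of evaluating this \(TGV^{2}\) objective for a candidate pair \((\mathbf{u},\mathbf{p})\), reusing the periodic-difference convention of the TV sketch above; the symmetrized off-diagonal entry is counted twice in the row-wise norm:

```python
import numpy as np

def tgv2_objective(u, px, py, alpha1, alpha0):
    """Evaluate alpha1*||D0 u - p||_{1,2} + alpha0*||Pi^(2) D1 p||_{1,2}
    for an image u and a candidate vector field p = (px, py)."""
    dx = lambda a: np.roll(a, -1, axis=1) - a   # periodic forward differences
    dy = lambda a: np.roll(a, -1, axis=0) - a
    t1 = alpha1 * np.sqrt((dx(u) - px) ** 2 + (dy(u) - py) ** 2).sum()
    sxx, syy = dx(px), dy(py)
    sxy = 0.5 * (dy(px) + dx(py))               # symmetrized off-diagonal entry
    t2 = alpha0 * np.sqrt(sxx ** 2 + 2.0 * sxy ** 2 + syy ** 2).sum()
    return t1 + t2
```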
The continuous version of TGV (eq. (8)) needs to be discretized for practical implementation. Although TGV is originally defined for continuous images (smooth functions on \(\mathbb{R}^{2}\)) and tensor fields, tensors are not necessary to describe the discretized TGV (as we show in this work). In this work, we give a matrix-based formulation of TGV. Removing this additional hurdle of learning the tensor machinery makes the method accessible to a wider audience.
There are many algorithms that solve image restoration with TGV regularization. Originally, TGV was proposed by [3]. In this work, they gave a primal-dual based algorithm that could solve the image-denoising problem up to third-order TGV. They also gave many structural and theoretical properties of TGV. In another work [4], the authors gave a primal-dual scheme to perform image decompression, zooming, and image reconstruction using second-order TGV. MRI reconstruction with second-order TGV was proposed in [5].
### Tensor-free and Compact representation of TGV
It is important to note that because of the complexity of TGV for orders (\(\geq 3\)), higher-order TGV is mainly unexplored for non-trivial inverse problems. As 2-tensors can be easily understood as vector images, discretizing and
implementing TGV-2 is simpler; hence, most works focus only on TGV-2. Only the work by [3] addresses TGV of order 3, for the denoising problem. To the best of our knowledge, the work described in this article is the first that gives a general algorithm to solve a linear inverse problem for any order of TGV. Now, we give a theorem that establishes the direct matrix version of TGV.
**Theorem 1**.: _(Direct Tensor-free Representation of TGV) Let \(\mathbf{g}\) be an image having N pixels. Then, the total generalized variation in discrete form is given as:_
\[TGV^{n}(\mathbf{g})=\inf_{\mathbf{p}_{0}=\mathbf{g},\mathbf{p}_{n}=\mathbf{0},\mathbf{p}_{i}\in\mathbb{R}^{N\times(2^{i})},\mathbf{p}_{i}\in\mathcal{R}(\Pi^{(i)})}\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\Pi^{(i+1)}\mathcal{D}_{2^{i}-1}\mathbf{p}_{i}-\mathbf{p}_{i+1}\|_{1,2}. \tag{9}\]
_Here, \(\alpha_{0},...,\alpha_{n-1}\) are the regularization parameters and \(\Pi^{(i+1)}\) and \(\mathcal{D}_{2^{i}-1}\) are as given in the section 1._
As an example: \(TGV^{3}\) is given as :
\[TGV^{3}(\mathbf{u})=\inf_{\mathbf{p}_{1}\in\mathbb{R}^{N\times 2},\mathbf{p}_{2}\in\mathbb{R}^{N\times 4},\mathbf{p}_{2}\in\mathcal{R}(\Pi^{(2)})}\alpha_{2}\|\mathcal{D}_{0}\mathbf{u}-\mathbf{p}_{1}\|_{1,2}+\alpha_{1}\|\Pi^{(2)}\mathcal{D}_{1}\mathbf{p}_{1}-\mathbf{p}_{2}\|_{1,2}+\alpha_{0}\|\Pi^{(3)}\mathcal{D}_{3}\mathbf{p}_{2}\|_{1,2}.\]
The aforementioned direct form is impractical for implementation due to two primary reasons. Firstly, it entails matrices (as minimization variables) of size \(\mathbb{R}^{N\times 2^{n-1}}\) that grow exponentially with the TGV order (\(n\)). Secondly, it imposes the constraint that \(\mathbf{p}_{i}\in\mathcal{R}(\Pi^{(i)})\), further adding complexity to the problem. To address these challenges, we present a more concise expression for TGV:
**Theorem 2**.: _(Compact tensor-free representation of TGV) Let \(\mathbf{g}\) be an image having N pixels. Then, the expression given in the theorem 1 can also be written as::_
\[TGV^{n}(\mathbf{g})=\inf_{\mathbf{p}_{0}=\mathbf{g},\mathbf{p}_{n}=\mathbf{0}, \mathbf{p}_{i}\in\mathbb{R}^{N\times(i+1)}}\sum_{i=0}^{n-1}\alpha_{n-i-1}\| \mathcal{A}_{i}\mathcal{D}_{i}\mathbf{p}_{i}-\mathbf{p}_{i+1}\|_{1,2}. \tag{10}\]
_Here, \(\alpha_{0},...,\alpha_{n-1}\) are the regularization parameters and \(\mathcal{A}_{i}\) and \(\mathcal{D}_{i}\) are as given in the section 1._
For proof, see section 3. Note that both problems encountered in the direct formulation are addressed: (1) the issue of optimization variables (\(\mathbf{p}_{i}\)'s) growing exponentially in size has been solved. More precisely, now the \(\mathbf{p}_{i}\) are of size \(\mathbb{R}^{N\times(i+1)}\) instead of \(\mathbb{R}^{N\times 2^{i}}\), and (2) the constraint on the variables \(\mathbf{p}_{i}\) is now absent from the formulation. These challenges are eliminated through the reformulation provided in the theorem above.
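As an illustration of the compact form, the scaling operator \(\mathcal{A}_{k}\) from the notation section can be realized by assembling its matrix \(M_{k}\) directly from the definition (a sketch; indexing conventions are ours):

```python
from math import comb
import numpy as np

def scaling_matrix(k):
    """Build M_k of shape (2k+2, k+2) as defined in item 7 of section 1,
    so that A_k(z) = z @ M_k for z in R^{N x (2k+2)}."""
    M = np.zeros((2 * k + 2, k + 2))
    M[0, 0] = 1.0
    for j in range(1, k + 1):
        M[2 * j - 1, j] = np.sqrt(comb(k, j)) / np.sqrt(comb(k + 1, j))
        M[2 * j,     j] = np.sqrt(comb(k, j - 1)) / np.sqrt(comb(k + 1, j))
    M[2 * k + 1, k + 1] = 1.0
    return M

# example: M_1 maps R^{N x 4} fields to R^{N x 3} fields
print(scaling_matrix(1))
```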
Furthermore, it's important to highlight that readers aiming to understand and implement this formulation do not need any additional background knowledge on tensors. This significantly enhances the accessibility of the method.
**Remark 2.1**.: (Direct and Compact expressions of TGV are equivalent) It can be noted that the proof of the compact form is derived from the original TGV definition. But one can arrive at the compact form starting from the direct form with the help of the following statement. For any \(\mathbf{g}\in\mathbb{R}^{N},\ \inf\{\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\Pi^{(i+1)}\mathcal{D}_{2^{i}-1}\mathbf{u}_{i}-\mathbf{u}_{(i+1)}\|_{1,2}\ |\ \mathbf{u}_{n}=\mathbf{0},\mathbf{u}_{0}=\mathbf{g},\mathbf{u}_{i}\in\mathcal{R}(\Pi^{(i)})\ \text{for}\ i=1,...,n-1\,\}=\inf\{\,\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\mathcal{A}_{i}\mathcal{D}_{i}\mathbf{p}_{i}-\mathbf{p}_{i+1}\|_{1,2}\ |\ \mathbf{p}_{n}=\mathbf{0},\mathbf{p}_{0}=\mathbf{g},\mathbf{p}_{i}\in\mathbb{R}^{N\times(i+1)}\ \text{for}\ i=1,...,n-1\,\}\)
The proof of the above statement is deferred to the end. As a result of the above remark one can use the compact representation of TGV for all purposes.
## 3 Proof of theorem 1 and theorem 2
### Preliminaries on tensors and tensor based TGV
**Definition 3.1**.: (Tensors) A function \(\mathcal{P}:\underbrace{\mathbb{R}^{2}\times\mathbb{R}^{2}.....\times \mathbb{R}^{2}}_{k\ \ times}\rightarrow\mathbb{R}\) is called a k-tensor on \(\mathbb{R}^{2}\) if it satisfies the following:
1. \(\mathcal{P}(\mathbf{v}_{1},\mathbf{v}_{2},...,\alpha\mathbf{v}_{i},..., \mathbf{v}_{k})=\alpha\mathcal{P}(\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v} _{i},...,\mathbf{v}_{k})\) for any \(\alpha\in\mathbb{R}\) and any \(i\in\{1,2,...,k\}\). Here, \(\mathbf{v}_{i}\in\mathbb{R}^{2}\) for any \(i\in\{1,2,...,k\}\).
2. \(\mathcal{P}(\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{i}+\mathbf{w},..., \mathbf{v}_{k})=\mathcal{P}(\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{i},...,\mathbf{v}_{k})+\mathcal{P}(\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{w},...,\mathbf{v}_{k})\) for any \(\mathbf{w}\in\mathbb{R}^{2}\) and any \(i\in\{1,2,...,k\}\).
For example, the function \(\mathcal{F}:\mathbb{R}^{2}\rightarrow\mathbb{R},\) such that \(\mathcal{F}(\mathbf{v})=\mathbf{a}^{T}\mathbf{v},\) where \(\mathbf{a}\) is any fixed vector in \(\mathbb{R}^{2},\) is a 1-tensor in \(\mathbb{R}^{2}.\) Similarly, the function \(\mathcal{G}:\mathbb{R}^{2}\times\mathbb{R}^{2}\rightarrow\mathbb{R},\) such that \(\mathcal{G}(\mathbf{v}_{1},\mathbf{v}_{2})=\mathbf{v}_{1}^{T}\mathbf{v}_{2}\) is a 2-tensor in \(\mathbb{R}^{2}.\)
**Remark 3.1**.: (Space of \(k-\)Tensors-\(\mathbb{C}^{(k)}\))
It can be verified that the set of all k-tensors in \(\mathbb{R}^{2}\) constitutes a vector space. We denote this vector space by \(\mathbb{C}^{(k)}.\)
**Definition 3.2**.: (Tensor Product) Consider a k-tensor \(\mathcal{P}\) and an l-tensor \(\mathcal{Q}.\) Then, the tensor product \((\otimes)\) of \(\mathcal{P}\) and \(\mathcal{Q}\) is a k+l tensor which is given as:
\[(\mathcal{P}\otimes\mathcal{Q})(\mathbf{v}_{1},\mathbf{v}_{2},..,\mathbf{v}_{ k+l})=\mathcal{P}(\mathbf{v}_{1},...,\mathbf{v}_{k})\cdot\mathcal{Q}(\mathbf{v}_{k+1},...,\mathbf{v}_{k+l}).\]
**Definition 3.3**.: **(Permutation)** In this work, \(\pi:\{1,2,3,\ldots,k\}\rightarrow\{1,2,3,\ldots,k\}\) is an (invertible) map known as a permutation. With this definition, we can define a map \(f_{\pi}:\mathbb{R}^{k}\rightarrow\mathbb{R}^{k},\) such that for any \(\mathbf{v}=(v_{1},v_{2},\ldots,v_{k})^{T}\in\mathbb{R}^{k},\)\(f_{\pi}(\mathbf{v})\stackrel{{ def}}{{=}}(v_{\pi(1)},v_{\pi(2)},\ldots,v_{\pi(k)})^{T}\). For example, let \(k=3,\) and \(\pi\) is defined such that: \(\pi(1)=2,\)\(\pi(2)=3,\) and \(\pi(3)=1\). Then, for any \(\mathbf{v}=(v_{1},v_{2},v_{3})^{T},\) we have \(f_{\pi}(\mathbf{v})=(v_{\pi(1)},v_{\pi(2)},v_{\pi(3)})^{T}=(v_{2},v_{3},v_{1}) ^{T}\). We denote the set of all permutations of k-letters as \(S_{k}.\)
**Definition 3.4**.: (Symmetric k-tensors) A k-tensor \(\mathcal{S}\) is symmetric if \(\mathcal{S}(\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{k})=\mathcal{S}( \mathbf{v}_{\pi(1)},...,\mathbf{v}_{\pi(k)})\) for all \(\pi\in S_{k}.\) Here, \(\pi\) is any permutation of k letters.
It can be noted that the set of symmetric tensors is a sub-space of the space of k-tensors. We denote this sub-space by \(\mathbb{S}^{(k)}\). Since the symmetric k-tensors form a subspace, a projection (\(|||^{(k)}\)) onto the sub-space of symmetric k-tensors can be defined.
**Definition 3.5**.: The projection \(|||^{(k)}:\mathbb{C}^{(k)}\rightarrow\mathbb{S}^{(k)}\) is defined as:
\[|||^{(k)}\mathcal{P}(\mathbf{v}_{1},\mathbf{v}_{2},...,\mathbf{v}_{k})=\frac{ 1}{k!}\sum_{\pi\in S_{k}}\mathcal{P}(\mathbf{v}_{\pi(1)},...,\mathbf{v}_{\pi( k)}).\]
The above expression can be interpreted as an average of all \(k!\) permutations. We need the following theorem regarding the basis of the space of k-tensors in \(\mathbb{R}^{2}.\)
**Literature Theorem 3.1**.: (Standard basis for the space of \(k\)-tensors) Consider \(\mathbf{w}_{0}=\begin{pmatrix}1\\ 0\end{pmatrix}\) and \(\mathbf{w}_{1}=\begin{pmatrix}0\\ 1\end{pmatrix}\). \(\{\mathbf{w}_{0},\mathbf{w}_{1}\}\) is a basis of \(\mathbb{R}^{2}\); let \(\omega_{0}=\mathbf{w}_{0}^{T}\) and \(\omega_{1}=\mathbf{w}_{1}^{T}\) be the corresponding dual basis. Here, \(\omega_{0}\) and \(\omega_{1}\) are linear operators on \(\mathbb{R}^{2}\). For example, if \(\mathbf{v}=(v_{1}\ v_{2})^{T}\), then \(\omega_{0}(\mathbf{v})=\mathbf{w}_{0}^{T}\mathbf{v}=v_{1}\) and \(\omega_{1}(\mathbf{v})=\mathbf{w}_{1}^{T}\mathbf{v}=v_{2}\). Then, \(\{\omega_{i_{1}}\otimes\omega_{i_{2}}\otimes...\otimes\omega_{i_{k}}\ |\ i_{p}\in\{0,1\}\ \text{for all}\ p\in\{1,2,...,k\}\}\) is a basis of \(\mathbb{C}^{(k)}\).
As for example, it can be verified that
\[\mathcal{G}(\mathbf{v},\mathbf{w}) =\mathbf{v}^{T}\mathbf{w}\] \[=v_{1}w_{1}+v_{2}w_{2}\] \[=(\omega_{0}(\mathbf{v}))(\omega_{0}(\mathbf{w}))+(\omega_{1}( \mathbf{v}))(\omega_{1}(\mathbf{w}))\] \[=(\omega_{0}\otimes\omega_{0}+\omega_{1}\otimes\omega_{1})( \mathbf{v},\mathbf{w}).\]
**Remark 3.2**.: With this basis we can represent any k-tensor \(\mathcal{P}\) in \(\mathbb{C}^{(k)}\) as a summation over all possible \(2^{k}\) basis vectors as \(\mathcal{P}=\sum_{\mathbf{i}\in\{0,1\}^{k}}p_{\mathbf{i}}\boldsymbol{\omega}_{\mathbf{i}}\). Here, \(\mathbf{i}=(i_{1},i_{2},...,i_{k})\) is the vector index lying in the set \(\{0,1\}^{k}\); therefore each \(i_{j}\) takes the value either \(0\) or \(1\). Hence, there are \(2^{k}\) coefficients (namely the \(p_{\mathbf{i}}\)s) corresponding to all \(\mathbf{i}\) in the set \(\{0,1\}^{k}\). Here, \(\boldsymbol{\omega}_{\mathbf{i}}\) is defined as \(\boldsymbol{\omega}_{\mathbf{i}}\overset{def}{=}\omega_{i_{1}}\otimes...\otimes\omega_{i_{k}}\).
**Remark 3.3**.: Let \(\mathbf{i}\) be any vector in \(\{0,1\}^{k}\), then the sum, \(s(\mathbf{i})\) is defined as \(s(\mathbf{i})\overset{def}{=}\sum_{j=1}^{k}i_{j}\).
**Literature Theorem 3.2**.: (Orthonormal basis for the space of symmetric tensors) An orthonormal basis for the subspace \(\mathbb{S}^{(k)}\) can be given by the set \(\{\mathbf{e}_{0}^{(k)},\mathbf{e}_{1}^{(k)},...,\mathbf{e}_{k}^{(k)}\}\)[3] where
\[\mathbf{e}_{i}^{(k)}=\frac{1}{\sqrt{{}^{k}C_{i}}}\sum_{\mathbf{j}\in\{0,1\}^{k},s(\mathbf{j})=i}\boldsymbol{\omega}_{\mathbf{j}}.\]
**Remark 3.4**.: It can be deduced from the above theorem that the dimension of \(\mathbb{C}^{(k)}\) is \(2^{k}\). Since any finite-dimensional vector space is isomorphic to the Euclidean space of the same dimension, we can conclude that \(\mathbb{C}^{(k)}\) is isomorphic to \(\mathbb{R}^{2^{k}}\). Explicitly, there is an isomorphism \(\psi:\mathbb{C}^{(k)}\rightarrow\mathbb{R}^{2^{k}}\), such that for any \(\mathcal{P}=\sum_{\mathbf{i}\in\{0,1\}^{k}}p_{\mathbf{i}}\boldsymbol{\omega}_{\mathbf{i}}\in\mathbb{C}^{(k)}\), \(\psi(\mathcal{P})\overset{def}{=}(p_{\mathbf{i}^{(1)}},p_{\mathbf{i}^{(2)}},...,p_{\mathbf{i}^{(2^{k})}})\in\mathbb{R}^{2^{k}}\). Here, \(\mathbf{i}^{(1)},...,\mathbf{i}^{(2^{k})}\) are the elements of \(\{0,1\}^{k}\) arranged in increasing order, i.e., \(b^{-1}(\mathbf{i}^{(1)})<b^{-1}(\mathbf{i}^{(2)})<...<b^{-1}(\mathbf{i}^{(2^{k})})\).
**Remark 3.5**.: (Formula for the projection of a tensor given in the standard basis onto \(\mathbb{S}^{(k)}\)) In Definition 3.5 we gave the definition of the projection onto the space of symmetric tensors; we now give a formula for the coefficients of the projected tensor (given in the standard basis) with respect to the basis vectors \(\{\boldsymbol{\omega}_{\mathbf{j}}\}_{\mathbf{j}\in\{0,1\}^{k}}\). Consider an element of \(\mathbb{C}^{(k)}\) represented in the standard basis as \(\mathcal{Q}=\sum_{\mathbf{j}\in\{0,1\}^{k}}q_{\mathbf{j}}\boldsymbol{\omega}_{\mathbf{j}}\in\mathbb{C}^{(k)}\); then \((|||^{(k)}\mathcal{Q})=\sum_{\mathbf{j}\in\{0,1\}^{k}}\left(\frac{1}{k!}\sum_{\pi\in S_{k}}q_{f_{\pi}(\mathbf{j})}\right)\boldsymbol{\omega}_{\mathbf{j}}\). Recall that \(f_{\pi}\) permutes the elements of the vector \(\mathbf{j}\) according to the permutation map \(\pi\).
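As a quick sanity check of this formula in the simplest nontrivial case \(k=2\): representing a 2-tensor by the \(2\times 2\) array of its coefficients \(q_{\mathbf{j}}\), the projection is just the symmetrization \((Q+Q^{T})/2\), and its off-diagonal coefficients are the averages \((q_{01}+q_{10})/2\). The short Python/numpy sketch below is our own illustration (the helper name `sym_project` is not from the text):

```python
import itertools
import numpy as np

def sym_project(Q):
    """Projection |||^(k) of a k-tensor, stored as a k-dimensional array of
    coefficients q_j, computed by averaging over all permutations of its axes."""
    k = Q.ndim
    perms = list(itertools.permutations(range(k)))
    return sum(np.transpose(Q, axes=p) for p in perms) / len(perms)

Q = np.array([[1.0, 2.0],
              [6.0, 3.0]])                    # coefficients q_00, q_01, q_10, q_11
P = sym_project(Q)
assert np.allclose(P, (Q + Q.T) / 2)          # k = 2: plain symmetrization
assert P[0, 1] == (Q[0, 1] + Q[1, 0]) / 2     # matches the coefficient formula above
```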
We also need the definition of norm in \(\mathbb{C}^{(k)}\).
**Definition 3.6**.: (Inner product) Recall that \(\mathbf{w}_{0}=(1,0)^{T}\) and \(\mathbf{w}_{1}=(0,1)^{T}\). Consider two tensors \(\eta\) and \(\zeta\) in \(\mathbb{C}^{(k)}\); then the inner product \(\langle\eta,\zeta\rangle\) is given by:
\[\langle\eta,\zeta\rangle=\sum_{\mathbf{i}\in\{0,1\}^{k}}\eta(\mathbf{w}_{i_{1}},\mathbf{w}_{i_{2}},...,\mathbf{w}_{i_{k}})\,\zeta(\mathbf{w}_{i_{1}},\mathbf{w}_{i_{2}},...,\mathbf{w}_{i_{k}}).\]
Consequently, we can define the norm of any k-tensor \(\eta\in\mathbb{C}^{(k)}\) as
\[\|\eta\|=\sqrt{\langle\eta,\eta\rangle}.\]
**Remark 3.6**.: (Norm of a tensor) The norm \((\|\cdot\|)\) of any tensor \(\alpha=\sum_{\mathbf{i}\in\{0,1\}^{k}}\alpha_{\mathbf{i}}\boldsymbol{\omega}_{\mathbf{i}}\) can also be given as \(\|\alpha\|=(\sum_{\mathbf{i}\in\{0,1\}^{k}}\alpha_{\mathbf{i}}^{2})^{1/2}.\) To obtain this relation, one can use the identity that \(\omega_{i_{1}}\otimes...\otimes\omega_{i_{k}}(\mathbf{w}_{j_{1}},\mathbf{w}_{j_{2}},...,\mathbf{w}_{j_{k}})=1\) if \(i_{l}=j_{l}\) for all \(l\in\{1,2,...,k\}\) and zero otherwise.
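These basis facts are easy to verify numerically. The following sketch (our own helper names `omega` and `e`; `comb` denotes the binomial coefficient \({}^{k}C_{i}\)) constructs the \(\mathbf{e}_{i}^{(k)}\) as coefficient arrays and checks that they are orthonormal with respect to the inner product of Definition 3.6:

```python
import itertools
import numpy as np
from math import comb

def omega(idx):
    """Basis k-tensor omega_{i1} x ... x omega_{ik}, as a k-dim 0/1 coefficient array."""
    T = np.zeros((2,) * len(idx))
    T[idx] = 1.0
    return T

def e(i, k):
    """Orthonormal symmetric basis tensor e_i^(k) of Literature Theorem 3.2."""
    T = sum(omega(j) for j in itertools.product((0, 1), repeat=k) if sum(j) == i)
    return T / np.sqrt(comb(k, i))

k = 3
E = [e(i, k) for i in range(k + 1)]
gram = np.array([[np.sum(E[a] * E[b]) for b in range(k + 1)] for a in range(k + 1)])
assert np.allclose(gram, np.eye(k + 1))   # orthonormal w.r.t. Definition 3.6
```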
It can be verified that for any \(\beta=\sum_{i=0}^{k}\beta_{i}\mathbf{e}_{i}^{(k)}\), \(\|\beta\|=(\sum_{i=0}^{k}\beta_{i}^{2})^{1/2}.\) In the context of derivative-based regularization, we need symmetric tensors (representing derivatives) defined at every point of the space. For this, we define the concept of tensor fields. Since we are dealing with images, we will call them tensor images, as they define a tensor at each location in the 2D space.
**Definition 3.7**.: (Continuous Tensor Images (fields)) A continuous \(k\)-tensor image (field) \(\alpha:\mathbb{R}^{2}\to\mathbb{C}^{(k)}\) assigns a tensor to each point of the two-dimensional space. For example, a function \(f:\mathbb{R}^{2}\rightarrow\mathbb{R}\) is a \(0\)-tensor field. Similarly, instead of the complete 2D space, we can define tensor fields confined to an open subset \(\Omega\subset\mathbb{R}^{2}\) as follows:
A continuous \(k\)-tensor image (field) \(\alpha:\Omega\to\mathbb{C}^{(k)}\) assigns a tensor to each point of \(\Omega\). In order to define the discrete total generalized variation, it is required that a tensor is defined at each pixel location. Analogously, we can also define symmetric \(k\)-tensor fields. We denote the set of continuous symmetric \(k\)-tensor fields on \(\Omega\) as \(\mathbb{S}_{\Omega}^{(k)}\).
**Definition 3.8**.: (Discrete Tensor Images) A discrete \(k\)-tensor image (of \(N\) pixels) \(\alpha:\{1,2,...,N\}\to\mathbb{C}^{(k)}\) assigns each pixel a \(k\)-tensor. We denote the set of discrete \(k\)-tensor images on \(N\) ordered pixel locations as \(\mathbb{C}_{N}^{(k)}\). Similarly, we can define discrete symmetric tensor images.
**Definition 3.9**.: A discrete symmetric \(k\)-tensor image (of \(N\) pixels) \(\alpha:\{1,2,...,N\}\to\mathbb{S}^{(k)}\) assigns each pixel a **symmetric** \(k\)-tensor. We denote the set of discrete symmetric \(k\)-tensor fields on \(N\) ordered pixel locations as \(\mathbb{S}_{N}^{(k)}\).
**Definition 3.10**.: (Symmetric derivative \((\mathcal{E}^{(k)})\)) As TGV involves iterated derivatives, we need to define the symmetric derivative \(\mathcal{E}^{(k)}:\mathbb{S}_{\Omega}^{(k)}\rightarrow\mathbb{S}_{\Omega}^{(k+1)}\). Consider any point \((p,q)\) in \(\Omega\), and let \(\boldsymbol{\eta}\in\mathbb{S}_{\Omega}^{(k)}\) be written as \(\boldsymbol{\eta}(p,q)=\sum_{\mathbf{i}\in\{0,1\}^{k}}\eta_{\mathbf{i}}(p,q)\boldsymbol{\omega}_{\mathbf{i}}\). The symmetric iterated derivative in the continuous domain is defined as:
\[(\mathcal{E}^{(k)}(\boldsymbol{\eta}))(p,q)=|||^{(k+1)}\Big{[}\sum_{\mathbf{i}\in\{0,1\}^{k}}\Big{(}(\frac{\partial\eta_{\mathbf{i}}}{\partial x})(p,q)\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{0}+(\frac{\partial\eta_{\mathbf{i}}}{\partial y})(p,q)\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{1}\Big{)}\Big{]}.\]
In the above equation, \(\eta_{\mathbf{i}}:\mathbb{R}^{2}\rightarrow\mathbb{R}\) are real valued functions. Therefore, the partial derivatives \(\frac{\partial}{\partial x}\) and \(\frac{\partial}{\partial y}\) are clearly defined.
**Definition 3.11**.: (Symmetric derivative in discrete form \((\epsilon_{k})\)) In order to define the symmetrized derivative in discrete form, we first consider a discrete tensor image (of \(N\) pixels) \(\boldsymbol{\eta}\in\mathbb{S}_{N}^{(k)}\) such that \(\boldsymbol{\eta}(j)=\sum_{\mathbf{i}\in\{0,1\}^{k}}\eta_{\mathbf{i}}(j)\boldsymbol{\omega}_{\mathbf{i}}\) for \(j\in\{1,2,...,N\}\), and we replace \(\partial/\partial x\) with \(\mathbf{D}_{x}\) and \(\partial/\partial y\) with \(\mathbf{D}_{y}\). As in the continuous setting, \(\eta_{\mathbf{i}}:\{1,2,...,N\}\rightarrow\mathbb{R}\) are real-valued functions (denoting grayscale images). Therefore, \(\mathbf{D}_{x}\) and \(\mathbf{D}_{y}\) are well defined, and \(\mathbf{D}_{x}\eta_{\mathbf{i}}\) and \(\mathbf{D}_{y}\eta_{\mathbf{i}}\) are real-valued derivative images.
\[(\epsilon_{k}(\boldsymbol{\eta}))(j)\stackrel{{ def}}{{=}}|||^{(k+1)}\Big{[}\sum_{\mathbf{i}\in\{0,1\}^{k}}\Big{(}(\mathbf{D}_{x}\eta_{\mathbf{i}})(j)\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{0}+(\mathbf{D}_{y}\eta_{\mathbf{i}})(j)\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{1}\Big{)}\Big{]}. \tag{11}\]
In the above definition, \(\epsilon_{k}\) was expressed in the standard basis. The following lemma expresses it in the orthonormal basis of symmetric tensors.
**Lemma 1**.: _Consider any \(\mathbf{\beta}\in\mathbb{S}_{N}^{(k)}\) given as \(\mathbf{\beta}(i)=\sum_{r=0}^{k}\beta_{r}(i)\mathbf{e}_{r}^{(k)}\) for any pixel location \(i\). The operation of symmetric derivative on \(\beta\) can be written as:_
\[(\epsilon_{k}(\boldsymbol{\beta}))(i)=|||^{(k+1)}\Big{[}\sum_{r=0}^{k}\Big{(}(\mathbf{D}_{x}\beta_{r})(i)\mathbf{e}_{r}^{(k)}\otimes\omega_{0}+(\mathbf{D}_{y}\beta_{r})(i)\mathbf{e}_{r}^{(k)}\otimes\omega_{1}\Big{)}\Big{]}. \tag{12}\]
Proof.: We prove the result from Definition 3.11. Consider any symmetric tensor image \(\boldsymbol{\eta}(r)=\sum_{\mathbf{i}\in\{0,1\}^{k}}\eta_{\mathbf{i}}(r)\boldsymbol{\omega}_{\mathbf{i}}\in\mathbb{S}_{N}^{(k)}.\) Since this tensor image is symmetric, we can write \(\boldsymbol{\eta}(r)\) as a linear combination of the orthonormal basis vectors of \(\mathbb{S}^{(k)}\) (the space of symmetric tensors) as \(\boldsymbol{\eta}(r)=\sum_{j=0}^{k}\eta_{j}^{\prime}(r)\mathbf{e}_{j}^{(k)}\) for each pixel location \(r\). Now we can divide \(\{0,1\}^{k}\) into disjoint sets \(T_{j}\) where \(T_{j}\stackrel{{ def}}{{=}}\{\,\mathbf{i}\in\{0,1\}^{k}\mid s(\mathbf{i})=j\,\}\). It can be observed that \(\{0,1\}^{k}=\cup_{j=0}^{k}T_{j}\). With this, the symmetric tensor is \(\boldsymbol{\eta}=\sum_{j=0}^{k}\sum_{\mathbf{i}\in T_{j}}\eta_{\mathbf{i}}\boldsymbol{\omega}_{\mathbf{i}}\). As \(\boldsymbol{\eta}\) is symmetric, \(\eta_{\mathbf{i}}\) remains the same over the set \(T_{j}\) for any \(j\), i.e., if \(\mathbf{i}^{(1)}\) and \(\mathbf{i}^{(2)}\) both belong to \(T_{j}\) then \(\eta_{\mathbf{i}^{(1)}}=\eta_{\mathbf{i}^{(2)}}\). With this we can define \(\eta_{j}=\eta_{\mathbf{i}}\) for each \(\mathbf{i}\in T_{j}\). Now we can write \(\boldsymbol{\eta}(r)=\sum_{j=0}^{k}\eta_{j}(r)\sum_{\mathbf{i}\in T_{j}}\boldsymbol{\omega}_{\mathbf{i}}\). Using the definition of \(\mathbf{e}_{j}^{(k)}\) we can conclude that \(\eta_{j}(r)\sqrt{{}^{k}C_{j}}=\eta_{j}^{\prime}(r)\) for any \(r\). Now we invoke the definition of the symmetric derivative:
\[(\epsilon_{k}(\boldsymbol{\eta}))(r) =|||^{(k+1)}\Big{[}\sum_{\mathbf{i}\in\{0,1\}^{k}}\Big{(}(\mathbf{D}_{x}\eta_{\mathbf{i}})(r)\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{0}+(\mathbf{D}_{y}\eta_{\mathbf{i}})(r)\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{1}\Big{)}\Big{]} \tag{13}\] \[=|||^{(k+1)}\Big{[}\sum_{j=0}^{k}\sum_{\mathbf{i}\in T_{j}}\Big{(}(\mathbf{D}_{x}\eta_{\mathbf{i}})(r)\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{0}+(\mathbf{D}_{y}\eta_{\mathbf{i}})(r)\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{1}\Big{)}\Big{]}\] (14) \[=|||^{(k+1)}\Big{[}\sum_{j=0}^{k}\Big{(}(\mathbf{D}_{x}\eta_{j})(r)\sum_{\mathbf{i}\in T_{j}}\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{0}+(\mathbf{D}_{y}\eta_{j})(r)\sum_{\mathbf{i}\in T_{j}}\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{1}\Big{)}\Big{]}\] (15) \[=|||^{(k+1)}\Big{[}\sum_{j=0}^{k}\Big{(}(\mathbf{D}_{x}\eta_{j}^{\prime})(r)\mathbf{e}_{j}^{(k)}\otimes\omega_{0}+(\mathbf{D}_{y}\eta_{j}^{\prime})(r)\mathbf{e}_{j}^{(k)}\otimes\omega_{1}\Big{)}\Big{]} \tag{16}\]
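For a concrete feel for Definition 3.11 and Lemma 1, the case \(k=1\) reduces to the per-pixel symmetrized Jacobian of a vector field. The sketch below is our own illustration: the choice of \(\mathbf{D}_{x},\mathbf{D}_{y}\) as forward differences with a zero last row/column is an assumption, since the text only uses these operators abstractly.

```python
import numpy as np

def Dx(u):
    # Forward difference along the first axis; zero in the last row.
    return np.vstack([u[1:, :] - u[:-1, :], np.zeros((1, u.shape[1]))])

def Dy(u):
    # Forward difference along the second axis; zero in the last column.
    return np.hstack([u[:, 1:] - u[:, :-1], np.zeros((u.shape[0], 1))])

def eps1(eta0, eta1):
    """epsilon_1 of a 1-tensor image (eta0, eta1): the symmetrized Jacobian
    (a symmetric 2-tensor, i.e. a 2x2 matrix) at every pixel."""
    J = np.stack([np.stack([Dx(eta0), Dy(eta0)], axis=-1),
                  np.stack([Dx(eta1), Dy(eta1)], axis=-1)], axis=-2)
    return 0.5 * (J + np.swapaxes(J, -1, -2))   # the projection ||| applied per pixel

rng = np.random.default_rng(0)
eta0, eta1 = rng.standard_normal((2, 4, 4))
E = eps1(eta0, eta1)
assert np.allclose(E, np.swapaxes(E, -1, -2))   # symmetric at every pixel
```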
**Definition 3.12**.: The Total Generalized Variation (TGV) in continuous form as given by [3] can be written as:
\[\mathcal{TGV}^{n}(\mathbf{g})=\inf_{u_{i}\in\mathbb{S}_{\Omega}^{(i)},\,u_{0}=\mathbf{g},\,u_{n}=0}\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\mathcal{E}^{(i)}u_{i}-u_{i+1}\|. \tag{17}\]
**Definition 3.13**.: The above definition of Total Generalized Variation (TGV) can be discretized by replacing \(\mathcal{E}^{(i)}\) with its discrete counterpart \(\epsilon_{i}\) as:
\[TGV^{n}(\mathbf{g})=\inf_{\mathbf{u}_{i}\in\mathbb{S}_{N}^{(i)},\,\mathbf{u}_{0}=\mathbf{g},\,\mathbf{u}_{n}=\mathbf{0}}\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\epsilon_{i}\mathbf{u}_{i}-\mathbf{u}_{i+1}\|. \tag{18}\]
### Some results on tensors used for deriving TGV formulations
The following result relates the two bases defined in the previous section.
**Proposition 1**.: _(Relation between the standard basis of \(\mathbb{C}^{(k)}\) and the orthonormal basis of \(\mathbb{S}^{(k)}\)) Consider \(\alpha_{j}=\boldsymbol{\omega}_{\mathbf{p}}=\omega_{p_{1}}\otimes...\otimes\omega_{p_{k}}\in\mathbb{C}^{(k)},\) such that \(s(\mathbf{p})=\sum_{l=1}^{k}p_{l}=j\) for some \(j\in\{0,1,...,k\}\). Then, \(|||^{(k)}(\alpha_{j})=\frac{1}{\sqrt{{}^{k}C_{j}}}\,\mathbf{e}_{j}^{(k)}.\)_
Proof: By literature Theorem 3.2 we have,
\[\mathbf{e}_{j}^{(k)}=\frac{1}{\sqrt{{}^{k}C_{j}}}\sum_{\mathbf{m}\in\{0,1\}^{ k},s(\mathbf{m})=j}\boldsymbol{\omega}_{\mathbf{m}}.\]
Applying the (linear) operator \(|||^{(k)}\) on both sides we get,
\[|||^{(k)}\mathbf{e}_{j}^{(k)}=\frac{1}{\sqrt{{}^{k}C_{j}}}\sum_{\mathbf{m}\in \{0,1\}^{k},s(\mathbf{m})=j}|||^{(k)}\boldsymbol{\omega}_{\mathbf{m}}.\]
Now, from definition 3.5 it can be seen that all elements inside the summation are equal. Therefore, we get,
\[|||^{(k)}\mathbf{e}_{j}^{(k)}=\frac{1}{\sqrt{{}^{k}C_{j}}}{{}^{k}C_{j}}|||^{( k)}\boldsymbol{\omega}_{\mathbf{m}}.\]
Using the fact that \(|||^{(k)}\) is the projection and \(\mathbf{e}_{j}^{(k)}\) is a symmetric tensor gives the result.
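For instance, for \(k=2\) and \(\mathbf{p}=(0,1)\) (so \(j=s(\mathbf{p})=1\)),
\[|||^{(2)}(\omega_{0}\otimes\omega_{1})=\frac{1}{2}\left(\omega_{0}\otimes\omega_{1}+\omega_{1}\otimes\omega_{0}\right)=\frac{1}{\sqrt{2}}\,\mathbf{e}_{1}^{(2)},\]
in agreement with the proposition, since \(\mathbf{e}_{1}^{(2)}=\frac{1}{\sqrt{2}}(\omega_{0}\otimes\omega_{1}+\omega_{1}\otimes\omega_{0})\) and \({}^{2}C_{1}=2\).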
The following proposition identifies tensor images with matrices which allows us to represent TGV in a tensor-free form.
**Proposition 2**.: _The set of discrete symmetric tensor images \(\mathbb{S}_{N}^{(k)}\) is a vector space (over \(\mathbb{R}\)) of dimension \(N\times(k+1)\) and is therefore isomorphic to \(\mathbb{R}^{N\times(k+1)}.\)_
Proof.: To see that \(\mathbb{S}_{N}^{(k)}\) is a vector space, one can verify that the linear combination of any two elements of \(\mathbb{S}_{N}^{(k)}\) is in \(\mathbb{S}_{N}^{(k)}\).
To show that \(\mathbb{S}_{N}^{(k)}\) is isomorphic to \(\mathbb{R}^{N\times(k+1)}\), we show that the dimension of \(\mathbb{S}_{N}^{(k)}\) is \(N\times(k+1)\). To this end, we prove that a basis of \(\mathbb{S}_{N}^{(k)}\) is the set of tensor images \(\{f_{i,j}:\{1,2,...,N\}\rightarrow\mathbb{S}^{(k)}\mid i\in\{1,2,...,N\},j\in\{0,1,...,k\}\}\) (of \(N\) pixels) defined for each pixel \(r\) as \(f_{i,j}(r)=\mathbf{e}_{j}^{(k)}\) if \(r=i\) and \(f_{i,j}(r)=0\) if \(r\neq i.\) To
show that the set spans, consider any \(\boldsymbol{\beta}\in\mathbb{S}_{N}^{(k)}\) given as \(\boldsymbol{\beta}(r)=\sum_{j=0}^{k}\beta_{j}(r)\mathbf{e}_{j}^{(k)}\) in the orthogonal basis for the space of symmetric k-tensors. Now, for any \(r\),
\[\boldsymbol{\beta}(r)=\sum_{j=0}^{k}\beta_{j}(r)\mathbf{e}_{j}^{(k)}.\]
By definition of \(f_{i,j}s\) we can write:
\[\boldsymbol{\beta}(r)= \sum_{j=0}^{k}\beta_{j}(r)f_{r,j}(r) \tag{19}\] \[= \big{[}\sum_{i=1}^{N}\sum_{j=0}^{k}\beta_{j}(i)f_{i,j}\big{]}(r). \tag{20}\]
Therefore, any element in \(\mathbb{S}_{N}^{(k)}\) can be written as a linear combination of the \(f_{i,j}\). To show that they are linearly independent, we consider the linear combination \(\sum_{i=1}^{N}\sum_{j=0}^{k}a_{i,j}f_{i,j}=\mathbf{0}.\) Choose any \(r\in\{1,2,...,N\}.\) For this \(r\) we have \(\sum_{i=1}^{N}\sum_{j=0}^{k}a_{i,j}f_{i,j}(r)=\mathbf{0}\). By the definition of \(f_{i,j}\), \(\sum_{j=0}^{k}a_{r,j}f_{r,j}(r)=\mathbf{0},\) which means \(\sum_{j=0}^{k}a_{r,j}\mathbf{e}_{j}^{(k)}=\mathbf{0}\). Since the \(\mathbf{e}_{j}^{(k)}\) are linearly independent, \(a_{r,j}=0\) for \(j=0,1,...,k.\) As \(r\) was arbitrarily chosen, \(a_{r,j}=0\) for all \(r\in\{1,2,...,N\}\) and \(j\in\{0,...,k\}\).
**Remark 3.7**.: (Isomorphism between tensor images and matrices) As a result of the above proposition, any discrete symmetric tensor field \(\boldsymbol{\alpha}\in\mathbb{S}_{N}^{(k)}\) can be represented by a matrix of size \(N\times(k+1).\) We now explicitly define the isomorphism between the two vector spaces. Consider any symmetric tensor field \(\boldsymbol{\alpha}\), defined for any pixel index \(r\in\{1,2,...,N\}\) as \(\boldsymbol{\alpha}(r)=\sum_{j=0}^{k}\alpha_{j}(r)\,\mathbf{e}_{j}^{(k)}\). Here, \(\alpha_{j}(r)\) denotes the coefficient for the basis vector \(\mathbf{e}_{j}^{(k)}.\) The isomorphism \(\phi_{k}:\mathbb{S}_{N}^{(k)}\rightarrow\mathbb{R}^{N\times(k+1)}\) is given as: \(\phi_{k}(\boldsymbol{\alpha})=\begin{bmatrix}\alpha_{0}(1)&...&\alpha_{k}(1)\\ \alpha_{0}(2)&...&\alpha_{k}(2)\\ \vdots&\vdots&\vdots\\ \alpha_{0}(N)&...&\alpha_{k}(N)\end{bmatrix}.\)
**Remark 3.8**.: The set of tensor images \(\mathbb{C}_{N}^{(k)}\) is a vector space (over \(\mathbb{R}\)) of dimension \(N\times 2^{k}.\) Therefore, it is isomorphic to \(\mathbb{R}^{N\times 2^{k}}\), and we denote the isomorphism as \(\psi_{k}:\mathbb{C}_{N}^{(k)}\rightarrow\mathbb{R}^{N\times 2^{k}}.\)
The proof of the above remark is similar to that of Proposition 2; hence, we skip it here.
**Definition 3.14**.: The mixed norm \((\|\cdot\|_{1,2})\) for any \(\mathbf{M}\in\mathbb{R}^{N\times p}\) is defined as:
\[\|\mathbf{M}\|_{1,2}=\sum_{j=1}^{N}(\sum_{i=1}^{p}\mathbf{M}_{j,i}^{2})^{1/2}.\]
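In numpy terms the mixed norm is simply the sum, over the rows (pixels), of the Euclidean norm of each row; a one-line sketch (our own helper name):

```python
import numpy as np

def mixed_norm_12(M):
    """|| M ||_{1,2}: sum over rows (pixels) of the l2 norm of each row."""
    return float(np.sum(np.linalg.norm(M, axis=1)))

M = np.array([[3.0, 4.0],
              [0.0, 1.0]])
print(mixed_norm_12(M))   # 5.0 + 1.0 = 6.0
```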
**Remark 3.9**.: For any \(\boldsymbol{\alpha}\in\mathbb{S}_{N}^{(k)}\), \(\|\boldsymbol{\alpha}\|\stackrel{{ def}}{{=}}\sum_{i=1}^{N}\|\boldsymbol{\alpha}(i)\|\). With this definition we get \(\|\boldsymbol{\alpha}\|=\|\phi_{k}(\boldsymbol{\alpha})\|_{1,2}\).
Proof.: To prove this we need to show that for any \(\beta=\sum_{r=0}^{k}\beta_{r}\mathbf{e}_{r}^{(k)}\in\mathbb{S}^{(k)},\quad\| \beta\|=\big{(}\sum_{r=0}^{k}\beta_{r}^{2}\big{)}^{\frac{1}{2}}.\) This follows from the fact that \(\langle\mathbf{e}_{j}^{(k)},\mathbf{e}_{l}^{(k)}\rangle=1\) if \(j=l\) and \(0\) else.
### Organization of Proofs
To establish the results regarding the representation of Total Generalized Variation (TGV), we start by considering the discrete definition of TGV (see definition 3.13), which corresponds to a discretized rendition of the definition presented in [3]. Through the utilization of proposition 3 (that gives the iterated gradient in its matrix form), we substantiate the theorem that presents the direct representation of TGV (refer to theorem 1). Furthermore, starting with the same definition, we harness the insights provided by lemma 1 to facilitate the expression from definition 3.13 in terms of the basis of symmetric tensors. This effort culminates in proving the theorem that formulates the compact representation (see theorem 2). Importantly, remark 2.1 independently demonstrates the equivalence of both of these forms. Hence, an alternative approach to establishing the compact form involves proving the direct form and using remark 2.1.
### Proofs of the theorems for the representation of Total Generalized Variation
As we need a tensor-free representation, we first give a matrix equivalent of \(\epsilon_{k}\) (the symmetric derivative operator) with the help of the following proposition.
**Proposition 3**.: _For any tensor image \(\mathbf{\eta}\in\mathbb{S}_{N}^{(k)},\) we have \(\psi_{k+1}\epsilon_{k}\mathbf{\eta}=\Pi^{(k+1)}\mathcal{D}_{2^{k}-1}\psi_{k}(\mathbf{ \eta}).\)_
Proof.: We will prove this by proving the following for each index \((r,l)\):
\[(\psi_{k+1}\epsilon_{k}\mathbf{\eta})(r,l)=(\Pi^{(k+1)}\mathcal{D}_{2^{k}-1}\psi_{k }(\mathbf{\eta}))(r,l).\]
We start with definition 3.11, the definition of symmetric gradient \(\epsilon\). For any pixel location \(r\in\{1,..,N\}\) we can write \(\mathbf{\eta}\) as a linear combination of the basis vectors \(\mathbf{\omega_{\mathrm{i}}}\) with coefficients \(\eta_{\mathrm{i}}\) as:
\[\epsilon_{k}\mathbf{\eta}(r)\stackrel{{ def}}{{=}}|||^{(k+1)}\Big{[} \sum_{\mathbf{i}\in\{0,1\}^{k}}\Big{(}(\mathbf{D}_{x}\eta_{\mathrm{i}})(r)\bm {\omega_{\mathrm{i}}}\otimes\omega_{0}+(\mathbf{D}_{y}\eta_{\mathrm{i}})(r) \mathbf{\omega_{\mathrm{i}}}\otimes\omega_{1}\Big{)}\Big{]}.\]
Let \(\mathbf{w_{j}}\) be the tuple \((\mathbf{w}_{j_{1}},...,\mathbf{w}_{j_{k+1}})\) (recall the definitions \(\mathbf{w}_{0}=(1,0)^{T}\) and \(\mathbf{w}_{1}=(0,1)^{T}\)) for some \(\mathbf{j}=(j_{1},...,j_{k+1})\in\{0,1\}^{k+1}\). Also, observe that the \((r,l)\) element of \(\psi_{k+1}\epsilon_{k}\boldsymbol{\eta}\) is \((\epsilon_{k}(\boldsymbol{\eta})(r))(\mathbf{w_{j}})\), where \(\mathbf{j}=b(l).\) With this we have,
\[\big{(}\epsilon_{k}\mathbf{\eta}(r)\big{)}(\mathbf{w_{j}})=\Big{[}\sum_{\mathbf{i }\in\{0,1\}^{k}}|||^{(k+1)}\Big{(}(\mathbf{D}_{x}\eta_{\mathrm{i}})(r)(\mathbf{ \omega_{\mathrm{i}}}\otimes\omega_{0})(\mathbf{w_{j}})+(\mathbf{D}_{y}\eta_{ \mathrm{i}})(r)(\mathbf{\omega_{\mathrm{i}}}\otimes\omega_{1})(\mathbf{w_{j}}) \Big{)}\Big{]}.\]
Let \(B\) be the matrix \(\mathcal{D}_{2^{k}-1}\psi_{k}(\mathbf{\eta})\), then by the definition of \(\mathcal{D}_{2^{k}-1}\) we have that \(B_{r,b^{-1}((\mathbf{i},0))}=(\mathbf{D}_{x}\eta_{\mathrm{i}})(r),\) similarly, \(B_{r,b^{-1}((\mathbf{i},1))}=(\mathbf{D}_{y}\eta_{\mathrm{i}})(r).\) This is because at odd indices we have \(\mathbf{D}_{y}\) and at even indices we have \(\mathbf{D}_{x}\). With these we have:
\[\big{(}\epsilon_{k}\boldsymbol{\eta}(r)\big{)}(\mathbf{w_{j}})=\frac{1}{(k+1)!}\Big{[}\sum_{\mathbf{i}\in\{0,1\}^{k}}\sum_{\pi\in S_{k+1}}\Big{(}B_{r,b^{-1}((\mathbf{i},0))}(\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{0})(\mathbf{w}_{f_{\pi}(\mathbf{j})})+B_{r,b^{-1}((\mathbf{i},1))}(\boldsymbol{\omega}_{\mathbf{i}}\otimes\omega_{1})(\mathbf{w}_{f_{\pi}(\mathbf{j})})\Big{)}\Big{]}.\]
The two terms inside the summation can be combined into a single summation by defining a bigger vector \(\mathbf{m}=(\mathbf{i},*)\). Here, \(*\) can be \(0\) or \(1\). Now, the expression becomes:
\[\big{(}\epsilon_{k}\boldsymbol{\eta}(r)\big{)}(\mathbf{w_{j}})=\frac{1}{(k+1)!}\Big{[}\sum_{\mathbf{m}\in\{0,1\}^{k+1}}\sum_{\pi\in S_{k+1}}\Big{(}B_{r,b^{-1}(\mathbf{m})}(\boldsymbol{\omega}_{\mathbf{m}})(\mathbf{w}_{f_{\pi}(\mathbf{j})})\Big{)}\Big{]}.\]
Now, \((\mathbf{\omega_{\mathrm{m}}})(\mathbf{w}_{f_{\pi}(\mathbf{j})})\) is one if and only if \(\mathbf{m}=f_{\pi}(\mathbf{j})\) and \(0\) else. Observe that \(\mathbf{j}=b(l)\). With this, we have:
\[\big{(}\epsilon_{k}\boldsymbol{\eta}(r)\big{)}(\mathbf{w_{j}})=\frac{1}{(k+1)!}\Big{[}\sum_{\pi\in S_{k+1}}\Big{(}B_{r,b^{-1}(f_{\pi}(b(l)))}\Big{)}\Big{]}=(\Pi^{(k+1)}B)_{r,l}=(\Pi^{(k+1)}\mathcal{D}_{2^{k}-1}\psi_{k}(\boldsymbol{\eta}))_{r,l}.\]
Now, with the help of the above proposition, we give a tensor-free representation of TGV.
**Theorem 1**.: _(Direct Tensor-free Representation of TGV) Let \(\mathbf{g}\) be an image having N pixels. Then, the total generalized variation in discrete form is given as:_
\[TGV^{n}(\mathbf{g})=\inf_{\mathbf{p}_{0}=\mathbf{g},\,\mathbf{p}_{n}=\mathbf{0},\,\mathbf{p}_{i}\in\mathbb{R}^{N\times(2^{i})},\,\mathbf{p}_{i}\in\mathcal{R}(\Pi^{(i)})}\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\Pi^{(i+1)}\mathcal{D}_{2^{i}-1}\mathbf{p}_{i}-\mathbf{p}_{i+1}\|_{1,2}. \tag{9}\]
_Here, \(\alpha_{0},...,\alpha_{n-1}\) are the regularization parameters and \(\Pi^{(i+1)}\) and \(\mathcal{D}_{2^{i}-1}\) are as given in the section 1._
Proof.: Recalling the definition of the total generalized variation (definition 3.13):
\[TGV^{n}(\mathbf{g})=\inf_{\mathbf{u}_{i}\in\mathbb{S}_{N}^{(i)},\mathbf{u}_{0 }=(\mathbf{g}),\mathbf{u}_{n}=\mathbf{0}}\sum_{i=0}^{n-1}\alpha_{n-i-1}\| \epsilon_{i}\mathbf{u}_{i}-\mathbf{u}_{i+1}\|. \tag{21}\]
Let \(\boldsymbol{\eta}(j)=\sum_{\mathbf{p}\in\{0,1\}^{i}}\eta_{\mathbf{p}}(j) \boldsymbol{\omega}_{\mathbf{p}}\) be any symmetric i-tensor image, then from proposition 3 we obtain for any pixel location \(j\):
\[(\epsilon_{i}(\boldsymbol{\eta}))(j) \tag{22}\] \[=|||^{(i+1)}\Big{[}\sum_{\mathbf{p}\in\{0,1\}^{i}}\Big{(}(\mathbf{D}_{x}\eta_{\mathbf{p}})(j)\boldsymbol{\omega}_{\mathbf{p}}\otimes\omega_{0}+(\mathbf{D}_{y}\eta_{\mathbf{p}})(j)\boldsymbol{\omega}_{\mathbf{p}}\otimes\omega_{1}\Big{)}\Big{]}.\] (23) \[=(\psi_{i+1}^{-1}\Pi^{(i+1)}\mathcal{D}_{2^{i}-1}\psi_{i}(\boldsymbol{\eta}))(j) \tag{24}\]
With the above result \(TGV^{n}\) becomes:
\[TGV^{n}(\mathbf{g})=\inf_{\mathbf{u}_{i}\in\mathbb{S}_{N}^{(i)},\,\mathbf{u}_{0}=\mathbf{g},\,\mathbf{u}_{n}=\mathbf{0}}\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\psi_{i+1}^{-1}\Pi^{(i+1)}\mathcal{D}_{2^{i}-1}\psi_{i}\mathbf{u}_{i}-\mathbf{u}_{i+1}\| \tag{25}\] \[\stackrel{{(p)}}{{=}}\inf_{\mathbf{u}_{i}\in\mathbb{S}_{N}^{(i)},\,\mathbf{u}_{0}=\mathbf{g},\,\mathbf{u}_{n}=\mathbf{0}}\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\psi_{i+1}^{-1}\Pi^{(i+1)}\mathcal{D}_{2^{i}-1}\psi_{i}\mathbf{u}_{i}-\psi_{i+1}^{-1}\psi_{i+1}\mathbf{u}_{i+1}\|.\] (26) \[\stackrel{{(q)}}{{=}}\inf_{\mathbf{v}_{i}\in\mathcal{R}(\Pi^{(i)}),\,\mathbf{v}_{0}=\mathbf{g},\,\mathbf{v}_{n}=\mathbf{0}}\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\Pi^{(i+1)}\mathcal{D}_{2^{i}-1}\mathbf{v}_{i}-\mathbf{v}_{i+1}\|_{1,2}. \tag{27}\]
Here, (p) follows from the definition of the norm of a tensor, and (q) follows by substituting \(\psi_{i}\mathbf{u}_{i}=\mathbf{v}_{i}\) for \(i=0,1,...,n\).
To give the compact, tensor-free representation, we express \(\epsilon_{k}\) in matrix form using the orthonormal basis of symmetric tensors rather than the standard basis of tensor images. This greatly simplifies the TGV expression.
**Proposition 4**.: _(Derivation of symmetric derivative operator \(\epsilon_{k}\) in compact matrix form using basis \(\{\mathbf{e}_{j}^{(k)}\}_{j=0}^{k}\) of \(\mathbb{S}_{N}^{(k)}\) )_
_The linear operator \(\phi_{k+1}\epsilon_{k}\phi_{k}^{-1}:\mathbb{R}^{N\times(k+1)}\rightarrow\mathbb{R}^{N\times(k+2)}\) can be written as the composition \(\mathcal{A}_{k}\circ\mathcal{D}_{k}\) of two linear operators \(\mathcal{A}_{k}:\mathbb{R}^{N\times(2k+2)}\rightarrow\mathbb{R}^{N\times(k+2)}\) and \(\mathcal{D}_{k}:\mathbb{R}^{N\times(k+1)}\rightarrow\mathbb{R}^{N\times(2k+2)}\)._
_Here, \(\mathcal{A}_{k}(\mathbf{z})=\mathbf{z}\cdot M_{k}\), where_
\[M_{k}=\left[\begin{array}{cccccc}1&0&0&\cdots&0&0\\ 0&\frac{\sqrt{{}^{k}C_{1}}}{\sqrt{{}^{k+1}C_{1}}}&0&\cdots&0&0\\ 0&\frac{\sqrt{{}^{k}C_{0}}}{\sqrt{{}^{k+1}C_{1}}}&0&\cdots&0&0\\ 0&0&\frac{\sqrt{{}^{k}C_{2}}}{\sqrt{{}^{k+1}C_{2}}}&\cdots&0&0\\ 0&0&\frac{\sqrt{{}^{k}C_{1}}}{\sqrt{{}^{k+1}C_{2}}}&\cdots&0&0\\ \vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\ 0&0&0&\cdots&\frac{\sqrt{{}^{k}C_{k}}}{\sqrt{{}^{k+1}C_{k}}}&0\\ 0&0&0&\cdots&\frac{\sqrt{{}^{k}C_{k-1}}}{\sqrt{{}^{k+1}C_{k}}}&0\\ 0&0&0&\cdots&0&1\end{array}\right];\]
\[\mathcal{D}_{k}([\mathbf{y}_{0},\mathbf{y}_{1},...,\mathbf{y}_{k}])=[\mathbf{ D}_{x}\mathbf{y}_{0},\mathbf{D}_{y}\mathbf{y}_{0},\mathbf{D}_{x}\mathbf{y}_{1}, \mathbf{D}_{y}\mathbf{y}_{1},...,\mathbf{D}_{x}\mathbf{y}_{k},\mathbf{D}_{y} \mathbf{y}_{k}].\]
_Also, \(\mathcal{A}_{k}\mathcal{A}_{k}^{T}=\mathcal{I}.\)_
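The claim \(\mathcal{A}_{k}\mathcal{A}_{k}^{T}=\mathcal{I}\) can also be verified numerically: it amounts to \(M_{k}^{T}M_{k}=I\), which holds column by column by Pascal's rule \({}^{k}C_{j}+{}^{k}C_{j-1}={}^{k+1}C_{j}\). The sketch below uses our own row indexing of \(M_{k}\) (matching the interleaved column order of \(\mathcal{D}_{k}\)); the check itself does not depend on the row order.

```python
import numpy as np
from math import comb

def M(k):
    """(2k+2) x (k+2) matrix of Proposition 4: column j collects the
    contributions of D_x y_j and D_y y_{j-1} to the basis tensor e_j^(k+1)."""
    Mk = np.zeros((2 * k + 2, k + 2))
    for j in range(k + 2):
        if j <= k:       # D_x y_j (row 2j) feeds e_j^(k+1)
            Mk[2 * j, j] = np.sqrt(comb(k, j) / comb(k + 1, j))
        if j >= 1:       # D_y y_{j-1} (row 2j-1) feeds e_j^(k+1)
            Mk[2 * j - 1, j] = np.sqrt(comb(k, j - 1) / comb(k + 1, j))
    return Mk

for k in range(5):
    assert np.allclose(M(k).T @ M(k), np.eye(k + 2))   # A_k A_k^T = I
```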
Proof.: Let \(\boldsymbol{\beta}\in\mathbb{S}_{N}^{(k)}\) be given as \(\boldsymbol{\beta}(i)=\sum_{r=0}^{k}\beta_{r}(i)\mathbf{e}_{r}^{(k)}\). For any pixel location \(i\), the operator \(\epsilon_{k}\) is given as:
\[(\epsilon_{k}(\boldsymbol{\beta}))(i)=|||^{(k+1)}\Big{[}\sum_{r=0}^{k}\Big{(}(\mathbf{D}_{x}\beta_{r})(i)\mathbf{e}_{r}^{(k)}\otimes\omega_{0}+(\mathbf{D}_{y}\beta_{r})(i)\mathbf{e}_{r}^{(k)}\otimes\omega_{1}\Big{)}\Big{]}. \tag{28}\]
Since, \(|||^{(k+1)}\) is a linear operator we have,
\[(\epsilon_{k}(\boldsymbol{\beta}))(i)=\sum_{r=0}^{k}\Big{(}(\mathbf{D}_{x}\beta_{r})(i)\,|||^{(k+1)}(\mathbf{e}_{r}^{(k)}\otimes\omega_{0})+(\mathbf{D}_{y}\beta_{r})(i)\,|||^{(k+1)}(\mathbf{e}_{r}^{(k)}\otimes\omega_{1})\Big{)}. \tag{29}\]
Consider, by Literature Theorem 3.2 and Proposition 1,
\[|||^{(k+1)}(\mathbf{e}_{r}^{(k)}\otimes\omega_{0})=\frac{1}{\sqrt{{}^{k}C_{r}}}\sum_{\mathbf{m}\in\{0,1\}^{k},\,s(\mathbf{m})=r}|||^{(k+1)}(\boldsymbol{\omega}_{\mathbf{m}}\otimes\omega_{0})=\frac{\sqrt{{}^{k}C_{r}}}{\sqrt{{}^{k+1}C_{r}}}\,\mathbf{e}_{r}^{(k+1)},\]
and, analogously, \(|||^{(k+1)}(\mathbf{e}_{r}^{(k)}\otimes\omega_{1})=\frac{\sqrt{{}^{k}C_{r}}}{\sqrt{{}^{k+1}C_{r+1}}}\,\mathbf{e}_{r+1}^{(k+1)}\). Substituting these identities into the expression above and collecting, for each \(j\), the coefficients of \(\mathbf{e}_{j}^{(k+1)}\) contributed by \(\mathbf{D}_{x}\beta_{j}\) and \(\mathbf{D}_{y}\beta_{j-1}\) yields exactly the matrix \(M_{k}\), i.e., \(\phi_{k+1}\epsilon_{k}\phi_{k}^{-1}=\mathcal{A}_{k}\circ\mathcal{D}_{k}\). Finally, the columns of \(M_{k}\) have disjoint supports and unit norm by Pascal's rule \({}^{k}C_{j}+{}^{k}C_{j-1}={}^{k+1}C_{j}\), so \(M_{k}^{T}M_{k}=I\) and hence \(\mathcal{A}_{k}\mathcal{A}_{k}^{T}=\mathcal{I}\).

**Theorem 2**.: _(Compact Tensor-free Representation of TGV) Let \(\mathbf{g}\) be an image having \(N\) pixels. Then, the total generalized variation in discrete form can equivalently be written as:_
\[TGV^{n}(\mathbf{g})=\inf_{\mathbf{p}_{0}=\mathbf{g},\,\mathbf{p}_{n}=\mathbf{0},\,\mathbf{p}_{i}\in\mathbb{R}^{N\times(i+1)}}\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\mathcal{A}_{i}\mathcal{D}_{i}\mathbf{p}_{i}-\mathbf{p}_{i+1}\|_{1,2}.\]
Proof.: We begin the proof from the definition of the total generalized variation (definition 3.13):
\[TGV^{n}(\mathbf{g})=\inf_{\mathbf{u}_{i}\in\mathbb{S}_{N}^{(i)},\mathbf{u}_{0}=( \mathbf{g}),\mathbf{u}_{n}=\mathbf{0}}\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\epsilon_ {i}\mathbf{u}_{i}-\mathbf{u}_{i+1}\|. \tag{32}\]
Using Proposition 4 above, we have:
\[TGV^{n}(\mathbf{g}) =\inf_{\mathbf{u}_{i}\in\mathbb{S}_{N}^{(i)},\mathbf{u}_{0}= \mathbf{g},\mathbf{u}_{n}=0}\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\phi_{i+1}^{-1} \mathcal{A}_{i}\circ\mathcal{D}_{i}\phi_{i}\mathbf{u}_{i}-\mathbf{u}_{i+1}\| \tag{33}\] \[=\inf_{\mathbf{u}_{i}\in\mathbb{S}_{N}^{(i)},u_{0}=(\mathbf{g}),u _{n}=0}\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\phi_{i+1}^{-1}\mathcal{A}_{i}\circ \mathcal{D}_{i}\phi_{i}\mathbf{u}_{i}-\phi_{i+1}^{-1}\phi_{i+1}\mathbf{u}_{i+1}\|\] (34) \[\stackrel{{(r)}}{{=}}\inf_{\mathbf{p}_{i}\in\mathbb{ R}^{N\times(i+1)},\mathbf{p}_{0}=\mathbf{g},\mathbf{p}_{n}=0}\sum_{i=0}^{n-1} \alpha_{n-i-1}\|\mathcal{A}_{i}\circ\mathcal{D}_{i}\mathbf{p}_{i}-\mathbf{p}_ {i+1}\|_{1,2} \tag{35}\]
In the above expression, (r) follows from Remark 3.9; substituting \(\phi_{i}\mathbf{u}_{i}=\mathbf{p}_{i}\) for \(i=0,1,...,n\) gives the result.
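To see the compact form in action, the sketch below assembles the \(n=2\) objective of the compact representation, reusing the helpers `M`, `Dx`, `Dy`, and `mixed_norm_12` from the earlier snippets. It only evaluates the objective at one feasible point (\(\mathbf{p}_{1}=\mathbf{0}\), \(\mathbf{p}_{2}=\mathbf{0}\)) rather than computing the infimum; the square-image flattening convention and the choice \(\alpha_{0}=1\), \(\alpha_{1}=2\) are our own illustrative assumptions.

```python
import numpy as np

def D_op(P):
    """D_k of Proposition 4: interleave the x- and y-derivatives of each column,
    where every column of P is an H x W image flattened in C-order."""
    H = W = int(np.sqrt(P.shape[0]))            # assumes a square image
    cols = []
    for c in range(P.shape[1]):
        img = P[:, c].reshape(H, W)
        cols += [Dx(img).ravel(), Dy(img).ravel()]
    return np.stack(cols, axis=1)

def tgv2_objective(g, p1, alpha0=1.0, alpha1=2.0):
    """alpha1*||A_0 D_0 p_0 - p_1||_{1,2} + alpha0*||A_1 D_1 p_1||_{1,2},
    i.e. the n = 2 compact objective with p_0 = g and p_2 = 0."""
    p0 = g.reshape(-1, 1)
    t0 = D_op(p0) @ M(0) - p1                   # A_0 D_0 p_0 - p_1
    t1 = D_op(p1) @ M(1)                        # A_1 D_1 p_1 - p_2, with p_2 = 0
    return alpha1 * mixed_norm_12(t0) + alpha0 * mixed_norm_12(t1)

g = np.arange(16.0).reshape(4, 4)
print(tgv2_objective(g, p1=np.zeros((16, 2))))  # one feasible value, not the infimum
```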
We use the following lemma to prove the equivalence of the two representations.
**Lemma 2**.: _Consider any \(\mathbf{p}=[\mathbf{p}_{0},...,\mathbf{p}_{k}]\in\mathbb{R}^{N\times(k+1)}\), \(\mathbf{q}=[\mathbf{q}_{0},...,\mathbf{q}_{k+1}]\in\mathbb{R}^{N\times(k+2)}\), \(\mathbf{u}=[\mathbf{u}_{0},...,\mathbf{u}_{2^{k}-1}]\in\mathbb{R}^{N\times 2^{k}}\), \(\mathbf{v}=[\mathbf{v}_{0},...,\mathbf{v}_{2^{k+1}-1}]\in\mathbb{R}^{N\times(2^{k+1})}\). If \(\mathbf{u}_{j}=\frac{\mathbf{p}_{s(b(j))}}{\sqrt{{}^{k}C_{s(b(j))}}}\) for \(j=0,...,2^{k}-1\) and \(\mathbf{v}_{l}=\frac{\mathbf{q}_{s(b(l))}}{\sqrt{{}^{k+1}C_{s(b(l))}}}\) for \(l=0,...,2^{k+1}-1\), then \(\|\mathcal{A}_{k}\mathcal{D}_{k}\mathbf{p}-\mathbf{q}\|_{1,2}=\|\Pi^{(k+1)}\mathcal{D}_{2^{k}-1}\mathbf{u}-\mathbf{v}\|_{1,2}\)._
Proof.: By the given definition of \(\mathbf{u}\) we have:
\[\mathcal{D}_{2^{k}-1}\mathbf{u} =[\mathbf{D}_{x}\mathbf{u}_{0},\mathbf{D}_{y}\mathbf{u}_{0},\mathbf{D}_{x}\mathbf{u}_{1},\mathbf{D}_{y}\mathbf{u}_{1},...,\mathbf{D}_{x}\mathbf{u}_{2^{k}-1},\mathbf{D}_{y}\mathbf{u}_{2^{k}-1}] \tag{37}\] \[=[\frac{\mathbf{D}_{x}\mathbf{p}_{0}}{\sqrt{{}^{k}C_{0}}},\frac{\mathbf{D}_{y}\mathbf{p}_{0}}{\sqrt{{}^{k}C_{0}}},\frac{\mathbf{D}_{x}\mathbf{p}_{1}}{\sqrt{{}^{k}C_{1}}},\frac{\mathbf{D}_{y}\mathbf{p}_{1}}{\sqrt{{}^{k}C_{1}}},...,\frac{\mathbf{D}_{x}\mathbf{p}_{k}}{\sqrt{{}^{k}C_{k}}},\frac{\mathbf{D}_{y}\mathbf{p}_{k}}{\sqrt{{}^{k}C_{k}}}]. \tag{38}\]
For any \(i\in\mathbb{N}\), define the set \([i]\stackrel{{ def}}{{=}}\left\{\,r\in\mathbb{N}\cup 0\mid r<i\,\right\}\). Now, the definition of \(\mathcal{D}_{2^{k}-1}(\cdot)\) implies that for any even \(r\in[2^{k+1}]\), \((\mathcal{D}_{2^{k}-1}\mathbf{u})_{:,r}=\mathbf{D}_{x}\mathbf{u}_{r/2}\), and for
odd \(r\), \((\mathcal{D}_{2^{k}-1}\mathbf{u})_{:,r}=\mathbf{D}_{y}\mathbf{u}_{(r-1)/2}\). As multiplication by \(2\) only shifts the binary code one place to the left, \(s(b(r/2))=s(b(r))\) for \(r\) even, and \(s(b((r^{\prime}-1)/2))=s(b(r^{\prime}))-1\) for \(r^{\prime}\) odd. Further, from the given relation between \(\mathbf{u}\) and \(\mathbf{p}\) we have for
* \(r\) even: \((\mathcal{D}_{2^{k}-1}\mathbf{u})_{:,r}=\frac{1}{\sqrt{{}^{k}C_{s(b(r))}}} \mathbf{D}_{x}\mathbf{p}_{s(b(r))}\)
* \(r\) odd: \((\mathcal{D}_{2^{k}-1}\mathbf{u})_{:,r}=\frac{1}{\sqrt{{}^{k}C_{s(b(r))-1}}} \mathbf{D}_{y}\mathbf{p}_{s(b(r))-1}\)
As \(\Pi^{(k+1)}\) acts column-wise, averaging the columns with the same value of \(s(b(\cdot))\), we compute the \(J^{\text{th}}\) column of \(\Pi^{(k+1)}\mathcal{D}_{2^{k}-1}\mathbf{u}\), namely \((\Pi^{(k+1)}\mathcal{D}_{2^{k}-1}\mathbf{u})_{:,J}\), as:
\[(\Pi^{(k+1)}\mathcal{D}_{2^{k}-1}\mathbf{u})_{:,J} =\frac{1}{{}^{k+1}C_{j}}\sum_{\{\,r\mid s(b(r))=j\,\}}(\mathcal{D}_{2^{k}-1}\mathbf{u})_{:,r}\qquad(\text{let }j=s(b(J))) \tag{39}\] \[=\frac{1}{{}^{k+1}C_{j}}\Big{[}\sum_{\{\,r\text{ even}\mid s(b(r))=j\,\}}\frac{\mathbf{D}_{x}\mathbf{p}_{s(b(r))}}{\sqrt{{}^{k}C_{s(b(r))}}}+\sum_{\{\,r\text{ odd}\mid s(b(r))=j\,\}}\frac{\mathbf{D}_{y}\mathbf{p}_{s(b(r))-1}}{\sqrt{{}^{k}C_{s(b(r))-1}}}\Big{]}\] (40) \[=\frac{1}{{}^{k+1}C_{j}}\Big{[}\frac{\mathbf{D}_{x}\mathbf{p}_{j}}{\sqrt{{}^{k}C_{j}}}\sum_{\{\,r\text{ even}\mid s(b(r))=j\,\}}1+\frac{\mathbf{D}_{y}\mathbf{p}_{j-1}}{\sqrt{{}^{k}C_{j-1}}}\sum_{\{\,r\text{ odd}\mid s(b(r))=j\,\}}1\Big{]} \tag{41}\]
Now, summation of \(1\) over any set is the same as the number of elements in that set. Let \(Crd(S)\) denote the number of elements in \(S\). First we compute \(Crd(\{\,r\in[2^{k+1}]\mid\text{$r$ odd},\,s(b(r))=j\,\})=Crd(\{\,2l+1\in[2^{k+1}]\mid l\in[2^{k}],s(b(2l+1))=j\,\})\). Using \(s(b(2l+1))=s(b(l))+1\) we get \(Crd(\{\,2l+1\in[2^{k+1}]\mid l\in[2^{k}],s(b(2l+1))=j\,\})=Crd(\{\,l\in[2^{k}]\mid s(b(l))=j-1\,\})\). Now, \(Crd(\{\,l\in[2^{k}]\mid s(b(l))=j-1\,\})\) is the number of non-negative integers less than \(2^{k}\) which have \(j-1\) ones in their binary code. Therefore, \(Crd(\{\,l\in[2^{k}]\mid s(b(l))=j-1\,\})={}^{k}C_{j-1}\). Similarly, \(Crd(\{\,r\text{ even}\mid s(b(r))=j\,\})={}^{k}C_{j}\). Plugging these in, we get \((\Pi^{(k+1)}\mathcal{D}_{2^{k}-1}\mathbf{u})_{:,J}=\frac{1}{\sqrt{{}^{k+1}C_{j}}}\Big{[}\frac{\sqrt{{}^{k}C_{j}}}{\sqrt{{}^{k+1}C_{j}}}\mathbf{D}_{x}\mathbf{p}_{j}+\frac{\sqrt{{}^{k}C_{j-1}}}{\sqrt{{}^{k+1}C_{j}}}\mathbf{D}_{y}\mathbf{p}_{j-1}\Big{]}\). Now, \((\Pi^{(k+1)}\mathcal{D}_{2^{k}-1}\mathbf{u}-\mathbf{v})_{:,J}=\frac{1}{\sqrt{{}^{k+1}C_{j}}}\Big{[}\frac{\sqrt{{}^{k}C_{j}}}{\sqrt{{}^{k+1}C_{j}}}\mathbf{D}_{x}\mathbf{p}_{j}+\frac{\sqrt{{}^{k}C_{j-1}}}{\sqrt{{}^{k+1}C_{j}}}\mathbf{D}_{y}\mathbf{p}_{j-1}-\mathbf{q}_{j}\Big{]}\). Comparing with the expression of \(\mathcal{A}_{k}\mathcal{D}_{k}\mathbf{p}-\mathbf{q}\) and using the fact that there are \({}^{k+1}C_{j}\) columns with \(s(b(\cdot))=j\), we get the result.
**Remark 2.1**.: (Direct and Compact expressions of TGV are equivalent) It can be noted that the proof of the compact form is derived from the original TGV definition. But one can also arrive at the compact form starting from the direct form with the help of the following statement. For any \(\mathbf{g}\in\mathbb{R}^{N}\), \(\inf\{\,\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\Pi^{(i+1)}\mathcal{D}_{2^{i}-1}\mathbf{u}_{i}-\mathbf{u}_{i+1}\|_{1,2}\mid\mathbf{u}_{n}=\mathbf{0},\mathbf{u}_{0}=\mathbf{g},\mathbf{u}_{i}\in\mathcal{R}(\Pi^{(i)})\) for \(i=1,...,n-1\,\}=\inf\{\,\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\mathcal{A}_{i}\mathcal{D}_{i}\mathbf{p}_{i}-\mathbf{p}_{i+1}\|_{1,2}\mid\mathbf{p}_{n}=\mathbf{0},\mathbf{p}_{0}=\mathbf{g},\mathbf{p}_{i}\in\mathbb{R}^{N\times(i+1)}\text{ for }i=1,...,n-1\,\}.\)
Proof.: Denote \(S_{D}(\mathbf{g})=\{\,\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\Pi^{(i+1)}\mathcal{D}_{ 2^{i}-1}\mathbf{u}_{i}-\mathbf{u}_{i+1}\|_{1,2}\mid\mathbf{u}_{n}=\mathbf{0}, \mathbf{u}_{0}=\mathbf{g},\mathbf{u}_{i}\in\mathcal{R}(\Pi^{(i)})\,\}\), and \(S_{C}(\mathbf{g})=\{\,\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\mathcal{A}_{i}\mathcal{ D}_{i}\mathbf{p}_{i}-\mathbf{p}_{i+1}\|_{1,2}\mid\mathbf{p}_{n}=\mathbf{0}, \mathbf{p}_{0}=\mathbf{g},\mathbf{p}_{i}\in\mathbb{R}^{N\times(i+1)}\,\}.\) We prove the above result by showing that \(S_{D}(\mathbf{g})=S_{C}(\mathbf{g}).\) First we show that \(S_{C}(\mathbf{g})\subseteq S_{D}(\mathbf{g}).\) Consider any element \(e_{C}\) of \(S_{C}(\mathbf{g}).\) Then
\[e_{C}=\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\mathcal{A}_{i}\mathcal{D}_{i}\mathbf{p} _{i}-\mathbf{p}_{i+1}\|_{1,2}\text{ where }\mathbf{p}_{n}=\mathbf{0},\mathbf{p}_{0}= \mathbf{g},\mathbf{p}_{i}\in\mathbb{R}^{N\times(i+1)}\text{ for }i=1,...,n-1.\]
Choose \((\mathbf{u}_{i})_{:,j}=\frac{(\mathbf{p}_{i})_{:,s(b(j))}}{\sqrt{{}^{i}C_{s(b(j))}}}\) for all \(i\in[n+1]\) and all \(j\in[2^{i}]\). Then, by Lemma 2 we have that \(e_{C}=\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\Pi^{(i+1)}\mathcal{D}_{2^{i}-1}\mathbf{u}_{i}-\mathbf{u}_{i+1}\|_{1,2}\). Therefore, \(e_{C}\in S_{D}(\mathbf{g})\). Now we show \(S_{D}(\mathbf{g})\subseteq S_{C}(\mathbf{g})\). Consider any element \(e_{D}\in S_{D}(\mathbf{g})\). Then \(e_{D}=\sum_{i=0}^{n-1}\alpha_{n-i-1}\|\Pi^{(i+1)}\mathcal{D}_{2^{i}-1}\mathbf{u}_{i}-\mathbf{u}_{i+1}\|_{1,2}\), where \(\mathbf{u}_{n}=\mathbf{0}\), \(\mathbf{u}_{0}=\mathbf{g}\), and \(\mathbf{u}_{i}\in\mathcal{R}(\Pi^{(i)})\) for \(i=1,...,n-1\). As \(\mathbf{u}_{i}\in\mathcal{R}(\Pi^{(i)})\) for all \(i\in[n]\), we have \((\mathbf{u}_{i})_{:,m}=(\mathbf{u}_{i})_{:,o}\) if \(s(b(m))=s(b(o))\). Therefore, there exists \(\mathbf{q}_{i}\) such that \((\mathbf{u}_{i})_{:,j}=\frac{(\mathbf{q}_{i})_{:,s(b(j))}}{\sqrt{{}^{i}C_{s(b(j))}}}\) for all \(i\in[n+1]\) and all \(j\in[2^{i}]\). Therefore, by Lemma 2, \(e_{D}\) is also in \(S_{C}(\mathbf{g})\).
|
2301.00072 | LeaFTL: A Learning-Based Flash Translation Layer for Solid-State Drives | In modern solid-state drives (SSDs), the indexing of flash pages is a
critical component in their storage controllers. It not only affects the data
access performance, but also determines the efficiency of the precious
in-device DRAM resource. A variety of address mapping schemes and optimization
techniques have been proposed. However, most of them were developed with
human-driven heuristics. They cannot automatically capture diverse data access
patterns at runtime in SSD controllers, which leaves a large room for
improvement. In this paper, we present a learning-based flash translation layer
(FTL), named LeaFTL, which learns the address mapping to tolerate dynamic data
access patterns via linear regression at runtime. By grouping a large set of
mapping entries into a learned segment, it significantly reduces the memory
footprint of the address mapping table, which further benefits the data caching
in SSD controllers. LeaFTL also employs various optimization techniques,
including out-of-band metadata verification to tolerate mispredictions,
optimized flash allocation, and dynamic compaction of learned index segments.
We implement LeaFTL with an SSD simulator and evaluate it with various storage
workloads. LeaFTL saves the memory consumption of the mapping table by 2.9x on
average and improves the storage performance by 1.4x on average, in comparison
with state-of-the-art FTL schemes. | Jinghan Sun, Shaobo Li, Yunxin Sun, Chao Sun, Dejan Vucinic, Jian Huang | 2022-12-30T23:37:39Z | http://arxiv.org/abs/2301.00072v1 | # LeaFTL: A Learning-based Flash Translation Layer
###### Abstract.
In modern solid-state drives (SSDs), the indexing of flash pages is a critical component in their storage controllers. It not only affects the data access performance, but also determines the efficiency of the precious in-device DRAM resource. A variety of address mapping schemes and optimizations have been proposed. However, most of them were developed with human-driven heuristics.
In this paper, we present a learning-based flash translation layer (FTL), named LeaFTL, which learns the address mapping to tolerate dynamic data access patterns via linear regression at runtime. By grouping a large set of mapping entries into a learned segment, it significantly reduces the memory footprint of the address mapping table, which further benefits the data caching in SSD controllers. LeaFTL also employs various optimization techniques, including out-of-band metadata verification to tolerate mispredictions, optimized flash allocation, and dynamic compaction of learned index segments. We implement LeaFTL with both a validated SSD simulator and a real open-channel SSD board. Our evaluation with various storage workloads demonstrates that LeaFTL saves the memory consumption of the mapping table by 2.9\(\times\) and improves the storage performance by 1.4\(\times\) on average, in comparison with state-of-the-art FTL schemes.
Learning-Based Storage, Flash Translation Layer, Solid-State Drive
only 8 bytes (1 byte for \(S\) and \(L\), 2 bytes for \(K\), and 4 bytes for \(I\)) with our optimizations (see the details in §3). Compared to the on-demand page-level mapping [20], the learned segment reduces the mapping table size by a factor of \(m*avg(L)/8\), where \(m\) is the size (8 bytes) of each entry in the on-demand page-level mapping table, and \(avg(L)\) is the average number of LPA-PPA mappings that can be represented in a learned index segment; \(avg(L)\) is 20.3 according to our study of various storage workloads.
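To make the segment abstraction concrete, below is a minimal sketch (our own illustration of the \((S,L,K,I)\) encoding described above, not the paper's exact data layout; Python objects are of course much wider than the 8-byte in-DRAM encoding):

```python
from dataclasses import dataclass

@dataclass
class LearnedSegment:
    S: int      # starting LPA of the covered interval [S, S+L]
    L: int      # length of the covered interval
    K: float    # slope of the learned linear mapping
    I: float    # intercept of the learned linear mapping

    def covers(self, lpa: int) -> bool:
        return self.S <= lpa <= self.S + self.L

    def predict_ppa(self, lpa: int) -> int:
        # Approximate PPA; may deviate from the true PPA within the error bound.
        return round(self.K * lpa + self.I)

# One segment standing in for ~20 page-level entries of 8 bytes each:
seg = LearnedSegment(S=1000, L=19, K=1.0, I=7000.0)
assert seg.covers(1010) and seg.predict_ppa(1010) == 8010
print("compression factor:", 8 * (seg.L + 1) / 8)   # m * avg(L) / 8 in the text's notation
```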
Beyond learning contiguous LPA-PPA mappings, LeaFTL also learns different correlation patterns, such as regular and irregular strided data accesses (see Figure 1). Unlike existing indexing optimizations based on human-driven heuristics, LeaFTL can learn more irregular patterns of LPA-PPA mappings with a guaranteed error bound, as also illustrated in Figure 1. This enables LeaFTL to further condense the address mapping table. Therefore, given a limited DRAM capacity in the SSD controller, LeaFTL can maximally utilize the DRAM for caching and improve the storage performance. For the worst case, such as random I/O accesses, LeaFTL will turn the mappings into single-point linear segments (\(L=0\), \(K=0\), and \(I=PPA\) in Figure 1), and its memory consumption will be no more than that of the page-level mapping.
With the learned index segments, LeaFTL may occasionally return an inaccurate PPA (i.e., address misprediction), which incurs additional flash accesses until the correct PPA is identified. To overcome this challenge, we develop an error-tolerant mechanism in LeaFTL. For each flash page access, we use the reverse mapping stored in the out-of-band (OOB) metadata of each flash page to verify the correctness of the data access. Since the OOB usually has 64-256 bytes [20, 23], we use it to store the accurate LPAs mapped to the neighbor PPAs. Thus, upon an address misprediction, we use the stored reverse mappings to find the correct PPA, avoiding additional flash accesses. LeaFTL leverages the intrinsic OOB structure to handle address mispredictions and makes SSDs well-suited for practical learned indexing.
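A simplified sketch of this error-tolerant lookup path (our own illustration: the callable `read_flash_page`, the dict-shaped OOB contents, and the fixed search window are assumptions, not the paper's exact OOB layout):

```python
def translate(lpa, segment, read_flash_page, error_bound=4):
    """Resolve `lpa` through a learned segment, tolerating mispredictions.

    `read_flash_page(ppa)` is assumed to return (data, oob), where `oob` maps
    nearby PPAs to the accurate LPAs recorded in the out-of-band metadata.
    """
    guess = segment.predict_ppa(lpa)
    data, oob = read_flash_page(guess)
    if oob.get(guess) == lpa:                # prediction was exact
        return guess, data
    # Misprediction: the true PPA lies within the guaranteed error bound, and
    # its reverse mapping is already in the OOB we just read, so the correct
    # page can be fetched with a single additional, targeted flash read.
    for ppa in range(guess - error_bound, guess + error_bound + 1):
        if oob.get(ppa) == lpa:
            return ppa, read_flash_page(ppa)[0]
    raise KeyError(f"LPA {lpa} not found near predicted PPA {guess}")
```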
Due to the intrinsic out-of-place write property of SSDs (see §2), the learned index segments will be disrupted by writes and GC, and the segments need to be relearned with new LPA-PPA mappings. To tolerate these disruptions, the learned segments are organized within multiple levels to maintain the temporal order in a log-structured manner: the topmost level has the most recent segments, and lower levels store older segments. The segments at the same level are sorted without overlapping. If a new segment conflicts with an existing segment, the old segment will be moved to a lower level. Therefore, LeaFTL can always identify the latest version of the corresponding LPA-PPA mapping in the topmost level that contains it. LeaFTL will compact the learned segments periodically to reduce its memory footprint.
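The level-ordered lookup described above can be sketched as follows (reusing the `LearnedSegment` class from the earlier snippet; the per-level sorted-list-plus-binary-search layout is our own simplification of whatever search structure the FTL maintains):

```python
from bisect import bisect_right

def find_segment(levels, lpa):
    """Return the newest learned segment covering `lpa`, or None.

    `levels[0]` holds the most recent segments; each level is a list of
    LearnedSegment objects sorted by S and mutually non-overlapping, so one
    binary search per level suffices and the first hit is the latest mapping.
    """
    for level in levels:
        starts = [seg.S for seg in level]
        pos = bisect_right(starts, lpa) - 1
        if pos >= 0 and level[pos].covers(lpa):
            return level[pos]
    return None
```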
To further maximize the efficiency of LeaFTL, we coordinate its learning procedure with flash block allocation in the SSD. As flash block allocation decides the distribution of mapped PPAs, LeaFTL will allocate consecutive PPAs to contiguous LPAs at its best effort to increase the possibility of learning a space-efficient index segment. Similar to existing page-level mapping [20, 23], LeaFTL stores the learned index segments in flash blocks for recovery. Overall, we make the following contributions:
* We present a learning-based FTL that can learn various data access patterns and turn them into index segments, reducing the storage cost of the mapping table.
* We develop an error-tolerant address translation mechanism to handle address mispredictions caused by the learned indexes, with minimal extra flash accesses.
* We preserve the core FTL functions, and enable coordination of the learning procedure for the address mapping table with flash block allocation and GC to maximize the efficiency of the learned FTL.
* We manage the learned segments in an optimized log-structured manner, and enable compaction to further improve the space efficiency for the address mapping.
We implement LeaFTL with a validated SSD simulator Wisc-Sim [27] and evaluate its efficiency with a variety of popular storage workloads. We also develop a system prototype with a real 1TB open-channel SSD to verify the functions of LeaFTL and validate its efficiency with real data-intensive applications, such as the key-value store and transactional database. Our evaluation with the real SSD shows similar benefits as that of the SSD simulator implementation. We demonstrate that LeaFTL reduces the storage cost of the address mapping in the FTL by 2.9\(\times\) on average. The saved memory space benefits the utilization of the precious SSD DRAM, and further improves the storage performance by 1.4\(\times\) on average. We also show that LeaFTL does not affect the SSD lifetime, and its
Figure 1. An illustrative example of learning LPA-PPA mappings using piecewise linear regression in LeaFTL. It can learn various patterns of LPA-PPA mappings with guaranteed error bound. Each learned index segment can be represented with \((S,L,K,I)\), where \([S,S+L]\) denotes the interval of LPAs, \(K\) is the slope, and \(I\) is the intercept of the index segment.
learning procedure introduces negligible performance overhead to the storage processor in the SSD controllers. The codebase of LeaFTL is available at [https://github.com/platformxlab/LeaFTL](https://github.com/platformxlab/LeaFTL).
## 2. Background and Motivation
**Flash-Based Solid-State Drive.** An SSD has three major parts (see Figure 2): a set of flash memory packages, an SSD controller with embedded processors, and a set of flash controllers. Due to the nature of NAND flash, once a free page is written, it cannot be written again until it is erased; however, the erase operation is performed only at a block granularity. As the erase operation is expensive, writes are issued to free flash pages erased in advance (i.e., out-of-place write). GC is performed to clean the stale data. As each flash block has limited endurance, it is important for the blocks to age uniformly (i.e., wear leveling). SSDs have a logical-to-physical address mapping table to index flash pages. All these functions are managed by the FTL in the SSD firmware.
Modern SSD controllers have general-purpose embedded processors (e.g., ARM processors). The processors help with issuing I/O requests, translating LPAs to PPAs, and handling GC and wear-leveling. SSDs also have limited DRAM capacities to cache the mapping table and the application data.
**Address Mapping Table in the FTL.** The address mapping table in the FTL generally has three types: page-level mapping, block-level mapping, and hybrid mapping. The page-level mapping enables direct LPA-PPA mapping for fast lookup. However, each entry usually takes 8 bytes (4 bytes for the LPA, 4 bytes for the PPA), and the entire mapping table requires large storage space. The block-level mapping significantly reduces the mapping table size; however, it introduces additional overhead for the page lookup in the flash block. The hybrid mapping takes advantage of both page-level and block-level mapping. It uses log blocks to store new writes, and indexes them with the page-level mapping. The log blocks will be moved into data blocks that are indexed with block-level mapping, which incurs significant GC overhead. Therefore, modern SSDs commonly use the page-level mapping scheme.
**Metadata Structures for Flash Management.** The FTL usually employs four metadata structures (see Figure 3): (1) the address mapping cache (AMC) for caching the address mapping table in the SSD DRAM; (2) the global mapping directory (GMD) for tracking the locations of the address mapping table pages in the SSD; (3) the block validity counter (BVC) for tracking the number of valid pages in each flash block to assist the GC in the SSD; and (4) the page validity table (PVT), which uses bitmaps to track the valid pages in each flash block. During the GC, the FTL will check the BVC to select candidate flash blocks, and migrate their valid pages to free flash blocks. After that, it will erase these selected flash blocks, and mark them as free blocks.
**Limited DRAM Capacity in SSD Controllers.** It is hard to provision large DRAM inside SSD controllers, due to their hardware constraints and limited budgets for power and hardware cost [12, 41, 60]. Thus, SSD controllers often use on-demand caching to maintain the recently accessed metadata and data in the SSD DRAM.
Among all the metadata structures, the address mapping table has the largest memory footprint. As discussed, the AMC caches the recently accessed mapping table entries. If a mapping entry is not cached, the FTL will locate the corresponding address mapping table pages stored in the flash blocks, and place the mapping entry in the AMC. As we scale the SSD capacity, the DRAM challenge will become even worse. To overcome this challenge, various optimizations on the mapping table have been proposed [9, 25, 29, 31, 38, 39] to improve the utilization of the SSD DRAM. However, most of them cannot automatically capture diverse data access patterns at runtime, leaving a large room for improvement.
## 3. Design and Implementation
To develop LeaFTL in the SSD controller, we have to overcome the following research challenges.
* LeaFTL should be able to automatically capture diverse data access patterns, and generate memory-efficient address mapping (SS3.1, SS3.2, SS3.3, and SS3.4).
* LeaFTL may incur address mispredictions, which could incur additional flash accesses. LeaFTL should be tolerant of errors and have low misprediction penalty (SS3.5).
* LeaFTL should work coordinately with other core FTL functions that include GC and wear leveling (SS3.6).
* LeaFTL should be lightweight and not incur much extra overhead to storage operations (SS3.7, SS3.8 and SS3.9).
Figure 3. The common data structures in the FTL of SSDs.
Figure 2. The internal system architecture of SSDs.
### Key Ideas of LeaFTL
Instead of using the space-consuming one-to-one mapping in the page-level mapping, the key idea of LeaFTL is to exploit learning techniques to identify various LPA-PPA mapping patterns and build efficient learned address mapping entries. Modern SSD controllers usually have a data buffer for grouping writes and write the large data chunk at once for exploiting the internal flash parallelisms. LeaFTL utilizes this data buffer to collect LPA-to-PPA mappings for learning index segments for free, and does not introduce extra data collection overhead (see the details in SS3.3).
As shown in Figure 4 (a), the PPA of an LPA can be obtained with the expression: \(PPA=f(LPA)=\lceil K*LPA+I\rceil\), \(LPA\in[S_{LPA},S_{LPA}+L]\), where \([S_{LPA},S_{LPA}+L]\) denotes the interval (\(L\)) of LPAs, \(K\) is the slope, and \(I\) is the intercept. As discussed in SS1, each learned index segment can be represented in 8 bytes: 1 byte for \(S_{LPA}\) and \(L\), respectively; 2 bytes for \(K\); and 4 bytes for \(I\). The size of \(S_{LPA}\) is reduced from 4 bytes to 1 byte with our optimizations on the segment management (see SS3.4).
We can relax the linear regression to capture more flash access patterns, which further reduces the learned address mapping table size. As shown in Figure 4 (b), the linear regression can learn a pattern with a guaranteed error bound \([-\gamma,\gamma]\). As we increase \(\gamma\), we can cover more flash access patterns. We applied the relaxed linear regression with different \(\gamma\) values to a variety of storage workloads (see SS4.1); our experimental results demonstrate that the number of learned index segments gradually decreases as we increase \(\gamma\). Figure 5 shows that 98.2\(-\)99.2% of the learned index segments cover up to 128 LPA-PPA mapping entries, demonstrating the potential advantages of the learning-based approach.
As for random access patterns, LeaFTL turns the learned segments into single-point segments, and these segments do not require more storage space than the page-level mapping.
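To make the learning step concrete, the following Python sketch fits sorted (LPA, PPA) pairs into segments with a guaranteed error bound \(\gamma\) using a greedy slope-interval method; the function name and structure are illustrative, not the exact routine used in the LeaFTL prototype.

```python
def learn_segments(mappings, gamma=4):
    """Greedily fit (LPA, PPA) pairs, sorted by unique LPAs, into segments
    (S, L, K, I) such that |K * lpa + I - ppa| <= gamma for every covered
    mapping."""
    segments, i, n = [], 0, len(mappings)
    while i < n:
        s_lpa, s_ppa = mappings[i]
        lo, hi = float("-inf"), float("inf")     # feasible slope interval
        j = i + 1
        while j < n:
            lpa, ppa = mappings[j]
            dx = lpa - s_lpa
            new_lo = max(lo, (ppa - s_ppa - gamma) / dx)
            new_hi = min(hi, (ppa - s_ppa + gamma) / dx)
            if new_lo > new_hi:                  # this point breaks the bound
                break
            lo, hi = new_lo, new_hi
            j += 1
        if j == i + 1:                           # lone point: single-point segment
            k, intercept = 0.0, float(s_ppa)
        else:
            k = (lo + hi) / 2                    # any slope in [lo, hi] works
            intercept = s_ppa - k * s_lpa
        segments.append((s_lpa, mappings[j - 1][0] - s_lpa, k, intercept))
        i = j
    return segments

# A regular strided pattern collapses into one segment.
pairs = [(lpa, 100 + lpa // 2) for lpa in range(0, 32, 2)]
print(learn_segments(pairs, gamma=0))            # [(0, 30, 0.5, 100.0)]
```

Any slope that survives in the interval keeps every covered mapping within \(\gamma\) of its true PPA, which is exactly the guarantee that the approximate segments rely on.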
### Learned Index Segment
**Types of Learned Index Segment.** The mapping table of LeaFTL is built with learned index segments. It has two types of segments: accurate and approximate segments, as shown in Figure 6. Both of them are learned with piecewise linear regression technique (Zhu et al., 2017).
As for the accurate index segments, given an LPA, we can precisely get the corresponding PPA with \(f(LPA)=\lceil K*LPA+I\rceil\). For example, when the LPA is 2 in Figure 6, we can directly get the PPA value of 34 with \(\lceil 1.00*2+32\rceil\). In this example, the learned segment has \(L=3\) and it indexes 4 LPA-PPA mappings. If \(L=0\), the learned segment becomes a single-point segment with slope \(K=0\), and we get its PPA with \(PPA=I\).
As for approximate index segments, we use the same formula \(f(LPA)=\lceil K*LPA+I\rceil\) to calculate the PPA. However, the returned PPA may not be the exact corresponding PPA. It has an error bound \([-\gamma,\gamma]\) guaranteed by the linear regression, and \(\gamma\) is configurable. For example, given \(LPA=4\) in Figure 6, the value of the PPA is 67, according to the calculation \(\lceil 4*0.56+64\rceil\). However, the real PPA should be 66. We define this as _address misprediction_. We will discuss how we handle the address misprediction with reduced miss penalty in SS3.5.
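The translation itself is a single multiply-add followed by a ceiling; the minimal sketch below reproduces the two Figure 6 lookups (the accurate segment with \(K=1.00, I=32\) and the approximate segment with \(K=0.56, I=64\)).

```python
import math

def translate(lpa, k, intercept):
    """PPA = ceil(K * LPA + I): exact for accurate segments, within
    [-gamma, +gamma] of the true PPA for approximate segments."""
    return math.ceil(k * lpa + intercept)

print(translate(2, 1.00, 32))   # accurate segment: 34, the true PPA
print(translate(4, 0.56, 64))   # approximate segment: 67, while the true PPA is 66
```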
**Size of Learned Index Segment.** As discussed in SS3.1, each segment can be expressed in \((S_{LPA},L,K,I)\). The starting LPA will take 4 bytes. We can further reduce this size by partitioning a range of LPAs into small groups, and each LPA group represents a certain number of contiguous LPAs. Therefore, we can index an LPA with its offset in a corresponding group. In LeaFTL, each group represents 256 contiguous LPAs. Thus, \(S_{LPA}\) can be indexed by the offset (\(2^{8}=256\)) in the group, which takes only 1 byte. We use 256 as the group size, because the length of the learned segments is usually less than 256 (see Figure 5).
Given an LPA, we can get its offset in the group with (\(LPA\ mod\) 256). In LeaFTL, we set the \(L\) as 1 byte. Thus, each segment can index 256 LPA-PPA mappings. We use a 16-bit floating point to store the value of the slope \(K\). And the intercept \(I\) of a segment can be represented in 4 bytes. Therefore, in combination with \(S_{LPA}\), both accurate and approximate segments can be encoded with 8 bytes (see Figure 6), which are memory aligned.
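A minimal sketch of this 8-byte encoding is shown below; the exact on-device byte order and field layout are not specified in the text, so the packing format here is an illustrative assumption.

```python
import struct

GROUP_SIZE = 256   # each LPA group covers 256 contiguous LPAs

def encode_segment(s_lpa, length, k, intercept):
    """Pack a segment into 8 bytes: in-group start offset (1B), L (1B),
    half-precision slope K (2B), intercept I (4B)."""
    return struct.pack("<BBef", s_lpa % GROUP_SIZE, length, k, intercept)

def decode_segment(blob):
    return struct.unpack("<BBef", blob)

blob = encode_segment(s_lpa=770, length=30, k=0.5, intercept=100.0)
print(len(blob), decode_segment(blob))   # 8 (2, 30, 0.5, 100.0)
```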
Figure 4. Visualization of learned index segments.
Figure 5. Aggregated distribution of learned segments.
Figure 6. Types of learned segments in LeaFTL.
LeaFTL uses the least significant bit of the \(K\) to indicate segment types (0 for accurate segments, 1 for approximate segments). This has negligible impact on the address translation accuracy, because \(K\in[0,1]\), which will only affect the tenth digit after decimal point.
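The sketch below illustrates how such a type flag can be carried in the least significant bit of the stored 16-bit slope; it demonstrates the idea rather than the exact firmware encoding.

```python
import struct

def tag_slope(k, approximate):
    """Store the segment type in the least significant bit of the
    half-precision bit pattern of the slope K."""
    bits = struct.unpack("<H", struct.pack("<e", k))[0]
    return (bits & ~1) | (1 if approximate else 0)

def slope_and_type(bits):
    k = struct.unpack("<e", struct.pack("<H", bits))[0]
    return k, "approximate" if bits & 1 else "accurate"

bits = tag_slope(0.56, approximate=True)
print(slope_and_type(bits))   # the slope is perturbed only in its last ULP
```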
### Improve the Learning Efficiency
To further reduce the number of learned segments, LeaFTL performs optimizations to improve its learning efficiency of address mappings by exploiting the flash block allocation in SSD controllers, as shown in Figure 7. Flash pages are usually buffered in the SSD controller and written to flash chips at a flash block granularity, for utilizing the internal bandwidth and avoiding the open-block problem (Han et al., 2017; Wang et al., 2018; Wang et al., 2019; Wang et al., 2019). This allows LeaFTL to learn more space-efficient index segments (i.e., index segments can cover more LPA-PPA mappings) by reordering the flash pages with their LPAs in the data buffer. As shown in Figure 7 (a), LeaFTL learns 5 index segments (78), (32, 33), (76), (115), and (34, 38) with \(\gamma=4\). After sorting the pages in the data buffer shown in Figure 7 (b), LeaFTL generates 3 index segments (32, 33, 34, 38), (76, 78), and (115).
To develop the optimized learned segments, LeaFTL sorts the flash pages in ascending order of their LPAs in the data buffer (8MB by default). When the pages in the data buffer are flushed to the flash chips, their PPAs are in ascending order. This ensures a monotonic address mapping between LPAs and PPAs, which reduces the number of index segments.
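A small sketch of this reordering step is shown below, reusing the LPAs from the Figure 7 example; the starting PPA of the allocated block and the helper name are illustrative.

```python
def build_mappings(buffered_lpas, first_ppa):
    """Sort the write buffer by LPA and assign consecutive PPAs of the
    newly allocated flash block, so the resulting (LPA, PPA) pairs are
    monotonic and collapse into fewer learned segments."""
    return [(lpa, first_ppa + i) for i, lpa in enumerate(sorted(buffered_lpas))]

# LPAs buffered for one flash block (the Figure 7 example).
print(build_mappings([78, 32, 33, 76, 115, 34, 38], first_ppa=512))
```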
### Manage Learned Index Segments
Upon new data updates or GC in the SSD, the learned index segments need to be updated, due to the intrinsic property (i.e., out-of-place update) of SSDs. Unfortunately, direct updates to learned index segments are expensive, since we have to relearn the index segments with new PPAs. This relearning procedure not only consumes extra compute cycles, but also involves additional flash accesses, since we have to access the corresponding flash pages to obtain accurate PPAs for some of the LPAs in the index segment being updated. For instance, an in-place update to an approximate segment can incur 21 flash accesses on average during relearning. In-place updates also break the existing LPA-to-PPA mapping patterns, which results in 1.2\(\times\) additional segments and memory footprint, according to our experiments with various workloads.
To address this challenge, we manage the learned index segments in a log-structured manner, as shown in Figure 8. Therefore, the newly learned index segments will be appended to the log structure (level 0 in Figure 8) and used to index the updated LPA-PPA mappings, while the existing learned segments (level 1 and lower levels in Figure 8) can still serve address translations for LPAs whose mappings have not been updated. Such a structure supports concurrent lookups as enabled in the traditional log-structured merge tree. As we insert the newly learned index segments at the top level of the log-structured tree, this minimizes the impact on other segments.
**Log-Structured Mapping Table.** The log-structured mapping table has multiple levels to maintain the temporal order of index segments. As discussed, the topmost level has the most recent learned index segments, and the lower level stores the older segments. For the segments on the same level, LeaFTL ensures that they are sorted and do not have overlapped LPAs. This is for fast location of the corresponding learned index segments in each level. For the segments across the levels, they may have overlapped LPAs, due to the nature of the log-structured organization. And the segments with overlapped LPA-PPA mappings will be compacted periodically for space reclamation (see its detailed procedure in SS3.7).
**Manage Two Types of Index Segments.** LeaFTL manages the accurate and approximate index segments in the same log-structured mapping table, as they can be encoded in the same format. For each accurate segment, we can directly infer its indexed LPAs with the \(S_{LPA}\), \(K\), and \(L\), since it has a regular pattern. However, for approximate index segments, we only have the knowledge of the starting LPA and the end LPA with \(S_{LPA}+L\). Its encoded LPAs cannot be directly inferred from their metadata (\(S_{LPA},L,K,I\)), since they are learned from irregular access patterns and may have mispredictions.
If two approximate segments have overlapping LPA ranges, we could obtain inaccurate PPAs from the learned index segments. As shown in Figure 9 (a), given an LPA with the value 105, we will check the segment at Level 0 and may get an inaccurate PPA. This will also affect the efficiency of the segment compaction, with which we eliminate duplicated entries between segments.
To address this challenge, LeaFTL uses a Conflict Resolution Buffer (CRB) for each LPA group to store the LPAs indexed by each approximate segment. The main purpose of CRB is to help LeaFTL check whether a given LPA belongs to one approximate segment.
The CRB is a nearly-sorted list (Kang et al., 2017) by the starting LPAs of its approximate segments. To be specific, the CRB ensures the following
Figure 8. The learned index segments are managed in a log-structured manner in LeaFTL.
Figure 7. An example of reducing the number of learned segments via exploiting the flash block allocation.
properties: (1) the LPAs belonging to the same approximate segment are stored contiguously; (2) different approximate segments are sorted by their starting LPAs, and the CRB uses a _null_ byte to separate these segments; (3) it does not have redundant LPAs, which means an LPA appears at most once in the CRB. The last property is maintained by removing existing duplicates of an LPA when we insert new approximate segments into the CRB.
However, if the \(S_{LPA}\) of a new approximate segment is the same as any starting LPA already stored in the CRB, LeaFTL will update the \(S_{LPA}\) of the old segment to the adjacent LPA. Take Figure 9 (b) as an example: upon a new approximate segment with \(S_{LPA}=100\), we update the \(S_{LPA}\) of the existing segment to \(101\), and then insert the new segment into the CRB. In this way, LeaFTL ensures that each approximate segment has a unique \(S_{LPA}\), which facilitates approximate LPA-PPA address translation with high confidence.
Since CRB is nearly sorted, its insertion, deletion, and lookup operations are fast. The CRB is also space efficient, as each LPA (the offset in its corresponding LPA group) will take only one byte, and it guarantees that there are no redundant LPAs. Therefore, the CRB will maximally store \(256\) LPAs. Our experiments with a variety of storage workloads show that the CRB will take \(13.9\) bytes on average, as shown in Figure 10.
Given an LPA, in order to identify which approximate index segment it belongs to, LeaFTL will check the CRB with binary search. Once the LPA is found, LeaFTL will search to its left until identifying the \(S_{LPA}\), and this \(S_{LPA}\) will be the starting LPA of the corresponding approximate segment, as shown in Figure 9 (b). Therefore, CRB can assist LeaFTL to resolve the LPA lookups.
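The sketch below shows this resolution on a toy CRB whose contents mirror the Figure 13 example (LPA 78 is encoded by the approximate segment starting at 75); the exact CRB contents are illustrative, and the real structure is a byte array searched with binary search rather than a Python list.

```python
CRB_NULL = None   # separator entry between approximate segments

def resolve_start_lpa(crb, lpa):
    """Find which approximate segment owns `lpa`: locate it in the CRB and
    scan left to the first entry after the previous separator, which is the
    segment's starting LPA."""
    if lpa not in crb:
        return None                        # not indexed by an approximate segment
    pos = crb.index(lpa)
    while pos > 0 and crb[pos - 1] is not CRB_NULL:
        pos -= 1
    return crb[pos]

# Two approximate segments, starting at 72 and 75, separated by a null entry.
crb = [72, 73, 74, 80, CRB_NULL, 75, 78, 82]
print(resolve_start_lpa(crb, 78))          # -> 75, the owning segment's S_LPA
```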
### Handle Address Misprediction
As discussed in SS3.2, the mapping table entries encoded with approximate segments may occasionally incur mispredictions and return an approximated PPA. These approximate segments have a guaranteed error bound \([-\gamma,\gamma]\), where \(\gamma\) is a constant value that can be specified in the linear regression algorithm. To verify the correctness of the address translation, a simple method is to access the flash page with the predicted PPA, and use the reverse mapping (its corresponding LPA) stored in the OOB metadata of the flash page to check whether the LPA matches or not. In this case, upon a PPA misprediction, we need \(\log(\gamma)\) flash accesses on average to identify the correct PPA.
To avoid extra flash accesses for address mispredictions, LeaFTL leverages the OOB of the flash page to store the reverse mappings of its neighbor PPAs. This is developed based on the insight that, with a \(PPA_{learned}\) obtained from an approximate segment, its error bound \([-\gamma,\gamma]\) guarantees that the correct PPA is in the range of \([PPA_{learned}-\gamma,PPA_{learned}+\gamma]\), as discussed in Figure 4 (b). Thus, upon a misprediction, LeaFTL will read the flash page with \(PPA_{learned}\), and use its OOB to find the correct PPA. In this case, LeaFTL ensures that it will incur only one extra flash access for address mispredictions.
This is a feasible approach, as the OOB size is usually \(128\)-\(256\) bytes in modern SSDs. As each LPA takes \(4\) bytes, we can store \(32\)-\(64\) reverse mapping entries in the OOB. We show the OOB organization of LeaFTL in Figure 11. For the flash page \(PPA_{X}\), the first \(2\gamma+1\) entries in its OOB correspond to the LPAs of the flash pages \([PPA_{X}-\gamma,PPA_{X}+\gamma]\). For the flash pages at the beginning and end of a flash block, we may not be able to obtain the reverse mappings of their neighbor PPAs; we place _null_ bytes in the corresponding entries of the OOB.
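A minimal sketch of the misprediction correction is given below, reusing the Figure 6 case where LPA 4 is mispredicted to PPA 67 while its true PPA is 66; the other neighbor LPAs in the OOB are made up for illustration.

```python
def correct_ppa(predicted_ppa, wanted_lpa, oob_lpas, gamma):
    """oob_lpas holds the reverse mappings (LPAs) of the flash pages
    [predicted_ppa - gamma, predicted_ppa + gamma], read from the OOB of
    the page at predicted_ppa; the error bound guarantees the wanted LPA
    appears in this window, so one extra flash read recovers the true PPA."""
    offset = oob_lpas.index(wanted_lpa)
    return predicted_ppa - gamma + offset

# LPA 4 is mispredicted to PPA 67; its OOB covers the LPAs of PPAs 63..71.
oob = [2, 9, 5, 4, 11, 7, 13, 1, 6]    # 2*gamma+1 neighbor LPAs (illustrative)
print(correct_ppa(67, wanted_lpa=4, oob_lpas=oob, gamma=4))   # -> 66, the true PPA
```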
### Preserve Other Core FTL Functions
LeaFTL preserves the core functions such as GC and wear leveling in an FTL. It follows the same GC and wear leveling policies in modern SSDs. When the number of free blocks in an SSD is below a threshold (usually \(15\)-\(40\%\) of the total flash blocks), the SSD controller will trigger the GC execution. LeaFTL employs the greedy algorithm (Beng et al., 2017) to select the candidate blocks which have the minimal
Figure 11. The out-of-band (OOB) metadata organization. It stores the reverse mapping for its neighbor PPAs.
Figure 10. The distribution of CRB sizes for different storage workloads, when we set \(\gamma=4\) in LeaFTL.
Figure 9. A case study of conflict resolution buffer for approximate learned index segments.
number of valid pages, for reducing the data movement overhead of GC. As GC moves the valid pages from the candidate blocks to free blocks, LeaFTL places these valid pages into the DRAM buffer, sorts them by their LPAs, and learns new index segments. The learning procedure is the same as when we build index segments for new flash writes/updates. Thus, the address mapping of the valid pages is updated after the GC.
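The greedy victim selection mentioned above amounts to picking the block with the fewest valid pages; a minimal sketch, assuming the per-block counts come from the BVC, is shown below.

```python
def gc_select_victim(block_validity_counter):
    """Greedy GC victim selection: pick the flash block with the fewest
    valid pages to minimize the data movement during GC."""
    return min(block_validity_counter, key=block_validity_counter.get)

print(gc_select_victim({"blk0": 200, "blk1": 12, "blk2": 97}))   # -> "blk1"
```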
LeaFTL also ensures all the flash blocks age at the same rate (i.e., wear leveling). It uses the throttling and swapping mechanism developed in existing GC, in which the cold data blocks (i.e., blocks not frequently accessed) will be migrated to hot blocks (i.e., blocks that experience more wear). LeaFTL will learn new indexes for these swapped blocks and insert them into the mapping table to update their address mappings.
### LeaFTL Operations
Now we describe the LeaFTL operations, including segment creation, insert/update, LPA lookup, and compaction. We discuss their procedures, and use examples to illustrate each of them, respectively. We present their detailed procedures in Algorithm 1 and 2.
```
Input: groups ← LeaFTL group partitions

// Insert/Update a segment in LeaFTL
 1  Function seg_update(segment, level):
 2      seg_pos = binary_search(level, segment.SLPA)
 3      level.insert(segment, seg_pos)
 4      if not segment.accurate then
 5          Insert the segment's LPAs into the CRB and remove redundant LPAs
 6          if segment.SLPA exists in the CRB then
 7              Update the SLPA of the old segment
 8      victim_segments ← all segments that overlap with segment, starting from seg_pos
 9      foreach victim in victim_segments do
10          seg_merge(segment, victim)
11          if victim.L == -1 then            // marked as removable by seg_merge()
12              level.remove(victim)
13          else if victim still overlaps with segment then
14              Pop victim to the next level
15              if victim has overlaps in the next level then
16                  Create a new level for victim to avoid recursion

// Lookup an LPA in LeaFTL
17  Function lookup(lpa):
18      foreach level in groups[lpa mod 256] do
19          seg_pos = binary_search(level, lpa)
20          segment = level.get_segment(seg_pos)
21          if has_lpa(segment, lpa) then
22              return segment.translate_PPA(lpa)

// Compact the learned segments in LeaFTL
23  Function seg_compact():
24      foreach group in groups do
25          foreach (upper_level, lower_level) in group do
26              foreach segment in upper_level do
27                  seg_update(segment, lower_level)
28              if upper_level is empty then
29                  group.remove(upper_level)
```
**ALGORITHM 1** LeaFTL operations
**ALGORITHM 2** Helper procedures has_lpa() and seg_merge(), which check whether a segment indexes a given LPA and merge a new segment with an overlapping victim segment, respectively (invoked by Algorithm 1).
When merging a new segment with a victim segment, if the segment is an approximate segment, LeaFTL will leverage the \(S_{LPA}\), \(L\), and the LPAs stored in the CRB to reconstruct the encoded LPAs. Afterwards, LeaFTL will conduct a comparison between the bitmaps of the two segments to identify the overlapped LPAs (line 15-19 in Algorithm 2).
During the segment merge, LeaFTL updates the \(S_{LPA}\) and \(L\) of the old and new segments accordingly, and removes the outdated LPAs from the CRB for approximate segments. Note that we do not update the \(K\) and \(I\) of the victim segments during the merge.
After the merge, (1) if the victim segment does not contain any valid LPA (\(L\) is negative), it will be removed from the mapping table (line 11-12 in Algorithm 1). (2) If the victim segment has valid LPAs but their range still overlaps with the new segment, the victim segment will be moved to the next level in the log-structured mapping table (line 13-16 in Algorithm 1). To avoid recursive updates across the levels, we create a new level for the victim segment if it also overlaps with segments in the next level. According to our study of diverse workloads, this will not create many levels in the mapping table (see Figure 12). (3) If the victim segment has valid LPAs and they do not overlap with the new segment, we do not need to perform further operations. This is because the victim segment is updated with new \(S_{LPA}\) and \(L\) during segment merge (line 20-25 in Algorithm 2), and the new segment insertion keeps each level sorted (line 3 in Algorithm 1).
To facilitate our discussion, we present a few examples in Figure 13. At the initial stage, the mapping table has one segment that indexes the LPA range [0, 63]. At \(T_{1}\), the new segment [200, 255] is directly inserted into the topmost level, as it does not overlap with existing segments. At \(T_{2}\), we insert a new segment [16, 31] that overlaps with the old segment [0, 63], so LeaFTL conducts the segment merge procedure. After that, the old segment still has valid LPAs; thus, it moves to level 1. At \(T_{3}\) and \(T_{4}\), we insert two approximate segments [75, 82] and [72, 80], and LeaFTL also inserts their encoded LPAs into the CRB. The segment [75, 82] will be moved to the next level as it overlaps with the new segment [72, 80].
**LPA Lookup.** LeaFTL conducts an LPA lookup from the topmost level of the mapping table with binary searches (line 19 in Algorithm 1). We check whether the LPA is represented by the matched segment (line 21 in Algorithm 1, line 1-5 in Algorithm 2). If the LPA falls in the segment's range \([S_{LPA},S_{LPA}+L]\), LeaFTL will check the least significant bit of its \(K\). If the least significant bit of \(K\) is 0, it is an accurate segment, and LeaFTL will use \(f(LPA)=\lceil K*LPA+I\rceil\) to get the accurate PPA (see SS3.2). Otherwise, it is an approximate segment, and LeaFTL will check the CRB to identify the \(S_{LPA}\) of the segment, following the approach described in Figure 9 and SS3.4. LeaFTL will use the same \(f(LPA)\) formula to obtain the PPA. If the LPA is not found in the top level of the mapping table, LeaFTL will search the lower levels until a segment is identified.
We use Figure 13 to illustrate the lookup procedure. At \(T_{5}\), we conduct the address translation for \(LPA=50\). However, none of the segments in level 0 covers this LPA, so LeaFTL continues the search in level 1 and finds the accurate segment [0, 63]. At \(T_{6}\), we do the address translation for \(LPA=78\). LeaFTL finds that the LPA 78 is in the LPA range of the segment [72, 80]. Since this is an approximate segment, LeaFTL checks the CRB and finds this LPA is actually indexed by the segment [75, 82].
With the PPA, LeaFTL will read the corresponding flash page and use the reverse mapping (its corresponding LPA) in its OOB to verify the correctness of the address translation. Upon mispredictions, we use the approach discussed in SS3.5 to handle it.
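The following sketch mirrors the \(T_{5}\) lookup: each level holds sorted, non-overlapping segments, so one binary search per level suffices, and a miss falls through to the next level. For brevity it handles accurate segments only (the CRB check for approximate segments is omitted), and the segment boundaries and translation functions are illustrative.

```python
import bisect

def lookup(levels, lpa):
    """Search the log-structured table from the newest level down.
    Each level is a list of (start, length, translate) tuples sorted by
    start LPA and non-overlapping within the level."""
    for level in levels:
        starts = [start for start, _, _ in level]
        pos = bisect.bisect_right(starts, lpa) - 1
        if pos >= 0:
            start, length, translate = level[pos]
            if start <= lpa <= start + length:
                return translate(lpa)
    return None   # fall back to the translation pages stored in flash

# T5-style example: LPA 50 misses level 0 and is served by level 1.
level0 = [(16, 15, lambda x: 200 + x), (200, 55, lambda x: 300 + x)]
level1 = [(0, 63, lambda x: 400 + x)]
print(lookup([level0, level1], 50))   # resolved by the level-1 segment [0, 63]
```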
**Segment Compaction.** The purpose of the compaction is to merge segments with overlapped LPAs across different levels, which further saves memory space. LeaFTL will iteratively move the upper-level segments into the lower level, until the mapping table is fully compacted (line 27 in Algorithm 1). When an approximate segment is removed, its corresponding CRB entries will also be deleted. As shown in \(T_{7}\) of Figure 13, we insert a new segment [32, 90] which fully covers the LPA range of the segment [72, 80]. After merge, LeaFTL removes the old segment [72, 80]. However, some segments
Figure 12. A study of the number of levels in the log-structured mapping table for different storage workloads.
Figure 13. Examples that involve update/insert, lookup, and compaction operations in LeaFTL.
in level 0 still overlap with the segments in level 1. After \(T_{8}\), LeaFTL removes the outdated segments and LPAs.
LeaFTL performs segment compaction after every 1 million writes by default. According to our experiments with various storage workloads, the segment compaction of the entire mapping table takes 4.1 milliseconds (the time of 20-40 flash writes) on average. Considering the low frequency (i.e., once per 1 million writes), the compaction incurs trivial performance overhead to storage operations.
### Put It All Together
LeaFTL is compatible with existing FTL implementations. As shown in Figure 14, it uses the log-structured mapping table to replace the address mapping cache (AMC in Figure 3), and employs the CRB to assist the address translation of approximate segments. The CRB requires trivial storage space in the SSD DRAM (see Figure 10).
**Read Operation.** For a read request, LeaFTL will first check the data cache. For a cache hit, LeaFTL serves the read request with the cached flash page. Otherwise, LeaFTL performs the address translation with the learned mapping table (see SS3.7). If there is a misprediction of the PPA, LeaFTL checks the OOB of the mispredicted flash page, reads the correct page (SS3.5), and updates the data cache with the page.
**Write Operation.** For a write request, LeaFTL buffers it in the data cache. Once the buffered writes reach the size of a flash block, LeaFTL will allocate a free block. It will sort the writes in the buffer based on their LPAs, and learn new index segments with the PPAs of the allocated flash block. This enables LeaFTL to group more LPA-PPA mappings in the same index segment. After that, LeaFTL will insert the new index segments into the mapping table, and flush the buffered data to the flash blocks. For those writes, LeaFTL will also check whether their LPAs exist in the mapping table. If yes, LeaFTL will update their corresponding entries in the BVC and PVT to indicate that they become invalid and can be garbage collected in the future. Otherwise, the new learned segments will hold their LPA-PPA mappings for future address translations.
LeaFTL caches the mapping table in SSD DRAM for fast lookup. The table will also be stored in the flash blocks. LeaFTL utilizes the existing GMD to index the translation pages. If a segment is not found in the cached mapping table, LeaFTL will fetch it from the translation blocks and place it in the cached mapping table.
**Crash Consistency and Recovery.** Upon system crashes or power failures, LeaFTL guarantees the crash consistency of learned indexes. In order to ensure the data durability of the DRAM buffer in SSD controllers, modern SSDs today have employed battery-backed DRAM and power loss protection mechanisms (Bartos et al., 2016; Bartos et al., 2016). With battery-backed DRAM, LeaFTL has sufficient time to persist the up-to-date mapping table to the flash blocks and record their PPAs in the GMD (Figure 3). During the data recovery, LeaFTL reads the GMD to locate its mapping table and places it into the DRAM.
Without battery-backed DRAM, LeaFTL periodically flushes the learned mapping table and the Block Validity Counter (BVC in Figure 3) into the flash blocks. When GC is triggered, LeaFTL also flushes the updated mapping table and BVC into the flash blocks. Upon crashes, LeaFTL will scan all the flash blocks at the channel-level parallelism, and reconstruct an up-to-date BVC. LeaFTL is able to identify the flash blocks allocated since the last mapping table flush, by comparing the up-to-date BVC with the stored BVC in the SSD. Therefore, LeaFTL only needs to relearn the index segments for these recently allocated flash blocks and add them into the mapping table (see SS3.4).
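One plausible way to realize this comparison is sketched below: blocks whose validity counters differ from the stored snapshot are the ones whose index segments must be relearned after a crash. The block names and counter values are illustrative.

```python
def blocks_to_relearn(stored_bvc, current_bvc):
    """Flash blocks whose validity counters changed since the last
    mapping-table flush; only their index segments are relearned
    during crash recovery."""
    return [blk for blk, cnt in current_bvc.items()
            if stored_bvc.get(blk) != cnt]

stored = {"blk0": 200, "blk1": 12}
current = {"blk0": 200, "blk1": 30, "blk2": 256}   # blk2 allocated after the flush
print(blocks_to_relearn(stored, current))          # ['blk1', 'blk2']
```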
### Implementation Details
**SSD Simulator.** We implement LeaFTL based on a trace-driven simulator WiscSim (Zhu et al., 2017), which has provided an event simulation environment for the end-to-end performance analysis of SSDs. We extend WiscSim by implementing an LRU-based read-write cache. LeaFTL also preserves the functions of existing FTL, such as GC and wear-leveling. To support the learned indexing, LeaFTL employs a simple linear regression algorithm (Zhu et al., 2017), which incurs negligible computation overhead with modern storage processors (see SS4.5). The error bound \(\gamma\) for learned segments is configurable, and we set it to 0 by default in LeaFTL.
**SSD Prototype.** We also develop a real system prototype with an open-channel SSD to validate the functions and efficiency of LeaFTL. The SSD has 1TB storage capacity with 16 KB flash page size. It has 16 channels, each channel has 16K flash blocks, and each flash block has 256 pages. It enables developers to implement their own FTL in the host by providing basic I/O commands such as read, write, and erase. We implement LeaFTL with 4,016 lines of code using C programming language with the SDK library of the device.
## 4. Evaluation
Our evaluation shows that: (1) LeaFTL significantly reduces the address mapping table size, and the saved memory brings performance benefits (SS4.2); (2) the benefits of LeaFTL are validated on a real SSD device (SS4.3); (3) LeaFTL can achieve additional memory savings and performance benefits with a larger error tolerance, and it demonstrates generality for different SSD configurations (SS4.4); (4) its learning procedure does not introduce much extra overhead to the SSD controller (SS4.5); (5) it has minimal negative impact on the SSD lifetime (SS4.6).
| Parameter | Value | Parameter | Value |
| --- | --- | --- | --- |
| Capacity | 2TB | #Channels | 16 |
| Page size | 4KB | OOB size | 128B |
| DRAM size | 1GB | Pages/block | 256 |
| Read latency | 20 μs | Write latency | 200 μs |
| Erase latency | 1.5 ms | Overprovisioning ratio | 20% |

Table 1. SSD configurations in our simulator.
Figure 14. Key data structures used in LeaFTL.
### Experiment Setup
We examine the efficiency of LeaFTL with both the SSD simulator and real SSD prototype. As for the evaluation with the SSD simulator, we configure a 2TB SSD with 4KB flash pages and 1GB DRAM in the SSD controller. We list the core SSD parameters in Table 1. For other parameters, we use the default setting in the WiscSim. We use a variety of storage workloads that include the block I/O traces from enterprise servers from Microsoft Research Cambridge (Wang et al., 2019) and workload traces from computers at FIU (Hu et al., 2020). As for the evaluation with the real SSD prototype (see SS3.9), we validate the benefits of LeaFTL using a set of real-world file system benchmarks and data intensive applications as shown in Table 2. Before we measure the performance, we run a set of workloads consisting of various real-world and synthetic storage workload traces to warm up the SSD and make sure the GC will be executed during the experiments.
We compare LeaFTL with state-of-the-art page-level mapping schemes described as follows 1.
Footnote 1: We do not compare LeaFTL with block-level and hybrid-level mappings, as they perform dramatically worse than the page-level mapping (Zhu et al., 2020; Wang et al., 2020).
* **DFTL (Demand-based FTL) (Zhu et al., 2020)**: it uses a page-level mapping scheme, and caches the most recently used address translation entries in the SSD DRAM.
* **SFTL (Spatial-locality-aware FTL) (Wang et al., 2020)**; it is a page-level mapping that exploits the spatial locality and strictly sequential access patterns of workloads to condense mapping table entries.
### Memory Saving and Performance
We first evaluate the benefits of LeaFTL on the memory saving and storage performance with the SSD simulator. As shown in Figure 15, LeaFTL reduces the mapping table size by 7.5-37.7\(\times\), compared to the page-level mapping scheme DFTL. This is because LeaFTL can group a set of page-level mapping entries into an 8-byte segment. In comparison with SFTL, LeaFTL achieves up to 5.3\(\times\) (2.9\(\times\) on average) reduction on the address mapping table for different storage workloads, when we set its \(\gamma=0\) (i.e., the learned segments are 100% accurate). This is because LeaFTL captures more LPA-PPA mapping patterns.
We now evaluate the performance benefit of LeaFTL from its saved memory space. We evaluate LeaFTL with two experimental settings: (1) the SSD DRAM is mainly used (as much as possible) for the mapping table; (2) the SSD DRAM is partially used for the mapping table, in which we ensure at least 20% of the DRAM will be used for the data caching.
In the first setting, the DRAM is almost entirely used for the mapping table in DFTL. As shown in Figure 16 (a), LeaFTL reduces the storage access latency by 1.6\(\times\) on average (up to 2.7\(\times\)), compared to SFTL. This is because LeaFTL saves more memory from the mapping table
| Workload | Description |
| --- | --- |
| OLTP (Wang et al., 2019) | Transactional benchmark in FileBench. |
| CompFlow (CompF) (Wang et al., 2019) | File accesses in a computation flow. |
| TPCC (Hu et al., 2020) | Online transaction queries in warehouses. |
| AuctionMark (AMark) (Hu et al., 2020) | Activity queries in an auction site. |
| SEATS (Hu et al., 2020) | Airline ticketing system queries. |

Table 2. Real workloads used in our real SSD evaluation.
Figure 16. Performance improvement with LeaFTL.
Figure 17. Performance on the real SSD prototype.
Figure 15. The reduction on the mapping table size of LeaFTL, in comparison with DFTL and SFTL.
than SFTL. SFTL slightly outperforms DFTL, because it reduces the mapping table size by compressing mapping entries with grouping strictly sequential data accesses. In the second setting, as shown in Figure 16 (b), LeaFTL obtains 1.4\(\times\) (up to 3.4\(\times\)) and 1.6\(\times\) (up to 4.9\(\times\)) performance speedup, compared to SFTL and DFTL, respectively.
### Benefits on the Real SSD Prototype
We validate the benefits of LeaFTL on the real SSD prototype with real workloads (see Table 2). They include the filesystem benchmark suite FileBench (Shen et al., 2019), and transactional database workloads from BenchBase (Beng et al., 2019; Chen et al., 2020). All these workloads run on the ext4 file system. With FileBench, we run the OLTP and CompFlow (CompF) workloads to read/write 10GB files. With BenchBase, we run the TPCC, AuctionMark (AMark), and SEATS workloads on MySQL, and their database sizes are 10-30GB. These database workloads will generate 37-230GB read traffic and 26-59GB write traffic to the SSD. We allocate 256MB DRAM to host the mapping table (for different DRAM sizes, see our sensitivity analysis in SS4.4).
We present the performance benefit of LeaFTL in Figure 17. Across all workloads, LeaFTL obtains 1.4\(\times\) performance speedup on average (up to 1.5\(\times\)), compared to SFTL and DFTL. Similar to our evaluation with the SSD simulator implementation, the performance benefit of LeaFTL comes from the memory saving from the address mapping table. And LeaFTL demonstrates comparable performance improvement on real SSD devices, in comparison with the SSD simulator in SS4.2. We also show the latency distribution of storage accesses in Figure 18, when running the OLTP workload on the real SSD prototype. In comparison with existing FTL schemes, LeaFTL does not increase the tail latency of storage accesses. And the higher cache hit ratio of LeaFTL brings latency reduction for many storage accesses.
### Sensitivity Analysis
**Vary the value of \(\gamma\).** As we increase the value of \(\gamma\) from 0 to 16, the size of the learned mapping table is reduced, as shown in Figure 19. LeaFTL achieves 1.3\(\times\) reduction on average (1.2\(\times\) on the real SSD) with \(\gamma=16\), compared to that of \(\gamma=0\). The saved memory with a larger \(\gamma\) is achieved by learning a wider range of LPAs into approximate segments. To further understand this, we profile the distribution of segments learned by LeaFTL with different values of \(\gamma\), as shown in Figure 20. When \(\gamma=0\), all the segments are accurate. When \(\gamma=16\), 26.5% of the learned segments are approximate on average, and LeaFTL delivers 1.3\(\times\) improvement on storage performance (1.2\(\times\) with workloads on the real SSD), in comparison with the case of \(\gamma=0\) (see Figure 21).
**Vary the SSD DRAM capacity.** We now conduct the sensitivity analysis of SSD DRAM by varying its capacity from 256MB to 1GB on the real SSD prototype. As shown in Figure 22 (a), LeaFTL always outperforms DFTL and SFTL as we vary the SSD DRAM capacity. Even as we increase the DRAM capacity, the storage workloads are still bottlenecked by the available memory space for the data caching. LeaFTL can learn various data access patterns and significantly reduce the address mapping table size, and the saved memory further benefits data caching.
**Vary the flash page size.** In this experiment, we fix the number of flash pages, and vary the flash page size from 4KB to 16KB in the SSD simulator, as SSD vendors usually use larger flash pages for increased SSD capacity. We use the simulator for this study, since the flash page size of the real SSD is fixed. As shown in Figure 22 (b), LeaFTL always performs the best in comparison with DFTL and SFTL. As we increase the flash page size to 16KB, we can cache fewer flash pages with the limited DRAM capacity. Thus, LeaFTL experiences a slight performance drop. As we fix the total SSD
Figure 19. The reduction of the mapping table size of LeaFTL with different \(\gamma\) (lower is better).
Figure 21. Performance with various \(\gamma\) (lower is better).
Figure 20. The distribution of learned segments.
capacity and vary the page size, LeaFTL outperforms SFTL by \(1.2\times\) and \(1.1\times\) for the page size of 8KB and \(16\)KB, respectively.
### Overhead Source in LeaFTL
We evaluate the overhead sources in LeaFTL in three aspects: (1) the performance overhead of the learning procedure in LeaFTL; (2) the LPA lookup overhead in the learned segments; and (3) the overhead caused by the address misprediction in LeaFTL.
We evaluate the performance of segment learning and address lookup on an ARM Cortex-A72 core. This core is similar to the storage processor used in modern SSDs. The learning time for a batch of 256 mapping entries is 9.8-10.8 \(\mu\)s (see Table 3). As we learn one batch of index segments for every 256 flash writes, the learning overhead is only 0.02% of their flash write latency.
In LeaFTL, the LPA lookup is 40.2-67.5 ns, as the binary search of segments is fast and some segments can be cached in the processor cache. The lookup time is slightly higher as we increase \(\gamma\), due to the additional CRB accesses. We also profile the cumulative distribution function (CDF) of the number of levels to lookup for each LPA lookup, and present the results in Figure 23 (a). For most of the tested workloads, 90% of the mapping table lookup can be fulfilled at the topmost level, and 99% of the lookups are within 10 levels. Although MSR-prn workload requires more lookups than other workloads, it only checks 1.4 levels on average. We also evaluate the performance overhead of the LPA lookup on the real SSD, and show the results in Figure 23 (b). The extra lookup overhead for each flash read is 0.21% on average. And for 99.99% of all the lookups, the additional overhead is less than 1% of the flash access latency.
LeaFTL also has low misprediction ratios with approximate segments. This is because LeaFTL can still learn accurate segments even if \(\gamma>0\), and not all entries in the approximate segments will result in misprediction. As shown in Figure 24, most of the workloads achieve less than 10% misprediction ratio when \(\gamma=16\). We obtain similar misprediction ratio on the real SSD prototype. Note that each misprediction only incurs one flash read access with the help of our proposed OOB verification.
### Impact on SSD Lifetime
The flash blocks of an SSD can only undergo a certain number of writes. In this experiment, we use the write amplification factor (WAF, the ratio between the actual and requested flash writes) to evaluate the SSD lifetime. The SSD will age faster if the WAF is larger. As shown in Figure 25, the WAF of LeaFTL is comparable to DFTL and SFTL. DFTL has a larger WAF in most workloads. SFTL and LeaFTL occasionally flush translation pages to the flash blocks, but the cost is negligible.
## 5. Discussion
**Why Linear Regression.** Unlike deep neural networks, the linear regression used in LeaFTL is simple and lightweight, which takes only a few microseconds to learn an index segment with embedded ARM processors available in modern SSD controllers. In addition, the linear regression algorithm has been well studied, and offers guaranteed error bounds for its learned results. LeaFTL is the first work that uses learning techniques to solve a critical system problem (i.e., address mapping) in SSDs.
**Adaptivity of LeaFTL.** LeaFTL focuses on page-level address translation, so its design and implementation are not affected by the low-level flash memory organization (i.e., TLC/QLC). As we use TLC/QLC techniques to further increase the SSD capacity, the address mapping issue becomes more critical, since the SSD DRAM capacity does not scale well and becomes the bottleneck for caching address mappings and user data.
**Recovery of Learned Index Segments.** As discussed in SS3.8, using a battery or large capacitor to preserve and persist the cached segments upon failures or crashes will simplify the recovery procedure significantly. In our real SSD prototype, we do not assume the battery-backed DRAM is available. Thus, we follow the conventional recovery approach in modern SSDs (Kang et al., 2019; Wang et al., 2019), and scan flash blocks in parallel by utilizing the channel-level parallelism.
When we run real workloads like TPCC on the SSD prototype, we intentionally reboot the system after running the workload for a period of time (0.5-3 hours). We find that the system can recover in 15.8 minutes on average whenever the reboot happens. This is similar to the time of recovering the conventional page-level mapping table in DFTL (Kang et al., 2019). This is mostly caused by scanning the blocks in a channel (70MB/s per channel in our SSD prototype), and the time for reconstructing recently learned segments is relatively low (101.3 milliseconds on average). We believe the recovery
| γ | 0 | 1 | 4 |
| --- | --- | --- | --- |
| Learning (256 LPAs) | 9.8 μs | 10.8 μs | 10.8 μs |
| Lookup (per LPA) | 40.2 ns | 60.5 ns | 67.5 ns |

Table 3. Overhead source of LeaFTL with an ARM core.
Figure 23. Performance overhead of the LPA lookup.
time is not much of a concern as the recovery does not happen frequently in reality. And the recovery can be accelerated as we increase the channel-level bandwidth. In addition, if an SSD can tolerate more data losses, we can still ensure the crash consistency by only loading the stored index segments from flash chips, which requires minimum recovery time.
## 6. Related Work
**Address Translation for SSDs.** A variety of FTL optimizations have been proposed (Han et al., 2018; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). These works exploited the data locality of flash accesses to improve the cache efficiency of the mapping table. However, most of them were developed with human-driven heuristics. An alternative approach is to integrate application semantics into the FTL, such as content-aware FTL (Han et al., 2018). However, they were application specific and required significant changes to the FTL. LeaFTL is a generic solution and does not require application semantics in its learning. Researchers proposed to integrate the FTL mapping table into the host (Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019). Typical examples include DFS (Li et al., 2019), Nameless writes (Li et al., 2019), FlashMap (Li et al., 2019), and FlatFlash (Li et al., 2019). LeaFTL is orthogonal to them and can be applied to further reduce their memory footprint.
**Machine Learning for Storage.** Recent studies have been using learning techniques to build indexes such as B-trees, log-structured merge tree, hashmaps, and bloom filters (Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019) for in-memory datasets, identify optimal cache replacement and prefetching policies (Li et al., 2019; Li et al., 2019; Li et al., 2019; Li et al., 2019), facilitate efficient storage harvesting (Li et al., 2019), and drive the development of software-defined storage (Li et al., 2019). LeaFTL applies learning techniques to optimize the address mapping. However, unlike existing optimizations (Li et al., 2019; Li et al., 2019) such as learned page table for virtual memory that used deep neural networks to learn the patterns, LeaFTL provides a lightweight solution.
**SSD Hardware Development.** For the recent SSD innovations (Li et al., 2019; Li et al., 2019; Li et al., 2019) like Z-SSD (Li et al., 2019), KVSSD (Li et al., 2019), and ZNS SSD (Li et al., 2019), DRAM capacity and storage processor are still the main constraints in SSD controllers. As we scale the storage capacity, the challenge with the address translation becomes only worse. Researchers recently deployed hardware accelerators inside SSD controllers for near-data computing (Li et al., 2019; Li et al., 2019; Li et al., 2019). We wish to extend LeaFTL with in-storage accelerators to deploy more powerful learning models as the future work.
## 7. Conclusion
We present a learning-based flash translation layer, named LeaFTL for SSDs. LeaFTL can automatically learn different flash access patterns and build space-efficient indexes, which reduces the address mapping size and improves the caching efficiency in the SSD controller. Our evaluation shows that LeaFTL improves the SSD performance by 1.4\(\times\) on average for a variety of storage workloads.
###### Acknowledgements.
We thank the anonymous reviewers for their helpful comments and feedback. This work is partially supported by the NSF CAREER Award 2144796, CCF-1919044, and CNS-1850317.
|
2309.17304 | Source-Replacement Model for Phase-Matching Quantum Key Distribution | Quantum key distribution has emerged as a promising solution for constructing
secure communication networks, offering information-theoretic security
guaranteed by the principles of quantum mechanics. One of the most advanced
quantum key distribution protocols to date is the phase-matching protocol. Its
security was initially established using an abstract method known as
symmetry-protected privacy. In this study, we reevaluate the security of the
phase-matching protocol using an intuitive source-replacement model, and we
arrive at conclusions that align with the original proof. This model provides a
fresh perspective on the protocol's security. As an application of this
approach, we introduce a beam-splitting attack scheme. Leveraging the
source-replacement model, we derive a lower bound on the phase error rate under
this attack, further underscoring the robustness of our security analysis
method. | Yizhi Huang, Zhenyu Du, Xiongfeng Ma | 2023-09-29T15:00:10Z | http://arxiv.org/abs/2309.17304v1 | # Source-Replacement Model for Phase-Matching Quantum Key Distribution
###### Abstract
Quantum key distribution has emerged as a promising solution for constructing secure communication networks, offering information-theoretic security guaranteed by the principles of quantum mechanics. One of the most advanced quantum key distribution protocols to date is the phase-matching protocol. Its security was initially established using an abstract method known as symmetry-protected privacy. In this study, we reevaluate the security of the phase-matching protocol using an intuitive source-replacement model, and we arrive at conclusions that align with the original proof. This model provides a fresh perspective on the protocol's security. As an application of this approach, we introduce a beam-splitting attack scheme. Leveraging the source-replacement model, we derive a lower bound on the phase error rate under this attack, further underscoring the robustness of our security analysis method.
**Keywords:** quantum key distribution, phase-matching scheme, source-replacement model, pseudo-Fock state, beam-splitting attack
## I Introduction
Quantum key distribution (QKD) [1; 2] is currently one of the most successful applications in quantum information science. It allows remote communication parties, Alice and Bob, to establish a secure key by leveraging the principles of quantum mechanics. The information-theoretical security of QKD has been proven theoretically since the end of last century based on the quantum bit and phase error correction [3; 4; 5; 6] or the entropic approach [7; 8]. Here, bit error rates can tell differences between the key strings held by Alice and Bob, while phase error rates can provide an upper bound on the potential information leakage. Therefore, by performing bit and phase error correction, Alice and Bob can share two consistent and secure key strings. The information-theoretic security of QKD enables it to withstand attacks from quantum computers, which has attracted significant interest from researchers. In 2012, Lo _et al._ proposed measurement-device-independent (MDI) QKD [9] further propelling advancements in this fields. The security of MDI QKD requires no assumption on how the measurement site performs measurement and announcement, making it naturally immune to all detection attacks. Meanwhile, it helps reduce the number of trusted nodes and makes quantum communication networks more implementable. Later, another type of MDI QKD, known as the twin-field scheme [10], was introduced with the aim of achieving a quadratic improvement in key rate [11; 12; 13; 14]. This approach successfully allowed for a breakthrough in the key rate beyond the linear bound [15; 16] without the need for quantum relays. Recently, a protocol called mode-pairing QKD has adopted innovative pairing ideas, achieving a quadratic improvement in the key rate while maintaining a relatively simple implementation difficulty as conventional MDI setups [17; 18; 19]. With the introduction of various protocols, the practicality, communication distance, and speed of QKD have made significant advancements in recent years. For reviews of this subject, one may refer to [20; 21].
Among the various QKD protocols developed, the phase-matching (PM)scheme [11] has garnered significant attention due to its robustness and efficiency. The security of the PM scheme was originally established using a method known as symmetry-protected privacy [14; 22]. Unlike traditional complementary-based security proofs, this method utilizes the symmetry of encoding to establish security, leveraging the parity properties of the corresponding state space to derive the phase error rate. The advantage of this proof technique lies in its analysis being entirely based on encoding operations, independent of the exact form of the source and measurements. It provides a straightforward and convenient framework for analyzing the security of QKD protocols, particularly MDI QKD protocols. However, this simple and highly abstract nature also poses certain problems. The symmetry-protected privacy method becomes less applicable in providing specific security discussions for different protocols. Furthermore, it often limits the analysis to the encoding operations carried out by the communicating parties, making it difficult to draw conclusive results about the protocol's performance in the presence of potential eavesdroppers when an attack occurs.
In this work, we attempt to reexamine the security of the PM scheme. We start by considering the source-replacement approach, which is more concrete and easier to comprehend. This approach was first introduced in [4], and its name comes from [23]. Using an entanglement-based protocol equivalent to the original protocol, we introduce
virtual CNOT gates, quantum Fourier transforms, and photon number measurements to form a pseudo-Fock state and show how to simultaneously obtain the total photon number and the random phase difference. Finally, based on the original definition, we establish the relationship between photon numbers and phase errors. By approaching the problem from this different perspective, we arrive at the same conclusion as the symmetry-protected privacy method: quantum states with an even total photon number contribute to phase errors, while states with an odd total photon number do not.
By delving into the source-replacement analysis, we aim to provide a new viewpoint that complements the original proof of security. Our analysis reaffirms the protocol's security and provides valuable insights into its underlying mechanisms. In addition, we introduce a beam-splitting attack scheme that poses a potential threat to the PM scheme. By using the source-replacement approach, we derive a lower bound on the phase error rate under this attack, quantifying the attack's impact on the protocol's performance. By analyzing the simulation results, we demonstrate that the phase error rate provided by the security proof is very close to the one introduced by the beam-splitting attack. This finding indicates that the lower bound on the key rate provided by security analysis is already highly tight, leaving little room for further improvement. Our proposed analysis establishes a direct connection between the attack and the quantum phase error rate, elucidating the security of the PM scheme.
The rest of this paper is organized as follows. In Sec. II, we review the phase-matching scheme and sketch the symmetry-protected security proof. In Sec. III, we introduce our security proof based on the source-replacement approach and derive the phase error rate. In Sec. IV, we present the attack strategy, analyze the lower bound on the phase error rate resulting from this attack using the source-replacement model, and simulate the gap between this lower bound and the phase error rate provided by the security analysis. Finally, in Sec. V, we provide a conclusion and an outlook of our work.
## II Preliminary
In this section, we will introduce the specific steps of the PM scheme and its characteristics. We will also provide a brief overview of the core principles and main conclusions of the symmetry-protected privacy security proof method.
### Phase-matching scheme
Firstly, we introduce the phase-matching QKD scheme. The core idea of this scheme is that Alice and Bob encode their respective key information into the phase of individual optical pulses, with key bits \(0\) and \(1\) corresponding to phase values of \(0\) and \(\pi\), for instance. Subsequently, they send their pulses to Charlie for single-photon interference. By analyzing the interference outcomes, they can ascertain the degree of phase matching between their encoded phases. This process, conducted through a single optical mode, establishes the connection between the key information of both parties. A more detailed description of the protocol steps is shown in Box 1.
1. State Preparation: In the \(i\)-th round, Alice prepares the coherent state \(\left|\alpha_{i}\right\rangle=\left|\sqrt{\mu_{a}^{i}}e^{\mathrm{i}(\pi\kappa_{a}^{i}+\phi_{a}^{i})}\right\rangle\) on optical mode \(A_{i}\), where \(\mu_{a}^{i}\) is the intensity, \(\kappa_{a}^{i}\) is Alice's raw key bit, chosen randomly from \(\{0,1\}\), and the phase \(\phi_{a}^{i}\) is uniformly chosen from \([0,2\pi)\). Similarly, Bob randomly chooses \(\kappa_{b}^{i}\) and \(\phi_{b}^{i}\), and prepares \(\left|\beta_{i}\right\rangle=\left|\sqrt{\mu_{b}^{i}}e^{\mathrm{i}(\pi\kappa_{b}^{i}+\phi_{b}^{i})}\right\rangle\) on mode \(B_{i}\).
2. Measurement: Alice and Bob send their optical modes \(A_{i}\) and \(B_{i}\) to an untrusted party, Charlie, who is supposed to perform single-photon interference measurement and records the clicks of detectors \(L\) and/or \(R\).
3. Announcement: For all rounds with successful detection, Charlie announces the \(L\) and \(R\) detection results. Then, Alice and Bob announce the random phases, \(\phi_{a}^{i}\) and \(\phi_{b}^{i}\), of these rounds.
4. Key mapping: Alice and Bob repeat the above steps for \(N\) rounds. For those rounds with successful detections, Alice and Bob compare their encoded random phases \(\phi_{a}^{i}\) and \(\phi_{b}^{i}\). If \(\left|\phi_{a}^{i}-\phi_{b}^{i}\right|=0\) or \(\pi\), Alice and Bob keep the corresponding \(\kappa_{a}^{i}\) and \(\kappa_{b}^{i}\) as their raw key bits in this round, respectively. Otherwise, they discard their raw key bits. Additionally, Bob flips his key bit \(\kappa_{b}^{i}\) if Charlie's announcement was an \(R\) click, and he also flips his key bit if \(\left|\phi_{a}^{i}-\phi_{b}^{i}\right|=\pi\) (see the illustrative sketch after this box).
5. Parameter estimation: For all the remaining raw key bits, Alice and Bob analyze the gain \(Q_{\mu}\) and the quantum bit error rate \(E_{Z}\). Here, the gain \(Q_{\mu}\) is defined as the successful detection probability of the signal states with intensity \(\mu\), and the quantum bit error rate \(E_{Z}\) can be obtained through random sampling. Then they estimate the phase error rate \(e_{p}\) using the method in [22].
6. Post-processing: Alice and Bob reconcile the key via a classical channel. They then perform privacy amplification according to \(e_{p}\) to obtain the final key bits.
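To make the sifting and bit-flip logic of steps 3 and 4 more concrete, the following toy Monte Carlo sketch (our own illustration, not part of the original protocol description) simulates ideal single-photon interference with 16 discrete random phases; channel loss, multi-photon contributions, dark counts, and Eve are all ignored, and the detector convention (an \(L\) click with probability \(\cos^{2}(\theta/2)\)) is an assumption.

```python
import numpy as np

# Toy Monte Carlo of the sifting and bit-flip rules in steps 3-4 of Box 1, assuming ideal
# single-photon interference (detector L clicks with probability cos^2(theta/2), R otherwise)
# and 16 discrete random phases; loss, multi-photon terms, dark counts, and Eve are ignored.
rng = np.random.default_rng(1)
d, rounds, kept, errors = 16, 50_000, 0, 0

for _ in range(rounds):
    ka, kb = rng.integers(0, 2, size=2)
    pa, pb = 2 * np.pi * rng.integers(0, d, size=2) / d
    theta = (np.pi * ka + pa) - (np.pi * kb + pb)          # relative phase at the beam splitter
    click_R = rng.random() > np.cos(theta / 2) ** 2        # which detector fires (assumed convention)
    dphi = (pa - pb) % (2 * np.pi)
    if not (np.isclose(dphi, 0.0) or np.isclose(dphi, np.pi)):
        continue                                           # phases not matched: discard this round
    kept += 1
    kb_final = int(kb) ^ int(click_R) ^ int(np.isclose(dphi, np.pi))   # flips of step 4
    errors += int(ka != kb_final)

print(f"sifted fraction = {kept / rounds:.3f}, raw-key error rate = {errors / max(kept, 1):.4f}")
```

Under these idealizations the sifted fraction is \(2/16=1/8\) and the raw keys agree exactly, matching the key-mapping rules above.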
The key observation of the phase-matching scheme is that by utilizing the result of single-photon interference, Alice and Bob can distill raw key bits from a single detection. The probability of a successful detection event is proportional to \(\sqrt{\eta}\) when either Alice's or Bob's photon causes a detection click, where \(\sqrt{\eta}\) is the transmittance from Alice or Bob to Charlie and here we suppose the channel is symmetric. Therefore, the key rate performance of the phase-matching scheme is \(O(\sqrt{\eta})\). This is a quadratic improvement compared to conventional MDI schemes whose key rate performance is \(O(\eta)\) due to the requirement of coincidence detection.
To provide a clearer illustration of the encoding, we show the encoding process using quantum circuit notation in Figure 1. In this representation, we introduce three classical random bits at each side for classical control to encode the key bits, random phases, and intensities on the optical mode. The control parameters, \(\mu_{a}^{i},\mu_{b}^{i},\kappa_{a}^{i},\kappa_{b}^{i},\phi_{a}^{i},\phi_{b}^{i}\), are random numbers coming from quantum random number generators according to the scheme.
### Symmetry-protected security
Unlike the BB84 scheme or the conventional MDI scheme, the PM scheme utilizes only one basis for key generation and parameter estimation. Consequently, applying the conventional security proof based on basis complementarity [6] directly to the phase-matching scheme becomes challenging. To address this issue, an approach known as symmetry-protected privacy was introduced in [22]. This method establishes security proof based on a different principle, namely encoding symmetry. The key encoding operation on the initial state can always be described in the following form:
\[U_{AB}(\kappa_{a},\kappa_{b})=U_{A}(\kappa_{a})\otimes U_{B}(\kappa_{b}). \tag{1}\]
Figure 1: Encoding circuit of PMQKD. This circuit is symmetric for Alice and Bob, and here, we only introduce the systems and operations on Alice’s side. The systems and operations on Bob’s side are analogous to those of Alice. Systems \(A\) and \(B\) are optical modes initialized to coherent states \(|\alpha\rangle\) and \(|\beta\rangle\), respectively. Alice uses quantum random number generator to generate \(\mu_{a}^{i},\kappa_{a}^{i},\phi_{a}^{i}\) according to the protocol, and then she uses the classical control-phase operation to encode the discrete phase and key information according to \(\phi_{a}^{i}\) and \(\kappa_{a}^{i}\). She also uses the random number \(\mu_{a}^{i}\) to control the intensity of \(|\alpha\rangle\) to carry out the decoy state estimation.
By leveraging its unitarity, the Hilbert space can be divided into two distinct subspaces. The first is the even space \(\mathcal{H}^{\text{even}}\), which consists of even states that are eigenstates of \(U\) with an eigenvalue of \(+1\). The second is the odd space \(\mathcal{H}^{\text{odd}}\), which comprises odd states that are eigenstates of \(U\) with an eigenvalue of \(-1\). For an initial state \(\rho_{AB}\), we can define the encoded state \(\rho_{AB}^{\prime}\) as
\[\rho_{AB}^{\prime}(\kappa_{a},\kappa_{b})=\left[U_{A}(\kappa_{a})\otimes U_{B} (\kappa_{b})\right]\rho_{AB}\left[U_{A}(\kappa_{a})\otimes U_{B}(\kappa_{b}) \right]^{\dagger}. \tag{2}\]
Intuitively, if the initial state \(\rho_{AB}\) is a mixture of states solely from one of the two subspaces, then the four possible encodings will produce the same encoded state in pairs, i.e., \(\rho_{AB}^{\prime}(0,0)=\rho_{AB}^{\prime}(1,1)\) and \(\rho_{AB}^{\prime}(1,0)=\rho_{AB}^{\prime}(0,1)\). Consequently, Eve cannot determine the specific encoding operations employed by Alice and Bob when she obtains the encoded state. Therefore, Eve can at most know whether Alice's and Bob's encodings are the same, and she cannot obtain the specific encoding information. In this scenario, the phase error rate is zero. However, unfortunately, both Alice and Bob are also unaware of each other's encoding operations. As a result, the bit error rate will be 50%, and the final key rate will be 0. In practice, the initial state \(\rho_{AB}\) is a mixture of both even and odd states,
\[\rho_{AB}=p_{\text{even}}\rho_{\text{even}}+p_{\text{odd}}\rho_{\text{odd}}, \tag{3}\]
where \(\rho_{\text{even}}=\sum_{\rho_{i}\in\mathcal{H}_{even}}p_{i}\rho_{i}\), \(\rho_{\text{odd}}=\sum_{\rho_{j}\in\mathcal{H}_{odd}}p_{j}\rho_{j}\). When adopting such a mixture state, Eve can, to some extent, distinguish different encoding outcomes by observing the relative phase changes on the purified state \(\ket{\Psi}_{ABE}\) of \(\rho_{AB}\), where system \(E\) is under Eve's control, leading to a non-zero phase error rate.
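The pairing argument can also be checked numerically. The sketch below (our own illustration, with an assumed small Fock-space cutoff and a randomly chosen state supported on even total photon number) verifies that the encoded states of Eq. (2) coincide in pairs, while encodings with different parity remain distinguishable.

```python
import numpy as np

nmax = 6
U = np.diag((-1.0) ** np.arange(nmax + 1))        # pi phase shift (-1)^n on one mode; U(0) is the identity

rng = np.random.default_rng(0)
N_tot = np.add.outer(np.arange(nmax + 1), np.arange(nmax + 1))
psi = rng.normal(size=(nmax + 1, nmax + 1)) * (N_tot % 2 == 0)   # support only on even n + m
psi /= np.linalg.norm(psi)

def encode(state, ka, kb):
    """Apply U_A(ka) x U_B(kb) and return the encoded density matrix."""
    out = state.copy()
    if ka:
        out = U @ out          # (-1)^{n_A}: sign on the first (Alice) index
    if kb:
        out = out @ U          # (-1)^{n_B}: sign on the second (Bob) index
    v = out.ravel()
    return np.outer(v, v.conj())

print(np.allclose(encode(psi, 0, 0), encode(psi, 1, 1)))   # True: (0,0) and (1,1) are identical
print(np.allclose(encode(psi, 0, 1), encode(psi, 1, 0)))   # True: (0,1) and (1,0) are identical
print(np.allclose(encode(psi, 0, 0), encode(psi, 0, 1)))   # False: only whether the encodings agree leaks
```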
In the symmetry-protected security proof, it is further demonstrated that in this particular scenario, the phase error rate, denoted as \(e_{p}\), corresponds to the proportion of odd or even components that contribute to effective detection. These components are represented by \(q_{\text{odd}}\) and \(q_{\text{even}}\) respectively, and it is important to note that they differ from the previously mentioned \(p_{\text{odd}}\) and \(p_{\text{even}}\). The variable \(p\) denotes the components in the initial state, while the variable \(q\) pertains to the post-selected states that have been successfully detected. The relationship between these two kinds of quantities can be expressed as
\[\begin{split} q_{\text{odd}}&=p_{\text{odd}}\frac {Y_{\text{odd}}}{Q},\\ q_{\text{even}}&=p_{\text{even}}\frac{Y_{\text{ even}}}{Q},\end{split} \tag{4}\]
where \(Y_{\text{odd/even}}\) and \(Q\) represent the successful detection probabilities of \(\rho_{\text{odd/even}}\) and \(\rho_{AB}^{\prime}\) respectively.
In the phase-matching scheme, the odd and even states correspond to Fock states with an odd or even number of photons, respectively. Therefore, the phase error in the phase-matching scheme is
\[e_{p}\leq q_{\text{even}}=1-\sum_{k}q_{2k+1}\equiv e_{p}^{u}, \tag{5}\]
where \(q_{k}\) is the fraction of detections in which Alice and Bob send out \(k\) photons in total. For the ideal case where \(\mu_{a}=\mu_{b}=\mu\), we can calculate \(q_{\text{even}}\) using
\[q_{k}=\frac{Y_{k}(2\mu)^{k}e^{-2\mu}}{k!Q_{\mu}}, \tag{6}\]
where \(Y_{k}=1-(1-\eta)^{k}\) and \(Q_{\mu}=1-e^{-2\eta\mu}\) are successful detection probabilities of \(k\)-photon state and coherent state with intensity \(\mu\), respectively. The key rate of the phase-matching scheme is [11]
\[R\geq Q_{\mu}\left[1-h\left(e_{p}\right)-fh(E_{\mu})\right], \tag{7}\]
where \(E_{\mu}\) is the bit error rate that can be directly obtained from experimental data, \(h(x)=-x\log(x)-(1-x)\log(1-x)\) is the binary entropy function, and \(f\) is the error correction efficiency. Note that in ref. [22], the security of QKD under symmetric encoding is also analyzed using the standard phase-error-correction method, proving that the upper bound on the phase error rate provided by the symmetry-protected method is secure.
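As a quick sanity check of Eqs. (5)–(7), the following snippet evaluates the phase-error bound and the key rate for the ideal symmetric case \(\mu_{a}=\mu_{b}=\mu\). The values of \(\eta\), \(\mu\), the error-correction efficiency \(f\), and the bit error rate \(E_{\mu}\) are illustrative choices of ours rather than parameters taken from the original works.

```python
import numpy as np
from math import factorial

# Illustrative evaluation of Eqs. (5)-(7) for the ideal symmetric case mu_a = mu_b = mu.
# The values of eta, mu, f, and E_mu below are our own choices, not taken from the paper.
eta, mu, f, E_mu = 0.01, 0.1, 1.15, 0.02
kmax = 40                                               # truncation of the photon-number sum

Q_mu = 1 - np.exp(-2 * eta * mu)                        # gain of the coherent signals
Y = lambda k: 1 - (1 - eta) ** k                        # k-photon yield
q = [Y(k) * (2 * mu) ** k * np.exp(-2 * mu) / (factorial(k) * Q_mu) for k in range(kmax + 1)]

e_p = 1 - sum(q[1::2])                                  # Eq. (5): e_p^u = 1 - sum_k q_{2k+1}

def h(x):                                               # binary entropy (log base 2)
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * np.log2(x) - (1 - x) * np.log2(1 - x)

R = Q_mu * (1 - h(e_p) - f * h(E_mu))                   # Eq. (7)
print(f"Q_mu = {Q_mu:.3e},  e_p^u = {e_p:.3f},  R = {R:.3e}")
```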
## III Source-replacement security proof
The symmetry-protected privacy method has been rigorously established for security proof, characterized by a highly abstract and overarching analytical process. This approach allows the analytical framework to extend beyond
the PM scheme, encompassing other protocols employing symmetric encodings. However, this abstraction complicates the intuitive understanding of the security of specific protocols. For instance, unlike conventional practices, where the phase error rate is obtained through measurements on qubits held by Alice and Bob, this method employs a different analytical process. Therefore, while this method provides an upper bound on the phase error rate based on photon-number components for the PM scheme, it cannot explain the underlying physical connection between these two quantities in a concrete way. Furthermore, when applying the symmetry-protected privacy analysis, one does not need to consider the potential attacks an eavesdropper may carry out. Although this simplifies the analysis, it also limits the method's efficacy when evaluating the impact of specific attacks on the protocol. As a result, when attempting to dissect the influence of a particular attack on the protocol, this analytical approach may fall short.
In order to offer a more concrete and physically intuitive security proof tailored to the PM scheme, we have undertaken a reexamination of the protocol's security proof. Employing a source-replacement approach, we revert to the initial entanglement-based framework to define the phase error rate and derive conclusions consistent with those of the symmetry-protected privacy method.
### Virtual operation and measurement
Firstly, we attempt to transform the circuit in Figure 1 into an entanglement-based protocol circuit by using the source-replacement approach. In general, the control parameters can be seen as \(Z\)-basis measurement results of \(X\)-basis eigenstates of Hilbert spaces with different dimensions. Taking Alice's side as an example, we introduce ancillary systems \(A_{0}\) and \(A_{1}\) to encode the discrete phase and the key information. They will be initialized to the \(d\)-dimensional \(X\)-basis eigenstate \(|+_{d}\rangle\) and to \(|+\rangle\), respectively, according to the encoding requirements. For example, if Alice and Bob choose to encode 16 random phases, i.e., \(\phi_{a}^{i}\in\{\frac{2j\pi}{16}\},j=0,1,2,\cdots,15\) in Box 1, then the system \(A_{0}\) will be initialized to \(|+_{16}\rangle\). The \(Z\)-basis measurement results on these two systems act as \(\phi_{a}^{i}\) and \(\kappa_{a}^{i}\) and control the random phase encoding and the key bit encoding, respectively. The situation with the ancillary system \(A_{2}\), which controls the intensity, is relatively complex. Here, we provide a simple example. We suppose the protocol involves only three types of light intensities: vacuum state intensity, decoy state intensity, and signal state intensity, with equal probabilities of being sent. In this case, system \(A_{2}\) would be initialized to a 3-dimensional \(X\)-basis eigenstate \(|+_{3}\rangle\). After performing a \(Z\)-basis measurement on it, the three possible outcomes will control the intensities of the pulses.
According to [24], the quantum phase-control gates are classically replaceable operations: the measurements on the control systems can be performed early and replaced with classical control or, equivalently, deferred to the end of the circuit. Utilizing this property, we can represent the measurement operations as the final steps performed by Alice and Bob, as depicted in Figure 2. In this picture, Alice and Bob first initialize the ancillary systems into \(|+\rangle\) states in Hilbert spaces of the appropriate dimensions, as required by the protocol. These ancillary systems serve as control registers, and quantum control operations are used to encode the optical modes before transmitting them to the measurement site. Finally, Alice and Bob perform \(Z\)-basis measurements on these ancillary systems to obtain the encoded key bits, random phases, and intensities in this round.
The control operations entangle the ancillary systems of Alice and Bob with the optical modes they send to Charlie. Subsequently, their optical modes undergo measurements performed by Charlie, resembling an entanglement swapping process. This results in Alice and Bob sharing imperfect entanglement. Consequently, by replacing the source with a quantum-encoded one, the entire protocol is transformed into an entanglement-based protocol. With the entanglement-based picture, we now try to derive the phase error rate of the PM scheme. In the following discussion, we will not focus on system \(A_{2}\) or the intensities of the optical modes, as varying light intensities only result in different photon-number distributions and do not affect our subsequent derivations. According to the original security proof based on the quantum phase error [3], we need to estimate the quantum bit error rate and the quantum phase error rate for error correction in order to distill perfect entangled pairs between Alice and Bob and ensure the security of the key bits. Typically, the quantum bit error rate can be directly calculated from results obtained in practical experiments. Therefore, we will primarily focus on estimating the quantum phase error rate.
As we employ \(Z\)-basis measurements on systems \(A_{1}\) and \(B_{1}\) for the key bits in the PM scheme, the quantum phase error rate is naturally defined through the complementary \(X\)-basis measurements on these two systems: a phase error occurs whenever the \(X\)-basis outcomes deviate from the correlation expected of the ideal post-selected state.
Furthermore, to obtain conclusions consistent with the symmetry-protected privacy method, specifically that the quantum phase error rate of the protocol is bounded by the total photon-number distribution of the optical pulses sent by Alice and Bob in a single round, we introduce an additional virtual CNOT operation between systems \(A_{0}\) and \(B_{0}\). We replace the measurement of the random phase on system \(A_{0}\), which acts as a control register, with a virtual inverse quantum Fourier transform followed by a photon-number measurement. The corresponding quantum circuit diagram for the protocol is depicted in Figure 2.
Note that the CNOT operation between systems \(A_{0}\) and \(B_{0}\), the quantum Fourier transformation, the photon number measurement, and the \(X\)-basis measurements on systems \(A_{1}\) and \(B_{1}\) are operations introduced to obtain the quantum phase error rate. These operations were not part of the original protocol and are referred to as virtual operations. In the following sections, we will demonstrate that through these virtual operations, Alice and Bob can obtain the photon number distribution of the pulses they send, and establish a connection between this distribution and the quantum phase error rates.
### Phase randomization and pseudo-Fock state
Firstly, let us focus on one side of the system. It can be calculated that in the encoding circuit depicted in Figure 2, users can directly obtain the photon number in the pulses through coherent operations on ancillary systems, without the need for measurements on the optical systems \(A\) or \(B\).
**Observation 1**.: _The number of photons emitted in the optical mode \(A(B)\) can be acquired by doing a high-dimensional \(Z\)-basis measurement after the inverse quantum Fourier transformation on ancillary qudit \(A_{0}(B_{0})\) which is used to do the discrete phase randomization._
Here, we provide an analysis using Alice's side as an example. It's evident from symmetry that on Bob's side, we can arrive at the same conclusion. Concretely, Alice can obtain the photon number information by performing the inverse \(d\)-dimensional Hadamard gate, or inverse quantum Fourier transformation
\[\begin{split} F^{\dagger}&\equiv\sum_{j=0}^{d-1} \left|\tilde{j}\right\rangle\!\!\left\langle j\right|,\\ \left|\tilde{j}\right\rangle&=\frac{1}{\sqrt{d}}\sum _{k=0}^{d-1}e^{2\pi ijk/d}\left|k\right\rangle,\end{split} \tag{8}\]
Figure 2: Encoding circuit of source-replaced PM scheme. Similar to the circuit in Fig. 1, the ancillary systems \(A_{0},A_{1},A_{2},B_{0},B_{1},B_{2}\) are used to encode the random phases, key bits, and intensities. In this picture, the ancillary systems are measured after the encoding operations, allowing Alice and Bob to employ additional operations beyond simple \(Z\)-basis measurements to estimate parameters like photon number and phase differences. The quantum inverse Fourier transform \(F^{\dagger}\), high-dimensional CNOT operations, and \(X\)-basis measurements depicted in the figure are virtual operations not part of the original protocol.
on the qudit ancillary system \(A_{0}\) and measuring it in the computational basis,
\[\begin{split}\ket{+_{d}}_{A_{0}}\ket{\alpha}_{A}&\xrightarrow{C\cdot\theta}\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}\ket{j}_{A_{0}}\ket{e^{2\pi\mathrm{i}\frac{j}{d}}\alpha}_{A}\\ &\xrightarrow{F^{\dagger}\otimes I}\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}\ket{\widetilde{-j}}_{A_{0}}\ket{e^{2\pi\mathrm{i}\frac{j}{d}}\alpha}_{A}\\ &\qquad=\frac{1}{d}\sum_{j=0}^{d-1}\sum_{k=0}^{d-1}e^{-2\pi\mathrm{i}kj/d}\ket{k}_{A_{0}}\ket{e^{2\pi\mathrm{i}\frac{j}{d}}\alpha}_{A}\\ &\xrightarrow[k]{\mathrm{measure}}\frac{1}{\sqrt{d}}\sum_{j=0}^{d-1}e^{-2\pi\mathrm{i}kj/d}\ket{e^{2\pi\mathrm{i}\frac{j}{d}}\alpha}_{A},\end{split}\tag{9}\]
where \(k\) denotes the outcome of the \(Z\)-basis measurement on \(A_{0}\). Expanding the coherent states in the Fock basis, the (unnormalized) post-measurement state of mode \(A\) reads
\[\sum_{j=0}^{d-1}e^{-2\pi\mathrm{i}kj/d}\ket{e^{2\pi\mathrm{i}\frac{j}{d}}\alpha}_{A}=e^{-|\alpha|^{2}/2}\sum_{j=0}^{d-1}\sum_{n=0}^{\infty}e^{\frac{2\pi\mathrm{i}}{d}(n-k)j}\frac{\alpha^{n}}{\sqrt{n!}}\ket{n}_{A}\propto\sum_{n=k,k+d,k+2d,\cdots}\frac{\alpha^{n}}{\sqrt{n!}}\ket{n}_{A}, \tag{10}\]
since the summation over \(j\) vanishes unless \(n-k\) is a multiple of \(d\). When \(d\gg|\alpha|^{2}\), the higher-order terms \(n=k+d,k+2d,\cdots\) are negligible, and the state is approximately the Fock state \(\ket{k}_{A}\). We call this a pseudo-Fock state: the measurement outcome \(k\) on the ancilla directly gives the photon number emitted in mode \(A\), without measuring the optical mode itself. Now, let us consider Alice's and Bob's systems together. Applying the virtual controlled-minus (high-dimensional CNOT) gate from \(A_{0}\) to \(B_{0}\), followed by the inverse quantum Fourier transform on \(A_{0}\) and \(Z\)-basis measurements on the two ancillas, the joint state evolves as the following pre-
\[\begin{split}\left|+_{d}\right\rangle_{A_{0}}\left|\alpha\right\rangle_{A}\left|+_{d}\right\rangle_{B_{0}}\left|\beta\right\rangle_{B}&\xrightarrow{C\cdot\theta}\frac{1}{d}\left(\sum_{j_{a}=0}^{d-1}\left|j_{a}\right\rangle_{A_{0}}\left|e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha\right\rangle_{A}\right)\left(\sum_{j_{b}=0}^{d-1}\left|j_{b}\right\rangle_{B_{0}}\left|e^{2\pi\mathrm{i}\frac{j_{b}}{d}}\beta\right\rangle_{B}\right)\\ &\xrightarrow{\mathrm{minus}}\frac{1}{d}\sum_{j_{a}=0}^{d-1}\sum_{j_{b}=0}^{d-1}\left|j_{a}\right\rangle_{A_{0}}\left|e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha\right\rangle_{A}\left|j_{b}-j_{a}\right\rangle_{B_{0}}\left|e^{2\pi\mathrm{i}\frac{j_{b}}{d}}\beta\right\rangle_{B}\\ &\xrightarrow{F^{\dagger}\otimes I}\frac{1}{d}\sum_{j=0}^{d-1}\sum_{j_{a}=0}^{d-1}\left|\widetilde{-j_{a}}\right\rangle_{A_{0}}\left|e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha\right\rangle_{A}\left|j\right\rangle_{B_{0}}\left|e^{2\pi\mathrm{i}\frac{j+j_{a}}{d}}\beta\right\rangle_{B}\\ &\qquad=d^{-\frac{3}{2}}\sum_{j=0}^{d-1}\sum_{j_{a}=0}^{d-1}\sum_{k=0}^{d-1}e^{-2\pi\mathrm{i}kj_{a}/d}\left|k\right\rangle_{A_{0}}\left|e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha\right\rangle_{A}\left|j\right\rangle_{B_{0}}\left|e^{2\pi\mathrm{i}\frac{j+j_{a}}{d}}\beta\right\rangle_{B}\\ &\xrightarrow[k,j]{\mathrm{measure}}\frac{1}{\sqrt{d}}\sum_{j_{a}=0}^{d-1}e^{-2\pi\mathrm{i}kj_{a}/d}\left|e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha\right\rangle_{A}\left|e^{2\pi\mathrm{i}\frac{j+j_{a}}{d}}\beta\right\rangle_{B},\end{split}\tag{11}\]
where for the ket of mode \(B_{0}\), \(\left|j_{b}-j_{a}\right\rangle\), there is modulo \(d\) for the subtraction. We change the variable \(j=(j_{b}-j_{a})\mod d\) and then \(j_{b}=(j_{a}+j)\mod d\). For the phase, modulo \(d\) is automatically taken. In the last line, \(k\) is the total "photon number" measured by Alice and \(j\) is the random-phase difference measured by Bob.
Now, similar to Eq. (10), we can evaluate the (unnormalized) post-measurement state, by denoting \(\beta^{\prime}=e^{2\pi\mathrm{i}\frac{j}{d}}\beta\),
\[\sum_{j_{a}=0}^{d-1} e^{-2\pi\mathrm{i}kj_{a}/d}\left|e^{2\pi\mathrm{i}\frac{i\mu}{d}} \alpha\right\rangle\left|e^{2\pi\mathrm{i}\frac{i\mu}{d}}\beta^{\prime}\right\rangle \tag{12}\] \[=e^{-(\left|\alpha\right|^{2}+\left|\beta\right|^{2})/2}\sum_{j_ {a}=0}^{d-1}e^{-2\pi\mathrm{i}kj_{a}/d}\left(\sum_{n=0}^{\infty}e^{2\pi \mathrm{i}nj_{a}/d}\frac{\alpha^{n}}{\sqrt{n!}}\left|n\right\rangle\right) \left(\sum_{m=0}^{\infty}e^{2\pi\mathrm{i}mj_{a}/d}\frac{\beta^{\prime m}}{ \sqrt{m!}}\left|m\right\rangle\right)\] \[=e^{-(\left|\alpha\right|^{2}+\left|\beta\right|^{2})/2}\sum_{j_ {a}=0}^{d-1}\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}e^{\frac{2\pi\mathrm{i}}{d}( n+m-k)j_{a}}\frac{\alpha^{n}\beta^{\prime m}}{\sqrt{n!m!}}\left|n\right\rangle \left|m\right\rangle\]
Here, we notice that
\[\sum_{j_{a}=0}^{d-1}e^{\frac{-2\pi\mathrm{i}}{d}(n+m-k)j_{a}}=0, \tag{13}\]
unless \(n+m-k\) is a multiple of \(d\). We can change the variable to \(N=m+n\), denoting the total number of photons in optical modes \(A\) and \(B\). Then, the (unnormalized) post-measurement state can be written as
\[\sum_{N}\sum_{m=0}^{N}\frac{\alpha^{N-m}\beta^{\prime m}}{\sqrt{(N-m)!m!}} \left|N-m\right\rangle\left|m\right\rangle \tag{14}\]
where the summation over \(N\) takes the values \(N=k,k+d,k+2d,\cdots\). Again, let us assume that \(d\gg\left|\alpha\right|^{2}+\left|\beta\right|^{2}\). Then, we can ignore the higher-order terms in the pseudo-Fock state, \(N=k+d,k+2d,\cdots\). That is, we can directly set \(N\approx k\),
\[\sum_{m=0}^{k}\frac{\alpha^{k-m}\beta^{\prime m}}{\sqrt{(k-m)!m!}} \left|k-m\right\rangle\left|m\right\rangle, \tag{15}\]
which is a Fock state in two modes.
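As a numerical illustration of the pseudo-Fock argument, the following sketch (with an assumed photon-number truncation, illustrative values of \(d\), \(\alpha\), \(\beta\), and \(k\), and Bob's announced phase absorbed into \(\beta\)) builds the unnormalized post-measurement state of Eq. (12) and verifies that the total-photon-number distribution concentrates on \(N=k\) when \(d\gg|\alpha|^{2}+|\beta|^{2}\).

```python
import numpy as np
from math import factorial

d, k = 16, 2                        # phase-randomization dimension and Alice's measured outcome (illustrative)
alpha, beta = 0.6, 0.6              # coherent amplitudes; Bob's announced phase is absorbed into beta
nmax = 30                           # photon-number truncation (assumed)

def coh(a):
    """Truncated coherent-state amplitudes <n|a>."""
    n = np.arange(nmax + 1)
    return np.exp(-abs(a) ** 2 / 2) * a ** n / np.sqrt([factorial(int(m)) for m in n])

# Unnormalized two-mode post-measurement state of Eq. (12): sum over the random-phase index j_a
psi = sum(np.exp(-2j * np.pi * k * j / d)
          * np.outer(coh(alpha * np.exp(2j * np.pi * j / d)),
                     coh(beta * np.exp(2j * np.pi * j / d)))
          for j in range(d))
p = np.abs(psi) ** 2
p /= p.sum()

# Total-photon-number distribution: the weight concentrates on N = k (pseudo-Fock state),
# with the next admissible term N = k + d exponentially suppressed for d >> |alpha|^2 + |beta|^2.
p_tot = [sum(p[n, N - n] for n in range(N + 1)) for N in range(nmax + 1)]
print(np.round(p_tot[:6], 6))       # ~[0, 0, 1, 0, 0, 0]
```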
In practice, Alice will not perform the inverse Fourier transform to measure the total photon number. Thus, we can remove the \(F^{\dagger}\) operation in practice. Then we can move the \(Z\)-basis measurement to the front of the controlled-minus gate, so that the entangling controlled-minus operation becomes classical. Furthermore, the \(Z\)-basis measurement can be moved before the controlled-phase operation, making the controlled-phase operation classical as well. This is equivalent to Alice and Bob first randomly picking phases \(\phi_{a},\phi_{b}\) and modulating the phases of their coherent states. Then, they compare \(\phi_{a},\phi_{b}\) to post-select the key. This process reduces the entanglement picture to the original protocol.
We point out that this reduction will not affect our security proof. In principle, Alice can perform the total photon number measurement. As we will prove in the next section, the state after total photon number measurement contains the secure part and the insecure part. The security is only related to the parity of the total photon number and is irrelevant to the measurement result \(k\). Thus, whether or not Alice measures \(k\), the secure part exists. The entanglement picture derives an upper bound on the phase error rate of the original protocol. In addition, Alice and Bob do not carry out the virtual operations in actual experiments. They only perform \(Z\)-basis measurements on the systems \(A_{1}\) and \(B_{1}\) to obtain the raw key bits and further estimate the quantum bit error rate. Therefore, in our entanglement picture, users can always upper bound the quantum phase error rate while obtaining the quantum bit error rate.
### Phase-matching scheme with key encoding
Based on the case discussed in the previous section, we now include the key-bit encoding in the scheme. As shown in Fig. 2, Alice introduces another ancillary qubit \(A_{1}\). She prepares \(\ket{+}\) on \(A_{1}\) and employs a controlled-phase gate between \(A_{1}\) and \(A\) to encode the key information. Bob applies similar operations. Then, the state becomes
\[\begin{split}&\ket{+_{d}}_{A_{0}}\ket{+}_{A_{1}}\ket{\alpha}_{A}\ket{+_{d}}_{B_{0}}\ket{+}_{B_{1}}\ket{\beta}_{B}\\ &\xrightarrow{C\cdot\theta_{\phi}}\frac{1}{d}\left(\sum_{j_{a}=0}^{d-1}\ket{j_{a}}_{A_{0}}\ket{+}_{A_{1}}\ket{e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha}_{A}\right)\left(\sum_{j_{b}=0}^{d-1}\ket{j_{b}}_{B_{0}}\ket{+}_{B_{1}}\ket{e^{2\pi\mathrm{i}\frac{j_{b}}{d}}\beta}_{B}\right)\\ &\xrightarrow{C\cdot\theta_{\kappa}}\frac{1}{2d}\left[\sum_{j_{a}=0}^{d-1}\ket{j_{a}}_{A_{0}}\left(\ket{0}_{A_{1}}\ket{e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha}_{A}+\ket{1}_{A_{1}}\ket{-e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha}_{A}\right)\right]\left[\sum_{j_{b}=0}^{d-1}\ket{j_{b}}_{B_{0}}\left(\ket{0}_{B_{1}}\ket{e^{2\pi\mathrm{i}\frac{j_{b}}{d}}\beta}_{B}+\ket{1}_{B_{1}}\ket{-e^{2\pi\mathrm{i}\frac{j_{b}}{d}}\beta}_{B}\right)\right]\\ &\xrightarrow{\mathrm{minus},\,F^{\dagger}}\frac{1}{2d^{\frac{3}{2}}}\sum_{j=0}^{d-1}\sum_{j_{a}=0}^{d-1}\sum_{k=0}^{d-1}e^{-2\pi\mathrm{i}kj_{a}/d}\ket{k}_{A_{0}}\left(\ket{0}_{A_{1}}\ket{e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha}_{A}+\ket{1}_{A_{1}}\ket{-e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha}_{A}\right)\ket{j}_{B_{0}}\left(\ket{0}_{B_{1}}\ket{e^{2\pi\mathrm{i}\frac{j+j_{a}}{d}}\beta}_{B}+\ket{1}_{B_{1}}\ket{-e^{2\pi\mathrm{i}\frac{j+j_{a}}{d}}\beta}_{B}\right)\\ &\xrightarrow[k,j]{\mathrm{measure}}\sum_{j_{a}=0}^{d-1}e^{-2\pi\mathrm{i}kj_{a}/d}\left(\ket{0}_{A_{1}}\ket{e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha}_{A}+\ket{1}_{A_{1}}\ket{-e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha}_{A}\right)\left(\ket{0}_{B_{1}}\ket{e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\beta^{\prime}}_{B}+\ket{1}_{B_{1}}\ket{-e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\beta^{\prime}}_{B}\right),\end{split}\tag{16}\]
where for the ket of mode \(B_{0}\), \(\ket{j_{b}-j_{a}}\), the subtraction is taken modulo \(d\). We change the variable to \(j=(j_{b}-j_{a})\mod d\) and denote \(\beta^{\prime}=e^{2\pi\mathrm{i}\frac{j}{d}}\beta\), in the same way as for Eq. (11). Similarly, we can evaluate the (unnormalized) post-measurement
state,
\[\begin{split}&\sum_{j_{a}=0}^{d-1}e^{-2\pi\mathrm{i}kj_{a}/d}\left(\left|0\right\rangle_{A_{1}}\left|e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha\right\rangle_{A}+\left|1\right\rangle_{A_{1}}\left|-e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\alpha\right\rangle_{A}\right)\left(\left|0\right\rangle_{B_{1}}\left|e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\beta^{\prime}\right\rangle_{B}+\left|1\right\rangle_{B_{1}}\left|-e^{2\pi\mathrm{i}\frac{j_{a}}{d}}\beta^{\prime}\right\rangle_{B}\right)\\ &\quad=e^{-(\left|\alpha\right|^{2}+\left|\beta\right|^{2})/2}\sum_{j_{a}=0}^{d-1}e^{-2\pi\mathrm{i}kj_{a}/d}\left[\left|00\right\rangle_{A_{1}B_{1}}\left(\sum_{n=0}^{\infty}e^{2\pi\mathrm{i}nj_{a}/d}\frac{\alpha^{n}}{\sqrt{n!}}\left|n\right\rangle_{A}\right)\left(\sum_{m=0}^{\infty}e^{2\pi\mathrm{i}mj_{a}/d}\frac{\beta^{\prime m}}{\sqrt{m!}}\left|m\right\rangle_{B}\right)\right.\\ &\qquad+\left|01\right\rangle_{A_{1}B_{1}}\left(\sum_{n=0}^{\infty}e^{2\pi\mathrm{i}nj_{a}/d}\frac{\alpha^{n}}{\sqrt{n!}}\left|n\right\rangle_{A}\right)\left(\sum_{m=0}^{\infty}e^{2\pi\mathrm{i}mj_{a}/d}\frac{(-\beta^{\prime})^{m}}{\sqrt{m!}}\left|m\right\rangle_{B}\right)\\ &\qquad+\left|10\right\rangle_{A_{1}B_{1}}\left(\sum_{n=0}^{\infty}e^{2\pi\mathrm{i}nj_{a}/d}\frac{(-\alpha)^{n}}{\sqrt{n!}}\left|n\right\rangle_{A}\right)\left(\sum_{m=0}^{\infty}e^{2\pi\mathrm{i}mj_{a}/d}\frac{\beta^{\prime m}}{\sqrt{m!}}\left|m\right\rangle_{B}\right)\\ &\qquad+\left.\left|11\right\rangle_{A_{1}B_{1}}\left(\sum_{n=0}^{\infty}e^{2\pi\mathrm{i}nj_{a}/d}\frac{(-\alpha)^{n}}{\sqrt{n!}}\left|n\right\rangle_{A}\right)\left(\sum_{m=0}^{\infty}e^{2\pi\mathrm{i}mj_{a}/d}\frac{(-\beta^{\prime})^{m}}{\sqrt{m!}}\left|m\right\rangle_{B}\right)\right]\\ &\quad=e^{-(\left|\alpha\right|^{2}+\left|\beta\right|^{2})/2}\sum_{j_{a}=0}^{d-1}\sum_{n=0}^{\infty}\sum_{m=0}^{\infty}e^{\frac{2\pi\mathrm{i}}{d}(n+m-k)j_{a}}\left[\left|0\right\rangle+(-1)^{n}\left|1\right\rangle\right]_{A_{1}}\left[\left|0\right\rangle+(-1)^{m}\left|1\right\rangle\right]_{B_{1}}\frac{\alpha^{n}\beta^{\prime m}}{\sqrt{n!m!}}\left|nm\right\rangle_{AB}.\end{split}\tag{17}\]
Here, we change the order of some systems for simplicity and use the subscript to denote the system. Note that according to Eq. (13), we have \(N=n+m=k,k+d,k+2d,\cdots\). The phase error is obtained by \(X\otimes X\) measurement on the key qubits \(A_{1},B_{1}\). Then, we can determine whether there exists a phase error by the total photon number.
1. \(N\) is odd, the \(X\)-basis measurement results on qubits \(A_{1}\) and \(B_{1}\) are different, \(\left|-+\right\rangle_{A_{1}B_{1}}\) or \(\left|+-\right\rangle_{A_{1}B_{1}}\);
2. \(N\) is even, the \(X\)-basis measurement results on qubits \(A_{1}\) and \(B_{1}\) are the same \(\left|++\right\rangle_{A_{1}B_{1}}\) or \(\left|--\right\rangle_{A_{1}B_{1}}\).
In conclusion, relative to the \(X\)-basis correlation of the ideal post-selected state, the components with an even total photon number in \(A\) and \(B\) contribute a phase error rate of 1, while those with an odd total photon number contribute 0. Then the upper bound of the phase error rate is
\[e_{p} \leq 1\cdot q_{\rm even}+0\cdot q_{\rm odd}\] \[=q_{\rm even} \tag{18}\] \[=1-\sum_{k}q_{2k+1}.\]
This is consistent with Eq. (5) in Sec. II.2. As previously discussed, our result can be applied to the original protocol. Thus, by analyzing the fractions of odd and even states via the decoy-state method [26], the upper bound on the phase error rate can be derived. Utilizing the entanglement-based source-replacement picture, we establish the relationship between the total photon number and the phase error rate in the PM scheme. This provides a more concrete and physically intuitive explanation for the conclusions drawn in the symmetry-protected privacy method.
## IV Beam-Splitting Attacks
From the security analyses presented in Sec. II and III, it is apparent that the conclusions drawn from the source-replacement and symmetry-protected privacy security analyses are entirely consistent. These two approaches appear to be equivalent to some extent, but their starting points are not identical. One is based on the entire source and the quantum states it emits, while the other is based on encoding operations. Therefore, in practical applications, it is essential to choose the appropriate analysis method based on specific requirements.
In this section, we introduce a new attack mainly based on the beam-splitting attack and the unambiguous state discrimination approach and attempt to analyze the key rate of the PM scheme under this attack. Similar attack strategies were initially introduced in Ref. [27] to target MDI quantum key distribution protocols. Here, we draw inspiration from these attack ideas and apply our approach to analyze the phase error rate under this attack. In this scenario, we find that the security analysis approach based on source replacement can be directly applied to analyze the attack and straightforwardly establish a lower bound on the phase error rate. This shows the advantages of the source-replacement security analysis approach in analyzing attacks.
### Attack strategy
First, we give a detailed description of the proposed attack in Box 2, and we also illustrate this attack in Figure 3. Here we suppose that the channel transmittances from Alice and from Bob to the measurement site are both \(\eta\), and that the pulse intensities are the same, \(\mu_{a}=\mu_{b}=\mu\). We only consider the case of pure states for simplicity.
The core idea of this beam-splitting attack is that the state \(\left|\varphi\right\rangle\) obtained by Eve through beam splitting is very close to the states emitted by Alice and Bob. Moreover, as the channel transmittance decreases, these two states become even closer. With the help of a quantum memory, Eve can use the stored states to attempt to obtain the key bits chosen by Alice and Bob, given that she knows the random phases selected by them. In this scenario, Eve can
Figure 3: Illustration of beam-splitting attack. Solid arrows represent the transmission of quantum states, while dashed arrows represent the exchange of classical information. The upper part of the figure corresponds to step (1) in Box 2. Eve splits the light pulse emitted by Alice and Bob into two parts with a ratio of \(1-\eta:\eta\). The former part is stored in a quantum memory, while the latter part undergoes interference measurement, and the measurement result is publicly announced. The lower part of the figure corresponds to steps (2) and (3). After the measurement is completed and Alice and Bob announce the phase information \(\phi_{a}^{i}\) and \(\phi_{b}^{i}\), Eve retrieves the two corresponding states from the quantum memory, performs unambiguous state discrimination measurement based on the phase information, and applies post-processing to the results. Between these two stages, the only transmitted quantum states are the states held by Eve in the quantum memory.
maximize the utilization of all the information she can obtain without interfering with the protocol execution, thus maximizing her ability to steal the key. Whether Eve can obtain the key information or not depends on her ability to distinguish whether the state she holds is \(\left|\varphi^{0}\right\rangle\) or \(\left|\varphi^{1}\right\rangle\) through unambiguous state discrimination measurements. If she successfully distinguishes these two states, Eve will perfectly learn the key bit value. The optimal success probability for her to distinguish these two states is determined by their overlap [28],
\[p_{usd}=1-|\langle\phi_{0}|\phi_{1}\rangle|=1-e^{-4(1-\eta)\mu}. \tag{19}\]
### Phase error rate estimation and simulation
Given the probability for Eve to perfectly learn the key bit value, we can further estimate a lower bound on the phase error rate that Alice and Bob will encounter in the protocol. To illustrate this point more clearly, we still use the source-replacement entanglement-based picture, as shown in Figure 2. In this picture, the states of systems \(A_{1}\) and \(B_{1}\) are the key states held by Alice and Bob, respectively. They will perform \(Z\)-basis measurements on their own states to obtain their raw keys. According to the definition of the phase error rate [3], if they perform \(X\)-basis measurements on these states, they will obtain the phase error rate.
When Eve successfully obtains Alice and Bob's encoded keys through unambiguous state discrimination measurements, in the entanglement-based scenario, we can consider that Eve deterministically acquired knowledge of the results of the \(Z\)-basis measurements performed on systems \(A_{1}\) and \(B_{1}\). In this sense, the states on these two systems have already collapsed to either \(\left|00\right\rangle\) or \(\left|11\right\rangle\) from Eve's point of view. Then Alice and Bob perform subsequent operations to obtain the raw key bits or the phase error rate. If Alice and Bob perform \(Z\)-basis measurements, the raw key bits they get will be the same as what Eve obtained. If Alice and Bob perform \(X\)-basis measurements to estimate the phase error rate, since systems \(A_{1}\) and \(B_{1}\) have already collapsed to either \(\left|00\right\rangle\) or \(\left|11\right\rangle\), the results of the \(X\)-basis measurements will be completely random, resulting in a phase error rate of \(\frac{1}{2}\). This scenario is similar to the intercept-resend attack on the BB84 protocol. Therefore, the contribution of the case where Eve successfully distinguishes the encoded states to the phase error rate is \(\frac{1}{2}p_{usd}\). As for the case where Eve fails to distinguish the states, we conservatively lower bound its contribution to the phase error rate by \(0\). Hence, we can conclude that the final phase error rate satisfies
\[e_{p}\geq\frac{1}{2}p_{usd}=\frac{1}{2}-\frac{1}{2}e^{-4(1-\eta)\mu}\equiv e_{ p}^{L}. \tag{20}\]
Note that although we have only considered the case of pure states here, in the case of mixed states, we can always assume that Eve holds a purification of the mixed state. Therefore, she can only make \(p_{usd}\) better, so it is a valid lower bound. Then, by substituting the upper and lower bounds of the phase error rates obtained into the key rate formula, we can derive the upper bound on the key rate obtained from the beam-splitting attack and the lower bound on the key rate obtained from symmetry-protected security proof, respectively.
To visually illustrate the difference in phase error rates obtained from these two methods, we conducted simulations of the phase error rates for various optical intensities and channel transmittance. We also calculated the ratio of the difference, \(\frac{e_{p}^{u}-e_{p}^{L}}{e_{p}^{u}}\) between the two rates. The results are presented in Figure 4.
The figures show that the upper and lower bounds of the phase error rate given by the two methods are very close. The difference becomes smaller as the intensity increases and the channel transmission decreases. Intuitively, these results are also reasonable. As the intensity increases or the channel transmission rate decreases, Eve can obtain quantum states with higher intensities through beam splitting, which makes the states Eve holds closer to the ones sent by Alice and Bob. And it will finally lead to a higher success probability of unambiguous state discrimination. If the communication distance between Alice and Bob is zero, Eve cannot obtain any quantum states through beam splitting. We need to emphasize that a typical sending intensity of phase-matching QKD is in the order of \(10^{-2}\)[29], implying that our simulation results encompass the practical scenarios of phase-matching QKD in practical implementations. The results indicate that the lower bounds on key rates are highly tight, especially for long communication distances, leaving little room for further enhancement. Similarly, the results can also imply that the attack we proposed, along with the corresponding calculation and analysis method for the phase error rate, provides a good lower bound under the beam-splitting attack. The analysis process also demonstrates that the entanglement-based security analysis approach can provide straightforward conclusions with simplicity when analyzing such attacks. This highlights the advantages of the entanglement-based security analysis method when dealing with these types of attacks.
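For readers who wish to reproduce the qualitative behaviour of Figure 4, the following short script compares the two bounds, \(e_{p}^{u}\) of Eqs. (5)–(6) and \(e_{p}^{L}\) of Eq. (20); the intensities and transmittances below are illustrative and need not coincide with the values used in the figure.

```python
import numpy as np
from math import factorial

# Compare the phase-error upper bound e_p^u of Eqs. (5)-(6) with the lower bound e_p^L of
# Eq. (20) under the beam-splitting attack; intensities and transmittances are illustrative.
def e_p_upper(eta, mu, kmax=60):
    Q = 1 - np.exp(-2 * eta * mu)
    q_odd = sum((1 - (1 - eta) ** k) * (2 * mu) ** k * np.exp(-2 * mu) / (factorial(k) * Q)
                for k in range(1, kmax, 2))
    return 1 - q_odd

def e_p_lower(eta, mu):
    return 0.5 * (1 - np.exp(-4 * (1 - eta) * mu))      # Eq. (20)

for mu in (0.01, 0.05, 0.1):
    for eta in (1e-1, 1e-3):
        eu, el = e_p_upper(eta, mu), e_p_lower(eta, mu)
        print(f"mu={mu:.2f} eta={eta:.0e}  e_p^u={eu:.4f}  e_p^L={el:.4f}  (e_p^u-e_p^L)/e_p^u={(eu - el) / eu:.3f}")
```

The printed gap ratios shrink as the intensity grows and as the transmittance decreases, in line with the trend discussed above.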
## V Conclusion and outlook
In this work, we reexamined the security of the PM scheme and sought a more tangible and intuitive understanding of its security. Our approach introduced a source-replacement model, offering a perspective on the PM scheme's security based on entanglement. By employing this source-replacement method, we revealed that it yields the same conclusions as the symmetry-protected privacy method. Specifically, both approaches establish that quantum states with an even total photon number contribute to phase errors, while states with an odd total photon number remain phase-error-free.
Furthermore, we extended our analysis to include potential threats in the form of a beam-splitting attack scheme. By considering the entanglement properties associated with this attack, we quantified its impact on the protocol's performance, providing a lower bound on the phase error rate and, correspondingly, an upper bound on the achievable key rate. Our analysis demonstrates the tightness of the key rate bound and establishes a direct link between attacks and quantum phase error rates. This approach is also applicable to other QKD protocols, including continuous-variable ones that have attracted considerable attention from the quantum communication community.
Note that our current source-replacement security analysis method can mainly be applied to phase-based encoding protocols. It cannot directly provide conclusions consistent with the symmetry-protected privacy method for intensity-based encoding protocols like the mode-pairing scheme [17] and the sending-or-not-sending twin-field scheme [13]. How to extend this source-replacement picture to such protocols and prove their security is an intriguing topic, and we hope that our findings will inspire further research and development in the field of QKD. Furthermore, we also hope that our proposed beam-splitting attack and its analysis can inspire the exploration of new attack strategies, the investigation of the security of different QKD protocols, and the development of more robust and rigorous security analysis methods. By enhancing protocols and strengthening security measures, we hope this will help advance the practicality and security of QKD systems, paving the way for their widespread adoption in real-world applications.
###### Acknowledgements.
This work was supported by the National Natural Science Foundation of China Grant No. 12174216 and the Innovation Program for Quantum Science and Technology Grant No. 2021ZD0300804.
|
2309.06209 | Fractal and Fractional SIS model for syphilis data | This work studies the SIS model extended by fractional and fractal
derivatives. We obtain explicit solutions for the standard and fractal
formulations; for the fractional case, we study numerical solutions. As a real
data example, we consider the Brazilian syphilis data from 2011 to 2021. We fit
the data by considering the three variations of the model. Our fit suggests a
recovery period of 11.6 days and a reproduction ratio ($R_0$) equal to 6.5. By
calculating the correlation coefficient ($r$) between the real data and the
theoretical points, our results suggest that the fractal model presents a
higher $r$ compared to the standard or fractional case. The fractal formulation
is improved when two different fractal orders with distinguishing weights are
considered. This modification in the model provides a better description of the
data and improves the correlation coefficient. | Enrique C. Gabrick, Elaheh Sayari, Diogo Leonai Marques de Souza, Fernando da Silva Borges, José Trobia, Ervin K. Lenzi, Antonio M. Batista | 2023-09-12T13:24:06Z | http://arxiv.org/abs/2309.06209v1 | # Fractal and Fractional SIS model for syphilis data
###### Abstract
This work studies the SIS model extended by fractional and fractal derivatives. We obtain explicit solutions for the standard and fractal formulations; for the fractional case, we study numerical solutions. As a real data example, we consider the Brazilian syphilis data from 2011 to 2021. We fit the data by considering the three variations of the model. Our fit suggests a recovery period of 11.6 days and a reproduction ratio (\(R_{0}\)) equal to 6.5. By calculating the correlation coefficient (\(r\)) between the real data and the theoretical points, our results suggest that the fractal model presents a higher \(r\) compared to the standard or fractional case. The fractal formulation is improved when two different fractal orders with distinguishing weights are considered. This modification in the model provides a better description of the data and improves the correlation coefficient.
**Mathematical models are a powerful tool to understand, forecast, and simulate control strategies for disease spread. In mathematical epidemiology, one of the most successful approaches is the compartmental model. This model type stores individuals in compartments according to their infection status. In general, the flux among the compartments is described by ordinary differential equations that have high accuracy in reproducing real data. In the classical formulations, the host population is divided into compartments of Susceptible (\(S\)), Exposed (\(E\)), Infected (\(I\)), and Recovered (\(R\)). Combinations of these compartments lead to the SI, SIS, SIR, and SEIR models used to study the spread of different diseases. For sexual diseases, such as gonorrhoea or syphilis, the adequate model is the SIS. The SIS model describes diseases which do not confer immunity after the recovery period. In this work, we study extensions of the SIS model via non-integer differential operators, fractional and fractal. We consider syphilis data from 2011 to 2021, collected in Brazil. Our results show that the fractal-order operator is more efficient than the fractional and standard ones in fitting the considered data. Therefore, our methodology can be extended to different models and diseases to obtain the best description of real data.**
## I Introduction
After the pioneering work of Kermack and McKendrick [1] in 1927, many works have considered compartmental models [2]. This type of model compartmentalises the population into groups according to the infection status, which can be Susceptible (\(S\)), Exposed (\(E\)), Infected (\(I\)), and Recovered (\(R\)) [3]. The \(S\) compartment is related to healthy individuals; \(E\) to individuals who are infected but not yet infectious; \(I\) to the infectious individuals; and \(R\) to the individuals who acquire immunity, permanent or not [4]. The combination of these compartments leads to the SI [5], SIS [6], SIR [7], SIRS [8], SEIR [9], and SEIRS [10] models.
These models have been successfully applied in several contexts [11], for example in diseases as gonorrhoea [12], COVID-19 [13], HIV [14], influenza [15], dengue [16], and others [17; 18; 19; 20; 21; 22; 23]. In addition to the success in modelling real data, the compartmental models can be easily adapted to study generic situations [24; 25; 26]. For instance, the SEIR model can be adapted to study the effects of two vaccination doses in a determined population [27]. The inclusion of multi-strain in a SIR model describes the data of dengue, and, due to seasonality and multi-strain, the solutions become complex, i.e., chaotic [28]. Including seasonality in an SEIRS model can lead to coexistence between chaotic and periodic attractors [29]. This coexistence is associated with tipping points, which depend on the control parameter. A tipping point was also found when the network topology is considered in SIS or SIR model [30]. Despite its simplicity, the SIS model can generate rich solutions, such as Turing patterns, when spatial dynamics are included [31].
Although we have many possibilities for compartmental models, the appropriate choice of model is made based on considerations consistent with the disease [32]. Some diseases do not confer long-lasting immunity in the infected individuals [33], such as rotaviruses [34], sexually transmitted [35], bacterial [36; 37], and other types of infections [38; 39]. For these diseases, the appropriate model is the SIS model [40; 41]. Considering an SIS model with variable population size, Hethcote and van den Driessche [42] obtained persistence, equilibrium, and stability thresholds. Their results suggest that combinations of disease persistence and death rate can cause a decrease to zero in the population size. Furthermore, the endemic point is asymptotically stable for some parameters. However, for other parameter ranges, a Hopf bifurcation emerges. Gray et al. [43] studied the effects of environmental noise in an SIS model. They obtained explicit solutions of the stochastic version of the model and compared them with numerical solutions. First, they consider a two-state Markov chain, and then generalise the results to a finite one. Additionally, they consider a realistic scenario by considering the parameters of _Streptococcus pneumoniae_ spread in children. The stochastic version of this model was obtained in the previous work [6]. Gao and Ruan [44] reported an SIS patch model with variable coefficients. In this formulation, the authors investigated the influence of human movement on the spread of disease among patches. They performed numerical simulations to study the two-patch situation. Also, considering the two-patch SIS model, Feng et al. explored its stability and bifurcations [37].
In an attempt to improve this model, extensions have been proposed, for instance, the stochastic version proposed by Gray et al. [6] or the inclusion of reaction-diffusion terms [45; 46; 47; 48]. An extension that has been gaining much attention is the inclusion of fractional derivatives in the SIS model [49; 50; 51; 52; 53; 54; 55; 56; 57]. Fractional calculus has been advanced in different fields as a powerful approach to incorporate different aspects with extensions of the differential operators to a non-integer order [58]. Fractional operators have been applied in several scenarios, such as anomalous diffusion [58; 59; 60], anomalous charge transport in semiconductors [61], chaos [62], magnetic resonance imaging [63; 64], and electrical impedance [65; 66]. In epidemiological models, fractional calculus has been used to extend the differential operators and, consequently, the models [67; 68; 69; 70; 71; 72; 73], allowing us to obtain different behaviours connected to the different relaxation processes. An important aspect of fractional calculus is the memory effect [74]. Due to the non-locality, memory, and extra degree of freedom, the fractional epidemic models are richer compared to the standard ones [52; 53].
Despite fractional models gaining much attention, less attention has been devoted to extending the epidemiological models with fractal derivatives. The fractal operators, which use the concept of fractal space [75], have been applied in many situations, such as porous media [76], anomalous diffusion [77; 78], heat conduction [79], dark energy [80], Casimir effect [81], and others [82; 83]. Compared to standard calculus, which considers a continuous space-time, fractal calculus has been shown to be more accurate when fitting the experimental data [82].
In this work, we study fractional extensions and propose fractal extensions of the SIS model, calibrated with syphilis data from Brazil (available in Ref. [84]) from 2011 to 2021. We consider the SIS model due to its simplicity and its adequate description of sexually transmitted disease dynamics [2]. However, other models can be employed to study the spread of syphilis [85; 86; 87]. We consider the simplest form of the SIS model, i.e., without demographic characteristics. We made this simplification considering that birth and death rates are practically constant in the time range considered [88]. This research is organised as follows. We first present the standard SIS model (Section II), and after that, we analyse the extensions based on the fractional (Section III) and fractal (Section IV) differential operators. In the standard and fractal cases, we obtain analytical expressions. For the fractional case, we consider the Predict-Evaluate-Correct-Evaluate (PECE) numerical integrator [90]. Our results suggest that fractal calculus yields a higher correlation coefficient in fitting the real data. Finally, in Section V, we draw our conclusions.
## II Standard SIS model
The SIS model compartmentalises the host population into Susceptible (\(S\)) and Infected (\(I\)) individuals. The \(S\) individuals are infected when in contact with \(I\); they move to the \(I\) compartment at a transmission rate \(\beta\). Once in the \(I\) compartment, the individuals stay for an average time equal to \(1/\gamma\), after which they become susceptible again and can be reinfected, as schematically represented in Fig. 1.
The standard SIS model is described by the following equations [4]
\[\frac{dS}{dt} = -\beta\frac{SI}{N}+\gamma I, \tag{1}\] \[\frac{dI}{dt} = \beta\frac{SI}{N}-\gamma I, \tag{2}\]
subject to the initial conditions \(I(0)=I_{0}\) and \(S(0)=S_{0}=N-I_{0}\), where \(N\), \(\beta\), \(\gamma\), \(S_{0}\), and \(I_{0}\geq 0\) [91]. Therefore, for Eqs. (1) and (2), there exists only one solution for a given initial condition \(S_{0}\), \(I_{0}\) for all \(t\geq 0\), defined in \(D=\{(S,I)\in[0,N]^{2}\mid S+I=N\}\). The proof of this statement is straightforward using the techniques reported in [92]. In addition, for non-negative \(S_{0}\) and \(I_{0}\), we have \(S(t)\) and \(I(t)\geq 0\) for all \(t\geq 0\). The proof is found in Refs. [93; 94].
Figure 1: Schematic representation of the SIS model.

Equations (1) and (2) are population size dependent. To normalise them, it is necessary to impose the transformations \(S\to sN\) and \(I\to iN\), which are valid for a constant population size. As we are considering data from 2011 to 2021, we normalise the model. According to the Brazilian Institute of Geography and Statistics (IBGE, Portuguese abbreviation), the Brazilian population increased by around \(0.52\%\) per year in the range 2010 to 2022 [88; 89]. In this way, our assumption is reasonable and we can neglect demographic characteristics. With these transformations, the equations are given by
\[\frac{ds}{dt} = -\beta si+\gamma i, \tag{3}\] \[\frac{di}{dt} = \beta si-\gamma i, \tag{4}\]
subject to the initial conditions \(i(0)=i_{0}\) and \(s(0)=1-i_{0}\). The solutions with biological meaning are restricted to \(s,i\in[0,1]\). The sum of Eqs. (3) and (4) results in \(s+i=1\). With this constraint, Eq. 4 is rewritten as
\[\frac{di}{dt}=i\beta[1-i-R_{0}^{-1}], \tag{5}\]
where \(R_{0}=\beta/\gamma\) (Ref. [2]).
Letting \(i(0)=i_{0}\) and integrating Eq. 5, we obtain
\[i(t)=\frac{i_{0}\xi e^{\xi\beta t}}{\xi+i_{0}(e^{\xi\beta t}-1)}, \tag{6}\]
where \(\xi\equiv(1-R_{0}^{-1})\) and \(s(t)\) is immediately determined by \(s(t)=1.0-i(t)\). In the limit \(t\rightarrow\infty\), Eq. 6 results in \(1-R_{0}^{-1}\), which corresponds to the equilibrium state [2].
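For illustration, Eq. (6) can be evaluated with a few lines of Python; the sketch below uses an algebraically equivalent form that avoids overflowing the exponential for large \(t\), and checks the \(t\rightarrow\infty\) limit \(1-R_{0}^{-1}\) with the parameter values quoted later in this section.

```
import numpy as np

def i_standard(t, beta, gam, i0):
    # Eq. (6) rewritten as i(t) = i0*xi / (i0 + (xi - i0)*exp(-xi*beta*t)), with xi = 1 - 1/R0.
    xi = 1.0 - gam / beta
    return i0 * xi / (i0 + (xi - i0) * np.exp(-xi * beta * t))

beta, gam, i0 = 204.4, 31.39, 0.0912
t = np.linspace(0.0, 1.0, 201)
print(i_standard(t, beta, gam, i0)[-1], 1.0 - gam / beta)   # long-time value vs. 1 - 1/R0
```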
Considering the Brazilian syphilis data available in Ref. [84], from 2011 to 2021, the best fit suggests \(\beta=204.4\) and \(\gamma=31.39\) (years\({}^{-1}\)). To obtain the best fit, we first calibrate the model with the Levenberg-Marquardt non-linear least-squares algorithm in the R package minpack.lm [95]. After that, we compute the correlation coefficient (\(r\)) between real and simulated data in C and keep the parameters which maximise \(r\). The time evolution of the model with these parameters is shown in Fig. 2(a). These parameters correspond to \(R_{0}=6.5\). The real data are not normalised; however, without loss of generality, it is possible to normalise the population with respect to \(N=200\) million (the value suggested by the fit). In this case, \(i\) gives the fraction of infected individuals each year. As the initial condition, we choose the fraction of infected people in the population at the beginning of the spread, i.e., \(i_{0}=0.0912\). With the estimated \(\gamma\), the average recovery period is equal to \(11.6\) days, which agrees with syphilis characteristics. For these parameters, the correlation coefficient \(r\) is equal to \(0.9900\). The \(r\) value indicates a good fit, which can be seen in Fig. 2(b) from the red line and the experimental points (blue points). The error between the explicit solution and the numerical integration (using the 4th-order Runge-Kutta method) is \(10^{-6}\). The model fits the points well until 2016; from this point on, the theoretical curve diverges from the experimental points.
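The comparison between the explicit solution and a fourth-order Runge-Kutta integration mentioned above can be reproduced schematically in Python; the step size and integration window below are arbitrary illustrative choices, not the ones used by the authors.

```
import numpy as np

beta, gam, i0 = 204.4, 31.39, 0.0912
xi = 1.0 - gam / beta

def f(i):            # right-hand side of Eq. (5)
    return i * beta * (1.0 - i - gam / beta)

def i_exact(t):      # Eq. (6) in a numerically stable form
    return i0 * xi / (i0 + (xi - i0) * np.exp(-xi * beta * t))

h, n_steps = 1e-4, 2000          # illustrative step size and horizon
i_num, err = i0, 0.0
for k in range(n_steps):         # classical 4th-order Runge-Kutta
    k1 = f(i_num)
    k2 = f(i_num + 0.5 * h * k1)
    k3 = f(i_num + 0.5 * h * k2)
    k4 = f(i_num + h * k3)
    i_num += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    err = max(err, abs(i_num - i_exact((k + 1) * h)))
print(err)                        # maximum deviation between RK4 and Eq. (6)
```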
## III Fractional SIS model
To improve the fit of the real data, we fix all the parameters found and vary the order of the differential operators to observe whether an improvement is obtained. As we are dealing with fractions of infected individuals, i.e., a normalised population, we can apply the extension directly to the system described by Eqs. 3 and 4. This extension is made by the replacement \(\frac{d}{dt}\rightarrow\frac{d^{\alpha}}{dt^{\alpha}}\) [60]. In this work, we consider the Caputo fractional operator [58], defined by
\[\frac{\partial^{\alpha}}{\partial t^{\alpha}}f(\vec{r},t)=\frac{1}{\Gamma \left(1-\alpha\right)}\int_{0}^{t}dt^{\prime}\frac{1}{(t-t^{\prime})^{\alpha }}\frac{\partial}{\partial t^{\prime}}f(\vec{r},t^{\prime}), \tag{7}\]
where \(\Gamma(\cdot)\) is the Gamma function, and \(\alpha\in(0,1)\). If \(\alpha=1\), we recover the usual operator (standard case). Considering the Caputo operator, Eqs. (3) and (4) become
\[\frac{d^{\alpha}s}{dt^{\alpha}} = -\beta si+\gamma i, \tag{8}\] \[\frac{d^{\alpha}i}{dt^{\alpha}} = \beta si-\gamma i, \tag{9}\]
defined in \(D=\{(s,i)\in[0,1]^{2}\mid s+i=1\}\). Given the initial condition \(s_{0}\) and \(i_{0}\geq 0\), Eqs. (8) and (9) admit a unique solution \(s(t)\) and \(i(t)\) [51], which are positive for all \(t\geq 0\) [53].

Figure 2: (a) Solution of the SIS model. The green curve is related to \(s\) and the red one to \(i\). (b) Amplification of \(i\) curve in red line and real data in blue points. The correlation coefficient is \(r=0.9900\). We consider \(\beta=204.4\), \(\gamma=31.39\), \(R_{0}=6.5\), and \(i_{0}=0.0912\). The population is normalised for \(N=200\) million.
Considering \(s+i=1\), Eqs. (8) and (9) reduce to
\[\frac{d^{\alpha}i}{dt^{\alpha}}=i\beta[1-i-R_{0}^{-1}]. \tag{10}\]
Due to the nature of fractional operators and the nonlinear aspect of the previous equation, an analytical expression for Eq. (10), as in the previous section, is not possible. Numerical solutions are feasible and can be found using the PECE method [90]. As we are working with Caputo's definition, \(R_{0}\) is equal to \(\beta/\gamma\) and, as all parameters are positive, the solutions stay positive and respect \(s+i=1\). Numerical solutions are shown in Fig. 3 for \(\alpha=1\) (red line), \(\alpha=0.95\) (cyan line), \(\alpha=0.90\) (green line), \(\alpha=0.85\) (magenta line), and \(\alpha=0.75\) (orange line). The respective correlation coefficients are \(r=0.9900\), \(r=0.9891\), \(r=0.9880\), \(r=0.9868\), and \(r=0.9834\). Note that \(r\) decreases as \(\alpha\) decreases. In this way, no improvement in the fit occurs when fractional operators are considered. This happens because of the nature of the data points: the points increase after 2016, and the fractional derivative slows down the \(i\) curve [52]. This effect can simulate a control measure in which memory effects are embedded.
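For reference, a minimal NumPy sketch of the Adams-Bashforth-Moulton predictor-corrector (PECE) scheme for the Caputo equation (10) is given below; the step size and time window are illustrative assumptions and no attempt is made to reproduce the published curves.

```
import numpy as np
from math import gamma as gamma_fn

def pece_fractional_sis(alpha, beta, gam, i0, h, n_steps):
    # PECE scheme for D^alpha i = f(i) = i*beta*(1 - i - gam/beta), Caputo sense,
    # using the standard Adams-Bashforth-Moulton weights.
    f = lambda v: v * beta * (1.0 - v - gam / beta)
    i, fk = np.zeros(n_steps + 1), np.zeros(n_steps + 1)
    i[0], fk[0] = i0, f(i0)
    for n in range(n_steps):
        j = np.arange(n + 1)
        b = (h**alpha / alpha) * ((n + 1 - j)**alpha - (n - j)**alpha)      # predictor weights
        i_pred = i0 + np.sum(b * fk[:n + 1]) / gamma_fn(alpha)
        a = (n - j + 2.0)**(alpha + 1) + (n - j + 0.0)**(alpha + 1) \
            - 2.0 * (n - j + 1.0)**(alpha + 1)                              # corrector weights
        a[0] = n**(alpha + 1) - (n - alpha) * (n + 1)**alpha
        i[n + 1] = i0 + (h**alpha / gamma_fn(alpha + 2)) * (f(i_pred) + np.sum(a * fk[:n + 1]))
        fk[n + 1] = f(i[n + 1])
    return i

print(pece_fractional_sis(0.9, 204.4, 31.39, 0.0912, 1e-3, 200)[-1])
```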
## IV Fractal model
As the next extension of the model, we consider the fractal derivatives, given by the following definition
\[\frac{df}{dt^{\alpha}}=\frac{1}{\alpha}t^{1-\alpha}\frac{df}{dt}. \tag{11}\]
This definition is known as Hausdorff derivative [82].
To extend Eqs. (3) and (4) to fractal order, we directly substitute Eq. (11) into them and obtain
\[\frac{ds}{dt} = \alpha t^{\alpha-1}\big{(}-\beta si+\gamma i\big{)}, \tag{12}\] \[\frac{di}{dt} = \alpha t^{\alpha-1}\big{(}\beta si-\gamma i\big{)}, \tag{13}\]
where \(\alpha>0\). Due to the direct connection between the fractal derivative and the standard one, and since \(t\geq 0\), all the assumptions made for the positive solutions of Eqs. (1) and (2) remain valid for Eqs. (12) and (13). As all parameters are positive, including \(t\), the solutions preserve \(s+i=1\). Similarly to Eq. (5),
\[\frac{di}{dt}=\alpha t^{\alpha-1}i\beta[1-i-R_{0}^{-1}]. \tag{14}\]
An explicit solution for Eq. (14) is possible and is given by
\[i(t)=\frac{i_{0}\xi e^{\xi\beta t^{\alpha}}}{\xi+i_{0}(e^{\xi\beta t^{\alpha} }-1)}, \tag{15}\]
where \(\xi\equiv(1-R_{0}^{-1})\) and \(\alpha>0\). The new parameter \(\alpha\) enters only as a multiplicative constant, so \(R_{0}=\beta/\gamma\) remains unchanged.
As a new degree of freedom is included in the model, we expect a better fit of the data set. Figure 4(a) displays \(r\) as function of \(\alpha\). Differently from the fractional case, the fractal derivative exhibits one point which maximises \(r\). This point is \(\alpha=0.9673\) and \(r=0.99028\). Considering this \(\alpha\) value, the solution for the model is shown in Fig. 4(b) by the red line. With this extension, the theoretical model reproduces the data with more precision when compared to the standard and fractional cases. The standard case is recovered when \(\alpha\to 1\).
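For completeness, Eq. (15) and the scan over \(\alpha\) used to locate the maximum of \(r\) (Fig. 4(a)) can be sketched as follows. The yearly fractions are taken from Table 1 and the time axis is simply measured in years from 2011; this time normalisation is an assumption, so the scan is illustrative rather than a reproduction of the published value \(\alpha=0.9673\).

```
import numpy as np

def i_fractal(t, beta, gam, i0, alpha):
    # Closed-form solution of the fractal SIS model, Eq. (15), in a numerically stable form.
    xi = 1.0 - gam / beta
    return i0 * xi / (i0 + (xi - i0) * np.exp(-xi * beta * t**alpha))

data = np.array([0.0912, 0.1397, 0.1966, 0.2530, 0.3476, 0.4575,
                 0.6142, 0.7986, 0.8176, 0.6257, 0.8376])     # 2011-2021, from Table 1
t = np.arange(len(data), dtype=float)                          # assumed time axis (years)

alphas = np.linspace(0.8, 1.2, 401)
r = [np.corrcoef(data, i_fractal(t, 204.4, 31.39, 0.0912, a))[0, 1] for a in alphas]
print(alphas[int(np.argmax(r))])    # alpha maximising r under these assumptions
```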
Although we improved the fit by including the fractal derivative, the points from 2017 onwards remain far from the red curve. Considering the improvement given by the fractal derivative, we hypothesise that the experimental data are dominated by one fractal order, namely \(\sigma_{1}\), with weight \(a_{1}\), in a certain range of time and, after that, by another order \(\sigma_{2}\), with weight \(a_{2}\). To include these modifications, we consider \(t^{\alpha}\to a_{1}t^{\sigma_{1}}+a_{2}t^{\sigma_{2}}\), obtaining the expression
\[i(t)=\frac{i_{0}\xi e^{\xi\beta(a_{1}t^{\sigma_{1}}+a_{2}t^{\sigma_{2}})}}{ \xi+i_{0}(e^{\xi\beta(a_{1}t^{\sigma_{1}}+a_{2}t^{\sigma_{2}})}-1)}, \tag{16}\]
where \(\sigma_{1,2}>0\) and \(a_{1,2}\) are real positive constants.
Figure 3: Solutions for the fractional SIS model. The red line is for \(\alpha=1\) (\(r=0.9900\)), cyan line for \(\alpha=0.95\) (\(r=0.9891\)), green line for \(\alpha=0.90\) (\(r=0.9880\)), magenta line for \(\alpha=0.85\) (\(r=0.9868\)), and orange line for \(\alpha=0.75\) (\(r=0.9834\)). We consider \(\beta=204.4\), \(\gamma=31.39\), \(R_{0}=6.5\), and \(i_{0}=0.0912\).

Fixing \(\beta=204.4\) and \(\gamma=31.39\), the best fit is obtained for \(a_{1}=0.184\), \(\sigma_{1}=1.9\), \(a_{2}=0.82\), and \(\sigma_{2}=0.1\), as can be seen in Fig. 5. Our hypothesis results in \(r=0.9980\). The inclusion of two fractal orders modulates the curve behaviour in the different time ranges. For example, for the selected parameters, if we change \(a_{1}\), the curve shape changes after 2016; in this way, the part after 2016 (\(t>2016\)) is dominated by \(\sigma_{1}\). On the other hand, if we increase or decrease \(a_{2}\), the first half of the curve changes for the selected parameters. Due to this characteristic, the SIS model is improved and fits the Brazilian syphilis data better. The point located in 2020 diverges from this behaviour and is not considered in the fitted data.
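A direct implementation of the two-order expression (16) with the reported parameters, again using years since 2011 as an assumed time axis, is sketched below.

```
import numpy as np

def i_two_orders(t, beta, gam, i0, a1, s1, a2, s2):
    # Eq. (16): the fractal exponent t^alpha is replaced by a1*t^s1 + a2*t^s2.
    xi = 1.0 - gam / beta
    tau = a1 * t**s1 + a2 * t**s2
    return i0 * xi / (i0 + (xi - i0) * np.exp(-xi * beta * tau))

t = np.arange(0.0, 12.0)            # 2011-2022 as years since 2011 (assumption)
print(i_two_orders(t, 204.4, 31.39, 0.0912, 0.184, 1.9, 0.82, 0.1))
```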
Figure 6 shows the evolution of the \(i\) values over time for the standard case (black line), the fractional case with \(\alpha=0.9\) (green line), and the fractal case with \(a_{1}=0.184\), \(\sigma_{1}=1.9\), \(a_{2}=0.82\), and \(\sigma_{2}=0.1\) (red curve). Superposing these solutions, we show that the fractal model reproduces the data with more precision. Table 1 shows, for the range 2011-2021, the Mean Absolute Error (MAE) equal to 0.05, 0.06, and 0.03 for the standard, fractional, and fractal approaches, respectively. Considering the last point, 2022, the respective MAE changes to 0.09, 0.08, and 0.06. Therefore, our results suggest that the best model to describe the considered data is the fractal one. The last point increases the error due to the fact that we do not employ control measures. This point is in 2022 and can be associated with social behaviour changes or measurement errors during the pandemic [40]. To better fit the social behaviour, it is necessary to change \(\beta\) or the respective fractional order. As our goal is to reproduce and explain the previous data, we do not take into account control measures, which will be considered in future works.
Finally, Fig. 7 displays generic solutions for the standard model in panel (a), for the fractional model in panel (b) (with \(\alpha=0.9\)), and for the fractal model in panel (c) (\(\sigma_{1}=1.9\), \(a_{2}=0.82\), and \(\sigma_{2}=0.1\)). We consider \(\beta=204.4\) and \(\gamma=31.39\). The green curve is related to \(s\) and the red one to \(i\). From these solutions, we note that the fractional case takes more time to reach a steady solution than the fractal and standard formulations. On the other hand, the fractal formulation reaches the steady solution faster than the other cases, as shown in panel (c).
## V Conclusions
In this work, we considered the SIS model without demographic characteristics and analysed the extensions described by the substitution of integer operators by non-integer (fractional and fractal). We obtained analytical solutions for the standard (i.e., integer order) and the fractal case. Regarding the fractional situation, we studied the numerical solutions.
Figure 4: Correlation coefficient as a function of \(\alpha\) in the panel (a). Time series for \(i\) in the panel (b) for \(\alpha=0.9673\), in red line and experimental data in blue points, with \(r=0.99028\). We consider \(\beta=204.4\), \(\gamma=31.39\), \(R_{0}=6.5\), and \(i_{0}=0.0912\).

Figure 5: Time series of \(i\) in the red line and real data in blue points. We consider \(\beta=204.4\), \(\gamma=31.39\), \(a_{1}=0.184\), \(\sigma_{1}=1.9\), \(a_{2}=0.82\), and \(\sigma_{2}=0.1\). The parameter adjustment is \(r=0.9980\).

Figure 6: Comparison among the three models versus data points. Standard case (black line), the fractional case with \(\alpha=0.9\) (green line), and fractal situation with \(a_{1}=0.184\), \(\sigma_{1}=1.9\), \(a_{2}=0.82\), and \(\sigma_{2}=0.1\) (red curve). We consider \(\beta=204.4\), \(\gamma=31.39\), \(R_{0}=6.5\), and \(i_{0}=0.0912\).

Considering these three formulations, we investigated real data from Brazilian syphilis. From 2011 to 2021, our simulations show a basic reproduction number equal to 6.5. We calculated the correlation coefficient between the
experimental and theoretical points, namely \(r\), to measure the best fit. For the standard case, we obtained \(r=0.99\). This formulation adjusts the real data with a good approximation. However, after a specific time (2016), the points follow a different trajectory than the one predicted by the model. To adjust these points, we first hypothesised that fractional derivatives could improve the fit, due to the increase in the degree of freedom. Our results showed a slowdown in the infected curve, opposite to the behaviour of the points after \(t=2017\). The \(r\) value increases when the fractional order (\(\alpha\)) tends to unity, i.e., when the standard case is recovered. In this situation, it was not possible to improve the fit. The third consideration was the replacement of integer operators with fractal operators. In this case, we constructed a curve of \(r\) as a function of \(\alpha\). The curve has a maximum point at \(\alpha=0.9673\). Therefore, the \(r\) value is improved when fractal derivatives are considered, and we obtain \(r=0.99028\). Looking at the data behaviour, we observed a different increase in the years 2016 and 2017. In light of this characteristic, we hypothesised that the curve is dominated by one fractal order in a specific range of time and by another fractal order after this time. In this way, we considered two different fractal orders, and our hypothesis was confirmed: we obtain a correlation coefficient equal to \(0.998\). The fractal model with two orders describes the data set with more accuracy than the other considered approaches. This result remains valid when uncertainty, a type of random noise, is added to the data set.
The fractional and fractal operators are a simple way of extending the standard approach and incorporating different effects, such as memory effects and long-range correlations. These effects may be related to the relaxation processes present in the system, which deviate from Debye's case, characterised by exponential relaxations. The non-Debye cases present different behaviours, such as power laws, stretched exponentials, and mixtures of these, among others. Thus, extending standard operators to fractional or fractal operators offers a way to capture these behaviours, which cannot be described by the usual approaches. Our results show that the fractal formulation describes the data with great accuracy. Comparing our results to another work [96], which employs geo-processing techniques, we conclude that in addition to its simplicity, the fractal model describes the data very accurately. Models considering more sophisticated statistical analysis and machine learning techniques are found in Refs. [97, 98, 99]. In addition, our model describes the data with great accuracy in the range from 2011 to 2021. In 2022, there is a decrease in the syphilis cases, which is predicted only by the fractional formulation. Data for 2023 are not yet available from official agencies; however, some Brazilian regions report an increase, which is in agreement with our model.
| \(\times 10^{-3}\) | 2011 | 2012 | 2013 | 2014 | 2015 | 2016 | 2017 | 2018 | 2019 | 2020 | 2021 | 2022 | **MAE** 11-21 (22) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Data | 0.0912 | 0.1397 | 0.1966 | 0.2530 | 0.3476 | 0.4575 | 0.6142 | 0.7986 | 0.8176 | 0.6257 | 0.8376 | 0.3979 | |
| Standard | 0.0912 | 0.1375 | 0.2011 | 0.2824 | 0.4771 | 0.5712 | 0.6511 | 0.7132 | 0.7582 | 0.7893 | 0.8099 | 0.8233 | |
| Error | 0 | 0.0022 | 0.0045 | 0.0294 | 0.1295 | 0.1137 | 0.0369 | 0.0854 | 0.0594 | 0.1636 | 0.0277 | 0.4254 | **0.05 (0.09)** |
| Fractional | 0.0912 | 0.1407 | 0.2009 | 0.2735 | 0.3549 | 0.4387 | 0.5811 | 0.5878 | 0.6452 | 0.6905 | 0.7251 | 0.7511 | |
| Error | 0 | 0.0010 | 0.0043 | 0.0205 | 0.0073 | 0.0188 | 0.0961 | 0.2108 | 0.1724 | 0.0648 | 0.1125 | 0.3532 | **0.06 (0.08)** |
| Fractal | 0.0912 | 0.1377 | 0.1712 | 0.2315 | 0.3290 | 0.4651 | 0.6147 | 0.7340 | 0.8022 | 0.8318 | 0.8422 | 0.0845 | |
| Error | 0 | 0.0020 | 0.0254 | 0.0215 | 0.0186 | 0.0076 | 0.0005 | 0.0646 | 0.0154 | 0.2061 | 0.0046 | 0.4474 | **0.03 (0.06)** |
Table 1: Fraction of syphilis case per year followed by the predicted value with respective absolute error. In the last column, there is the Mean Absolute Error (MAE) computed from 2011 until 2021 and 2011 until 2022.
Figure 7: Numerical solutions for \(s\) (green line) and \(i\) (red line). Panel (a) is for the standard case, the panel (b) is for the fractional case, with \(\alpha=0.9\), and the panel (c) for fractal case with \(\sigma_{1}=1.9\), \(a_{2}=0.82\), and \(\sigma_{2}=0.1\). We consider \(\beta=204.4\), \(\gamma=31.39\), \(R_{0}=6.5\), and \(i_{0}=0.0912\).
## Acknowledgements
The authors thank the financial support from the Brazilian Federal Agencies (CNPq); CAPES; Fundacao Araucaria. Sao Paulo Research Foundation (FAPESP 2022/13761-9). E.K.L. acknowledges the support of the CNPq (Grant No. 301715/2022-0). E.C.G. received partial financial support from Coordenacao de Aperfeicoamento de Pessoal de Nivel Superior - Brasil (CAPES) - Finance Code 88881.846051/2023-01. We would like to thank www.105groupscience.com.
## Data availability
The data that supports the findings of this study are available within the article.
|
2309.15275 | Efficient Low-rank Backpropagation for Vision Transformer Adaptation | The increasing scale of vision transformers (ViT) has made the efficient
fine-tuning of these large models for specific needs a significant challenge in
various applications. This issue originates from the computationally demanding
matrix multiplications required during the backpropagation process through
linear layers in ViT. In this paper, we tackle this problem by proposing a new
Low-rank BackPropagation via Walsh-Hadamard Transformation (LBP-WHT) method.
Intuitively, LBP-WHT projects the gradient into a low-rank space and carries
out backpropagation. This approach substantially reduces the computation needed
for adapting ViT, as matrix multiplication in the low-rank space is far less
resource-intensive. We conduct extensive experiments with different models
(ViT, hybrid convolution-ViT model) on multiple datasets to demonstrate the
effectiveness of our method. For instance, when adapting an EfficientFormer-L1
model on CIFAR100, our LBP-WHT achieves 10.4% higher accuracy than the
state-of-the-art baseline, while requiring 9 MFLOPs less computation. As the
first work to accelerate ViT adaptation with low-rank backpropagation, our
LBP-WHT method is complementary to many prior efforts and can be combined with
them for better performance. | Yuedong Yang, Hung-Yueh Chiang, Guihong Li, Diana Marculescu, Radu Marculescu | 2023-09-26T21:27:55Z | http://arxiv.org/abs/2309.15275v1 | # Efficient Low-rank Backpropagation for Vision Transformer Adaptation
###### Abstract
The increasing scale of vision transformers (ViT) has made the efficient fine-tuning of these large models for specific needs a significant challenge in various applications. This issue originates from the computationally demanding matrix multiplications required during the backpropagation process through linear layers in ViT. In this paper, we tackle this problem by proposing a new Low-rank Back-Propagation via Walsh-Hadamard Transformation (LBP-WHT) method. Intuitively, LBP-WHT projects the gradient into a low-rank space and carries out backpropagation. This approach substantially reduces the computation needed for adapting ViT, as matrix multiplication in the low-rank space is far less resource-intensive. We conduct extensive experiments with different models (ViT, hybrid convolution-ViT model) on multiple datasets to demonstrate the effectiveness of our method. For instance, when adapting an EfficientFormer-L1 model on CIFAR100, our LBP-WHT achieves 10.4% higher accuracy than the state-of-the-art baseline, while requiring 9 MFLOPs less computation. As the first work to accelerate ViT adaptation with low-rank backpropagation, our LBP-WHT method is complementary to many prior efforts and can be combined with them for better performance.
## 1 Introduction
Vision transformers (ViT) have emerged as the latest state-of-the-art tool in numerous general computer vision tasks [1; 2; 3; 4; 5; 6; 7]. However, tailoring these models to meet specific needs (_e.g._, new dataset with different distribution) can be challenging. Indeed, adapting ViT models via finetuning demands considerable computational resources and is often impractical for most edge applications. For instance, to maintain privacy, in federated learning [8; 9; 10], model adaptation is limited to users' personal edge devices (_e.g._, smartphones), where computational power is tightly restricted [11; 12].
The primary computational bottleneck arises from gradient propagation through the dense layers of ViT. Specifically, calculating gradients for layer weights and inputs requires two computationally-intensive matrix multiplications, given the gradient for output [13]. To tackle this issue, [14] tries to simplify matrix multiplications using low-rank reparametrization. However, this method only reduces the gradient computation for weights and **not** for inputs, thus limiting the overall speedup. This observation raises the following question:
_How can we decrease the computational cost for all operations, including gradient computations for weights and inputs, involved in backpropagation (BP) through any linear layer in the ViT model?_
To answer this question, we introduce a new _Low-rank BackPropagation via **W**alsh-**Hadamard** Transformation_ (LBP-WHT) method. As shown in Figure 1, our method intuitively performs BP for gradients w.r.t. inputs and weights in a low-rank space. To achieve this, we project the gradient w.r.t. the output into a low-rank space using WHT [15], then perform low-rank matrix multiplications, and
finally project the results back. This way, all matrix multiplications occur in a low-rank space, hence the computational cost is significantly reduced. In summary, our contributions are as follows:
* We propose LBP-WHT, a new approach which greatly reduces the computational cost for adapting ViT while maintaining accuracy; our method lowers the computational barrier and enables adapting large ViT models on resource constrained edge devices.
* LBP-WHT is the first work accelerating ViT training by low-rank BP; thus, LBP-WHT is orthogonal to prior works and can be combined with them for a better performance. Additionally, LBP-WHT offers abundant flexibility that can provide a good tradeoff between accuracy and cost.
* Extensive experiments on multiple datasets demonstrate the effectiveness of our method. Indeed, LBP-WHT consistently outperforms the baseline methods both in accuracy and speed. For instance, LBP-WHT achieves 10.4% higher accuracy, while requiring 9 MFLOPs less computation than [14] for training EfficientFormer-L1 on CIFAR100 dataset.
The paper is organized as follows. Section 2 formulates the problem associated with BP for linear layers. Section 3 presents our method LBP-WHT in detail. Experimental results are presented in Section 4. Section 5 reviews relevant work. Finally, Section 6 summarizes our main contributions.
## 2 Problem Formulation
**Naming conventions:** In this paper, we treat all feature maps as matrices composed of real numbers, with dimensions \(\mathbb{R}^{C\times L}\), where \(C\) represents the number of rows and \(L\) denotes the number of columns. Each row in the matrix is regarded as a "channel" consisting of \(L\) elements, and there are a total of \(C\) channels in the feature map. We use subscripts to identify specific variables, such as \(C_{x}\) for the number of channels associated with variable \(x\). Gradients with respect to \(x\) are denoted by \(g_{x}\), with the subscript indicating the target variable \(x\).
**Backpropagation for linear layers:** We focus on the BP process for linear layers, a crucial building block for vision transformers. Given an input \(x\in\mathbb{R}^{C_{x}\times L}\) and weights \(w\in\mathbb{R}^{C_{y}\times C_{x}}\), the forward propagation to compute the output \(y\in\mathbb{R}^{C_{y}\times L}\) can be expressed as:
\[y=x\cdot w^{T} \tag{1}\]
Therefore, as shown in Figure 1(a), given the gradient with respect to the output \(y\), _i.e._, \(g_{y}\in\mathbb{R}^{C_{y}\times L}\), the back-propagation for computing the gradient with respect to the weights \(w\), \(g_{w}\in\mathbb{R}^{C_{y}\times C_{x}}\), and the gradient with respect to the input \(x\), \(g_{x}\in\mathbb{R}^{C_{x}\times L}\), can be represented as two matrix multiplications:
\[g_{w}=g_{y}\cdot x,g_{x}=g_{y}\cdot w \tag{2}\]
The gradient w.r.t. the weight (\(g_{w}\)) is utilized for updating the weights \(w\), while the gradient w.r.t. the input (\(g_{x}\)) is employed for propagating the gradient to other layers. During the BP process, each matrix multiplication incurs a computational cost of \(2C_{x}C_{y}L\) FLOPs, which amounts to \(4C_{x}C_{y}L\) FLOPs, in total. Given that in ViT models, the number of channels (\(C_{x}\) and \(C_{y}\)) and the length of the input feature map (\(L\)) are substantial [1; 2; 3; 4; 5; 6; 7], the computational cost for BP becomes significant.
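In code, Equation 2 amounts to two matrix products. A minimal NumPy sketch is shown below; it uses a row-major \((L, C)\) layout, which simply transposes the channel-first expressions used in the text.

```
import numpy as np

def linear_backward(x, w, g_y):
    # x: (L, C_x) input, w: (C_y, C_x) weight, g_y: (L, C_y) gradient w.r.t. the output.
    g_w = g_y.T @ x   # gradient w.r.t. the weights, shape (C_y, C_x)
    g_x = g_y @ w     # gradient w.r.t. the input,  shape (L, C_x)
    return g_w, g_x

L, C_x, C_y = 49, 3072, 768
rng = np.random.default_rng(0)
g_w, g_x = linear_backward(rng.normal(size=(L, C_x)),
                           rng.normal(size=(C_y, C_x)),
                           rng.normal(size=(L, C_y)))
print(g_w.shape, g_x.shape)
```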
**Low-rank backpropagation:** As shown in Figure 1 and 1(b), we propose reducing the computational cost for both matrix multiplications by employing low-rank approximations. Specifically, we first project variables into a low-rank space as follows:
\[\hat{g}_{y}=p(g_{y}),\hat{x}=p(x) \tag{3}\]
Here, \(\hat{g}_{y}\in\mathbb{R}^{C_{y}\times r}\) and \(\hat{x}\in\mathbb{R}^{C_{x}\times r}\) represent the low-rank space projections (\(r<<L\)) for the gradient with respect to the output (\(g_{y}\)) and input \(x\), respectively. The projection function \(p(\cdot)\) will be introduced in the next section.
Next, we execute the BP through the linear layer in the low-rank spaces as follows:
\[\hat{g}_{w}=\hat{g}_{y}\cdot\hat{x},\hat{g}_{x}=\hat{g}_{y}\cdot w \tag{4}\]
Figure 1: Our LBP-WHT. “Mat Mul” is short for “Matrix Multiplication”.

Finally, we project the low-rank gradient with respect to the input (\(\hat{g}_{x}\)) back into its original space. The reverse projection for \(\hat{g}_{w}\) can be omitted as it already exists in the same space \(\mathbb{R}^{C_{y}\times C_{x}}\) as the target \(g_{w}\). For \(\hat{g}_{x}\), the reverse projection is accomplished using the function \(p^{-1}(\cdot)\), the details of which will be presented later:
\[\tilde{g}_{w}=\hat{g}_{w},\tilde{g}_{x}=p^{-1}(\hat{g}_{x}) \tag{5}\]
Here, \(\tilde{g}_{w}\) and \(\tilde{g}_{x}\) represent the resulting gradients for weights and input. As these gradients are generated through an approximated back-propagation process rather than the standard BP, we denote these variables with tildes.
## 3 LBP-WHT: Low-rank BackPropagation via WHT
As shown in Figure 2b, intuitively, we reduce the computational cost by performing back-propagation in a low-rank space, as described in Equation 4. For instance, using a rank \(r\) approximation, each matrix multiplication requires \(2C_{x}C_{y}r\) FLOPs, which can be substantially smaller than \(2C_{x}C_{y}L\) when \(r<\!\!<L\). Nevertheless, this approach necessitates two additional steps, projection and reverse projection (as illustrated in Equation 3 and 5), which introduce some computational overhead. Furthermore, the low-rank projection may add noise and potentially diminish the quality of training. To address these concerns, our method incorporates a low-overhead projection function based on the WHT and tackles the second issue by selecting an appropriate set of WHT bases.
WHT is a generalized Fourier transformation. Figure 2c displays the transformation basis for an order-4 WHT. For an order-\(n\) 2D WHT, there are \(n\times n\) bases \(B_{i,j}\), with each basis being an \(n\times n\) matrix containing only \(+1\) and \(-1\). Of note, in the context of ViT, 2D feature maps are flattened into 1D maps, so we utilize a flattened WHT base--a vector with a length of \(n^{2}\), _i.e._, \(B_{i,j}\in\mathbb{Z}^{n^{2}\times 1},1\leq i,j\leq n\). WHT possesses four properties that make it advantageous for us:
* The transformation bases are complete.
* The transformation bases are orthogonal.
* The transformation bases contain only \(+1\) and \(-1\).
* The transformation cost can be reduced via fast WHT algorithm with \(O(n\log n)\) complexity.
Figure 2: **(a-b)** Workflows for BP through a linear layer utilizing (a) the conventional method and (b) our LBP-WHT method. The intuition is to reduce the computation cost for BP by performing matrix multiplication in a low-rank space. To achieve this, we first project variables into a low-rank space using WHT \(p(\cdot)\), then carry out efficient matrix multiplications, and finally project them back using \(p^{-1}(\cdot)\), where both \(p\) and \(p^{-1}\) are implemented with WHT. **(c)** Bases \(B_{i,j}\) for order-4 2D WHT. White and Black represent +1 and -1, respectively. Of note, in the context of ViT, 2D feature maps are flattened into 1D, so we utilize a flattened version of these bases.

The first property (completeness) allows WHT to perform transformations ranging from lossy (when few bases are activated) to lossless (when all bases are activated). This grants flexibility in exploring the trade-off between efficiency and accuracy. The second property ensures that any variable has precisely one projection result, obtainable via matrix multiplication. For instance, the projection function for \(g_{y}\) (Equation 3) with basis \(B_{i,j}\) can be expressed as \(p(g_{y})=g_{y}\cdot B_{i,j}\). Likewise, the reverse projection can also be implemented using a simple matrix multiplication. The third and final properties demonstrate the efficiency of WHT implementation, requiring only \(O(n\log n)\) additions/subtractions and no multiplications [16].
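For illustration, the order-\(n\) bases shown in Figure 2(c) can be generated from a Sylvester-type Hadamard matrix. The sketch below (assuming \(n\) is a power of two) re-orders the rows by their number of sign changes so that small indices correspond to low frequencies; this sequency ordering is an assumption about the convention used in the figure.

```
import numpy as np

def wht_bases_2d(n):
    # All n*n flattened 2D WHT bases B_{i,j} for an order-n transform (n a power of two).
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])          # Sylvester construction, entries are +1/-1
    sequency = (np.diff(H, axis=1) != 0).sum(axis=1)
    H = H[np.argsort(sequency)]                  # low-frequency rows first
    # B_{i,j} is the outer product of rows i and j, flattened to a length n*n vector
    return {(i, j): np.outer(H[i], H[j]).reshape(-1) for i in range(n) for j in range(n)}

bases = wht_bases_2d(4)
print(bases[(0, 0)])   # the all-ones (lowest-frequency) basis of length 16
```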
### Low-rank Back-Propagation with WHT
Indeed, these four properties demonstrate that WHT is an ideal fit for our needs, offering both low overhead and high flexibility for selecting an appropriate set of bases. Therefore, we employ WHT as the projection function \(p(\cdot)\) and reverse projection function \(p^{-1}(\cdot)\) in Equations 3 and 5. More specifically, for an order-\(n\) WHT with a set of \(r\) bases chosen by an index set \(\mathcal{I}\), the projection function can be written as:
\[p(x)=\text{WHT}(x;\mathcal{I})=x\cdot\left(B_{i_{1},j_{1}}\quad B_{i_{2},j_{2} }\quad\cdots B_{i_{r},j_{r}}\right),\left(i_{k},j_{k}\right)\in\mathcal{I},1 \leq k\leq r \tag{6}\]
where \(\mathcal{I}=\{(i_{k},j_{k})|1\leq i_{k},j_{k}\leq n,1\leq k\leq r\}\) indicates which bases are activated. Similarly, the reverse projection function can be expressed as:
\[p^{-1}(x)=\text{WHT}^{-1}(x;\mathcal{I})=x\cdot\left(B_{i_{1},j_{1}}\quad B_{ i_{2},j_{2}}\quad\cdots B_{i_{r},j_{r}}\right)^{T},\left(i_{k},j_{k}\right) \in\mathcal{I},1\leq k\leq r \tag{7}\]
For simplicity, both Equations 6 and 7 are presented using the vanilla WHT algorithm with computational complexity \(O(n^{2})\), rather than the fast WHT algorithm with complexity \(O(n\log n)\). Consequently, our LBP-WHT algorithm can be summarized as Algorithm 1 also shown in Figure 1(b).
```
Input: input \(x\), weight \(w\), gradient w.r.t. output \(g_{y}\), selected WHT base indices \(\mathcal{I}\)
Output: approximated gradient w.r.t. input \(\tilde{g}_{x}\), approximated gradient w.r.t. weight \(\tilde{g}_{w}\)
\(\hat{x}\leftarrow p(x)=\text{WHT}(x;\mathcal{I})\)  \(\triangleright\) Projection to a low-rank space with WHT (Equation 3)
\(\hat{g}_{y}\leftarrow p(g_{y})=\text{WHT}(g_{y};\mathcal{I})\)
\(\hat{g}_{w}\leftarrow\hat{g}_{y}^{T}\cdot\hat{x}\)  \(\triangleright\) Efficient matrix multiplication in a low-rank space (Equation 4)
\(\hat{g}_{x}\leftarrow\hat{g}_{y}\cdot w\)
\(\tilde{g}_{x}\leftarrow p^{-1}(\hat{g}_{x})=\text{WHT}^{-1}(\hat{g}_{x};\mathcal{I})\)  \(\triangleright\) Reverse projection to a full-rank space (Equation 5)
\(\tilde{g}_{w}\leftarrow\hat{g}_{w}\)  \(\triangleright\) Reverse projection skipped, since \(\hat{g}_{w}\) is already in the full-rank space
```
**Algorithm 1** Backpropagation through a linear layer with LBP-WHT.
Given the input for BP, we first project \(x\) and \(g_{y}\) into a low-rank space (Equation 3), then we perform the matrix multiplications (Equation 4), and lastly we project the results back (Equation 5).
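A self-contained NumPy sketch of Algorithm 1 is shown below. It uses an explicit orthonormal basis matrix rather than the fast WHT, and a row-major \((L, C)\) layout; both are simplifications for illustration. The final check verifies only the lossless (rank-\(L\)) limit, where the result must coincide with standard backpropagation.

```
import numpy as np

def sylvester_hadamard(m):
    # Hadamard matrix of order m (m must be a power of two).
    H = np.array([[1.0]])
    while H.shape[0] < m:
        H = np.block([[H, H], [H, -H]])
    return H

def lbp_wht_linear_backward(x, w, g_y, B):
    # x: (L, C_x) input, w: (C_y, C_x) weight, g_y: (L, C_y) gradient w.r.t. the output,
    # B: (L, r) matrix whose orthonormal columns are the selected (scaled) WHT bases.
    x_hat = B.T @ x          # projection to the low-rank space, Eq. (3)
    g_y_hat = B.T @ g_y
    g_w = g_y_hat.T @ x_hat  # low-rank matrix multiplications, Eq. (4)
    g_x_hat = g_y_hat @ w
    g_x = B @ g_x_hat        # reverse projection, Eq. (5)
    return g_w, g_x

# Sanity check in the lossless limit: with all L bases the result equals exact BP.
L, C_x, C_y = 16, 8, 4
rng = np.random.default_rng(0)
x, w, g_y = rng.normal(size=(L, C_x)), rng.normal(size=(C_y, C_x)), rng.normal(size=(L, C_y))
B = sylvester_hadamard(L) / np.sqrt(L)
g_w, g_x = lbp_wht_linear_backward(x, w, g_y, B)
print(np.allclose(g_w, g_y.T @ x), np.allclose(g_x, g_y @ w))
```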
### WHT Bases Selection
Here we explore two types of basis selection strategies: low-pass and low-heuristic-error.
**Low-pass (LP) Base Selection:** Natural images have strong spatial locality, _i.e._, pronounced low-frequency components [17; 18]. We take advantage of this feature and choose bases with stronger low-frequency responses, which have smaller indices as illustrated in Figure 1(c). More specifically, we consider both \(L_{1}\)-based and \(L_{\infty}\)-based low-pass basis selection strategies (\(\text{LP}_{L_{1}}\) and \(\text{LP}_{L_{\infty}}\)):
\[\mathcal{I}_{L_{1}}=\{(i_{k},j_{k})\;\big{|}\;|i_{k}|+|j_{k}|\leq r_{L_{1}},\;1 \leq i_{k},j_{k}\leq n,\;\frac{1}{2}r_{L_{1}}(1+r_{L_{1}})=r\},\text{LP}_{L_{1 }}\text{selection} \tag{8}\]
\[\mathcal{I}_{L_{\infty}}=\{(i_{k},j_{k})\;\big{|}\;\max(i_{k},j_{k})\leq r_{L_{ \infty}},\;1\leq i_{k},\;j_{k}\leq n,\;r_{L_{\infty}}^{2}=r\},\text{LP}_{L_{ \infty}}\text{selection} \tag{9}\]
\(\mathcal{I}_{L_{1}}\) and \(\mathcal{I}_{L_{\infty}}\) are the index sets for selecting WHT bases, as described in Section 3.1.
**Low-heuristic-error (LHE) Base Selection:** According to Parseval's Theorem [19], WHT preserves the signal energy, so by selecting the WHT bases with the top-\(r\) strongest responses, we can preserve most energy during low-rank projection and minimize the error. Since profiling the energy for all WHT bases on all training steps is expensive, we profile the energy for all WHT bases only for a small number of training steps and select the bases with the top-\(r\) energy.
Considering that the \(L_{1}\)-based low-pass basis selection has a much lower profiling overhead than the low-heuristic-error basis selection and provides finer granularity in balancing accuracy and efficiency, we primarily focus on the \(\text{LP}_{L_{1}}\) selection method and explore the other two in Section 4.5.
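The two low-pass index sets of Equations 8 and 9 can be enumerated directly. The sketch below uses 0-based indices (the equations above are written with 1-based indices), which yields the same rank counts, e.g., rank 10 for LP\({}_{L_{1}}\)-4 and rank 9 for LP\({}_{L_{\infty}}\)-3.

```
def lp_l1_indices(n, k):
    # LP_L1-k selection (0-based): bases with i + j < k, giving rank k*(k+1)/2, cf. Equation 8.
    return [(i, j) for i in range(n) for j in range(n) if i + j < k]

def lp_linf_indices(n, k):
    # LP_Linf-k selection (0-based): bases with max(i, j) < k, giving rank k*k, cf. Equation 9.
    return [(i, j) for i in range(n) for j in range(n) if max(i, j) < k]

print(len(lp_l1_indices(8, 4)), len(lp_linf_indices(8, 3)))   # 10 and 9
```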
### Overhead Analysis
Since the computational cost for the fast WHT algorithm depends on the basis selection, we simplify the analysis in this section by considering the matrix multiplication-based vanilla WHT algorithm, as shown in Equations 6 and 7. Table 1 presents the computation requirements for a linear layer with input and output channels \(C_{x}\) and \(C_{y}\), feature map size \(L\), and the rank for low-rank WHT approximation \(r\). Our LBP-WHT achieves an \(\frac{L}{r}\)-times speedup with an overhead of \((2C_{x}+C_{y})Lr\) FLOPs, which is only \(\frac{(2C_{x}+C_{y})Lr}{4C_{x}C_{y}L}\), i.e., \((\frac{1}{C_{y}}+\frac{1}{2C_{x}})\frac{r}{2}\), of the total computation required by vanilla BP. Given that ViT typically has a large number of channels, the overhead is very small.
For instance, the final linear layer in SwinV2-small [1] consists of 3072 input channels, 768 output channels, and a feature map size of 49, which means \(C_{x}=3072\), \(C_{y}=768\), and \(L=49\). As per Table 1, conventional backpropagation (BP) requires 462.3 MFLOPs. In contrast, our Low-Rank Backpropagation with WHT (LBP-WHT) method, assuming a rank of 8 (\(r=8\)), needs only 78.2 MFLOPs, which is roughly 16.9% of the computation required by vanilla BP.
Breaking down the 78.2 MFLOPs for LBP-WHT, we see that 1.5 MFLOPs are needed for the low-rank projection, 75.5 MFLOPs for BP in the low-rank space, and 1.2 MFLOPs for the reverse projection. The combined overhead is 2.7 MFLOPs, accounting for just 0.6% of vanilla BP's computation and 3.5% of LBP-WHT's computation. This demonstrates that with WHT, we can significantly reduce the computation for BP while incurring negligible overhead for low-rank projection.
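The arithmetic in this example follows directly from Table 1 and can be verified with a few lines of Python:

```
def bp_flops(C_x, C_y, L, r):
    # FLOP counts from Table 1 (vanilla BP vs. LBP-WHT with a rank-r projection).
    vanilla = 4 * C_x * C_y * L
    projection = (C_x + C_y) * L * r          # overhead: project x and g_y
    low_rank_mm = 4 * C_x * C_y * r           # backpropagation in the low-rank space
    reverse_projection = C_x * L * r          # overhead: project g_x back
    return vanilla, projection + low_rank_mm + reverse_projection

# Last linear layer of SwinV2-small with rank 8: roughly 462 MFLOPs vs. 78 MFLOPs.
print(bp_flops(3072, 768, 49, 8))
```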
## 4 Experimental Results
In this section, we first present our experimental results on image classification and semantic segmentation tasks. Then, we explore the impact of different ranks for low-rank projection and different base selection strategies. Lastly, we present our preliminary results for deploying our methods on real edge devices in the supplementary material.
### Experimental Setup
**Environment:** We set up our environment with PyTorch 1.13, MMClassification v0.25, and MMSegmentation v0.30. Models are trained with an NVIDIA A6000 GPU.
**Classification:** We conduct experiments for image classification following [20]. We use ImageNet [21]-pretrained ViTs and finetune them on six different datasets, namely, CIFAR100 [22] (CF100), CIFAR10 [22] (CF10), Cars [23], Flowers [24], Food [25], and Pets [26]. We standardize the image resolution across all datasets to 224\(\times\)224. Each model is finetuned for 50 epochs using the AdamW [27] optimizer and a batch size of 64. The learning rate is adjusted for each dataset based on the performance of EfficientFormer-L1 [28] with vanilla BP.
**Semantic Segmentation:** We use the ADE20K [29]-pretrained Segformer-mit-b0 [30] model and finetune it on two datasets, Cityscapes [31] (City) and the enhanced Pascal-VOC 2012 [32] (VOC12A). The images are downscaled and cropped to a size of \(512\times 512\) pixels for training. Models are finetuned for 20,000 steps using the AdamW optimizer and a batch size of 8.
**Partial Training:** We primarily report on the results of training the final stage of the ViT using various methods, a common approach in transfer learning to reduce the computational cost [33, 34, 18, 35, 36]. More results for full training are included in the supplementary material.
**Baselines Comparisons:** We compare our results against three baseline methods: Full BP, "LoRA", and "LoRA-all". Full BP refers to training the model with standard full-rank backpropagation. "LoRA" and "LoRA-all" are methods derived from [14]. "LoRA" strictly follows [14], which uses low-rank reparametrization solely in the ViT's attention modules, while "LoRA-all" applies this method to all linear layers. For hybrid CNN-ViT models, where the attention modules are usually only in the final stage, we use "LoRA-all" for full training.
| Operation | FLOPs |
| --- | --- |
| Vanilla BP | \(4C_{x}C_{y}L\) |
| Projection | \((C_{x}+C_{y})Lr\) |
| Low-rank MM | \(4C_{x}C_{y}r\) |
| Reverse Projection | \(C_{x}Lr\) |
Table 1: Computation required by vanilla BP and components in our LBP-WHT. We consider the projection and reverse projection as overhead. “MM” is short for “Matrix Multiplication”.
**Computation Measurements and Preliminary Deployment Results:** To determine the computational requirements of different models and methods, we run model training on an Intel 11900K CPU and measure the exact FLOPs using the embedded performance tools "perf" in the Linux kernel v5.15.87. For preliminary deployment results, we test our method on the last two linear layers of EfficientFormer-L1, using OpenBLAS and CuBLAS for CPU and GPU testing respectively on an NVIDIA Jetson Nano. The results for deployment are reported in the supplementary material.
### Image Classification Results
Table 2 demonstrates the effectiveness of our LBP-WHT method in adapting ViT for image classification tasks. Here are some more specific observations:
**Comparison with LoRA-based baselines:** Our LBP-WHT method consistently surpasses the LoRA-based method across all eight datasets in both partial and full training modes. For instance, when only training the final stage of EfficientFormer-L1, LBP-WHT using LP\({}_{L_{1}}\)-2 base selection requires **8.9 MFLOPs fewer computations** than LoRA, yet achieves **10% greater accuracy** on the CIFAR100 dataset. When the entire model is trained, the accuracy difference is smaller, but LBP-WHT still outperforms the LoRA-based method. For instance, in comparison to LoRA-all, LBP-WHT using LP\({}_{L_{1}}\)-7 base selection requires less computation (66.68 MFLOPs), but still improves accuracy by 2% on CIFAR100 when training the EfficientFormerV2-S0 model.
**Comparison with traditional full-rank BP:** With LP\({}_{L_{1}}\)-8 base selection, our LBP-WHT method either matches or surpasses the accuracy of full-rank BP while only requiring about 80% of the total computation.
\begin{table}
\begin{tabular}{c c c c c c c c c c c} \hline \hline \multicolumn{1}{c}{} & \multicolumn{6}{c}{**Partial Training: Training the Last Stage**} \\ \hline \multicolumn{1}{c}{**Model**} & \multicolumn{1}{c}{**Method**} & \multicolumn{1}{c}{**R**} & \multicolumn{1}{c}{**Speedup**} & \multicolumn{1}{c}{**mAcc**} & \multicolumn{1}{c}{**MFLOPs**} & \multicolumn{1}{c}{**CF100**} & \multicolumn{1}{c}{**CF10**} & \multicolumn{1}{c}{**Cars**} & \multicolumn{1}{c}{**Flowers**} & \multicolumn{1}{c}{**Food**} & \multicolumn{1}{c}{**Pets**} \\ \hline \multirow{4}{*}{Efficient} & Full BP & - & 1.0\(\times\) & 88.66 & 1685.01 & 79.28 & 95.23 & 84.80 & 95.50 & 84.04 & 93.13 \\ & \multirow{2}{*}{LoRA-all} & 8 & 6.9\(\times\) & 79.59 & 242.61 & 65.28 & 87.40 & 65.76 & 90.16 & 76.46 & 92.50 \\ & & 8.1\(\times\) & 85.07 & 976.50 & 76.92 & 94.38 & 76.84 & 93.56 & 81.50 & 92.64 \\ & \multirow{2}{*}{(Hybrid)} & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\clubsuit\)} & 7.2\(\times\) & **83.56** & 233.62 & 75.61 & 93.35 & 76.36 & 93.07 & 79.65 & 92.34 \\ & & & **3.5\(\times\)** & **87.76** & 480.00 & 78.27 & 94.60 & 82.60 & 95.53 & 82.37 & 93.16 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\clubsuit\)} & 36 & 1.2\(\times\) & **86.62** & 1397.02 & 79.34 & 95.51 & 84.57 & 95.58 & 83.98 & 92.94 \\ \hline \multirow{4}{*}{Efficient} & Full BP & - & 1.0\(\times\) & 91.91 & 11071.73 & 86.40 & 97.61 & 87.48 & 97.19 & 88.85 & 94.22 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\clubsuit\)} & 8 & 2.0\(\times\) & 88.45 & 5520.52 & 81.66 & 95.44 & 78.95 & 95.82 & 85.15 & 93.65 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\clubsuit\)} & 8 & 1.9\(\times\) & 90.36 & 59934.0 & 85.09 & 97.10 & 83.66 & 96.16 & 86.60 & 93.54 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\clubsuit\)} & 9.2\(\times\) & **93.86** & 112.03 & 87.38 & 79.67 & 83.02 & 96.35 & 83.36 & 93.84 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\clubsuit\)} & 10 & **3.8\(\times\)** & **91.16** & 2905.16 & 85.10 & 92.72 & 86.01 & 97.14 & 87.48 & 94.03 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\ubsuit\)} & 36 & 1.2\(\times\) & **91.80** & 9241.53 & 86.19 & 97.62 & 87.32 & 97.40 & 87.77 & 94.47 \\ \hline \multirow{4}{*}{Efficient} & Full BP & - & 1.0\(\times\) & 84.27 & 454.64 & 72.37 & 92.63 & 75.90 & 92.73 & 81.44 & 90.52 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\clubsuit\)} & 8 & 2.2\(\times\) & 74.42 & 206.29 & 60.74 & 84.89 & 52.99 & 86.47 & 72.32 & 89.18 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\ubsuit\)} & 8 & 1.5\(\times\) & 78.94 & 313.19 & 65.51 & 88.95 & 63.49 & 88.94 & 76.88 & 89.89 \\ & \multirow{2}{*}{\(\text{SD}\)} & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\ubsuit\)} & 3 & **4.5\(\times\)** & **77.55** & 99.94 & 63.75 & 88.68 & 59.02 & 89.51 & 74.27 & 87.49 \\ \cline{2-11} & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\ubsuit\)} & 10 & 2.7\(\times\) & **81.29** & 168.60 & 69.03 & 90.88 & 83.40 & 90.73 & 79.45 & 89.29 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\ubsuit\)} & 36 & 1.1\(\times\) & **83.78** & 405.84 & 71.90 & 92.29 & 74.31 & 92.60 & 81.07 & 90.52 \\ \hline \multirow{4}{*}{Efficient} & Full BP & - & 1.0\(\times\) & 91.03 & 3605.86 & 82.26 & 96.13 & 88.78 & 96.80 & 87.63 & 94.60 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\ubsuit\)} & 8 & 2.5\(\times\) & 84.74 & 1469.66 & 74.35 & 92.94 & 70.99 & 92.97 & 82.81 & 94.36 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\ubsuit\)} & 8 & 1.7\(\times\) & 87.94 & 20924.7 & 87.97 & 94.99 & 80.39 & 94.32 & 84.94 & 94.03 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\ubsuit\)} & **6.8\(\times\)** & **86.88** 
& 533.61 & 78.00 & 94.28 & 76.05 & 94.36 & 84.14 & 94.47 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\clubsuit\)} & 10 & **3.3\(\times\)** & **89.47** & 1088.06 & 80.15 & 95.54 & 84.64 & 95.97 & 85.85 & 94.66 \\ & \multirow{2}{*}{\(\text{LP}_{L_{1}}\)-\(\ubsuit\)} & 36 & 1.1\(\times\) & **90.79** & 1305.05 & 82.24 & 96.02 & 87.34 &
When using smaller ranks, LBP-WHT significantly reduces the computational cost with only a minor accuracy penalty. For example, when training the final stage of EfficientFormer-L1 using LP\({}_{L_{1}}\)-4 base selection, LBP-WHT achieves a \(3.5\times\) speedup with just a 1% loss in accuracy on CIFAR100. With LP\({}_{L_{1}}\)-8 base selection, LBP-WHT achieves even **higher accuracy** (79.34%) with a \(1.2\times\) **speedup**.
These results underscore the merits of our method. As shown in Table 2, our method achieves computational savings by systematically reducing the computational cost for **all** operations during backpropagation, including the gradient computation for both input and weight. Specifically, when we apply a similar rank for LoRA-all and LBP-WHT, we anticipate that both methods will have similar computational costs for computing the weight gradient. However, as LoRA-all _cannot_ speed up the gradient computation for the input while LBP-WHT can, our LBP-WHT method requires only half the total computation of LoRA-all. Consequently, for a similar computational budget, LBP-WHT can employ a higher rank for low-rank projection, thus leading to a higher accuracy. For example, when training the entire EfficientFormerV2-S0 model, LBP-WHT with LP\({}_{L_{1}}\)-4 (rank 10) only requires 1187 MFLOPs, which is 62% of the computational cost for LoRA-all. Thus, for a similar budget, LBP-WHT can use a rank 28 projection (LP\({}_{L_{1}}\)-7) and achieve a higher accuracy.
### Semantic Segmentation
Table 3 presents the experimental results for adapting the ADE20K-pretrained Segformer model on the Cityscapes and augmented Pascal VOC 2012 datasets. Our LBP-WHT achieves better results in most cases. For instance, when partially training on the Cityscapes dataset, our approach using LP\({}_{L_{1}}\)-2 base selection achieves a mIoU score approximately 0.9% higher than that of LoRA-all. Moreover, it only requires 1481.9 MFLOPs, which is 4.2\(\times\) faster. These findings not only further validate the efficacy of our method, but also demonstrate its broad applicability across key computer vision tasks.
### Exploration 1: Different Ranks for Low-rank Projection in LBP-WHT
Figure 3 shows the accuracy achieved when adapting ImageNet-pretrained ViTs for CIFAR100, with varying ranks for low-rank model adaptation. Our observations from this figure are as follows:
\begin{table}
\begin{tabular}{l c|c c c|c c c c} \hline \multicolumn{1}{c}{**Partial Training: Training Last Stage + Decoder**} & \multicolumn{5}{c}{**Full Training**} \\ \hline
**Method** & **R** & **MFLOPs** & **City** & **OC12A** & **Method** & **R** & **MFLOPs** & **City** & **VOC12A** \\ \hline Full BP & - & 10052.00 & 62.85 & 69.30 & Full BP & - & 16700.26 & 67.37 & 70.84 \\ LoRA & 8 & 5854.61 & 51.43 & 58.18 & LoRA & 8 & 11976.46 & 62.57 & 58.18 \\ LoRA-all & 8 & 6262.01 & 58.07 & 66.26 & LoRA-all & 8 & 11971.13 & 65.74 & 67.82 \\ \(\mathbf{LP}_{L_{1}}\)-2\(\boldsymbol{\times}\) & 3 & **1481.94** & **58.95** & **67.93** & LP\({}_{L_{1}}\)-2 & 3 & **5746.54** & 61.57 & **67.93** \\ LP\({}_{L_{1}}\)-4\(\boldsymbol{\times}\) & 10 & **2725.39** & **60.97** & **68.85** & LP\({}_{L_{1}}\)-4\(\boldsymbol{\times}\) & 10 & **7295.52** & **64.72** & **68.85** \\ LP\({}_{L_{1}}\)-8 & 36 & 7308.45 & **62.68** & **68.95** & LP\({}_{L_{1}}\)-8 & 36 & 13086.06 & **66.17** & **68.95** \\ \hline \end{tabular}
\end{table}
Table 3: Experimental results for semantic segmentation. Results are highlighted as in Table2.
Figure 3: Accuracy and computation for training the last stage of different models with different ranks on CIFAR100 dataset. Our method consistently outperforms the baseline LoRA-all.
1. Our LBP-WHT method consistently outperforms the LoRA-all method, _i.e._, for a similar level of computation, LBP-WHT yields higher accuracy.
2. By altering the rank, LBP-WHT provides a broader range of cost options than the baseline method.
3. LBP-WHT's accuracy monotonically improves as more ranks are employed for projection.
4. For all models with our LBP-WHT method, a generally concave accuracy-computation curve is observed. This indicates strong diminishing returns in using larger ranks.
5. LBP-WHT with \(\text{LP}_{L_{1}}\)-6 base selection achieves an accuracy very close to that of full BP.
Our first observation further confirms the superior performance of our method. The second observation indicates the broad applicability of our method. For instance, for edge devices with limited computational budgets, like Raspberry Pi, we can employ LBP-WHT with a lower rank to reduce computational cost. On the other hand, for more powerful devices, such as personal desktops equipped with GPUs, a larger rank can be chosen to enhance accuracy. This ensures that users with various computational resources and constraints can benefit from our method.
The last three observations offer guidelines for rank selection with our LBP-WHT method.
**With strict computational constraints:** Given our third observation above, if there is a hard limit on the maximum number of FLOPs allocated for training, selecting the rank for LBP-WHT is straightforward: we simply opt for the largest possible number of ranks, which in most cases will yield the highest possible accuracy.
**Without strict computational constraints:** Our final two observations suggest that training efficiency can be characterized by the marginal accuracy, or the slope of the accuracy-computation curve. As shown in Figure 4, before \(\text{LP}_{L_{1}}\)-4, the marginal accuracy is significantly greater than zero. However, after \(\text{LP}_{L_{1}}\)-6, the marginal accuracy is very close to zero. This implies that choosing fewer ranks than \(\text{LP}_{L_{1}}\)-4 or more than \(\text{LP}_{L_{1}}\)-6 may not be advantageous, as it could either forgo the opportunity to achieve good performance with a small amount of computation or waste substantial computation with little to no observable benefit. Thus, a good rank selection empirically lies between \(\text{LP}_{L_{1}}\)-4 and 6.
### Exploration 2: Different Bases Selection Method
| Method | Rank | CF100 | CF10 |
| --- | --- | --- | --- |
| \(\text{LP}_{L_{1}}\) | 10 | 78.27 | 94.60 |
| \(\text{LP}_{L_{\infty}}\) | 9 | 77.64 | 94.30 |
| LHE | 10 | 78.06 | 94.60 |

Table 4: Experimental results for adapting EfficientFormer-L1 on CIFAR100 and CIFAR10 with different base selection methods. Accuracy is in percentages (%).

Figure 4: Marginal accuracy: the slope of the accuracy-computation curve in Figure 3.

Figure 5: The WHT spectrum for the gradient w.r.t. layer output (\(g_{y}\)) collected from the last attention block of EfficientFormer-L1. The brightness of each pixel \((i,j)\) in the spectrum represents the energy preserved by the WHT base \(B_{i,j}\) during projection. A brighter pixel means larger energy. As shown in Figure 1(c), the WHT base with smaller indices corresponds to a lower frequency component.

Figure 5 shows the WHT spectrum for the gradient w.r.t. the layer output (\(g_{y}\)) collected from the last attention block in EfficientFormer-L1. We observe that most energy concentrates in the low-frequency area, _i.e._, the top-left corner, which supports the claim in Section 3.2 that natural images have strong spatial locality and strong low-frequency components. Furthermore, Figure 5 demonstrates that by choosing WHT
bases with low-frequency responses - that is, using the selection methods \(\text{LP}_{L_{1}}\) and \(\text{LP}_{L_{\infty}}\) - we can preserve most of the energy and minimize error during the low-rank projection. As indicated in Table 4, both of these low-pass base selection methods yield accuracy levels very similar to those achieved with the low-heuristic-error (LHE) method. The LHE method profiles the WHT spectrum and selects the WHT bases with the strongest response. Given that the \(\text{LP}_{L_{1}}\) base selection method eliminates the need for profiling (unlike LHE) and offers a more favorable balance between accuracy and cost compared to \(\text{LP}_{L_{\infty}}\), we have selected \(\text{LP}_{L_{1}}\) as the standard method for LBP-WHT.
### Limitation and Broader Impacts
**Full training with a small number of ranks:** As shown in Table 2, we find that the accuracy degradation is not negligible when using a small number of ranks for LBP-WHT full training. We attribute this to the accumulation of errors introduced by the low-rank projection during BP. We expect that an improved approximation method can perform even better. Of note, even with accuracy degradation, our method still consistently outperforms the baselines, _i.e._, LoRA-based methods.
**Broader Impact:** Our method greatly reduced the barrier for training ViTs. As a positive feature, our method may push the development of privacy-centric on-device training methods like federated learning; our method may also enable more researchers to test their ideas with powerful ViTs. On the other hand, our method may lower the barrier for irresponsible customization and use of ViT.
## 5 Related Work
**Low-rank Model Adaptation:**[14] proposes to speed up transformer training by attaching low-rank branches to the linear layers and training only those branches. More precisely, consider a linear layer with equation \(y=x\cdot w^{T}\), where \(x\) is the input, \(y\) is the output, and \(w\) is the weight. LoRA adds a branch that contains low-rank weights \(w_{A}\) and \(w_{B}\), forming \(y_{\text{LoRA}}=x\cdot w^{T}+x\cdot(w_{A}\cdot w_{B})^{T}\). The original weight \(w\) is kept frozen, while the appended weights \(w_{A}\) and \(w_{B}\) are trained. Since the ranks of \(w_{A}\) and \(w_{B}\) are much smaller than that of \(w\), the computation needed to calculate the gradients with respect to \(w_{A}\) and \(w_{B}\) is significantly reduced. However, this method does **not** decrease the computation for calculating the gradient w.r.t. \(x\), because the gradient still needs to be propagated through the weights \(w\) to \(x\); this considerably limits the performance of LoRA-based methods. As demonstrated in Figure 3, our LBP-WHT requires much less computation while achieving better accuracy than LoRA-based methods; this is because our method reduces the computation for all procedures in BP, including the gradient calculation for both input and weights.
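For concreteness, a minimal sketch of this reparametrisation is given below; the shapes follow the formula above, while the initialisation and the NumPy implementation are illustrative rather than taken from Ref. [14].

```
import numpy as np

class LoRALinear:
    # y = x w^T + x (w_A w_B)^T with w frozen and only w_A, w_B trained.
    def __init__(self, w, rank, rng=np.random.default_rng(0)):
        C_y, C_x = w.shape
        self.w = w                                  # frozen full-rank weight
        self.w_A = np.zeros((C_y, rank))            # low-rank factors (illustrative init)
        self.w_B = 0.01 * rng.normal(size=(rank, C_x))

    def forward(self, x):                           # x: (L, C_x)
        return x @ self.w.T + x @ (self.w_A @ self.w_B).T

layer = LoRALinear(np.random.default_rng(1).normal(size=(8, 16)), rank=2)
print(layer.forward(np.zeros((4, 16))).shape)       # (4, 8)
```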
**Other Orthogonal Methods for On-device Training:** Previous research on efficient on-device model adaptation falls into two main categories. The first category [33; 38; 39; 40; 41; 42; 43] suggests reducing the computational cost of arithmetic operations (addition and multiplication) in BP through quantization. The second category [20; 44] proposes to append a smaller neural network to the original model and accelerate adaptation by only training the attachment. To the best of our knowledge, our paper is the first to apply low-rank BP for ViT model adaptation. Therefore, our method, LBP-WHT, is distinct from previous research and can be combined with those methods for enhanced performance.
## 6 Conclusion
In this paper, we have addressed the problem of efficient model adaptation for ViT. We have proposed a novel low-rank BP technique designed to reduce the computational load associated with the propagation of gradients through the linear layers of ViT, which is a significant bottleneck when fine-tuning ViT. In Section 3, we introduced the LBP-WHT method as a solution to accelerate model adaptation. More specifically, LBP-WHT operates by projecting the gradient w.r.t. the output (\(g_{y}\)) into a low-rank space, performing matrix multiplications within this low-rank space, and then projecting the results back into the original space. Since all matrix multiplications occur in a low-rank space, the computational cost is significantly reduced. Additionally, thanks to the properties of the Walsh-Hadamard Transform (WHT), the overhead for these projections is minimal (as discussed in Section 3.3). Through extensive experiments in Section 4, we have demonstrated the efficiency and broad applicability of our method. Our LBP-WHT approach consistently outperforms existing methods with a significant speedup and higher accuracy. |
2305.19942 | Coupled cluster simulation of impulsive stimulated X-ray Raman
scattering | Time-dependent equation-of-motion coupled cluster (TD-EOM-CC) is used to
simulate impulsive stimulated x-ray Raman scattering (ISXRS) of ultrashort
laser pulses by neon, carbon monoxide, pyrrole, and p-aminophenol. The
TD-EOM-CC equations are expressed in the basis of field-free EOM-CC states,
where the calculation of the core-excited states is simplified through the use
of the core-valence separation (CVS) approximation. The transfer of electronic
population from the ground state to the core- and valence-excited states is
calculated for different numbers of included core- and valence-excited states,
as well as for electric field pulses with different polarizations and carrier
frequencies. The results indicate that Gaussian pulses can transfer significant
electronic populations to the valence states through the Raman process. The
sensitivity of this population transfer to the model parameters is analyzed.
The time-dependent electronic density for p-aminophenol is also showcased,
supporting the interpretation that ISXRS involves localized core excitations
and can be used to rapidly generate valence wavepackets. | Alice Balbi, Andreas S. Skeidsvoll, Henrik Koch | 2023-05-31T15:25:24Z | http://arxiv.org/abs/2305.19942v1 | # Coupled cluster simulation of impulsive stimulated x-ray Raman scattering
###### Abstract
Time-dependent equation-of-motion coupled cluster (TD-EOM-CC) is used to simulate impulsive stimulated x-ray Raman scattering (ISXRS) of ultrashort laser pulses by neon, carbon monoxide, pyrrole, and p-aminophenol. The TD-EOM-CC equations are expressed in the basis of field-free EOM-CC states, where the calculation of the core-excited states is simplified through the use of the core-valence separation (CVS) approximation. The transfer of electronic population from the ground state to the core- and valence-excited states is calculated for different numbers of included core- and valence-excited states, as well as for electric field pulses with different polarizations and carrier frequencies. The results indicate that Gaussian pulses can transfer significant electronic populations to the valence states through the Raman process. The sensitivity of this population transfer to the model parameters is analyzed. The time-dependent electronic density for p-aminophenol is also showcased, supporting the interpretation that ISXRS involves localized core excitations and can be used to rapidly generate valence wavepackets.
+
Footnote †: These authors contributed equally to this work.
## I Introduction
The ability to experimentally generate short and intense x-ray laser pulses has been a subject of significant interest in the field of x-ray science. Recent technological advances, specifically the realization of x-ray free electron lasers (XFELs) [1; 2] and new approaches based on high harmonic generation (HHG) [3; 4], have made it possible to generate x-ray laser pulses with high intensities and pulse durations as short as a few hundred and even tens of attoseconds [5]. This progress has enabled the development of new experimental techniques with unprecedented temporal resolution, facilitating the imaging and control of atoms and molecules on the time scale of electronic motion. [6; 7; 8; 9; 10; 11; 12] An important phenomenon in this context is impulsive stimulated x-ray Raman scattering (ISXRS), which is the extension of stimulated x-ray Raman scattering (SXRS) to the impulsive limit, where the duration of the external field interaction is short compared to the time scales of the subsequent evolution of the system.
In general, Raman scattering is a light-matter interaction phenomenon in which photons trigger an excitation of an atomic or molecular system followed by a deexcitation to an energy level different from the initial one. In the context of x-ray Raman scattering, the involved transitions are electronic in character. [6; 13; 14; 15; 16] We focus on the situation in which the electronic excitation in play is a core excitation, which is deexcited to a valence-excited state through the decay of a valence electron into a core vacancy, see Fig. 1. Core excitations are often localized on a specific atomic site and sensitive to the surrounding electronic environment, making them useful for the local initiation of charge migration. We treat the case where both the excitation and deexcitation are stimulated by an interaction with the same laser pulse. [17] This is achievable by utilizing a pulse with sufficient bandwidth to encompass the energy differences between the ground state and the core-excited states of interest, as well as between these core-excited states and the final valence-excited states. The interaction with such pulses is similar to the interactions occurring in the first experimental demonstration of electronic population transfer via ISXRS, which was made for the NO molecule at the Linac Coherent Light Source as recently as in 2020. [18]
The progress in experimental techniques has stimulated the development of methods for modeling electron dynamics based on the time-dependent Schrodinger equation. Real-time methods, which involve solving this equation in the real time domain, offer a particularly suitable approach for analyzing ultrafast phenomena. [19] Among these methods, real-time coupled cluster methods offer high accuracy and computational costs that scale polynomially with the system size.
Figure 1: Illustration of the steps in the ISXRS process. Initially, the molecule is in its ground state (left). An external x-ray pulse excites a core electron, leading to a core-excited state (middle). The same pulse can trigger the decay of a valence electron into the core vacancy, leading to a valence-excited state (right).
The time evolution is described by differential equations that can be solved using standard numerical integration techniques such as Runge-Kutta methods.
A specific subcategory of real-time coupled cluster methods is the time-dependent coupled cluster (TDCC) methods, [20; 21; 22; 23; 24; 25; 26; 27; 28] where the time dependence is parametrized by cluster amplitudes and Lagrange multipliers. [29; 30] These methods offer the advantage of size-extensivity at all levels of truncation. Another subcategory, the time-dependent equation-of-motion coupled cluster (TD-EOM-CC) methods, [31; 32; 33; 34; 35; 36] provides less potential for numerical issues compared to TDCC methods, [37] since the time dependence is parametrized by the linear coefficients used in EOM-CC methods and the cluster amplitudes remain fixed at their time-independent ground state values. [38; 39; 40]
In the basis of field-free EOM-CC states, the TD-EOM-CC method requires the predetermination of the excited states that are involved in the studied processes. Computationally, the exterior eigenvalue algorithms usually employed for calculating valence-excited states are inefficient for the calculation of the core-excited states often involved in x-ray interactions. This is because the core-excited states have large eigenvalues, and the states are embedded in an ionization (pseudo-)continuum. [41] A useful scheme for the study of core excitations is the core valence separation (CVS) scheme, which disregards all excitations that do not involve at least one core orbital. [42; 43] This allows for the approximate core-excited states to be calculated as the lowest energy states within the reduced excitation space.
In this article, we use the TD-EOM-CC method together with the CVS approximation to simulate the interaction of neon, carbon monoxide, pyrrole, and p-aminophenol with ultrashort laser pulses, and calculate the populations of the valence-excited states following ISXRS targeting molecular K-edges. The article is organized as follows. In Section II we briefly outline the theory behind the calculations. We provide details of the performed computations in Section III, and present and discuss the results in Section IV. Conclusions are presented in Section V.
## II Theory
The time-dependent system is described by the Hamiltonian
\[H(t)=H^{(0)}+V(t), \tag{1}\]
where \(H^{(0)}\) is the electronic Hamiltonian of the molecule in the Born-Oppenheimer approximation. We describe the interaction with the external laser field \(V(t)\) in the dipole approximation and length gauge,
\[V(t)=-\mathbf{d\cdot\mathcal{E}}(t), \tag{2}\]
where \(\mathbf{d}\) is the vector of Cartesian dipole operators, and \(\mathcal{E}(t)\) the Cartesian electric field vector.
The eigenstates of the field-free Hamiltonian,
\[\ket{\psi_{j}} =\sum_{\lambda}e^{T}\ket{\lambda}r_{\lambda j} \tag{3}\] \[\bra{\psi_{i}} =\sum_{\kappa}l_{i\kappa}\bra{\kappa}e^{-T} \tag{4}\]
can be found by first solving the ground state coupled cluster equations
\[\bra{\mu}e^{-T}H^{(0)}e^{T}\ket{\text{HF}}=0, \tag{5}\]
which determine the cluster amplitudes \(t_{\mu}\) in the cluster operator,
\[T=\sum_{\mu}t_{\mu}\tau_{\mu}. \tag{6}\]
Thereafter, the right and left vectors can be found as eigenvectors of the projected time-independent Schrodinger equation,
\[\sum_{\lambda}\bra{\kappa}e^{-T}H^{(0)}e^{T}\ket{\lambda}r_{ \lambda j} =r_{\kappa j}E_{j}, \tag{7}\] \[\sum_{\kappa}l_{i\kappa}\bra{\kappa}e^{-T}H^{(0)}e^{T}\ket{ \lambda} =E_{i}l_{i\lambda}. \tag{8}\]
These equations lead to the following eigenvalue problems [44]
\[\mathbf{AR}_{j} =\mathbf{R}_{j}\Delta E_{j}, \tag{9}\] \[\mathbf{L}_{i}^{T}\mathbf{A} =\Delta E_{i}\mathbf{L}_{i}^{T}, \tag{10}\]
where \(A_{\mu\nu}=\bra{\mu}e^{-T}\left[H^{(0)},\tau_{\nu}\right]e^{T}\ket{\text{HF}}\), \(L_{i\mu}=l_{i\mu}\) and \(R_{\nu j}=r_{\nu j}\) for \(\mu>0\) and \(\nu>0\). The excitation energy \(\Delta E_{j}=E_{j}-E_{0}\) is given as the difference between the excited state energy and the ground state energy
\[E_{0}=\bra{\text{HF}}e^{-T}He^{T}\ket{\text{HF}}. \tag{11}\]
The TD-EOM-CC ket and bra states can be expanded in the field-free EOM-CC kets and bras, \(\ket{\Psi(t)}=\sum_{j}\ket{\psi_{j}}c_{j}(t)\) and \(\bra{\widetilde{\Psi}(t)}=\sum_{i}b_{i}(t)\bra{\widetilde{\psi}_{i}}\). This gives the TD-EOM-CC equations [45]
\[i\frac{\text{d}c_{i}(t)}{\text{d}t} =\sum_{j}H_{ij}(t)c_{j}(t), \tag{12}\] \[-i\frac{\text{d}b_{j}(t)}{\text{d}t} =\sum_{i}b_{i}(t)H_{ij}(t), \tag{13}\]
where \(H_{ij}(t)=\bra{\widetilde{\psi}_{i}}H(t)\ket{\psi_{j}}=\delta_{ij}E_{j}+ \bra{\widetilde{\psi}_{i}}V(t)\ket{\psi_{j}}\). The time-dependent population of EOM-CC state \(i\) in the TD-EOM-CC superposition state can be found as the product of the projections onto the ket and bra of the EOM-CC state,
\[P_{i}(t)=\bra{\widetilde{\Psi}(t)}\ket{\psi_{i}}\bra{\widetilde{\psi}_{i}}\ket{\Psi(t)}=b_{i}(t)c_{i}(t). \tag{14}\]
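As a numerical illustration of Eqs. (12)-(14), the following Python sketch propagates the coefficients of a toy two-state system with an RK4 integrator; the energies, dipole couplings, and pulse below are placeholders rather than values from the actual eT calculations.

```python
import numpy as np

def rk4_step(f, y, t, dt):
    """One classical Runge-Kutta (RK4) step for dy/dt = f(t, y)."""
    k1 = f(t, y); k2 = f(t + dt/2, y + dt/2*k1)
    k3 = f(t + dt/2, y + dt/2*k2); k4 = f(t + dt, y + dt*k3)
    return y + dt/6*(k1 + 2*k2 + 2*k3 + k4)

def propagate(E, d, field, c0, b0, dt, nsteps):
    """Integrate i dc/dt = H(t) c and -i db/dt = b H(t), with
    H(t) = diag(E) - d * field(t); returns the populations P_i = b_i c_i."""
    c, b = c0.astype(complex), b0.astype(complex)
    H = lambda t: np.diag(E) - d * field(t)
    for n in range(nsteps):
        t = n * dt
        c = rk4_step(lambda t, y: -1j * H(t) @ y, c, t, dt)
        b = rk4_step(lambda t, y:  1j * y @ H(t), b, t, dt)
    return b * c

# toy two-level example (all quantities in arbitrary atomic units)
E = np.array([0.0, 0.5]); d = np.array([[0.0, 0.1], [0.1, 0.0]])
pulse = lambda t: 0.05 * np.cos(0.5*t) * np.exp(-(t - 20.0)**2 / 50.0)
print(propagate(E, d, pulse, np.array([1.0, 0.0]), np.array([1.0, 0.0]), 1e-2, 4000).real)
```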
The eigenvalues of core-excited states are interior to the spectrum of the molecular Hamiltonian, and often hard to reach using exterior eigenvalue methods like Davidson or Lanczos algorithms. The core-valence separation (CVS) approximation [42; 46] simplifies the calculation of these states by removing the valence-core and core-valence blocks of the Hamiltonian and has become a vital tool for the calculation of NEXAFS spectra. [41] Let \(I\) denote the set indexing the core orbitals. We invoke the CVS approximation through a projector \(\mathcal{P}_{I}^{\text{CVS}}\) that removes all vector elements that do not reference excitations from at least one core orbital, in each eigensolver iteration. [43] For the coupled cluster singles and doubles (CCSD) truncation level, this can be expressed in compact form as
\[\mathcal{P}_{I}^{\text{CVS}}r_{i}^{a}=l_{i}^{a}\mathcal{P}_{I}^{ \text{CVS}}=0,\quad i\notin I \tag{15}\] \[\mathcal{P}_{I}^{\text{CVS}}r_{ij}^{ab}=l_{ij}^{ab}\mathcal{P}_{I }^{\text{CVS}}=0,\quad i\notin I\wedge j\notin I. \tag{16}\]
This projection is effectively setting all elements of the valence-valence block of the full-space elementary basis EOM-CC Jacobian matrix \(\mathbf{A}\) to zero, giving the CVS approximated Jacobian matrix, \(\mathbf{A}^{\text{CVS}}\). The core-excited EOM-CC states obtained in the CVS approximation can have a non-zero overlap with EOM-CC states obtained without invoking this approximation. The CVS states are in general also not eigenstates of the full field-free Jacobian, and can lead to TD-EOM-CC populations that are non-stationary, complicating the interpretation of the TD-EOM-CC state. To ensure that the populations are stationary, we diagonalize the Jacobian \(\mathbf{A}\) in the basis of all the CVS and non-CVS (valence) states by first constructing the Jacobian and overlap matrices
\[A_{ij}=\mathbf{L}_{i}\mathbf{A}\mathbf{R}_{j},\quad S_{ij}=\mathbf{L}_{i}\mathbf{R}_{j}. \tag{17}\]
in the reduced space, respectively. Assuming linear independence of the vectors in the basis, the solution of the generalized eigenvalue problem defined by \(\mathbf{A}\) and \(\mathbf{S}\) gives a new set of right and left eigenvectors of \(\mathbf{A}\), which preserve populations when there is no interaction with the external field.
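A compact sketch of this re-diagonalization step, assuming the reduced-space matrices of Eq. (17) have already been assembled; SciPy's generalized eigensolver is used here purely for illustration, not as the solver employed in eT.

```python
import numpy as np
from scipy.linalg import eig

def rediagonalize(A_red, S_red):
    """Solve A_red v = w S_red v in the combined CVS + valence basis,
    returning energies and right eigenvectors sorted by energy."""
    w, V = eig(A_red, S_red)
    order = np.argsort(w.real)
    return w[order].real, V[:, order]

# toy 3-state reduced space with a slightly non-orthogonal overlap matrix
A_red = np.array([[1.0, 0.1, 0.0], [0.1, 2.0, 0.2], [0.0, 0.2, 3.0]])
S_red = np.eye(3) + 0.05 * np.random.default_rng(0).standard_normal((3, 3))
energies, right = rediagonalize(A_red, S_red)
print(energies)
```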
## III Computational details
The electric field in Eq. (2) is represented as
\[\mathbf{\mathcal{E}}(t)=\mathbf{\mathcal{E}}_{0}\cos(\omega_{0}(t-t_{0})+\phi)f(t), \tag{18}\]
where \(\mathbf{\mathcal{E}}_{0}\) is the peak electric field of the pulse in its polarization direction, \(\omega_{0}\) the carrier frequency and \(t_{0}\) the central time of the pulse, and \(\phi\) is the carrier-envelope phase. The envelope function \(f(t)\) is chosen to have the Gaussian shape
\[f(t)=\begin{cases}e^{-(t-t_{0})^{2}/(2\sigma^{2})},&-c\leq t\leq c,\\ 0,&\text{otherwise},\end{cases} \tag{19}\]
where the RMS width is set to \(\sigma=0.5\) and the envelope is truncated at \(c=8\sigma\). In all calculations, we use a carrier-envelope phase of \(\phi=0\) and a peak electric field strength \(|\mathbf{\mathcal{E}}_{0}|\) of 10 a.u., which corresponds to a maximum intensity of \(7.019\times 10^{18}\,\text{W}\,\text{cm}^{-2}\), calculated from the intensity relation \(S_{0}=|\mathbf{\mathcal{E}}_{0}|^{2}/Z_{0}\), where \(Z_{0}\) is the impedance of free space.
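The pulse of Eqs. (18)-(19) and the quoted intensity relation can be reproduced with a few lines of Python; the carrier frequency and central time below are placeholders, and the envelope truncation is taken relative to \(t_{0}\).

```python
import numpy as np

SIGMA, T0, PHI = 0.5, 10.0, 0.0        # a.u.; the central time t0 is a placeholder
E0, OMEGA0 = 10.0, 19.504022           # peak field (a.u.) and an example carrier frequency

def envelope(t):
    """Truncated Gaussian envelope, Eq. (19), with the cutoff taken around t0."""
    inside = np.abs(t - T0) <= 8 * SIGMA
    return np.where(inside, np.exp(-(t - T0)**2 / (2 * SIGMA**2)), 0.0)

def field(t):
    """Electric field along the polarization direction, Eq. (18)."""
    return E0 * np.cos(OMEGA0 * (t - T0) + PHI) * envelope(t)

# peak intensity from S0 = |E0|^2 / Z0, converting atomic units to SI
E0_SI = E0 * 5.14220674e11             # V/m
S0 = E0_SI**2 / 376.730313             # W/m^2
print(f"{S0 * 1e-4:.3e} W/cm^2")       # ~7.019e18, matching the value quoted above
```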
All simulations are performed using a development version of the eT program [47] containing the TD-EOM-CC implementation described in Ref. [45]. The Runge-Kutta method known as RK4 is used to integrate Eq. (12) and Eq. (13), with time steps of 0.001 a.u. for neon, carbon monoxide, and p-aminophenol and 0.0001 a.u. for pyrrole.
## IV Results and discussion
### Neon
In the following, the convergence properties of the final Raman-induced populations are investigated for the neon atom. This system is used for benchmarking purposes, as its small size allows for the use of larger basis sets. We focus on the convergence of the final population of the \({B_{v}}^{1}D\) valence-excited state, the lowest valence-excited state with a significant final population.
We first study the basis set convergence with respect to the cardinal number X of Dunning basis sets for CCS and CCSD levels of theory. The employed basis sets are cc-pVDZ, aug-cc-pVXZ (with X=D,...,6) and aug-cc-pCVXZ (with X=D,...,5). As the carrier frequency \(\omega_{0}\) of the electric field, we choose the average of two frequencies. The first frequency corresponds to the transition between the ground state \(X^{1}S\) and the \({B_{c}}^{1}P\) core-excited state. The second frequency corresponds to the transition between the \({B_{c}}^{1}P\) core-excited state and the \({B_{v}}^{1}D\) valence-excited state. The \({B_{c}}^{1}P\) and \({B_{v}}^{1}D\) states are chosen as they are, respectively, the lowest core-excited and valence excited states that get significantly populated in the Raman process, except for the cc-pVDZ basis set, where the order of \({A_{c}}^{1}S\) and \({B_{c}}^{1}P\) energy levels is inverted. In these calculations, we include 4 core-excited states and 12 valence-excited states. The frequencies used for the different basis sets and levels of theory are given in the Supporting Information.
From Fig. 2, we can observe how the final populations calculated with CCS and CCSD are considerably different, implying that CCS is not accurate enough to provide an adequate description of the system. The addition of functions for describing core correlation (aug-cc-pCVXZ) leads to slightly lower final populations compared to the corresponding basis sets without these functions (aug-cc-pVXZ). For CCSD, the results for 5Z and 6Z are very similar, implying that basis-set convergence is reached for 5Z. Continuing, the convergence of the final population of the \({B_{v}}^{1}D\) state is explored with respect to the number of valence- and core-excited states included in the calculation. The total of the probabilities of all degenerate components of a state is calculated, such that for instance the probabilities for the five degenerate states of \(D\) type are added together. We perform the calculations using the CCSD truncation level and the aug-cc-pCVTZ basis set. The right panel of Fig. 2 exhibits the convergence of the final population of the \({B_{v}}\,^{1}D\) states with respect to the number of valence-excited states included in the simulation, with the number of core-excited states fixed at 4. The results indicate that more than 40 valence-excited states are needed for convergence. An analogous procedure is performed, this time keeping the number of valence-excited states fixed while varying the number of core-excited states. In the right panel of Fig. 2, we can see how the final population of \({B_{v}}\,^{1}D\) starts to converge after around 15 core-excited states are included in the calculation.
### CO
We continue by simulating ISXRS for the carbon monoxide molecule, which is linear and belongs to the \(C_{\infty v}\) symmetry point group. Since the system is not centrally symmetric, results can differ depending on the polarization of the electric field. Theoretical and experimental studies of the core-excitation spectroscopy and ISXRS of this molecule have previously been carried out. [48] In our simulations, the distance between the two nuclei is fixed at 1.128 Å, corresponding to the equilibrium bond length in the NIST database. [49] The internuclear axis of the molecule is aligned along the \(z\)-axis and the carbon atom is placed at the origin of the coordinate system while the oxygen atom is placed at 1.128 Å along the \(z\)-axis. The carrier frequency of the external electric field is again chosen as the average between two frequencies. The first is the transition frequency between the ground state and the first core-excited state, which is the lowest-energy core-excited state that gets significantly populated during the Raman process. The second is the frequency of transition between this core-excited state and the third valence-excited state, which is the lowest valence-excited state that gets significantly populated. For CCS/aug-cc-pCVTZ, the frequency is 20.029 089 \(E_{\mathrm{h}}\), while for CCSD/aug-cc-pCVTZ it is 19.504 022 \(E_{\mathrm{h}}\), corresponding to the O K-edge.
To investigate transitions at the C K-edge, we choose the lowest-energy molecular orbital localized on the carbon atom as the molecular orbital used in the CVS approximation. The carrier frequency of the electric field is again chosen as the average of the transition frequencies between the ground state and the lowest core-excited state that is significantly populated, and that between that core-excited state and the lowest valence-excited state that is significantly populated, resulting in a carrier frequency of \(10.402\,530\,E_{\mathrm{h}}\).

Figure 2: The left panel shows the final population of the \({B_{v}}\,^{1}D\) states of neon for different choices of level of theory and basis set. The blue line in the right panel shows the final population of the same \({B_{v}}\,^{1}D\) states for different numbers of valence-excited states included in the simulation and the number of core-excited states fixed at 4, calculated with CCSD/aug-cc-pCVTZ. The red line in the right panel shows the final population of the same \({B_{v}}\,^{1}D\) states for different numbers of core-excited states included in the simulation and the number of valence-excited states fixed at 79, calculated with CCSD/aug-cc-pCVTZ.
In the carbon monoxide system, linearly polarized electric fields can be decomposed into two components: the polarization component parallel to the internuclear axis (along the \(z\)-axis) and the polarization component perpendicular to it (any direction in the \(xy\)-plane). As for neon, the convergence of the final population of certain valence-excited states is assessed with respect to the number of included core-excited states. The results are shown for the \({D_{v}}^{1}\Sigma\), \({E_{v}}^{1}\Sigma\) and \({L_{v}}^{1}\Sigma\) valence-excited states in the left panel of Fig. 3, demonstrating that convergence is attained by increasing the number of considered core-excited states. About 30 core-excited states are needed for convergence when the number of valence-excited states is fixed at 20. In the central panel of the figure, we can see how the time-dependent population of the third valence-excited state depends on the polarization of the electric field and level of theory, and also how the population is constant after the interaction with the field. The final population is exactly zero when the polarization is along the \(z\)-axis, as expected from the symmetry of the molecule and field. In the right panel, we can see how the time-dependent population of the third valence-excited state differs when the carrier frequency of the electric field is tuned to the K-edge of different elements (C or O). For the different tunings, the third valence-excited state is reached through different transition pathways, involving other transition frequencies and transition moments. As for the results in the central panel, the population is exactly zero when the electric field is polarized along the \(z\)-axis, irrespective of the chosen frequency, for symmetry reasons. The populations are also constant after the interaction with the field.
### Pyrrole
We further increase the complexity of the modeled system by considering pyrrole, which belongs to the \(C_{2v}\) symmetry point group. The geometry of the molecule is obtained from the NIST database, [49] for which the molecule lies in the \(yz\)-plane and the symmetry axis is along the \(z\)-axis. The Supporting Information provides the geometry of the system, along with a figure that shows its orientation relative to the Cartesian coordinate axes. The final populations after the Raman process are assessed for the electric field polarization vector set equal to \((1,0,0)\), \((1,1,0)\), and \((1,1,1)\) in the chosen coordinate system. The Raman process involving the N K-edge is studied by performing calculations at the CCSD level of theory with aug-cc-pCVDZ for the nitrogen atom and aug-cc-pVDZ for the other atoms. The carrier frequency of the external electric field is chosen as the frequency of transition from the ground state to the most populated core-excited state, which is \(14.901\,363\,E_{\mathrm{h}}\). The Raman process involving the C K-edge is studied by performing calculations at the CCSD level of theory with aug-cc-pCVDZ for the carbon atoms and aug-cc-pVDZ for the other atoms. The core-excited states are calculated by using the CVS approximation restricted to the molecular orbital with the second-lowest energy. The carrier frequency of the external electric field is set to \(10.949\,885\,E_{\mathrm{h}}\), which is the transition frequency from the ground state to the fifth core-excited state, the lowest-energy core-excited state that is the most populated.

Figure 3: The left panel shows the final population of the \({D_{v}}^{1}\Sigma\), \({E_{v}}^{1}\Sigma\), and \({L_{v}}^{1}\Sigma\) valence-excited state of carbon monoxide for different numbers of core-excited states included in the calculation, with the number of valence states fixed at 20 and the external electric field polarization in the positive \(z\)-direction. The central panel shows the time-dependent population of the third valence-excited state of carbon monoxide, calculated with the aug-cc-pCVTZ basis set and different levels of theory and electric field polarizations. The right panel shows the time-dependent population of the third valence-excited state for external electric fields tuned to different K-edges and with different polarizations, calculated with CCSD/aug-cc-pCVTZ.
In the left panel of Fig. 4, we can see that new valence-excited states are populated as the polarization of the external electric field changes from \((1,0,0)\), to \((1,1,0)\), and to \((1,1,1)\). In particular, when the electric field is only polarized along the \(x\)-axis, there are no excitations to the valence-excited states. When the electric field has components along all three axes, all considered valence-excited states have a nonzero final population. An intermediate situation occurs when the electric field has components along both the \(x\)- and \(y\)-axes but not along the \(z\)-axis. This is because the different polarizations of the external electric field have components in different numbers of irreducible representations, enabling transitions to electronic states belonging to different irreducible representations. In the right panel of Fig. 4, we can see how the final population of valence-excited states differs when the carrier frequency of the electric field is tuned to the \(N\) K-edge and \(C\) K-edge, calculated using the CVS approximation with the lowest- and next-to-lowest-energy molecular orbitals, respectively. In both cases, the polarization vector of the field is set to \((1,1,1)\). The valence-excited states that become populated are the same for the two K-edge frequencies, while the populations of the states are different.
### \(\mathbf{p}\)-aminophenol
Finally, we consider the planar \(p\)-aminophenol molecule. The molecule belongs to the \(C_{s}\) symmetry point group, which only contains the mirror plane and the identity as symmetry elements. This molecule is chosen in order to investigate if charge migration between the functional groups located at opposite sides of the aromatic ring can be observed, as the electronic charge can easily travel along the aromatic electron cloud. [50]
Compared to the systems analyzed previously, which offer only limited potential for charge migration due to their small sizes, the \(p\)-aminophenol molecule is a larger system containing two strongly electron-donating substituents (amino and hydroxyl) on a benzene ring. [51]
Figure 4: The left panel displays the final populations of various excited states of pyrrole following ISXRS with different electric field polarizations, computed at the CCSD level of theory and with the aug-cc-pCVDZ basis set for the nitrogen atom and the aug-cc-pVDZ basis set for the other atoms. The right panel displays the final populations of different excited states of pyrrole for electric fields tuned to different K-edges, computed at the CCSD level of theory and with the aug-cc-pCVDZ basis set for the atom with the targeted K-edge shown in the inset and aug-cc-pVDZ basis set for the remaining atoms.
We can thus expect a localized excitation to be followed by long-range charge migration.
The geometry of _p_-aminophenol is calculated at the B3LYP/aug-cc-pVDZ level of theory, and the molecule is placed in the \(xy\)-plane. The Supporting Information includes the geometry and a figure that illustrates the orientation of the molecule relative to the Cartesian coordinate axes. For the subsequent calculations, aug-cc-pCVDZ is used for the oxygen atom and aug-cc-pVDZ for all other atoms. The carrier frequency is chosen as \(19.883\,479\,E_{\mathrm{h}}\), which corresponds to the frequency of transition from the ground state to the fourth core-excited state, which is the most populated state among the two lowest-energy core-excited states that have a non-zero population after the Raman process.
In Fig. 5, the charge migration is illustrated through isodensity surfaces of the time-dependent density after subtracting the ground state density, calculated at different points in time. After the interaction with the external electromagnetic pulse, we can observe how the core excitation of the oxygen atom is reflected in a positive charge arising around that nucleus, enclosed in a negatively charged region at a larger distance from the oxygen nucleus. This is followed by an alternating pattern of regions with increased or decreased electronic charge throughout the entire benzene ring up to the nitrogen atom of the amino group. In particular, the atoms of the ring gain some negative charge while the bonds become more positively charged, and the bonds are thus expected to be weakened. Finally, we can observe how the nitrogen atom becomes negatively charged. As predicted, we observe a localized excitation at the hydroxyl substituent following oxygen K-edge excitation, followed by long-range charge migration, in accordance with what one could expect from a superposition of valence-excited states generated by ISXRS.

Figure 5: Positive (gray) and negative (red) electronic isodensity surfaces of the time-dependent density after subtracting the ground state density of _p_-aminophenol, at the times specified at the top right corner of each subfigure. The structure of the _p_-aminophenol molecule is also shown in each subfigure.
In the supplemental material we have included a movie that shows the temporal evolution of the electronic density depicted through isodensity surfaces of the time-dependent density difference, illustrating how the density oscillates after the interaction with the external electric field. The generation of electronic wavepackets with external laser pulses is interesting from an experimental point of view, as it represents the first step of controlling chemical reactions with laser pulses.
## V Conclusion
In this work, a time-dependent equation-of-motion coupled cluster model of ISXRS has been presented. First, we assessed the convergence of the final population of neon valence states with respect to different calculation parameters: the level of coupled cluster theory, the choice of basis set, and choices of the total number of valence- and core-excited states. We observed how the adequate description of the system required a proper representation of correlation and a sufficiently flexible basis set, since the CCS level of theory and basis sets without augmentation performed poorly. We also demonstrated that convergence of the population of a valence-excited state of neon was achieved when increasing the number of valence- and core-excited states for the given level of theory and basis set. Subsequently, the final populations of carbon monoxide states were assessed with respect to the number of included core-excited states. The results showed convergence for several valence-excited states for the given level of theory and basis set.
Furthermore, we demonstrated that the final populations of states of both carbon monoxide and pyrrole are significantly affected by the polarization of the external electric field, as symmetry can enable and forbid the transition to some of the excited states within the bandwidth of the pulse. We also assessed how the results were affected by tuning the external electric field to the K-edge of the different atoms, where the frequencies were calculated with the CVS approximation targeting the core molecular orbitals of the atoms. We observed how a different choice of K-edge led to changes in final populations as the final states were reached through different transition pathways.
After investigating ISXRS by neon, carbon monoxide, and pyrrole, we studied the time evolution of the electronic density of _p_-aminophenol. The ground-state density was subtracted from the time-dependent density, and the density difference was visualized through isodensity surfaces in real space. We observed the rapid formation of a valence wavepacket and subsequent charge migration in the molecule. Simulations of field-induced charge migration in molecular systems can be used to predict how chemical reactions can be controlled by external electric fields, which we believe will be a subject of further interest in the near future.
###### Acknowledgements.
We acknowledge Gioia Marazzini for helpful discussions. We acknowledge the financial support from The Research Council of Norway through FRINATEK Project No. 275506. Computing resources provided by Sigma2--the National Infrastructure for High Performance Computing and Data Storage in Norway (Project No. NN2962k) and the Center for High Performance Computing (CHPC) at SNS are also acknowledged.
|
2309.11013 | ModelGiF: Gradient Fields for Model Functional Distance | The last decade has witnessed the success of deep learning and the surge of
publicly released trained models, which necessitates the quantification of the
model functional distance for various purposes. However, quantifying the model
functional distance is always challenging due to the opacity in inner workings
and the heterogeneity in architectures or tasks. Inspired by the concept of
"field" in physics, in this work we introduce Model Gradient Field (abbr.
ModelGiF) to extract homogeneous representations from the heterogeneous
pre-trained models. Our main assumption underlying ModelGiF is that each
pre-trained deep model uniquely determines a ModelGiF over the input space. The
distance between models can thus be measured by the similarity between their
ModelGiFs. We validate the effectiveness of the proposed ModelGiF with a suite
of testbeds, including task relatedness estimation, intellectual property
protection, and model unlearning verification. Experimental results demonstrate
the versatility of the proposed ModelGiF on these tasks, with significantly
superiority performance to state-of-the-art competitors. Codes are available at
https://github.com/zju-vipa/modelgif. | Jie Song, Zhengqi Xu, Sai Wu, Gang Chen, Mingli Song | 2023-09-20T02:27:40Z | http://arxiv.org/abs/2309.11013v1 | # ModelGiF: Gradient Fields for Model Functional Distance
###### Abstract
The last decade has witnessed the success of deep learning and the surge of publicly released trained models, which necessitates the quantification of the model functional distance for various purposes. However, quantifying the model functional distance is always challenging due to the opacity in inner workings and the heterogeneity in architectures or tasks. Inspired by the concept of "field" in physics, in this work we introduce **Model Gradient Field** (abbr. ModelGiF) to extract homogeneous representations from the heterogeneous pre-trained models. Our main assumption underlying ModelGiF is that each pre-trained deep model uniquely determines a ModelGiF over the input space. The distance between models can thus be measured by the similarity between their ModelGiFs. We validate the effectiveness of the proposed ModelGiF with a suite of testbeds, including task relatedness estimation, intellectual property protection, and model unlearning verification. Experimental results demonstrate the versatility of the proposed ModelGiF on these tasks, with significantly superior performance to state-of-the-art competitors. Codes are available at [https://github.com/zju-vipa/modelgif](https://github.com/zju-vipa/modelgif).
## 1 Introduction
The last decade has witnessed the great progress of deep learning in various fields, and a plethora of deep neural networks have been developed and released publicly, with either their architectures and trained parameters (_e.g_., TensorFlow Hub1, PyTorch Hub2) for research, or the prediction API (_e.g_., BigML, Amazon Machine Learning) as ML-as-a-Service (MLaaS) for commercial purposes. These off-the-shelf pre-trained models become extremely important resources for not only practitioners to solve their own problems, but also researchers to explore and exploit the huge potential underlying these pre-trained models.
Footnote 1: [https://www.tensorflow.org/hub](https://www.tensorflow.org/hub)
Footnote 2: [https://pytorch.org/hub/](https://pytorch.org/hub/)
With the surge of pre-trained models released, quantifying the model relationship emerges as an important question. The large-scale open-sourced deep models, heterogeneous in architectures and tasks and trained either in isolation or dependently, are related to each other in various manners. For example, a student model trained by knowledge distillation [19] should behave more similarly to the teacher than an independently trained identical model. Likewise, a fine-tuned model should be more closely related to its pre-trained model than one trained from scratch. More generally, models trained in isolation on heterogeneous tasks should inherit the intrinsic task relatedness [53] as task-specific features are extracted by these models. Broadly speaking, there exists a _model metric space_ where models with similar functional behaviors are clustered together and dissimilar ones are far apart. If the functional distance between models is left unquantified, existing model repositories remain simple unstructured collections of many isolated open-sourced models, hindering the exploitation of their great value as a whole.
Despite the ever-increasing number of publicly available pre-trained models, the study on the model functional distance, _i.e_., the structure of the model metric space, lags far behind. This phenomenon can be largely attributed to the great challenge of computing the functional distance between deep models, where the barriers are three-fold: (1) _Heterogeneity_: deep models usually differ significantly in architectures. Even with the same architecture, models trained on different datasets or tasks can also behave quite differently; (2) _Opacity_: the opaque inner workings of deep models render computing the model similarity extremely difficult; (3) _Efficiency_: as the cost of computing pairwise distances grows quadratically with the number of models, the computation of the functional distance should be efficient.

Figure 1: An illustrative diagram of the _magnetic field_ and the proposed _model gradient field_ (ModelGiF) defined on the input space.
A few prior works have been devoted to computing model distance, in either weight space [1, 24] or representation space [28, 12]. For example, Task2Vec [1] computes task or model representations based on estimates of the Fisher information matrix associated with the probe network parameters, which provides a fixed-dimensional embedding of the model agnostic to the task variations. However, Task2Vec assumes all probe models share the same architecture, which is highly restrictive in practice. ModelDiff [28] and representation similarity analysis (RSA) [12], on the other hand, adopt the similarity of representations on a small set of reference inputs as the model representations, and the model distance is thus calculated from these representations, which is shown effective for model reuse detection and transferability estimation. However, these methods only capture the point-wise behavior of the models at a small number of chosen reference points, which is limited in representation capacity and fails at harder tasks such as detecting model extraction [28]. Recently, DEPARA [43, 44] and ZEST [22] have been proposed as pilot studies that apply explanation methods for computing model distance. However, they are validated on disjoint downstream tasks, and the performance on comprehensive tasks remains unclear. Moreover, these methods also use a small number of chosen reference points to extract the model representations, which makes them suffer from low representation capacity.
In this work, inspired by the concept of "field" in physics, we propose **Model Gradient Field** (as shown in Figure 1), abbreviated as ModelGiF, as the proxy to extract homogeneous representations from pre-trained heterogeneous models to derive their functionality distance. Specifically, the proposed ModelGiF is defined on the input space, _i.e_., every point in the ModelGiF denotes the gradient vector of the model output _w.r.t._ the input at that point. The main assumption underlying ModelGiF is that each pre-trained deep model uniquely determines a ModelGiF over the input space. The functional distance between any two models can thus be measured by the similarity between their ModelGiFs. Unlike prior methods where the point-wise features are adopted for representing the model, ModelGiF represents these models by their gradients over the whole input space, which makes it more capable of differentiating highly related models (_e.g_., in model extraction detection). Moreover, we provide theoretical insights into the proposed ModelGiFs for model functional distance, and discuss different implementation details extensively. We validate the effectiveness of the proposed ModelGiF with a suite of testbeds, including transferability estimation, intellectual property protection, and model unlearning verification. Experimental results demonstrate the versatility of the proposed ModelGiF on these tasks, with significant superiority over state-of-the-art (SOTA) methods.
To sum up, we make the following contributions: (1) we propose the concept of "model gradient field", a novel method for quantifying the functionality similarity between pre-trained deep models; (2) we provide theoretical insights into the proposed ModelGiFs for model functional distance, and make extensive discussions on different implementation details; (3) extensive experiments demonstrate the effectiveness and the superiority of ModelGiF on various tasks, including transferability estimation, intellectual property protection, and model unlearning verification.
## 2 Related Work
This section briefly reviews some related topics to model functional distance, including task relatedness estimation, intellectual property protection, and model unlearning.
**Task Relatedness Estimation.** Recent studies [53, 43, 44, 5, 51, 3, 6, 26] reveal the existence of task relatedness or task structure among visual tasks. It is the concept underlying transfer learning and provides a principled way to seamlessly reuse supervision among related tasks or solve many tasks in one system [53]. Existing works to obtain task relatedness can be roughly divided into empirical and analytical approaches. Taskonomy [53] is the most representative work of empirical methods. It proposes a fully computational approach to obtain task relatedness by exhaustively computing the actual transfer learning performance, which is thus usually taken as the ground truth for evaluating other estimators. Despite the indisputable results, the computational cost of empirical methods is extremely high. On the other hand, analytical methods, _e.g_., DEPARA [44, 43], Task2Vec [1] and RSA [12], try to estimate the task relatedness without conducting the actual transfer learning. Analytical methods generally significantly reduce the computation overhead, but can be highly restrictive in model architectures or estimation performance. The proposed ModelGiF relaxes these restrictions and achieves task relatedness more consistent with taskonomy than existing approaches.
**Intellectual Property Protection.** Since training deep models usually requires expensive data collection and large computation resources, the trained models constitute valuable _intellectual property (IP)_ that should be protected in cases where reusing is not allowed without authorization. However, model IP can be infringed in a variety of forms, such as fine-tuning [2, 48], pruning [29, 33], transfer learning [39, 49, 50] and model extraction [34, 47, 25, 21], which poses a great challenge to IP protection. Existing IP protection approaches can be roughly categorized into _watermarking_[23, 48, 13, 54, 2] and _fingerprinting_[30, 8, 28, 17]. Watermarking methods usually leverage weight regularization [48, 13] to embed a secret watermark in the model parameters, or train models on a triggered set to leave a backdoor [2, 54] in them. While being able to provide exact ownership verification, these techniques are invasive, _i.e_., they need to tamper with the training process, which may affect the model utility or introduce new security risks into the model. Fingerprinting, on the contrary, extracts a unique identifier, _i.e_., a fingerprint, from the owner model to differentiate it from other models. The latest fingerprinting methods for deep learning models (_e.g_., ModelDiff [28], IPGuard [8], SAC [17]), though non-invasive, also fall short when facing the diverse and ever-growing attack scenarios [10]. In this work, we propose ModelGiF as the homogeneous representation for heterogeneous pre-trained models, which can be naturally adopted as a fingerprint for IP protection and yields superior IP protection performance to existing approaches.
**Model Unlearning Verification.** Model unlearning aims to remove the effects of data points from the trained model, which has been attracting increasing attention in recent years due to legislation and privacy ethics. Cao _et al_. [9] are dedicated to making learning systems forget and present an unlearning approach capable of forgetting specific data. Graves _et al_. [16] propose an effective method to eliminate the influence of personal data from the model while maintaining its validity. Bourtoule _et al_. [7] propose a model unlearning method to decrease the time it takes to retrain exactly. However, how to identify whether the impact of data has been eliminated is an essential but rarely studied problem. Recently, ZEST [22] verifies the unlearning of data by comparing Local Interpretable Model-Agnostic Explanations (LIME) [37]. In this work, we adopt the proposed ModelGiF for unlearning verification, which yields competitive performance in our experiments.
## 3 Methodology
### Problem Setup
Assume there is a model repository consisting of \(N\) pre-trained models \(\mathcal{M}=\{M_{1},M_{2},...,M_{N}\}\). With mild assumptions, these models are defined on the same input space \(\mathbb{R}^{D}\) (\(D=WHC\), where \(W\), \(H\) and \(C\) denote the width, height and channel of the input space3), yet trained with data sampled from different data distributions \(\mathcal{P}=\{p_{1},p_{2},...,p_{N}\}\). Note that here we make no assumptions on the model architectures and the tasks, which means that these models can be different in architectures (_e.g_., ResNet [18] and VGG [42]) and tasks (_e.g_., visual classification [27], detection [36], or segmentation [11]). The goal in this work is to construct a model metric space where models with similar functional behaviors are clustered together and dissimilar ones are far apart, where the vital step is quantifying the functional distance between these models.
Footnote 3: Without losing generality, we use vector instead of tensor for notation simplicity.
### The Proposed ModelGiF
As aforementioned, quantifying the functional distance between pre-trained models is challenging due to the heterogeneity, opacity and efficiency issues. Our main idea is extracting homogeneous descriptors from these heterogeneous models to make them comparable. To this end, we proposed the concept ModelGiF, as shown in Figure 1, to derive the comparable identifier of the pre-trained models for quantifying model distance.
ModelGiF is inspired by the concept of "field" in physics, where a field is a mapping from space to some physical quantity, represented by a scalar, vector, or tensor, that has a value for each point in the space [32]. For example, the magnetic field describes the magnetic influence on moving electric charges, electric currents, and magnetic materials [14]. Likewise, we define ModelGiF as a mapping from the input space to the gradient space such that each point in the input space is assigned a gradient vector. The formal definition is provided as follows.
**Definition 3.1** (Model Gradient Field): _Let \(M\) be a deep model trained on the data which is sampled according to some distribution \(p\) from the data space \(\mathcal{X}\), outputting a scalar4 prediction in the label space \(\mathcal{Y}\). A point \(\mathbf{x}\) can be described as \(\mathbf{x}=(x_{1},x_{2},...,x_{D})\). We define the model gradient field of \(M\) as the gradient of the model output w.r.t. every possible input point in \(\mathcal{X}\):_
Footnote 4: For vector or tensor predictions, we simply take their \(l_{2}\) norm as the scalar output.
\[\textsc{ModelGiF}(M)\triangleq\nabla_{\mathbf{x}}M, \tag{1}\]
which is a mapping from the input space \(\mathbb{R}^{D}\) to the gradient space \(\mathbb{G}^{D}\), ModelGiF : \(\mathbb{R}^{D}\rightarrow\mathbb{G}^{D}\). Note that the ModelGiF is defined on the whole input space (which is shared by all the models) rather than some discrete points sampled from the data distribution \(p\) specific to the model \(M\). As the architecture of \(M\) and the tasks (_e.g_., the label space \(\mathcal{Y}\)) can be heterogeneous, the ModelGiF of a model can be seen as a projection of the model to the common input space.
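As an illustration of Definition 3.1, a minimal PyTorch sketch that evaluates one point of the ModelGiF, reducing vector outputs to a scalar by their \(l_{2}\) norm as in footnote 4; the probe model here is an arbitrary stand-in rather than any model from our experiments.

```python
import torch

def modelgif_at(model, x):
    """Gradient of the (scalarized) model output w.r.t. the input point x."""
    x = x.clone().requires_grad_(True)
    scalar = model(x).norm(p=2)            # vector outputs reduced to a scalar
    grad, = torch.autograd.grad(scalar, x)
    return grad                            # one point of the gradient field

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(), torch.nn.Linear(8, 10)
).eval()
g = modelgif_at(model, torch.randn(1, 3, 224, 224))
print(g.shape)                             # torch.Size([1, 3, 224, 224])
```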
### Field Curves for ModelGiF
With the proposed ModelGiF, the model functional distance can be quantified by the ModelGiF similarity. However, as ModelGiF is defined on the whole input space that is usually extremely huge, how to extract its representation
or signature becomes the vital step for quantifying model distance. In physics, a field is usually depicted by the _field curves_ (as shown in Figure 1), _i.e_., the integral curves for the field, and can be constructed by starting at a point and tracing a curve through space that follows the direction of the vector field, by making the field curve tangent to the field vector at each point [46]. Inspired by this, we propose _ModelGiF Curves_ as the descriptors of ModelGiF to measure the model functional distance.
**Definition 3.2** (Field Curves): _For a field \(F\): \(\mathbb{R}^{D}\rightarrow\mathbb{G}^{D}\), a curve \(\mathbf{x}(t)=\left(x_{1}(t),x_{2}(t),...,x_{D}(t)\right)\) is called a field curve of the field F if the following condition is satisfied: For all points \(P\in\mathbf{x}(t)\), the tangent vector of the curve in the point \(P\) has the same direction as the vector \(F(P)\):_
\[\frac{dx_{1}(t)}{dt}=F\big{(}\mathbf{x}(t)\big{)}_{1},\frac{dx_{2}(t)}{dt}=F \big{(}\mathbf{x}(t)\big{)}_{2},... \tag{2}\]
The field curve can be described as the solution of the system of differential equations in Eqn. 2, and it plays an important role for both analysis and visualization of vector fields [46]. Unfortunately, as the ModelGiF is usually complicated for a pre-trained deep model, there is no closed-form solution. The field curves are in general not describable as parameterized curves, which hinders comparisons between different ModelGiFs. To resolve this issue, we introduce the definition of ModelGiF curves as follows.
**Definition 3.3** (ModelGiF Curves): _For a ModelGiF F: \(\mathbb{R}^{D}\rightarrow\mathbb{G}^{D}\), a curve \(\mathbf{g}(t)=\left(g_{1}\left(t\right),g_{2}\left(t\right),...,g_{D}\left(t \right)\right)\) is called the ModelGiF curve of F if the following condition is satisfied:_
\[\frac{dg_{1}(t)}{dt}=F(\mathbf{x}(t))_{1},\frac{dg_{2}(t)}{dt}=F(\mathbf{x}(t ))_{2},... \tag{3}\]
Note that different from the definition of field curves, the ModelGiF curves are defined on the gradient space \(\mathbb{G}^{D}\) instead of the input space \(\mathbb{R}^{D}\).
**Proposition 1**: _For a ModelGiF F: \(\mathbb{R}^{D}\rightarrow\mathbb{G}^{D}\), and two points \(\mathbf{x}_{0}\), \(\mathbf{x}_{1}\) in \(\mathbb{R}^{D}\), the gradient integral along the straight line from \(\mathbf{x}_{0}\) to \(\mathbf{x}_{1}\) is a ModelGiF curve of F:_
\[\mathbf{g}(t)=\int_{\alpha=0}^{t}F\big{(}\mathbf{x}_{0}+\alpha(\mathbf{x}_{1}- \mathbf{x}_{0})\big{)}d\alpha,\ \ \ t\in[0,1] \tag{4}\]
With the proposition, we can compare ModelGiFs by simply comparing their ModelGiF curves for some predefined pairs of points. Now we provide the detailed pipeline of quantifying the model functional distance with the proposed ModelGiF.
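For concreteness, Eq. (4) can be approximated by a Riemann sum over gradients evaluated along the straight line from \(\mathbf{x}_{0}\) to \(\mathbf{x}_{1}\); the following sketch (with an arbitrary toy model and step count) is illustrative rather than the released implementation.

```python
import torch

def modelgif_curve(model, x0, x1, steps=32):
    """Riemann-sum approximation of Eq. (4): cumulative gradient integral
    along the straight line from x0 to x1, sampled at t = k/steps."""
    points, acc = [], torch.zeros_like(x0)
    for k in range(1, steps + 1):
        alpha = (k - 0.5) / steps                       # midpoint rule
        x = (x0 + alpha * (x1 - x0)).requires_grad_(True)
        scalar = model(x).norm(p=2)                     # scalarized output
        grad, = torch.autograd.grad(scalar, x)
        acc = acc + grad / steps
        points.append(acc.clone())
    return torch.stack(points)                          # curve g(t), one row per t

# toy model acting on flattened 8-dimensional inputs
model = torch.nn.Sequential(torch.nn.Linear(8, 16), torch.nn.Tanh(), torch.nn.Linear(16, 4))
curve = modelgif_curve(model, torch.zeros(1, 8), torch.randn(1, 8))
print(curve.shape)    # torch.Size([32, 1, 8])
```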
### ModelGiF for Model Distance
Provided with the trained deep models as described in Section 3.1, we compute the model distance with the proposed ModelGiF in two steps (in Figure 2): _1) Sampling reference points_, and _2) Computing the model distance_.
**Sampling Reference Points.** As described in Proposition 1, the first step before obtaining ModelGiF curves is determining the reference points (_i.e_., \(\mathbf{x}_{1}\)5 in Eqn. 4), which determine the domain of these curves. Let \(K\) be the number of reference points to be sampled. Intuitively, these reference points should be _representative_. However, as the models in \(\mathcal{M}\) can be heterogeneous in architectures and tasks and trained on data sampled from different distributions, it is hard to determine a common set of reference points which are representative for all these models. In this work, we investigate three types of reference points as follows.

Figure 2: An illustrative diagram of the overall pipeline of obtaining ModelGiF curves. The more similar the ModelGiFs, the more similar the corresponding trained models. Note that ModelGiF Curves are in high-dimensional space in reality.
Footnote 5: Here we simply set \(\mathbf{x}_{0}\) to the zero vector. Other settings do not yield significantly superior performance in our experiments.
1) _Random samples_ drawn from \(\mathcal{P}\). Each data point is sampled in two stages: first randomly sampling the distribution \(p\) from \(\mathcal{P}\) and then randomly sampling the reference point from \(p\).
2) _Augmented samples_ using CutMix [52]. Let \(\mathbf{x}\) and \(\mathbf{x}^{*}\) be two randomly sampled points in \(\mathcal{P}\). CutMix generates a new sample \(\tilde{\mathbf{x}}\) using the two samples by cutting off and pasting patches among them: \(\tilde{\mathbf{x}}=\mathbf{m}\odot\mathbf{x}+(1-\mathbf{m})\odot\mathbf{x}^{*}\), where \(\mathbf{m}\) represents the binary mask to combine images with different parts, and \(\odot\) the element-wise multiplication.
3) _Adversarial samples_ generated with PGD [31]. PGD attack is a multi-step variant of Fast Gradient Sign Method (FGSM) [15]: \(\mathbf{x}_{t+1}=\prod\big{(}\mathbf{x}_{t}+\alpha\textsc{sgn}(\nabla\mathcal{ J}(\mathbf{x}_{t}))\big{)}\), where \(\prod\) denotes the projection to the allowed space.
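Illustrative sketches of the CutMix and PGD constructions above are given below; the patch size, step size, iteration count, and the surrogate loss (the \(l_{2}\) norm of the model output) are our own choices for demonstration, not the settings used in the experiments.

```python
import torch

def cutmix(x, x_star, box=0.5):
    """Paste a square patch of x_star into x (a simple CutMix variant)."""
    _, _, h, w = x.shape
    ph, pw = int(h * box), int(w * box)
    top = torch.randint(0, h - ph + 1, (1,)).item()
    left = torch.randint(0, w - pw + 1, (1,)).item()
    mask = torch.ones_like(x)
    mask[:, :, top:top + ph, left:left + pw] = 0
    return mask * x + (1 - mask) * x_star

def pgd(model, x, steps=5, alpha=2/255, eps=8/255):
    """Multi-step sign-gradient attack on a surrogate loss (output l2 norm)."""
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = model(x_adv).norm(p=2)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv

imgs = torch.rand(4, 3, 32, 32)
mixed = cutmix(imgs, imgs.flip(0))
net = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.Flatten(), torch.nn.Linear(7200, 10))
adv = pgd(net, imgs)
```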
The performance of these types of reference points is discussed in Section 4.1. We also provide a sensitivity analysis of the number of reference points \(K\), which demonstrates that the performance of ModelGiF quickly becomes superior to existing methods as \(K\) increases.
**Computing the model distance.** Let \(\mathcal{S}=\{\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{K}\}\) be the reference points obtained from the last step. For the \(i\)-th model \(M_{i}\), we can get \(K\) ModelGiF curves by substituting each point in \(\mathcal{S}\) for the \(\mathbf{x}_{1}\) in Eqn. 4. These curves are used as the identifier of the ModelGiF, and thus serve as the representations of the trained deep model. There are several distance metrics to measure the similarity between curves, _e.g_., Hausdorff distance and Fréchet distance. As the method for curve distance is not our contribution in this work, we simply adopt the integrated point-wise cosine distance to validate the performance of the proposed method:
\[d(M_{i},M_{j})=\sum_{k=1}^{K}\int_{t=0}^{1}\big{(}1-\textsc{cos}(\mathbf{g}^{ i,k}(t),\mathbf{g}^{j,k}(t))\big{)}dt, \tag{5}\]
where \(\mathbf{g}^{i,k}\) denotes the \(k\)-th curve of the \(i\)-th model in \(\mathcal{M}\), and \(\textsc{cos}(\cdot,\cdot)\) denotes the cosine similarity (normalized inner product) between \(\mathbf{g}^{i,k}(t)\) and \(\mathbf{g}^{j,k}(t)\). In practice, we approximate the integral with a summation by sampling \(t\) at fixed intervals.
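A minimal sketch of the discretized form of Eqn. 5 might look as follows (our own illustration with synthetic curve data; approximating the integral by the mean over the sampled \(t\) values is the assumption stated above):

```python
# Discretized version of Eq. (5): curves are sampled at T points along t, and
# the integral over t is approximated by the mean over those samples.
import numpy as np

def modelgif_distance(curves_i, curves_j):
    """curves_*: arrays of shape (K, T, D) -- K reference points, T samples
    along the curve parameter t, D-dimensional gradient-field vectors."""
    gi = curves_i / (np.linalg.norm(curves_i, axis=-1, keepdims=True) + 1e-12)
    gj = curves_j / (np.linalg.norm(curves_j, axis=-1, keepdims=True) + 1e-12)
    cos = np.sum(gi * gj, axis=-1)                 # (K, T) point-wise cosine similarity
    return float(np.sum(np.mean(1.0 - cos, axis=1)))   # sum over K, mean over t

rng = np.random.default_rng(0)
a = rng.normal(size=(1000, 20, 128))               # K=1000 curves, T=20 samples, D=128
b = a + 0.1 * rng.normal(size=a.shape)
print(modelgif_distance(a, a), modelgif_distance(a, b))   # ~0 and a small positive value
```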
### Theoretical Analysis of ModelGiF
From the definition of model distance in Eqn. 5, we can see that the defined distance is a summation of point-wise distances over all the curves. Here we provide some theoretical insights into ModelGiF by dissecting the model-level distance into point-level distances.
**Proposition 2**: _For a point \(\mathbf{g}(t_{1})\) along the ModelGiF curve \(\mathbf{g}(t)\) defined from \(\mathbf{x}^{*}\) to \(\mathbf{x}\), we have_
\[\sum\nolimits_{i}t_{1}(x_{i}-x_{i}^{*})g_{i}(t_{1})=M(\mathbf{x})-M(\mathbf{x }^{*}), \tag{6}\]
where the left side is actually a summation over Integrated Gradients [45]. Eqn. 6 tells us that every point on the proposed ModelGiF curve is strongly related to the difference between the model predictions at this point and at the baseline point; thus the proposed model distance is a powerful metric for quantifying the model functional distance.
## 4 Experiments
In this section, we verify the effectiveness of ModelGiF with a suite of testbeds, including task relatedness estimation, intellectual property protection and model unlearning verification. Details are provided as follows.
### Application: Task Relatedness Estimation
**Experimental Setup.** We adopt 17 pre-trained heterogeneous models from Taskonomy [53] to compare their functionality similarity. These models are trained on various tasks (including Autoencoder, Curvature, Denoise, Edge 2D, Edge 3D, Keypoint 2D, Keypoint 3D, Reshade, Rgb2depth, Rgb2mist, Rgb2sfnorm, Room Layout, Segment25D, Segment2D, Vanishing Point, Segmentation, and Classification) and are different in architectures. Generally speaking, the architectures of these models follows an encoder-decoder scheme, in which the encoder is implemented by fully convolutional layers and the decoder varies according to the tasks. Please refer to [53] for more detailed information. In our experiments, only the encoders of these models are adopted for generating their ModelGiFs.
**Competitors and Evaluation Metric.** The proposed ModelGiF is compared with several prior approaches to task relatedness estimation, including RSA [12], Attribution-Maps [43], and ZEST [22]. Note that AttributionMaps adopts saliency [41], DeepLIFT [40] and \(\epsilon\)-LRP [4] to generate attributions for task relatedness estimation. We compare ModelGiF with all these variants to demonstrate its superiority in task relatedness prediction. As the affinity matrix from Taskonomy is obtained based on the actual transfer learning performance, it is used as the ground truth of the affinity among these tasks. In this experiment, we will
verify that the functionality similarity obtained by ModelGiF positively correlates with that from Taskonomy. In order to quantify the similarity between the affinity matrix representing functionality similarity obtained by ModelGiF and the affinity matrix obtained by Taskonomy, we use Spearman's correlation as the evaluation metric.
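For concreteness, the evaluation step can be sketched as below (our own code with synthetic matrices; comparing the upper-triangular off-diagonal entries of the two affinity matrices is an assumption about the exact protocol):

```python
# Spearman's correlation between a predicted affinity matrix (e.g., from
# ModelGiF) and the Taskonomy affinity matrix used as ground truth.
import numpy as np
from scipy.stats import spearmanr

def affinity_spearman(aff_pred, aff_true):
    iu = np.triu_indices_from(aff_pred, k=1)       # compare off-diagonal entries
    rho, _ = spearmanr(aff_pred[iu], aff_true[iu])
    return rho

rng = np.random.default_rng(0)
true = rng.random((17, 17)); true = (true + true.T) / 2      # synthetic "Taskonomy" affinity
pred = true + 0.05 * rng.normal(size=true.shape); pred = (pred + pred.T) / 2
print(affinity_spearman(pred, true))               # close to 1 for well-aligned matrices
```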
**Results.** We first test the proposed ModelGiF by randomly sampling \(1,000\) reference points. The visualization of the affinity matrix (as well as that from Taskonomy [53] and RSA [12]) and the similarity tree are provided in Figure 3. This similarity tree is constructed by agglomerative hierarchical clustering based on the affinity matrix. It can be seen that the affinity matrix from the proposed ModelGiF is visually highly similar to that from Taskonomy in most regions. The similarity tree derived from ModelGiF perfectly matches the results from Taskonomy, where 3D tasks (in green font), 2D tasks (in blue font), geometric tasks (in red font), and semantic tasks (in purple font) cluster into the corresponding groups as expected.
Table 1 provides a quantitative comparison between the proposed ModelGiF and existing works in terms of Spearman's correlation. It can be easily seen that the proposed ModelGiF yields significantly superior performance to existing methods, improving the SOTA Spearman's correlation from \(0.777\) to \(0.835\). Furthermore, all three types of reference points produce Spearman's correlations above \(0.83\), which implies that the proposed method is quite robust to the choice of the reference points. To make a more comprehensive study, we test ModelGiF with varying numbers of reference points. The correlation curves are depicted in Figure 3(e), where different types of reference points are compared. It can be seen that as the number of reference points increases, the correlation steadily grows. Surprisingly, with only 16 points, ModelGiF already achieves performance comparable to RSA [12] in Spearman's correlation. This is attractive, as the computation overhead of the proposed method grows linearly with the number of reference points. The results indicate that we can safely reduce the computation cost by decreasing the number of reference points, and we can also strive for higher performance by adding more reference points, which makes ModelGiF flexible and applicable in a variety of scenarios. Another conclusion we can draw from Figure 3(e) is that the adversarial samples generally yield superior performance to other ref
\begin{table}
\begin{tabular}{l c} \hline \hline
**Method** & **Spearman’s Correlation** \\ \hline Zest [22] & 0.359 \\ AttributionMaps\({}_{\text{SALIENCY}}\) [43] & 0.619 \\ AttributionMaps\({}_{\text{DeepLIFT}}\) [43] & 0.685 \\ AttributionMaps\({}_{\text{e-LRP}}\) [43] & 0.682 \\ RSA [12] & 0.777 \\ \hline ModelGiF\({}_{\text{RANDOM}}\) & 0.834 \\ ModelGiF\({}_{\text{AUGMENT}}\) & 0.835 \\ ModelGiF\({}_{\text{ADVERSARIAL}}\) & 0.830 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison between the proposed ModelGiF and existing methods. Here \(1,000\) reference points are sampled in ModelGiF.
Figure 3: Results of task relatedness estimation: (a) affinity matrix from Taskonomy; (b) affinity matrix from RSA; (c) affinity matrix from ModelGiF; (d) the similarity tree derived from ModelGiF; (e) performance with varying \(K\) for different reference points; (f) performance with varying \(K\) for different implementations.
erence points, which suggests a direction for future work to strive for higher performance in task relatedness estimation.
In Figure 3(f), we make comparisons between different implementations of ModelGiF. "ModelGiF with zero baselines" denotes that \(\mathbf{x}_{0}\) in Eqn. 4 is fixed to be zero, and "ModelGiF with random baselines" denotes that \(\mathbf{x}_{0}\) is randomly sampled. We also include Integrated Gradients (IG, with zero as the baseline) and IG with random baselines for comparison. Albeit bearing some similarity with IG, ModelGiF with zero baselines significantly outperforms these baselines in most cases. ModelGiF with random baselines, however, is more promising when the number of reference points becomes sufficiently large.
### Application: Intellectual Property Protection
We evaluate the performance of ModelGiF for IP protection against different model stealing attacks. To make fair comparisons with SOTA methods, we follow the experimental settings of [17] to conduct our experiments.
**Experimental Setup.** Five categories of stealing attacks are considered here to test the protection performance, including finetuning, pruning, transfer learning, model extraction and adversarial model extraction. For finetuning, Finetune-L denotes fine-tuning only the last layer while leaving the other layers unchanged, whereas Finetune-A fine-tunes all the layers in the model. For transfer learning, the CIFAR10 model is transferred to CIFAR10-C and CIFAR100, and the tiny-ImageNet model (trained with the first 100 labels of Tiny-ImageNet) is transferred to the remaining 100 labels of the tiny-ImageNet dataset. In model extraction, the victim model can be extracted in two manners: probability-based model extraction (Extract-P) and label-based model extraction (Extract-L). However, the attacker can evade detection by applying adversarial training after label-based model extraction [30]. In our experiment, adversarial model extraction (Extract-adv) adopts the predicted labels and subsequent adversarial training to evade detection.
**Models and Competitors.** Different IP protection methods are evaluated on most of the common model architectures, including VGG [42], ResNet [18], DenseNet [20] and MobileNet [38]. To demonstrate the superiority of the proposed ModelGiF, we make comparisons with SOTA IP protection methods, including IPGuard [8], CAE [30], EWE [23] and SAC [17]. IPGuard and CAE utilize the transferability of adversarial examples and test the attack success rate of these adversarial examples on the suspect models. A model will be recognized as a stolen model if its attack success rate is larger than a threshold. SAC proposes to leverage the pairwise relationship between samples as the model fingerprint. EWE, on the contrary, trains the source model on backdoor data and leaves the watermark in the model. Please refer to [17] for more detailed experimental settings.
**Results.** To validate the effectiveness and the superiority of the proposed ModelGiF, we conduct experiments on different datasets for the defender and the attacker. We leverage the AUC-ROC curve and use the AUC value between the fingerprinting scores of the irrelevant models and the stolen models to measure the fingerprinting effectiveness. Results are listed in Table 2. It can be seen that the proposed ModelGiF, although not tailored for IP protection, achieves superior performance in all the attack scenarios. The AUC value is \(1.0\) across all attacks (including the challenging "Extract-adv" attack), which means it perfectly recognizes all the attacks and protects the IP, outperforming existing SOTA approaches like SAC [17]. We acknowledge that the existing benchmark for IP protection methods is not sufficiently large and challenging to provide thorough comparisons between IP methods. However, the superiority of ModelGiF over SOTA methods on this benchmark still provides strong confidence in the proposed method for quantifying model functional distance.
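The AUC computation itself can be sketched as follows (our own illustration with synthetic fingerprinting scores; labeling stolen models as the positive class and using negative distance as the score are assumptions about the bookkeeping):

```python
# AUC between fingerprinting scores of irrelevant models and suspected stolen
# models (synthetic scores; smaller distance to the victim means "more likely stolen").
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
dist_irrelevant = rng.normal(1.0, 0.1, size=20)    # far from the victim model
dist_stolen     = rng.normal(0.2, 0.1, size=20)    # close to the victim model

labels = np.concatenate([np.zeros(20), np.ones(20)])   # 1 = stolen
scores = -np.concatenate([dist_irrelevant, dist_stolen])
print(roc_auc_score(labels, scores))               # close to 1.0 for well-separated scores
```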
\begin{table}
\begin{tabular}{l|c c c c c|c c c c c} \hline \hline & \multicolumn{5}{c|}{**CIFAR**} & \multicolumn{5}{c}{**tiny-ImageNet**} \\ \hline
**Attack** & IPGuard & CAE & EWE & SAC & ModelGiF & IPGuard & CAE & EWE & SAC & ModelGiF \\ & [8] & [30] & [23] & [17] & (Ours) & [8] & [30] & [23] & [17] & (Ours) \\ \hline Finetune-A & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 0.48 & 1.00 & 1.00 \\ Finetune-L & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 \\ Pruning & 1.00 & 0.95 & 0.87 & 1.00 & 1.00 & 1.00 & 1.00 & 0.58 & 1.00 & 1.00 \\ Extract-L & 0.81 & 0.83 & 0.97 & 1.00 & 1.00 & 0.97 & 1.00 & 1.00 & 1.00 & 1.00 \\ Extract-P & 0.81 & 0.90 & 0.97 & 1.00 & 1.00 & 0.97 & 1.00 & 1.00 & 1.00 & 1.00 \\ Extract-Adv & 0.54 & 0.52 & 0.91 & 0.92 & 1.00 & 0.65 & 0.78 & 1.00 & 0.91 & 1.00 \\ Transfer-10C & 1.00 & 1.00 & 1.00 & 1.00 & 1.00 & N/A & N/A & N/A & N/A & N/A \\ Transfer-A & – & – & – & 1.00 & 1.00 & – & – & – & 1.00 & 1.00 \\ Transfer-L & – & – & – & 1.00 & 1.00 & – & – & – & 1.00 & 1.00 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Comparison between the proposed ModelGiF and existing methods for IP protection. The performance is measured by the AUC value of the ROC curve. “–” indicates that the IP protection method cannot detect this kind of attack. “N/A” denotes not applicable.
### Application: Model Unlearning Verification
Machine unlearning studies how we can efficiently delete data points used to train models without retraining from scratch [7]. In this section, we demonstrate the effectiveness of the proposed ModelGiF for model unlearning verification, _i.e_., ModelGiF can distinguish the models trained with and without the data points that are requested to be unlearned.
**Experimental Setup.** Following the experimental settings of [22], we train four classifiers to evaluate ModelGiF for model unlearning verification. The first classifier is called the _reference classifier_\(C_{ref}^{(t)}\), where \(t\) denotes that the classifier is obtained after \(t\) epochs of training. The reference classifier serves as the original classifier trained on all the training data, including the data points which are requested to be deleted later. The second classifier is called the _unrelated classifier_, which is another classifier trained on all the training data, but from a different initialization. The third classifier is called the _exactly unlearned classifier_, which is trained on the remaining data after removing the data points requested to be deleted. Note that the exactly unlearned classifier is trained from scratch and has never seen those data points which are requested to be removed, and it thus can be seen as the exactly unlearned classifier [7]. The last classifier is called the _approximately unlearned classifier_, which is obtained by directly optimizing the original reference classifier to remove the knowledge learned from those data points requested to be unlearned [16]. We compare the ModelGiFs of the unrelated classifier, the exactly unlearned classifier and the approximately unlearned classifier to that of the reference classifier. The goal is to test whether the proposed ModelGiF can distinguish the exactly unlearned classifier from the unrelated classifier.
**Experimental Details and Results.** Experiments are conducted on CIFAR10 and CIFAR100. On CIFAR10, all classifiers are implemented by ResNet20. On CIFAR100, all classifiers are implemented by ResNet50. We randomly sample \(128\) data points from the training data to be unlearned, and use these data points as the reference points to compute the distance between the ModelGiFs of these classifiers. Experimental results are provided in Figure 4. It can be seen that with the proposed ModelGiF, the exactly unlearned classifier has a significantly higher distance to the reference classifier than the unrelated classifier does. It implies that ModelGiF can also be used as a tool to verify the unlearning performance of existing unlearning methods. Another observation from Figure 4 is that the distance between the approximately unlearned classifier and the reference classifier is much lower than that of the unrelated classifier, which implies that existing approximate unlearning methods cannot delete the data from the model thoroughly. However, as the training for unlearning continues, the information can be gradually forgotten, as implied by the increasing distance shown in Figure 4. These results also accord with the prior finding that continual learning easily leads to the catastrophic forgetting problem [35], which validates the rationality of the results from the proposed ModelGiF.
## 5 Conclusion and Future Work
In this work, we propose ModelGiF to quantify model functional distance. The main assumption underlying ModelGiF is that each pre-trained deep model uniquely determines a ModelGiF over the input space. The distance between models can thus be measured by the similarity between their ModelGiFs. We apply the proposed ModelGiF to task relatedness estimation, intellectual property protection, and model unlearning verification. Experimental results demonstrate the versatility of the proposed ModelGiF on these tasks, with significantly superior performance to state-of-the-art competitors. There are several directions for future work with the proposed ModelGiF, for example, exploring more scenarios where ModelGiF can be applied. Another interesting research direction is proposing more informative reference points to extract stronger representations of ModelGiF. Finally, further speeding up the computation of the proposed method is also important to make it easier to use.
Figure 4: Cosine distances between the reference classifier \(C_{ref}\) and unrelated classifier \(C_{unrelated}\), the directly unlearned classifier \(C_{direct}\) and the approximately unlearned classifier \(C_{approx}\).
**Acknowledgement.** This work is funded by the National Key Research and Development Project (Grant No: 2022YFB2703100), National Natural Science Foundation of China (62106220, 61976186, U20B2066), and Ningbo Natural Science Foundation (2021J189).
|
2309.08542 | Generalized Ginsparg-Wilson relations | We give a general derivation of Ginsparg-Wilson relations for both Dirac and
Majorana fermions in any dimension. These relations encode continuous and
discrete chiral, parity and time reversal anomalies and will apply to the
various classes of free fermion topological insulators and superconductors (in
the framework of a relativistic quantum field theory in Euclidean spacetime).
We show how to formulate the exact symmetries of the lattice action and the
relevant index theorems for the anomalies. | Michael Clancy, David B. Kaplan, Hersh Singh | 2023-09-15T17:06:55Z | http://arxiv.org/abs/2309.08542v2 | # Generalized Ginsparg-Wilson relations
###### Abstract
We give a general derivation of Ginsparg-Wilson relations for both Dirac and Majorana fermions in any dimension. These relations encode continuous and discrete chiral, parity and time reversal anomalies and will apply to the various classes of free fermion topological insulators and superconductors (in the framework of a relativistic quantum field theory in Euclidean spacetime). We show how to formulate the exact symmetries of the lattice action and the relevant index theorems for the anomalies.
+
Footnote †: preprint: INT-PUB-23-024,IQuS@UW-21-064,FERMILAB-PUB-23-541-T
## I Introduction
The Ginsparg-Wilson (GW) relations govern how massless lattice fermions without doublers can optimally realize anomalous continuum symmetries [1; 2; 3; 4]. They were originally derived for describing massless Dirac fermions with chiral symmetries in even spacetime dimensions, while analogous relations were posited for a massless Dirac fermion in three dimension with a parity anomaly [5]. Lattice operators which satisfy these relations realize anomalous symmetries in the "best" possible way: the fermion propagator respects the symmetry at any nonzero spacetime separation, and as in the continuum, the lattice action possesses an exact, nearly local form of the symmetry [4], which is therefore respected by the Feynman rules in perturbative calculations. On the other hand, the lattice integration measure is not invariant under this "Luscher symmetry", and the resultant Jacobian in the lattice theory correctly reproduces the continuum anomaly expressed in terms of the index of the fermion operator. Here we give a unified derivation of such relations for Dirac and Majorana fermions alike in any dimension, and show how these continuous and discrete anomalous symmetries are realized. The connection between GW fermions and extra dimensions is well established -- the first explicit solution to the GW equations being the overlap operator [2; 6; 7; 8; 9] which was derived to describe edge states of domain wall fermions in one higher dimension [10; 11; 12; 13]. It has since been understood that these relativistic systems are equivalent to the topological insulators and superconductors studied in condensed matter physics, and so the generalized GW relations we derive apply to the massless edge states of the wide variety of topological classes [14; 15] of such materials1.
Footnote 1: Soon after this paper appeared on the arXiv, another work appeared which discusses the topological classes of relativistic lattice fermions in detail, along with their GW relations [16].
In the following analysis we are interested in the cases of \(N_{F}\) flavors of Dirac or Majorana fermions where (i) the massless theory respects a symmetry \(G\); (ii) a mass term is possible for regulating the theory; (iii) the mass term necessarily breaks the symmetry \(G\). In this set of circumstances we expect the massless theory to have a 't Hooft anomaly involving the \(G\) symmetry, a GW relation to exist for the ideally regulated fermion operator, and the existence of an exact \(G\) symmetry obeyed by the regulated action, for which the Jacobian reproduces the anomaly of the continuum theory, a generalization of Luscher symmetry2.
Footnote 2: Notation: we use upper case Greek letters such as \(\Psi(x)\) to denote continuum fields, and lower case, such as \(\psi_{\mathbf{n}}\) for lattice variables, generally suppressing indices for the latter. We take Euclidean \(\gamma\) matrices to be Hermitian with \(\{\gamma^{\mu},\gamma^{\nu}\}=2g^{\mu\nu}\); the gauge covariant Dirac operator \(\not{D}=\gamma_{\mu}D_{\mu}\) is therefore anti-Hermitian with imaginary eigenvalues. For a regulated Dirac operator, such as a generic Ginsparg-Wilson operator, overlap operator, or Pauli-Villars regulated operator, we use the notation \(\mathcal{D}_{\text{GW}},\mathcal{D}_{\text{ov}},\mathcal{D}_{\text{PV}}\), or simply \(\mathcal{D}\). For Majorana fermions, we work with antisymmetric kinetic and mass operators denoted \(\mathsf{D}\) and \(\mathsf{m}\). We use the mostly plus convention for our Minkowski metric.
## II Generalized Ginsparg-Wilson relations for Dirac fermions
### Derivation of the relations
Following the logic of the original derivation, we start by considering the continuum theory of a free Dirac fermion \(\Psi\) in Euclidean spacetime of arbitrary dimension, possibly in background gauge or gravitational fields, described by the path integral
\[Z=\int d\Psi\,d\bar{\Psi}e^{-S(\bar{\Psi},\Psi)}. \tag{1}\]
We now do a block transformation, defining a function \(f(\mathbf{x})\) whose support lies in a volume \(a^{d}\) about the origin, and our block averaged variables to be
\[\psi_{\mathbf{n}}=\int d^{d}\mathbf{x}\ \Psi(\mathbf{x})f(\mathbf{x}-\mathbf{n}a) \tag{2}\]
and similarly for \(\bar{\psi}_{\mathbf{n}}\). The parameter \(a\) will be our lattice spacing, and for the rest of this article we will work in
"lattice units" with \(a=1\). Lattice variables \(\chi_{\bf n}\) and a lattice action \(S_{\rm lat}=\bar{\chi}\partial\chi\) are defined by
\[e^{-\bar{\chi}\mathcal{D}\chi}=\int d\Psi\,d\bar{\Psi}e^{-S(\bar{\Psi},\Psi)}\,e^{-(\bar{\psi}-\bar{\chi})m(\psi-\chi)} \tag{3}\]
so that up to an overall normalization,
\[Z=\int\prod_{n}d\chi d\bar{\chi}\,e^{-\bar{\chi}\mathcal{D}\chi}. \tag{4}\]
The parameter \(m\) is an invertible Hermitian matrix which we can take to be a real number \(m\) times the identity matrix, but we will leave it in matrix form for now so that the identities for Dirac fermions and Majorana fermions (for which \(m\) is replaced by \({\sf m}\), an imaginary antisymmetric matrix) look similar.
We now assume that the continuum action \(S\) is invariant under a global symmetry transformation \(\Psi\to\Omega\Psi\), \(\bar{\Psi}\to\bar{\Psi}\bar{\Omega}\), where \(\bar{\Omega}\) and \(\Omega\) are some operators. The symmetry transformations of interest are those which are broken by the Gaussian term proportional to \(m\) that we have added to the path integral. Examples we will consider include a \(U(1)_{A}\) chiral transformation, a discrete chiral transformation (not contained in \(U(1)_{A}\)), and a coordinate reflection:
\[\Omega=\bar{\Omega}=e^{i\alpha\bar{\gamma}}\qquad\text{(chiral symmetry)}, \tag{5}\]
\[\Omega=\bar{\Omega}=\bar{\gamma}\qquad\text{(discrete chiral symmetry)}, \tag{6}\]
\[\Omega=-\bar{\Omega}=\varepsilon\mathcal{R}_{1}\gamma_{1}\qquad\text{(reflection symmetry)}, \tag{7}\]
with \(\bar{\gamma}\) being the analog of \(\gamma_{5}\) in arbitrary even dimension, where \(\mathcal{R}_{1}\) reflects the sign of the \(x_{1}\) coordinate; generally \(\varepsilon=1\), but in certain Majorana theories \(\varepsilon=i\). Under reflections we assume that background fields are similarly reflected. We will subsequently consider an antilinear symmetry in Euclidean space related to time reversal in Minkowski spacetime. We focus primarily on a single flavor of fermion, and hence do not discuss nonabelian flavor symmetries, but our analysis can be easily extended to include those. Other symmetries which are directly broken by the discretization function \(f\), such as translation symmetry, spacetime rotations, conformal transformations or supersymmetry transformations do not seem to yield useful relations and we do not consider these (see [17; 18] for interesting attempts in these directions).
While the action is invariant under the \(\Omega,\bar{\Omega}\) transformation, the measure generally transforms as \(d\Psi d\bar{\Psi}\to d\Psi d\bar{\Psi}\,e^{2i\mathcal{A}}\), where \(\mathcal{A}\) is called the anomaly and arises from the Jacobian of the transformation [19].
We wish to distinguish between the continuum transformation \(\Omega\) and the transformation \(\omega\) of the block averaged variables,
\[\psi_{\bf m}\to\int\Omega\,\Psi({\bf x})f({\bf x}-a{\bf m})\,d^{d}{\bf x}= \omega_{\bf mn}\,\psi_{\bf n}. \tag{8}\]
The matrices \(\omega,\bar{\omega}\) are the lattice regulated forms of \(\Omega,\bar{\Omega}\). They act as ordinary matrices on the lattice variables \(\psi_{\bf n}\), but in the case of reflections, they also reflect the background fields. Defining
\[\mathcal{D}_{\omega}=\bar{\omega}\mathcal{D}\omega\,\quad m_{\omega}=\bar{ \omega}m\omega\, \tag{9}\]
it follows that
\[e^{-\bar{\chi}\mathcal{D}_{\omega}\chi}=\int d\Psi\,d\bar{\Psi}\,e^{2i\mathcal{A}}\,e^{-S(\bar{\Psi},\Psi)}\,e^{-(\bar{\psi}-\bar{\chi})m_{\omega}(\psi-\chi)}. \tag{10}\]
Using the relation Eq. (10) we have
\[e^{-(\bar{\psi}-\bar{\chi})m_{\omega}(\psi-\chi)}=e^{\text{Tr}\,\ln m_{\omega }m^{-1}}\,e^{\partial_{\chi}X_{\omega}\partial_{\bar{\chi}}}\,e^{-(\bar{\psi}- \bar{\chi})m(\psi-\chi)}, \tag{11}\]
where
\[X_{\omega}=m^{-1}-m_{\omega}^{-1}\, \tag{12}\]
and so
\[e^{-\bar{\chi}\mathcal{D}_{\omega}\chi}=e^{2i\mathcal{A}}\,e^{\text{Tr}\,\ln m_{\omega}m^{-1}}\,e^{\partial_{\chi}X_{\omega}\partial_{\bar{\chi}}}\,e^{-\bar{\chi}\mathcal{D}\chi}=e^{2i\mathcal{A}}\,e^{\text{Tr}\,\ln m_{\omega}m^{-1}+\text{Tr}\,\ln Q_{\omega}}\,e^{-\bar{\chi}\frac{1}{Q_{\omega}}\mathcal{D}\chi}\, \tag{13}\]
where
\[Q_{\omega}\equiv\left(1-\mathcal{D}X_{\omega}\right)\, \tag{14}\]
and in the last step we used the identity Eq. (10) for a second time.
By equating the \(\chi\) dependence on both sides of Eq. (13) we arrive at two equations. The first requires the prefactors of the exponentials to be equal, and we will refer to this as the "anomaly equation":
\[e^{2i\mathcal{A}}=\det\left(m_{\omega}m^{-1}Q_{\omega}\right)^{-1}=\det\left( \bar{\omega}\omega Q_{\omega}\right)^{-1}. \tag{15}\]
The second equation follows from requiring that the fermion operators in the exponents must be equal,
\[\mathcal{D}_{\omega}=Q_{\omega}^{-1}\mathcal{D}\, \tag{16}\]
or equivalently,
\[\mathcal{D}_{\omega}-\mathcal{D}=\mathcal{D}X_{\omega}\mathcal{D}_{\omega}\, \tag{17}\]
and this we call the generalized GW equation. If \(\mathcal{D}\) is invertible, the GW equation may be written in the simple form
\[\omega\left(\frac{1}{\mathcal{D}}-\frac{1}{m}\right)\bar{\omega}=\left(\frac{ 1}{\mathcal{D}}-\frac{1}{m}\right)\, \tag{18}\]
which states that the propagator is symmetric up to a constant local subtraction. Assuming \(m\) does not couple neighboring sites, this subtraction is a delta-function in coordinate space. This relation can be further transformed to a yet simpler form by writing
\[\mathcal{D}=m\frac{ih}{1+ih}\, \tag{19}\]
in which case the GW relation Eq. (18) reduces to the statement that \(mh\) is invariant under the \(\omega\) transformation,
\[\bar{\omega}(mh)\omega=mh\, \tag{20}\]
or if \(m\) commutes with \(\bar{\omega}\), \(h\) itself is invariant. The requirement that \(\mathcal{D}\) describes a massless Dirac fermion in the continuum limit means that \(\mathcal{D}\to i\not{p}\) for \(p^{2}\ll m^{2}\); thus \(h\to\not{p}/m\) in that limit, which is hermitian (assuming for now that \(m\) is just a number). If we assume that \(h\) both satisfies Eq. (20) and is hermitian for _all_ momenta, then we can define the unitary matrix \(V=-(1-ih)/(1+ih)\) and arrive at another useful expression for \(\mathcal{D}\),
\[\mathcal{D}=\frac{m}{2}\left(1+V\right),\qquad V^{\dagger}V=1\, \tag{21}\]
with
\[V\to-1+\frac{2i\not{p}}{m}+O\left[\left(\frac{\not{p}}{m}\right)^{2}\right] \tag{22}\]
The eigenvalues of V lie on a unit circle centered at the origin in the complex plane, and those of \(\mathcal{D}\) lie on a circle of radius \(m/2\) centered at \(m/2\). When the theory is gauged, low-lying eigenvalues of \(\not{D}\) lie near \(V=-1\), while large ones are mapped to the neighborhood of \(V=+1\). This is familiar from the discussion in Ref. [2].
### Solutions to the Ginsparg-Wilson equation
We now examine solutions to the GW equation, which not only satisfy Eq. (16), but also satisfy \(\mathcal{D}\to\not{D}\) in the continuum limit \(m\gg p\), in order to describe a massless Dirac fermion, and which for free fermions only vanish at zero momentum, so as to describe a single flavor in the continuum limit.
#### ii.2.1 The Pauli-Villars solution
Although the GW equation was derived in the context of a lattice regularization, it is in fact more general, and a simple continuum solution to the GW and anomaly equations existed decades before Ginsparg and Wilson wrote their paper: a fermion regulated by a Pauli-Villars (PV) ghost. Examining this case yields insights into the nature of lattice solutions and symmetries.
We have seen that \(\mathcal{D}=mih/(1+ih)\) will solve the GW equation and describe a massless Dirac fermion in the low eigenvalue limit if \(mh\) obeys the continuum symmetries of a massless Dirac fermion, and \(mh\to\not{p}\) for a free fermion at low \(p\). The simplest possible solution to these criteria is to simply set \(ih=\not{D}/m\), in which case the GW solution describes a PV regulated fermion:
\[\not{D}\to\mathcal{D}_{\rm PV}=m\frac{\not{D}}{\not{D}+m}=\frac{m}{2}\left(1 -\frac{1-\not{D}/m}{1+\not{D}/m}\right)\, \tag{23}\]
where we will take \(m>0\) with the "continuum" limit being \(m\to\infty\). The operator \(\mathcal{D}_{\rm PV}\) is not fully regulated, but the phase of its determinant is, which is where anomalies appear. The unitary matrix \(V\) in Eq. (21) is given by
\[h=-i\not{D}/m\,\quad V=-\frac{1-\not{D}/m}{1+\not{D}/m}. \tag{24}\]
We will show that the operator \(\mathcal{D}_{\rm PV}\) simply illustrates two general properties of solutions to the GW equation which we discuss below. The first is that the regulated \(\eta\)-invariant of the continuum operator - which describes the phase of the fermion determinant - is realized in terms of \(\ln\det V\). The second is that when ghost fields are introduced to represent the PV-regulated fermion, the exact symmetry of the regulated action discovered by Luscher can be simply related to the symmetry of the unregulated action. The PV solution will also help inform our analysis of massless Majorana fermions in Sec. III.3.
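As a quick numerical illustration (a sketch of our own, not from the original work; the explicit \(d=2\) representation and the single free momentum mode are assumptions), one can check that \(\mathcal{D}_{\rm PV}\) satisfies the chiral GW relation Eq. (18):

```python
# Numerical check on a single free 2d momentum mode that D_PV = m D/(D+m)
# obeys Eq. (18): omega (1/D_PV - 1/m) omega-bar = 1/D_PV - 1/m,
# with omega = omega-bar = exp(i alpha gamma_bar).
import numpy as np
from scipy.linalg import expm

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)    # gamma_bar in d = 2
I = np.eye(2)

m, alpha = 5.0, 0.37
Dslash = 1j * (0.4 * s1 - 1.1 * s2)                # anti-Hermitian, anticommutes with gamma_bar
D_pv = m * Dslash @ np.linalg.inv(Dslash + m * I)

omega = expm(1j * alpha * s3)
lhs = omega @ (np.linalg.inv(D_pv) - I / m) @ omega
print(np.allclose(lhs, np.linalg.inv(D_pv) - I / m))   # True
```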
#### ii.2.2 The overlap solution
The first explicit lattice solution to the GW equation was the overlap operator of Neuberger [2], based on the earlier work in conjunction with Narayanan in Refs. [6; 7; 8] and on the domain wall fermion construction in [10]. This solution takes the \(V\) matrix to be
\[V=\frac{D_{\rm w}}{\sqrt{D_{\rm w}^{\dagger}D_{\rm w}}}\, \tag{25}\]
where \(D_{\rm w}\) is the lattice operator for a Wilson fermion with mass \(-M<0\) and Wilson coupling \(r=M\)3,
Footnote 3: As shown in [11; 12] there is actually an interesting sequence of topological phase transitions as a function of \(M/r\), and taking \(M/r=1\) places the theory in one of several possible topological phases.
\[D_{\rm w}=\sum_{\mu}\delta_{\mu}\gamma_{\mu}-M-\frac{M}{2}\Delta\, \tag{26}\]
where \(\delta_{\mu}\) is the covariant symmetric difference operator, and \(\Delta\) is the covariant lattice Laplacian. Without gauge fields, this gives
\[\tilde{D}_{\rm w}(p) =\sum_{\mu}(i\gamma^{\mu}\sin p_{\mu})+M\left[-1+\sum_{\mu}(1-\cos p _{\mu})\right]\] \[\to M\left(-1+i\frac{\not{p}}{M}+O(p^{2}/M^{2})\right). \tag{27}\]
Evidently \(V\to\left(-1+i\frac{\not{p}}{M}+O(p^{2}/M^{2})\right)\) and one can see that near the corners of the Brillouin zone where doublers reside for naive lattice fermions one finds \(V=+1\). Therefore this operator behaves correctly as a massless Dirac fermion in the continuum limit.
In even spacetime dimensions, one has chiral symmetry with \(\omega=\bar{\omega}=e^{i\alpha\bar{\gamma}}\). Then the GW equation as expressed by Eq. (20) is equivalent to \(\{\bar{\gamma},h\}=0\) or \(\bar{\gamma}V\bar{\gamma}=V^{\dagger}\). This latter property is readily seen to be satisfied by the overlap solution. In odd spacetime dimensions one is interested in reflection symmetry for which \(\omega=-\bar{\omega}=\mathcal{R}_{1}\gamma_{1}\) and Eq. (20) requires \(\{h,\omega\}=0\), implying that \(\omega V\omega^{-1}=V^{\dagger}\), which is also seen to be satisfied by the overlap operator.
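These properties are easy to verify numerically; the following sketch (ours, free theory in \(d=2\) with no gauge fields, \(M=r=1\)) constructs \(V\) from the momentum-space Wilson operator of Eq. (27) and checks the small-momentum behavior, the value at a doubler corner, and \(\bar{\gamma}V\bar{\gamma}=V^{\dagger}\):

```python
# Free-field check of the overlap V: unitary, ~ -1 + i p-slash/M near p = 0,
# ~ +1 at a Brillouin-zone corner, and gamma_bar V gamma_bar = V^dagger.
import numpy as np
from scipy.linalg import sqrtm

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)    # gamma_bar in d = 2
M = 1.0                                            # Wilson mass = Wilson coupling

def overlap_V(p):
    Dw = 1j * (np.sin(p[0]) * s1 + np.sin(p[1]) * s2) \
         + M * (-1 + (1 - np.cos(p[0])) + (1 - np.cos(p[1]))) * np.eye(2)
    return Dw @ np.linalg.inv(sqrtm(Dw.conj().T @ Dw))

print(np.round(overlap_V([0.05, -0.02]), 3))       # ~ -1 + i p-slash / M
print(np.round(overlap_V([np.pi, np.pi]), 3))      # ~ +1: the doubler is lifted
V = overlap_V([0.6, -1.3])
print(np.allclose(s3 @ V @ s3, V.conj().T))        # True
```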
### An exact symmetry of the lattice action
Equation (16) together with Eq. (9) implies that the action \(\bar{\chi}\mathcal{D}\chi\) for a GW fermion obeys an exact Luscher symmetry,
\[\bar{\chi}\to\bar{\chi}Q_{\omega}\bar{\omega}\,\quad\chi\to\omega\chi. \tag{28}\]
This symmetry constrains the Feynman rules for the theory, eliminating the possibility of an additive mass renormalization for \(\chi\) in perturbation theory since a mass term breaks the symmetry with
\[\bar{\chi}\chi\to\bar{\chi}Q_{\omega}\bar{\omega}\omega\chi\, \tag{29}\]
where \(Q_{\omega}\bar{\omega}\omega\neq 1\) for the symmetry transformations of interest4. The transformation is also not a symmetry of the \(\chi\) measure, with Jacobian equal to \((1/\det\bar{\omega}\omega Q_{\omega})\), which we see from the anomaly equation Eq. (15) exactly reproduces the \(\exp(2i\mathcal{A})\) anomaly in the original continuum theory. This symmetry was discovered in the context of infinitesimal chiral transformations in even spacetime dimension by Luscher [4; 21] with \(\omega=\bar{\omega}=1+i\alpha\bar{\gamma}+O(\alpha^{2})\), which we have generalized here to include discrete symmetries.
Footnote 4: This symmetry does not protect against finite nonperturbative additive mass renormalizations, such as those that can be generated by instantons as discussed in [20].
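A small numerical check of this exact symmetry (a sketch of ours, single free momentum mode, chiral case) shows that the transformed action reproduces the original, \(Q_{\omega}\bar{\omega}\mathcal{D}\omega=\mathcal{D}\), while a mass term \(\bar{\chi}\chi\) is not invariant since \(Q_{\omega}\bar{\omega}\omega\neq 1\):

```python
# Check of the exact symmetry Eq. (28) for the PV solution at one momentum:
# Q_omega omega-bar D_PV omega = D_PV, but Q_omega omega-bar omega != 1.
import numpy as np
from scipy.linalg import expm

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.array([[1, 0], [0, -1]], dtype=complex)    # gamma_bar
I = np.eye(2)

m, alpha = 3.0, 0.25
Dslash = 1j * (0.7 * s1 - 0.3 * s2)
D_pv = m * Dslash @ np.linalg.inv(Dslash + m * I)

omega = expm(1j * alpha * s3)                      # omega = omega-bar for a chiral rotation
X = np.linalg.inv(m * I) - np.linalg.inv(omega @ (m * I) @ omega)   # Eq. (12)
Q = I - D_pv @ X                                   # Eq. (14)

print(np.allclose(Q @ omega @ D_pv @ omega, D_pv)) # True: the action is invariant
print(np.allclose(Q @ omega @ omega, I))           # False: a mass term breaks the symmetry
```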
This symmetry may seem somewhat peculiar, but becomes transparent when considering the PV solution. First one simply adds a Gaussian term for a spinor ghost with Bose statistics,
\[S_{\chi}\to\bar{\chi}\mathcal{D}_{\text{PV}}\chi+m\bar{\phi}\phi=m\left(\bar{\chi}\,\frac{\not{D}}{\not{D}+m}\chi+\bar{\phi}\phi\right), \tag{30}\]
integrating over the \(\phi\) fields, which has no effect other than modifying the normalization of the path integral. The fermion operator \(\mathcal{D}_{\text{PV}}\) is defined in Eq. (23). We then make the simultaneous change of variables
\[\bar{\chi}=\bar{\chi}^{\prime}(1+\not{D}/m)\,\qquad\bar{\phi}=\bar{\phi}^{ \prime}(1+\not{D}/m)\, \tag{31}\]
leaving \(\chi\) and \(\phi\) unchanged. Because \(\bar{\chi}\) and \(\bar{\phi}\) have opposite statistics, the Jacobians from these transformations cancel in the integration measure. The action now looks like
\[S_{\chi}=\left[\bar{\chi}^{\prime}\not{D}\chi+\bar{\phi}^{\prime}(\not{D}+m) \phi\right], \tag{32}\]
which is the conventional form for PV regularization in perturbative applications with a massless Dirac fermion and a ghost of mass \(m\).
Using the identity
\[Q_{\omega}\bar{\omega}=\frac{1}{(1+\not{D}/m)}\bar{\omega}(1+\not{D}/m)\, \tag{33}\]
the Luscher symmetry transformation of Eq. (28) becomes very simple in terms of our new variables,
\[\chi\to\omega\chi\,\qquad\bar{\chi}^{\prime}\to\bar{\chi}^{\prime}\bar{ \omega}\, \tag{34}\]
with \(\phi\) and \(\bar{\phi}^{\prime}\) not transforming at all. In other words, the transformations of the \(\chi\) and \(\bar{\chi}^{\prime}\) fields are just the symmetry transformations that leave the continuum Dirac action invariant. Furthermore, as in the continuum, violation of the symmetry comes from the path integral measure since Eq. (34) has no compensating transformation of the ghost field. It is clear that since the Feynman rules for \(\chi\) and \(\bar{\chi}^{\prime}\) in this theory with ghosts respect the \(\omega\) symmetry, no symmetry-violating operators will be generated by radiative corrections in perturbation theory.
### The anomaly equation
The anomaly equation Eq. (15) states that the continuum anomaly \(\exp(2i\mathcal{A})=1/\det Q_{\omega}\) for chiral symmetry transformations (for which \(\det\bar{\omega}\omega=1\)), while \(\exp(2i\mathcal{A})=1/\det(-Q_{\omega})\) for reflections (where \(\bar{\omega}\omega=-1\)), which in both cases equals the Jacobian for the symmetry transformation in Eq. (28). This relates \(\mathcal{A}\), which is a functional of the background fields, to properties of the fermion spectrum. Here we show that in even spacetime dimensions the equation reproduces the Atiyah-Singer index theorem as shown in Ref. [4], while in odd spacetime dimensions it reproduces the relation between the parity anomaly and the \(\eta\)-invariant discovered in Ref. [22]. For recent work on the \(\eta\)-invariant in the context of the overlap operator, see Refs. [23; 24].
We first consider the PV solution in both odd and even dimensions. The phase of the determinant for a massless Dirac fermion may be expressed as \(\exp(-i\pi\eta_{D}(0)/2)\), where \(\eta_{D}\) is defined as a regulated sum of the signs of eigenvalues of \(i\not{D}\), and \(\eta_{D}(0)\) is the universal value as the regulator is removed [25]. The PV solution to the GW equation replaces \(\not{D}\) by its regulated form \(\mathcal{D}_{\text{PV}}=(m/2)(1+V)\) where \(V\) is unitary. It follows that
\[\frac{\det\mathcal{D}_{\text{PV}}}{\det\mathcal{D}_{\text{PV}}^{\dagger}}=e^{\text{Tr}\,\ln\frac{1+V}{1+V^{\dagger}}}=e^{\text{Tr}\,\ln V}. \tag{35}\]
The eigenvalues of \(V\) are \((-i\lambda/m-1)/(-i\lambda/m+1)=-1-2i\lambda/m+O(1/m^{2})\), and so we have
\[\text{Tr}\,\ln V=-i\pi\sum_{\lambda}\frac{\lambda}{|\lambda|}+O(1/m)\equiv-i \pi\eta_{D}(1/m). \tag{36}\]
Thus we see that
\[\eta_{D}(0)=\lim_{m\to\infty}\frac{i}{\pi}\ln\det V \tag{37}\]
and the phase of the fermion determinant \(\det\mathscr{D}_{\rm PV}\) may be written as \(e^{-i\frac{\pi}{2}\eta_{D}}\). This result applies generally to solutions of the GW equation.
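A finite-matrix illustration (ours, with a random Hermitian matrix standing in for \(i\not{D}\); the matrix size and PV mass are arbitrary choices) of Eqs. (36)-(37):

```python
# With D = -iH for Hermitian H and large m, the eigenvalue phases of
# V = -(1 - D/m)/(1 + D/m) reproduce eta_D = sum_lambda sign(lambda).
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
H = (A + A.conj().T) / 2                   # Hermitian stand-in for i D-slash
D = -1j * H
m = 1.0e6
I = np.eye(6)
V = -np.linalg.inv(I + D / m) @ (I - D / m)

eta_from_V = (1j / np.pi) * np.sum(np.log(np.linalg.eigvals(V)))
eta_exact = np.sum(np.sign(np.linalg.eigvalsh(H)))
print(np.round(eta_from_V.real, 4), eta_exact)   # agree up to O(1/m) corrections
```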
In odd spacetime dimensions with a space reflection transformation as in Eq. (7) we have \(\bar{\omega}\omega=-1\), \(m_{\omega}=-m\) and \(-Q_{\omega}=-1+2\mathscr{D}/m=V\). Therefore the anomaly equation states that \(\mathcal{A}=-\frac{1}{2}{\rm Tr}\,\ln V=i\pi\eta_{D}/2\), correctly realizing the parity anomaly as the regulator is removed [22]. The perturbative expansion of \(\eta_{D}\) yields the Chern-Simons action, a result also consistent with Ref. [26].
In even spacetime dimensions for a \(U(1)_{A}\) chiral transformation the anomaly equation states \(\exp(2i\mathcal{A})=1/\det Q_{\omega}\). In this case it is simplest to expand to linear order in \(\alpha\) and one finds
\[Q_{\omega}=1-2i\alpha/m\mathscr{D}\bar{\gamma}+O(\alpha^{2})\, \tag{38}\]
and the anomaly equation states that
\[2i\mathcal{A}=\frac{2i\alpha}{m}{\rm Tr}\,\bar{\gamma}\mathscr{D} \tag{39}\]
where the continuum anomaly functional \(\mathcal{A}\) is proportional to \(\alpha\). The Atiyah-Singer index theorem states that the right side of the above equation should equal \(-2i\alpha\) times the index of the Dirac operator, \((n_{+}-n_{-})\), where \(n_{\pm}\) equals the number of \(\pm 1\) chirality zeromodes. This result follows from the analysis by Luscher [4], after taking into account the relative normalization of \(am/2\) between \(\mathscr{D}\) and the GW operator analyzed in that paper.
### Anti-linear symmetry
A theory that possesses an anti-linear time reversal symmetry \(\psi({\bf x},t)\to\mathscr{T}\psi({\bf x},-t)\) in Minkowski spacetime will respect a related anti-linear symmetry in Euclidean spacetime that does not reverse any coordinates. This is simply because after replacement of \(t\) with \(-i\tau\), the conjugation of the \(i\) in \(-i\tau\) has the same effect as \(t\to-t\). For this symmetry \(\Omega=\bar{\Omega}^{\dagger}=\tilde{\mathcal{T}}T\) where the operator \(\tilde{\mathcal{T}}\) reverses time in Minkowski spacetime but acts trivially in Euclidean, while \(T\) is a unitary matrix satisfying \(T^{\dagger}\gamma_{\mu}T=\pm\gamma_{\mu}^{T}\). When this transformation is a symmetry of the massless theory but is necessarily broken by a fermion mass term, then it will in general be anomalous and there will be corresponding GW relations. A simple example is a massless Dirac fermion in \(2+1\) dimensions where we can take the \(\gamma\) matrices to be \(\gamma^{0}=i\sigma_{1}\), \(\gamma^{1}=\sigma_{2}\), \(\gamma^{2}=\sigma_{3}\) and \(T=\sigma_{2}\). Under time reversal the fields transform as \(\psi({\bf x},t)\to T\psi({\bf x},-t)\) and \(\bar{\psi}({\bf x},t)\to\bar{\psi}({\bf x},-t)T\) which is a symmetry of the action for a massless Dirac fermion, but for a massive fermion the transformation flips the sign of the mass term. In Euclidean spacetime the symmetry transformation is identical, \(\psi\to T\psi\) and \(\bar{\psi}\to\bar{\psi}T\), except that there is no change in the coordinates; again one finds that the massless Dirac action is invariant but that a mass term is odd.
Our derivation of the generalized GW relations proceeds as above, only now \(\Omega\) and \(\bar{\Omega}\) are anti-linear, while \(\omega\) and \(\bar{\omega}\) remain ordinary matrices. This change results in Eq. (9) being replaced by
\[\mathscr{D}_{\omega}=\bar{\omega}\mathscr{D}^{*}\omega\,\quad m_{\omega}=\bar{ \omega}m^{*}\omega\, \tag{40}\]
With these changes, the anomaly equation Eq. (15) and the GW equation Eq. (16) remain valid. It is evident that \(\mathscr{D}_{\rm PV}\) satisfies this antilinear GW equation since \(h\propto\not{D}\); one can easily check that \(\mathscr{D}_{\rm ov}\) satisfies it as well.
## III Generalized Ginsparg-Wilson relations for Majorana fermions
The edge states of topological insulators are typically massless Dirac fermions such as described in the previous section; on the other hand, the edge states of topological superconductors without a conserved fermion number are massless Majorana fermions. Majorana edge states were first discussed in Ref. [27] in the context of simulating gluinos in \(d=3+1\) dimensions, and in Ref. [28] for \(d=1+1\) condensed matter systems. Here we derive the GW relations for Majorana fermions.
### Continuum Majorana fermions
We begin by summarizing properties of continuum Majorana fermions in arbitrary \(d\) dimensions, and enumerate the symmetries of interest.5
Footnote 5: For a detailed discussion of Majorana fermions in Minkowski and Euclidean spacetimes, see Ref. [29].
#### iii.1.1 The Majorana constraint
To obtain a single flavor of massless Majorana fermion we impose a Lorentz-covariant Majorana constraint on a massless Dirac fermion,
\[\psi=\psi^{\mathcal{K}}\,\qquad\psi^{\mathcal{K}}\equiv\mathcal{K}^{\dagger}\bar{\psi}^{T}\, \tag{41}\]
where for Lorentz invariance and self-consistency of the constraint, \(\mathcal{K}\) must equal either an _antisymmetric_ \(C\) matrix, or a _symmetric_ \(\mathcal{T}\) matrix, \(C\) and \(\mathcal{T}\) being unitary matrices which satisfy
\[C\gamma_{\mu}C^{\dagger}=-(\gamma_{\mu})^{T}\,\qquad\mathcal{T}\gamma_{\mu}\mathcal{T}^{\dagger}=(\gamma_{\mu})^{T}. \tag{42}\]
The Majorana constraint as expressed above is equally valid in Minkowski and Euclidean spacetimes. In Ref. [29]
fermions satisfying these constraints are referred to as Majorana (\(\mathcal{K}=C\)) or pseudo-Majorana (\(\mathcal{K}=\mathcal{T}\)); here we will refer to them as \(\mathcal{C}\)-Majorana and \(\mathcal{T}\)-Majorana respectively when distinguishing between them, and simply by "Majorana" when not. The massless Majorana action can then take the form6
Footnote 6: A Majorana fermion may carry gauge charges so long as it is in a (pseudo-)real representation of the gauge group. In that case, \(\mathcal{C}\) and \(\mathcal{T}\) will have to include the appropriate matrices to effect the similarity transformation from the generators \(T_{a}\) to the conjugate generators \(-T_{a}^{T}\).
\[S=\int d^{d}x\,\tfrac{1}{2}\psi^{T}\mathcal{K}\not{D}\psi. \tag{43}\]
Table 1 lists the properties of the \(C\) and \(\mathcal{T}\) matrices in different dimensions, and we see that for a single Majorana flavor we can take \(\mathcal{K}=C\) in \(d=2,3,4\mod 8\), and \(\mathcal{K}=\mathcal{T}\) in \(d=1,2,8\mod 8\), while there is no solution in \(d=5,6,7\mod 8\). Instead of one flavor, one could consider two flavors and replace \(\mathcal{K}\to\mathcal{K}\otimes\tau_{2}\), where \(\tau_{2}\) is the antisymmetric Pauli matrix in flavor space. Then one requires \(\mathcal{K}\) to equal either a _symmetric_\(\mathcal{C}\) matrix, or an _antisymmetric_\(\mathcal{T}\) matrix. Such fermions are sometimes referred to as symplectic Majorana fermions. In this way one can discuss massless fermions with a reality constraint (\(C\)-Majorana, \(\mathcal{T}\)-Majorana, symplectic Majorana) in any dimension. In this section we will only discuss a single flavor of massless Majorana and are therefore restricted to \(d=2,3,4\). We give examples of these theories with discrete symmetry anomalies, as well as an anomalous example of symplectic Majoranas.
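For example (a quick check of ours, using the explicit \(d=2\) Euclidean representation \(\gamma_{1}=\sigma_{1}\), \(\gamma_{2}=\sigma_{2}\), which is an assumption about the representation rather than part of the text), the defining relations Eq. (42) and the (anti)symmetry properties can be verified directly:

```python
# In d = 2 with gamma_1 = sigma_1, gamma_2 = sigma_2: C = sigma_2 is
# antisymmetric with C gamma C^dag = -gamma^T, and T = sigma_1 is symmetric
# with T gamma T^dag = +gamma^T, consistent with the entries quoted from Table 1.
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]])
C, T = s2, s1

for g in (s1, s2):
    assert np.allclose(C @ g @ C.conj().T, -g.T)
    assert np.allclose(T @ g @ T.conj().T, +g.T)
assert np.allclose(C.T, -C) and np.allclose(T.T, T)
print("d=2: C antisymmetric, T symmetric, Eq. (42) satisfied")
```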
In order to follow the GW program we must be able to define a mass term for the Majorana fermion. This can be included in the Euclidean action as \(\tfrac{1}{2}\int\psi^{T}\mathsf{m}\psi\) where
\[\mathsf{m}=\mu\mathcal{M}=-\mathsf{m}^{T}\, \tag{44}\]
\(\mu\) being a number with dimension of mass, while \(\mathcal{M}\) is required by Lorentz invariance and fermion statistics to be either an antisymmetric \(C\) or antisymmetric \(\mathcal{T}\) matrix. No such matrix exists in \(d=1,7,8\mod 8\). In these cases we can consider symplectic Majoranas (two flavors) in which case \(\mu\) may be replaced by \(\mu\,\tau_{2}\) acting in flavor space, and \(\mathcal{M}\) must now be a _symmetric_ \(C\) or \(\mathcal{T}\) matrix7.
Footnote 7: It is stated in Ref. [29] that \(\mathcal{T}\)-Majorana fermions are necessarily massless, but that assumes that a mass term must have the form \(\psi^{T}\mathcal{T}\psi\). When allowing for a \(\psi^{T}C\psi\) mass term the statement is no longer true. This can be generated from a Dirac action by applying the \(\mathcal{T}\)-Majorana constraint to a Dirac mass term of the form \(i\overline{\psi}\overline{\psi}\).
As can be seen from Table 1, the requirement that both \(\mathcal{K}\) and \(\mathcal{M}\) exist still restricts us to discussing \(d=2,3,4\) for a single flavor. In \(d=3\) there is the unique choice \(\mathcal{K}=\mathcal{M}=C\). In \(d=2\) we have the single choice \(\mathcal{M}=C\) while \(\mathcal{K}\) may equal \(C\) or \(\mathcal{T}\). In \(d=4\), the reverse is true: \(\mathcal{K}=C\) while \(\mathcal{M}\) may equal \(C\) or \(\mathcal{T}\). For the two mixed cases \((\mathcal{K},\mathcal{M})=(\mathcal{T},C)\) in \(d=2\) and \((C,\mathcal{T})\) in \(d=4\) we have \(\mathcal{T}\) equal to \(\bar{\gamma}C\), up to a phase, and hermiticity in Minkowski spacetime is guaranteed if we take
\[\mathcal{M}^{-1}\mathcal{K}=\begin{cases}1&(C,C)\\ i\bar{\gamma}&(\mathcal{T},C),\,(C,\mathcal{T})\end{cases}. \tag{45}\]
#### iii.1.2 Symmetries
In dimensions \(d=2,3,4\) the massless Dirac action possesses a \(U(1)_{V}\) fermion number, reflection symmetry and charge conjugation symmetries, while in \(d=2,4\) it also possesses a \(U(1)_{A}\) chiral symmetry. Here we examine what subgroup is left unbroken by the Majorana constraint, and then what is the effect of the regulator.
In all dimensions \(U(1)_{V}\) fermion number symmetry is broken to a \(\mathbf{Z}_{2}\) subgroup which acts as \((-1)^{F}\), an element of the Lorentz group. What happens to the \(U(1)_{A}\) chiral symmetry in \(d=2,4\) depends on the fact that \(\mathcal{K}\bar{\gamma}^{T}\mathcal{K}^{-1}=-\bar{\gamma}\) in \(d=2\) and \(+\bar{\gamma}\) in \(d=4\). In \(d=2\)
in addition to \((-1)^{F}\) the Majorana constraint leaves unbroken a \(\mathbf{Z}_{2}\) subgroup of \(U(1)_{V}\times U(1)_{A}\) corresponding to \(\psi\to\bar{\gamma}\psi\), while in \(d=4\) the entire \(U(1)_{A}\) remains unbroken. The latter result should not be surprising since a massless Majorana fermion in \(d=4\) Minkowski spacetime is equivalent to a massless Weyl fermion, whose action possesses a \(U(1)\) symmetry; this is not true in \(d=2\).
The charge conjugation symmetry of the Dirac fermion survives the Majorana constraint, but either acts trivially on the Majorana fermion, or as \((-1)^{F}\).
For reflections we consider transformations of the Dirac field \(\psi(x)\to\mathsf{R}\psi(x)=\varepsilon\gamma_{1}\psi(\tilde{x})\) and \(\bar{\psi}(x)\to\mathsf{R}\bar{\psi}(x)=-\varepsilon^{*}\bar{\psi}(\tilde{x})\gamma_{1}\), where \(\varepsilon\) is a phase and \(\tilde{x}\) has the sign of \(x_{1}\) flipped. This is consistent with the Majorana condition Eq. (41) if \(\varepsilon=1\) when \(\mathcal{K}=C\) and \(\varepsilon=i\) when \(\mathcal{K}=\mathcal{T}\), and is therefore always a symmetry for the massless Majorana action. Note that this means that for \(C\)-Majoranas we have \(\mathsf{R}^{2}=1\) while for \(\mathcal{T}\)-Majoranas, \(\mathsf{R}^{2}=(-1)^{F}\).
When a Majorana mass term \(\mathsf{m}\) is included the \((-1)^{F}\) symmetry is not broken, but the discrete chiral symmetry in \(d=2\) and the continuous chiral symmetry in \(d=4\) are; therefore it is reasonable to expect anomalies and GW relations for these transformations. The situation for reflection symmetry is more complicated. Reflection symmetry is broken by the mass term if the \(\mathcal{M}\) matrix is the same as the \(\mathcal{K}\) matrix, and unbroken if they are unlike (e.g. \((\mathcal{K},\mathcal{M})=(C,\mathcal{T})\) or \((\mathcal{K},\mathcal{M})=(\mathcal{T},C)\)). Therefore we should expect reflection symmetry to be anomalous for Majorana fermions in \(d=2,3\) and in \(d=4\) when \(\mathcal{M}=C\). It will not be anomalous for \(\mathcal{T}\)-Majorana fermions in \(d=2\) or \(C\)-Majorana fermions in \(d=4\) with \(\mathcal{M}=\mathcal{T}\). These two cases are quite different from each other, however: in \(d=2\) both \(C\)- and \(\mathcal{T}\)-Majoranas exist with only one way to regulate them (with \(\mathcal{M}=C\)), and we find that reflections are anomalous in the former but not the latter. For \(d=4\) we only have a \(C\)-Majorana, but two ways to regulate, with \(\mathcal{M}=C\) or \(\mathcal{M}=\mathcal{T}\), the former breaking reflection symmetry and the latter not. In this case we would say that choosing \(\mathcal{M}=C\) is a poor choice of regulator, needlessly breaking the symmetry of the massless fermion, and we would not expect the symmetry to be anomalous.
We have summarized the situation with reflection and chiral symmetries in Table 2; the cases for which GW relations pertain are the marked entries.
### Derivation of the relations
Similar to the discussion of Dirac fermions in Sec. II, we can derive a GW relation for Majorana fermions, which we denote as \(\Xi\) in the continuum. We follow the same block-spin prescription as for Dirac fermions and perform a transformation \(\Xi\to\Omega\Xi\) which is assumed to be a symmetry of the continuum action but not a symmetry of either the block-spin Gaussian or the measure. The analogue of Eq. (10) is
\[e^{-\frac{1}{2}\eta^{T}\mathsf{D}_{\omega}\eta}\ =\int d\Xi\ e^{i \mathscr{A}}e^{-S[\Xi]-(\eta-\xi)^{T}\mathsf{m}_{\omega}(\eta-\xi)}\, \tag{46}\]
where \(\xi_{\mathbf{n}}\) are block-averaged lattice fields related to \(\Xi\) as in Eq. (2),
\[\xi_{\mathbf{n}}=\int\!d^{d}\mathbf{x}\ \Xi(\mathbf{x})f(\mathbf{x}-\mathbf{n}a) \tag{47}\]
and \(\mathsf{m}\) is an invertible, imaginary, antisymmetric matrix. We have defined
\[\mathsf{D}_{\omega}=\omega^{T}\mathsf{D}\omega\,\qquad\mathsf{m}_{\omega}= \omega^{T}\mathsf{m}\omega\, \tag{48}\]
where \(\omega\) is related to \(\Omega\) in analogy with Eq. (8), and we suppress lattice indices as before. The path integral identity we derive in Eq. (100) allows us to recast this equation as
\[e^{-\frac{1}{2}\eta^{T}\mathsf{D}_{\omega}\eta}=e^{i\mathscr{A}}e^{\frac{1}{2}\mathrm{Tr}\,\ln\frac{\mathsf{m}_{\omega}}{\mathsf{m}}Q_{\omega}}e^{-\frac{1}{2}\eta^{T}Q_{\omega}^{-1}\mathsf{D}\eta}\, \tag{49}\]
where
\[Q_{\omega}=\left(1-\mathsf{D}\mathsf{X}_{\omega}\right),\quad \mathsf{X}_{\omega}=\mathsf{m}^{-1}-\mathsf{m}_{\omega}^{-1}. \tag{50}\]
Comparing both sides, we find two equations, the first of which is a generalized GW relation for Majorana fermions
\[\mathsf{D}_{\omega}=Q_{\omega}^{-1}\mathsf{D}. \tag{51}\]
This can be rewritten in a form analogous to the conventional GW relation as
\[\mathsf{D}_{\omega}-\mathsf{D}=\mathsf{D}\mathsf{X}_{\omega}\mathsf{D}_{\omega}. \tag{52}\]
If there are no zeromodes, then \(\mathsf{D}\) is invertible and the GW equation is equivalent to
\[\omega^{T}\left(\frac{1}{\mathsf{D}}-\frac{1}{\mathsf{m}}\right) \omega=\left(\frac{1}{\mathsf{D}}-\frac{1}{\mathsf{m}}\right)\, \tag{53}\]
similar to what we found for the Dirac case in Eq. (18).
As in the Dirac case, the second equation obtained is the anomaly equation,
\[e^{i\mathscr{A}}=\frac{1}{\sqrt{\det\frac{\mathsf{m}_{\omega}}{\mathsf{m}}Q_{ \omega}}}. \tag{54}\]
As we shall show, the square root is well defined.
### Solutions to the Majorana Ginsparg-Wilson equation
Just as we identified both the PV and overlap solutions to the GW relations for Dirac fermions, we can do the same for Majoranas. The PV solution allows one to easily derive certain useful properties of a solution which generalize.
#### iii.3.1 Pauli-Villars solution
If we write
\[\mathsf{D}=\mathsf{m}\frac{ih}{ih+1} \tag{55}\]
then the GW relation in Eq. (53) is equivalent to the statement
\[\omega^{T}\mathsf{m}h\omega=\mathsf{m}h\, \tag{56}\]
or that \(\mathsf{m}h\) possesses the same symmetry as the continuum operator for a massless Majorana fermion, \(\mathcal{K}\not{D}\). Furthermore, the continuum limit requiring that \(\mathsf{D}\to i\mathcal{K}\not{p}\) in the low momentum limit for a free fermion implies that \(h\to\mathsf{m}^{-1}\mathcal{K}\not{p}\). As in the Dirac example discussed in Sec. II.2.1, the simplest solution is to simply set \(\mathsf{m}h=\mathcal{K}\not{D}\), and the interpretation of this solution of the GW equation is a PV regulated Majorana fermion,
\[\mathsf{D}_{\text{PV}}=\mu\mathcal{K}\not{D}\frac{1}{\mathcal{M}^{-1} \mathcal{K}\not{D}+\mu}\, \tag{57}\]
where \(\mathcal{M}^{-1}\mathcal{K}=1\) or \(\mathcal{M}^{-1}\mathcal{K}=\pm i\bar{\gamma}\), depending on which of the marked cases in Table 2 one is discussing, while \(\mu\) is the PV mass scale. Given that \(\mathcal{K}\not{D}\) and \(\mathcal{M}\) are antisymmetric, it is easy to show that \(\mathsf{D}_{\text{PV}}\) is antisymmetric as well.
This solution can be written as
\[\mathsf{D}_{\text{PV}}=\frac{\mathsf{m}}{2}\left(1+V_{\text{maj}}\right)\,\quad V_{\text{maj}}=-\frac{\mu-\mathcal{M}^{-1}\mathcal{K}\not{D}}{\mu+ \mathcal{M}^{-1}\mathcal{K}\not{D}}\, \tag{58}\]
where \(V_{\text{maj}}\) is a unitary matrix. The eigenvalues of \(V_{\text{maj}}\) lie on a circle, as in the Dirac case, where zeromodes of \(\not{D}\) are mapped to \(V_{\text{maj}}=-1\), while infinite eigenvalues are mapped to \(V_{\text{maj}}=+1\). For the cases where \(\mathcal{M}=\mathcal{K}=C\), \(V_{\text{maj}}\) is the same matrix we found for the Dirac PV solution, Eq. (24).
Various general properties of \(V_{\text{maj}}\) can be derived from the expression in Eq. (58). Antisymmetry of \(\mathsf{D}_{\text{PV}}\) implies that
\[\mathsf{m}V_{\text{maj}}\mathsf{m}^{-1}=\mathcal{M}V_{\text{maj}}\mathcal{M}^ {-1}=V_{\text{maj}}^{T} \tag{59}\]
Since \(V_{\text{maj}}\) is unitary, we can write its eigenvalue equation as \(V_{\text{maj}}\psi_{n}=e^{i\theta_{n}}\psi_{n}\), while it follows from Eq. (59) that \(V_{\text{maj}}\mathcal{M}^{\dagger}\psi_{n}^{*}=e^{i\theta_{n}}\mathcal{M}^{\dagger}\psi_{n}^{*}\). Furthermore, \(\psi_{n}\) and \(\mathcal{M}^{\dagger}\psi_{n}^{*}\) are mutually orthogonal due to the antisymmetry of \(\mathcal{M}\). Therefore it follows that the eigenvalues of \(V_{\text{maj}}\) are all doubly degenerate. This will be relevant below when we discuss the square root of the determinant of \(V_{\text{maj}}\).
Next we show how symmetries impact the eigenvalue spectrum of \(V_{\text{maj}}\). In the continuum, reflection symmetry for a Dirac fermion takes \(\psi\to(\gamma_{1}\mathcal{R}_{1})\psi\) where \(\mathcal{R}_{1}\) reflects the \(x_{1}\) coordinate, with \((\gamma_{1}\mathcal{R}_{1})\not{D}(A)(\gamma_{1}\mathcal{R}_{1})=-\not{D}( \tilde{A})\), assuming that background fields \(A\) are also suitably reflected to \(\tilde{A}\). It follows that since \(\mathcal{M}^{-1}\mathcal{K}\) equals one in the \((C,C)\) theories and \(i\bar{\gamma}\) in the \((C,\mathcal{J})\) and \((\mathcal{J},C)\) theories that
\[(\gamma_{1}\mathcal{R}_{1})V_{\text{maj}}(\gamma_{1}\mathcal{R}_{1})=\begin{cases} V_{\text{maj}}^{\dagger}&(C,C)\\ V_{\text{maj}}&(C,\mathcal{J}),\,(\mathcal{J},C)\end{cases}\, \tag{60}\]
again assuming a reflection of background fields in the \(V_{\text{maj}}\) matrices on the right.
The effect of \(\bar{\gamma}\) in \(d=2,4\) is seen to be the same as seen in the Dirac case, namely
\[\bar{\gamma}V_{\text{maj}}\bar{\gamma}=V_{\text{maj}}^{\dagger}. \tag{61}\]
We will be interested in the anomalous symmetries marked by the "\(\mathcal{K}\)" in Table 2. We see that in each of these cases we have a unitary matrix \(\mathcal{U}\) satisfying \(\mathcal{U}V_{\text{maj}}\mathcal{U}^{\dagger}=V_{\text{maj}}^{\dagger}\). This implies that if \(V_{\text{maj}}\psi_{n}=e^{i\theta_{n}}\psi_{n}\), then \(V_{\text{maj}}\mathcal{U}^{\dagger}\psi_{n}=e^{-i\theta_{n}}\mathcal{U}^{ \dagger}\psi_{n}\), and therefore, not only are all eigenvalues of \(V_{\text{maj}}\) doubly degenerate, but the \(V\neq\pm 1\) eigenvalues also come in complex conjugate pairs8.
Footnote 8: One can relax the assumption that \(V_{\text{maj}}\) is unitary and still conclude the eigenvalues come in \(\{\lambda,\lambda^{-1}\}\) pairs for \(\lambda\neq\pm 1\).
#### iii.2.2 Overlap solution
Armed with insight from the above PV solution, it is straightforward to find a lattice overlap solution to the Majorana GW equation,
\[\mathsf{D}_{\text{ov}} =\frac{\mathsf{m}}{2}(1+V_{\text{maj}}) \tag{62}\] \[V_{\text{maj}} =\frac{D_{\text{w}}}{\sqrt{D_{\text{w}}^{\dagger}D_{\text{w}}}} \tag{63}\]
where
\[D_{\text{w}}=\mathcal{M}^{-1}\mathcal{K}\gamma^{\mu}\delta_{\mu}-\mu(1+ \Delta/2), \tag{64}\]
where \(\delta_{\mu}\) and \(\Delta\) are the lattice derivative and Laplacian respectively. The overlap solution for \(V_{\text{maj}}\) obeys the properties we found for the PV solution, Eqs. (59) to (61). Without gauge fields and in momentum space,
\[\tilde{D}_{\text{w}}(p) =\mathcal{M}^{-1}\mathcal{K}\sum_{\mu}\gamma^{\mu}i\sin(p_{\mu})\] \[\quad+\mu\left[-1+\sum_{\mu}(1-\cos(p_{\mu}))\right]. \tag{65}\]
Near the origin \(p\ll\pi/a\) we have
\[\tilde{D}_{\text{w}}(p)=\mathcal{M}^{-1}\mathcal{K}i\not{p}+O(p^{2}/\mu^{2}). \tag{66}\]
and thus
\[V_{\text{maj}} =-1+\frac{\mathcal{M}^{-1}\mathcal{K}i\not{p}}{|\mu|}+O(p^{2}/\mu^ {2}),\] \[\mathsf{D}_{\text{ov}}(p) =\frac{\mathsf{m}}{2}(1+V_{\text{maj}})=\frac{i}{2}\mathcal{K}\not{p}+O (p^{2}/\mu^{2}), \tag{67}\]
the correct continuum dispersion relation for a massless Majorana fermion. At the corners of the Brillouin zone, however, \(\mu\left[-1+\sum_{\mu}(1-\cos(p_{\mu}))\right]>0\) and \(V_{\text{maj}}\simeq 1\) so that \(\mathsf{D}_{\text{ov}}\) does not have low-lying eigenvalues associated with these states.
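As a minimal numerical sketch of the free overlap kernel (an illustration only, assuming the \((C,C)\) case in \(d=2\) with \(\mathcal{M}^{-1}\mathcal{K}=1\), \(\gamma^{1}=\sigma_{1}\), \(\gamma^{2}=\sigma_{2}\), lattice units \(a=1\), and \(\mu=1\)), one can build \(\tilde{D}_{\rm w}(p)\) of Eq. (65) and the corresponding \(V_{\rm maj}\) of Eq. (63) and inspect its spectrum:

```python
import numpy as np

# Assumptions for this sketch: d = 2, (C,C) case with M^{-1}K = 1,
# gamma^1 = sigma_1, gamma^2 = sigma_2, lattice spacing a = 1, mu = 1.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
mu = 1.0

def D_wilson(p):
    """Free momentum-space Wilson kernel of Eq. (65)."""
    kinetic = 1j * (np.sin(p[0]) * s1 + np.sin(p[1]) * s2)
    wilson = mu * (-1.0 + (1 - np.cos(p[0])) + (1 - np.cos(p[1])))
    return kinetic + wilson * np.eye(2)

def V_maj(p):
    """V = D_w / sqrt(D_w^dag D_w), Eq. (63), via eigendecomposition."""
    Dw = D_wilson(p)
    w, U = np.linalg.eigh(Dw.conj().T @ Dw)
    return Dw @ (U @ np.diag(1.0 / np.sqrt(w)) @ U.conj().T)

for p in [(0.05, 0.0), (1.0, 2.0), (np.pi, np.pi)]:
    print(p, np.round(np.linalg.eigvals(V_maj(p)), 3))
# The eigenvalues have unit modulus, sit near -1 for small p (physical branch),
# and equal +1 at the zone corner, so the doublers give no low-lying modes of D_ov.
```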
### Exact lattice symmetry for Majorana fermions
As in the Dirac case for the anomalous chiral and parity symmetries, the Majorana gw action respects exact versions of the various anomalous symmetries listed in Table 2, with the Jacobians of the transformations reproducing the anomaly \(\mathcal{A}\). Here we discuss the exact form respected by the gw operator for each of the symmetries listed in that table. In the next subsection we examine the anomaly equation Eq. (54) and show how the Jacobians of the exact lattice symmetry transformations correctly reproduce the known continuum anomaly \(\mathcal{A}\).
The Majorana gw equation in Eq. (52) implies an exact Luscher symmetry for any antisymmetric \(\mathsf{D}\) which satisfies it. To see this, we can rearrange the Majorana gw relation as
\[\mathsf{D}=\sqrt{Q_{\omega}}\ \mathsf{D}_{\omega}\ \sqrt{Q_{\omega}}^{T} \tag{68}\]
where \(Q_{\omega}=(1-\mathsf{DX}_{\omega})\) and \(Q_{\omega}^{T}=(1-\mathsf{X}_{\omega}\mathsf{D})\).
Care must be taken in the definition of the square root. Our convention is to define the square root of \(Q_{\omega}\) to be the unique matrix with the same eigenvectors as \(Q_{\omega}\) and whose eigenvalues are the square roots of the eigenvalues of \(Q_{\omega}\) with non-negative real part. We take the cut for the square root to be along the negative real axis, and for negative real eigenvalues of \(Q_{\omega}\) we will either define the corresponding eigenvalues of \(\sqrt{Q_{\omega}}\) to all lie on the positive imaginary or negative imaginary axes, denoting the choice by \(\sqrt[+]{Q_{\omega}}\) or \(\sqrt[-]{Q_{\omega}}\) respectively. We will see in Sec. III.4.2 that both choices come into play. When giving general arguments we will omit the \(\pm\) designation.
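A toy numerical illustration of this branch convention (assuming a diagonalizable matrix; the function and variable names are illustrative only):

```python
import numpy as np

def branch_sqrt(Q, sign=+1):
    """Square root with the convention above: same eigenvectors as Q,
    eigenvalues mapped to the root with non-negative real part, and
    eigenvalues on the cut (negative real axis) sent to the positive
    (sign=+1) or negative (sign=-1) imaginary axis."""
    lam, S = np.linalg.eig(Q)
    roots = np.sqrt(lam.astype(complex))                   # principal branch
    on_cut = np.isclose(lam.imag, 0.0) & (lam.real < 0)
    roots[on_cut] = sign * 1j * np.sqrt(-lam[on_cut].real)
    return S @ np.diag(roots) @ np.linalg.inv(S)

Q = np.diag([-1.0 + 0j, 0.5 + 0.5j, 0.5 - 0.5j])
print(np.round(np.diag(branch_sqrt(Q, +1)), 3))   # eigenvalue on the cut -> +1j
print(np.round(np.diag(branch_sqrt(Q, -1)), 3))   # eigenvalue on the cut -> -1j
```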
Equation (68) can be derived by noting that \(Q_{\omega}\mathsf{D}=\mathsf{D}Q_{\omega}^{T}\), and so \(\sqrt{Q_{\omega}}\mathsf{D}=\mathsf{D}\sqrt{Q_{\omega}}^{T}\). For a discrete symmetry transformation, \(Q_{\omega}=\sqrt{-V_{\rm maj}^{T}}\). Therefore, corresponding to the continuum symmetry \(\eta\to\omega\eta\), any gw regulated lattice action has an exact Luscher symmetry
\[\eta\to\omega\sqrt{Q_{\omega}}^{T}\eta. \tag{69}\]
In terms of \(\mathsf{D}=\frac{\mathsf{m}}{2}(1+V_{\rm maj})\), we can write
\[\sqrt{Q_{\omega}}^{T} = [1-\mathsf{X}_{\omega}\mathsf{D}]^{1/2} \tag{70}\] \[= \left[\frac{1}{2}(1+\mathsf{m}_{\omega}^{-1}\mathsf{m})-\frac{1 }{2}(1-\mathsf{m}_{\omega}^{-1}\mathsf{m})V_{\rm maj}\right]^{1/2}\]
In the low-energy (\(\mathsf{m}\to\infty\)) limit we have \(\mathsf{X}_{\omega}\to 0\) and \(Q_{\omega}\to 1\). The symmetry transformation then reduces to \(\eta\to\omega\eta\), as would be expected in the continuum limit.
Although the action is invariant under this symmetry, the fermion measure is, in general, not. The transformation in Eq. (69) produces a Jacobian \(\det(\omega\sqrt{Q_{\omega}}^{T})\). We will see in the next subsection that this Jacobian reproduces the correct anomaly, once care is taken with eigenvalues of \(Q_{\omega}\) which lie on the cut of the square root. While the exact symmetry in Eq. (69) is completely general for any (continuous or discrete) symmetry, we now restrict to the symmetries discussed in Table 2 for a single-flavor Majorana. We will also assume \(V_{\rm maj}\) is unitary for simplicity, and obeys the properties in Eqs. (59) to (61), but the arguments can be generalized for the non-unitary case.
#### iv.4.1 Discrete chiral and reflection \(\mathbb{Z}_{2}\) symmetries in \(d=2,3\)
In \(d=2,3\) a massless \(C\)-Majorana has a \(\mathbb{Z}_{2}\) reflection symmetry which is anomalously broken by the regulating mass term. The same is true in \(d=2\) for the discrete chiral symmetry for either type of Majorana.
In all these cases of a \(\mathbb{Z}_{2}\) symmetry broken by the regulator, the mass term flips sign, \(\mathsf{m}_{\omega}\mathsf{m}^{-1}=-1\). In this case \(Q_{\omega}^{T}=-V_{\rm maj}\) and the exact symmetry takes the simple form
\[\eta\to\omega\sqrt{-V_{\rm maj}}\eta. \tag{71}\]
where \(\omega=\mathscr{R}_{1}\gamma_{1}\) for the reflection symmetry and \(\omega=\bar{\gamma}\) for the discrete chiral symmetry. We can equally well define the square root as either \(\sqrt[+]{-V_{\rm maj}}\) or \(\sqrt[-]{-V_{\rm maj}}\) for these discrete symmetries. We will analyze the Jacobian in the next subsection and compare with the continuum anomaly.
The massless \(C\)-Majorana in \(d=4\) and the \(\mathcal{T}\)-Majorana in \(d=2\) also have reflection symmetries \(\mathsf{R}\), but they are nonanomalous since a regulating mass term exists which is \(\mathsf{R}\)-invariant. In such cases, a gw formulation is trivially invariant under the corresponding continuum symmetry, without any modification.
#### iv.4.2 \(U(1)_{A}\) symmetry in \(d=4\)
In \(d=4\), the continuum \(C\)-Majorana fermion has an anomalous continuous \(U(1)_{A}\) symmetry \(\eta\to e^{i\alpha\bar{\gamma}}\eta\), since either choice of the regulating mass term breaks this symmetry, as discussed in Table 2. Under the \(U(1)_{A}\) transformation \(\omega=e^{i\alpha\bar{\gamma}}\), the mass term transforms such that \(\mathsf{m}_{\omega}^{-1}\mathsf{m}=e^{-2i\alpha\bar{\gamma}}\). The exact lattice symmetry of Eq. (69) can then be simplified to
\[\eta\to e^{i\alpha\bar{\gamma}/2}\left\{\cos\alpha-i\bar{\gamma}V_{\rm maj} \sin\alpha\right\}^{1/2}\eta. \tag{72}\]
In the low-energy limit, \(V_{\rm maj}\to-1\), and this reduces to the continuum symmetry, \(\eta\to e^{i\alpha\bar{\gamma}}\eta\).
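Explicitly, setting \(V_{\rm maj}=-1\) in Eq. (72) and using \(\bar{\gamma}^{2}=1\),
\[\eta\to e^{i\alpha\bar{\gamma}/2}\left\{\cos\alpha+i\bar{\gamma}\sin\alpha\right\}^{1/2}\eta=e^{i\alpha\bar{\gamma}/2}\,e^{i\alpha\bar{\gamma}/2}\,\eta=e^{i\alpha\bar{\gamma}}\eta\,,\]
where the principal square root is used (valid for \(|\alpha|<\pi\)).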
This continuum \(U(1)_{A}\) for Majorana fermions descends from the anomalous \(U(1)_{A}\) symmetry for Dirac fermions upon imposing a reality condition. However, the Majorana \(U(1)_{A}\) symmetry in Eq. (72) is distinct from the Dirac case of Eq. (28). So one might wonder how these two definitions of the symmetry are related. To reconcile this, we note that for Majorana fermions, a straightforward analogy of Eq. (28) is not possible, since for Dirac fermions we exploited the freedom to transform \(\bar{\psi}\) and \(\psi\) independently, which is not consistent with the Majorana constraint. However, that choice for how the Dirac fields
transform was not unique. To illustrate this, we consider the same example considered in Ref. [4], a Dirac fermion in \(d=4\) with \(\mathcal{D}=\frac{m}{2}(1+V)\), only assuming that \(\mathcal{D}\) obeys the GW equation so that \(\bar{\gamma}V\bar{\gamma}=V^{-1}\). The infinitesimal transformation corresponding to Eq. (28) is
\[\delta\chi=\bar{\gamma}\chi,\qquad\delta\bar{\chi}=\bar{\chi}(-V\bar{\gamma})\, \tag{73}\]
where in the continuum limit (\(V\to-1\)) this reduces to the conventional chiral symmetry transformation. However the action \(\frac{m}{2}\int\bar{\chi}(1+V)\chi\) is invariant under the more general transformation, namely
\[\delta\chi=\bar{\gamma}f(V)\chi,\qquad\delta\bar{\chi}=\bar{\chi}g(V)\bar{ \gamma}\, \tag{74}\]
with \(f(-1)=g(-1)=1\), provided that the functions \(f,g\) satisfy
\[g(V)V^{-1}=\bar{\gamma}f(V)\bar{\gamma} \tag{75}\]
projected on the subspace orthogonal to \(V=-1\). Furthermore, one finds that so long as Eq. (75) is satisfied, the Jacobian of the transformation reproduces the correct anomaly. Equation (28) satisfies this with \(f=1\) and \(g=-V\); alternatively, a symmetric form compatible with Minkowski spacetime where \(\chi\) and \(\bar{\chi}\) are not independent is \(f=g=(1-V)/2\)[4; 30]. It is easily checked that this infinitesimal transformation keeps the Majorana action invariant. This result holds equally well for both \((C,C)\) and \((C,\mathcal{T})\) regularizations. However, a drawback with this transformation is that \(\bar{\gamma}(1-V)/2\) does not generate a compact \(U(1)\) symmetry, its eigenvalues not in general being integer.
Equation (75) suggests a different symmetric form consistent with the Majorana constraint, however: \(f=g=\sqrt{-V}\), which is precisely Eq. (69). This choice has the feature that \(\bar{\gamma}\sqrt{-V}\) is hermitian and has \(\pm 1\) eigenvalues so that it generates a compact \(U(1)\) symmetry; on the other hand, one must take care of the branch cut of the square root, as discussed following Eq. (68), where we defined \(\sqrt[\pm]{-V}\) as \(\pm i\) when acting on an eigenstate of \(V\) with eigenvalue \(V=1\), which lies on the cut for the square root. Such eigenvalues correspond to the corners of the Brillouin zone for the overlap solution, or infinite momentum for the PV solution. The solution to Eq. (75) is then \(f=\sqrt[+]{-V_{\mathrm{maj}}}\) and \(g=\sqrt[-]{-V_{\mathrm{maj}}}\) (or with the \(\pm\) reversed). However, this is still not a satisfactory symmetry for the \(d=4\) Majorana fermion because the different treatment of the branch cut for \(\psi\) and \(\bar{\psi}\) is not consistent with the Majorana constraint, \(\psi=C^{\dagger}\bar{\psi}^{T}\).
We are forced then to define the "pseudo-Luscher symmetry" with \(f=g=\sqrt[4]{-V_{\mathrm{maj}}}\) which is consistent with the Majorana constraint, but fails to be a symmetry of the action for \(V_{\mathrm{maj}}=+1\) eigenstates. This is a failure at short distance and does not destroy the desirable feature of Luscher symmetry that chiral symmetry violating operators can only be multiplicatively renormalized. One does lose the feature that the Jacobian of the transformation reproduces the correct anomaly, as there now appears a spurious contribution \(2(\bar{n}_{+}-\bar{n}_{-})\) where \(\bar{n}_{\pm}\) are the number of \(\pm\) chirality \(V_{\mathrm{maj}}=1\) modes, but this is exactly compensated by a symmetry violation in the action under such a transformation. Typically, the chiral anomaly comes from a transformation under which the action is invariant but the measure is not, so that the path integral acquires a phase under a transformation which is classically a symmetry. Although this symmetry is violated in \(V=1\) subspace, the path integral acquires the same phase under such a transformation as it would choosing \(f=\sqrt[4]{-V_{\mathrm{maj}}}\) and \(g=\sqrt[4]{-V_{\mathrm{maj}}}\). Integrating over such modes one recovers the expected anomalous Ward-Takahashi identities, so that this symmetry has the same properties as any anomalous quantum symmetry.
If gauge fields or other parameters in the theory are varied such that an eigenvalue of \(V_{\mathrm{maj}}\) passes through \(+1\), the operator \(\sqrt[\pm]{-V_{\mathrm{maj}}}\) will be discontinuous. Because of this nonanalyticity, our \(U(1)_{A}\) transformation is nonlocal in spacetime, thereby evading a recent no-go theorem [31]. As we showed at the end of Sec. III.3.1, however, the eigenvalues of \(V_{\mathrm{maj}}\) are doubly degenerate, and therefore the determinant of \(\sqrt[\pm]{-V_{\mathrm{maj}}}\) is continuous at such points.
### The anomaly equation
We have seen in Eq. (54) that the anomaly equation gives \(e^{i\mathcal{A}}=(\det\frac{\mathsf{m}_{\omega}}{\mathsf{m}}Q_{\omega})^{-1/2}\). On the other hand, the exact symmetry of the GW operator is not a symmetry of the path integral measure and gives rise to a Jacobian \(1/\det(\omega\sqrt{Q_{\omega}}^{T})\). The first thing we will show is that these are equivalent. Note that the square of the anomaly from Eq. (54) is clearly equal to the square of the Jacobian, so these two agree up to a sign. It is easy to see that the anomaly equation and the Jacobian agree for any infinitesimal symmetry transformation, and so it is only the case of discrete symmetries that needs careful examination.
For the anomalous discrete symmetries in Table 2 we have \(\mathsf{m}_{\omega}\mathsf{m}^{-1}=-1\) and so Eq. (70) gives us \(Q_{\omega}^{T}=-V_{\mathrm{maj}}\). The matrix \(V_{\mathrm{maj}}\) has eigenvalues \(-e^{i\theta_{n}}\) with \(-\pi<\theta_{n}\leq\pi\), where the \(\theta_{n}\) are doubly degenerate and occur in \(\pm\) pairs for \(\theta_{n}\neq 0,\pi\) (due to reflection and chiral symmetry in odd and even dimensions, respectively). Thus we can write
\[\mathrm{dim}\ V_{\mathrm{maj}}=\nu_{+}+\nu_{-}+\nu_{c}\, \tag{76}\]
where \(\nu_{\pm}\) are the numbers of eigenvalues of \(V_{\mathrm{maj}}\) equal to \(\pm 1\) and \(\nu_{c}\) is the number of complex eigenvalues (the \(\pm\) is not related to chirality). Here, \(\nu_{\pm}\) are even integers and \(\nu_{c}\) is a multiple of \(4\). The eigenvalues of \(\sqrt[4]{-V_{\mathrm{maj}}}\) are then \(e^{i\theta_{n}/2}\) and only the \(\theta_{n}=\pi\) eigenvalues contribute nontrivially to its determinant, so that \(\det\sqrt[4]{Q_{\omega}}^{T}=i^{\nu_{+}}=(-1)^{\nu_{+}/2}\). Since \(\nu_{+}\) is even and \(i^{\nu_{+}}=(-i)^{\nu_{+}}\), it makes no difference which of the two definitions of the square root \(\sqrt[4]{-V_{\mathrm{maj}}}\) is used. The matrix \(\omega\) is traceless and squares to \(1\), so \(\det\omega=(-1)^{\dim\ V_{\mathrm{maj}}/2}\).
Thus we get
\[\det(\omega\sqrt{Q_{\omega}}^{T})=(-1)^{\dim\,V_{\rm maj}/2}(-1)^{ \nu_{+}/2}=(-1)^{\nu_{-}/2}\, \tag{77}\]
where we used Eq. (76). Since \(\nu_{-}\) corresponds to the zeromodes of \(\mathsf{D}\), we find that the Jacobian of our exact symmetry yields the mod 2 index of \(\mathsf{D}\). In comparison, for our anomaly equation in Eq. (54) we compute \(\sqrt{\det\mathsf{m}_{\omega}\mathsf{m}^{-1}Q_{\omega}}=\sqrt{\det V_{\rm maj}}\) which directly gives the same result, \((-1)^{\nu_{-}/2}\), since only the \(-1\) eigenvalues of \(V_{\rm maj}\) contribute.
### Examples
In this section, we present examples of the Majorana anomaly equation in which the gw construction reproduces global anomalies of Majorana fermions. In all the examples below we have \(\mathsf{m}_{\omega}^{-1}\mathsf{m}=-1\) and \(\mathsf{X}_{\omega}=2\mu^{-1}\mathcal{H}\), so the specification of \((\mathcal{K},\mathcal{H})\) matrices completely fixes the gw equation and its solutions.
#### iii.6.1 Two dimensions
In two dimensions it is possible to have either a 2-component \(\mathpzc{C}\)- or \(\mathcal{T}\)-Majorana fermion, but only \(\mathpzc{C}\) can be chosen as the mass term in the regulator. In this section, we show that the gw formulation reproduces known nonperturbative anomalies for both these theories.
The continuum \(\mathcal{T}\)-Majorana theory with the action \(\int\eta^{T}\mathcal{T}\,\mathcal{D}\eta+m\eta^{T}\mathpzc{C}\eta\) corresponds to the field theory of the Kitaev chain. This has an exact (non-anomalous) reflection symmetry with \(\mathsf{R}^{2}=(-1)^{F}\) (equivalent to \(\mathsf{T}^{2}=1\) in Minkowski space), but the mass term breaks a discrete chiral symmetry: \(\eta\to\bar{\gamma}\eta\), suggesting an anomaly for the discrete chiral symmetry. Indeed, the anomaly is given by the mod-2 index of the Dirac operator on modes of one chirality [32; 33; 34]. With the choice \((\mathcal{K},\mathcal{H})=(\mathcal{T},\mathpzc{C})\), we can formulate gw equation for the massless \(\mathcal{T}\)-Majorana fermion and solutions to it. The exact Luscher symmetry corresponds to \(\eta\to\bar{\gamma}\sqrt{-V_{\rm maj}}\). As shown in the previous section, the Jacobian gives \(\det\omega\sqrt{-V_{\rm maj}}=(-1)^{\nu_{-}/2}\), where \(\nu_{-}\) is the number of modes with \(V_{\rm maj}=-1\), which correspond to exact zeromodes of \(\mathsf{D}\). We have seen in Sec. III.3.1 that \(\bar{\gamma}V_{\rm maj}\bar{\gamma}=V_{\rm maj}^{\dagger}\), so the \(V_{\rm maj}=-1\) eigenmodes can be taken to be simultaneous eigenstates of \(\bar{\gamma}\). We also showed that the eigenvalues of \(V_{\rm maj}\) are doubly degenerate with eigenfunctions \(\psi\) and \(\mathcal{H}\mathcal{H}^{\dagger}\psi^{*}\). The \(d=2\) relation \(\mathcal{H}\bar{\gamma}\mathcal{H}^{-1}=-\bar{\gamma}^{T}\) then tells use that the eigenvalues of the \(V_{\rm maj}=-1\) eigenmodes come in \(\pm\) chiral pairs. Thus we can write \(\nu_{-}=n_{+}+n_{-}=2n_{+}\), where \(n_{\pm}\) are the number of positive and negative chirality zero modes of \(\mathsf{D}\). Therefore, the Jacobian of the discrete chiral Luscher symmetry reduces to \((-1)^{n+}\), which is precisely the continuum result. On a torus with periodic boundary conditions in both directions, \(n_{+}=n_{-}=1\), and therefore we find a nontrivial anomaly.
Next we consider the case of a single \(\mathpzc{C}\)-Majorana fermion in \(d=2\). This theory has a reflection symmetry \(\mathsf{R}\eta(x)=\gamma_{1}\eta(\tilde{x})\) with \(\mathsf{R}^{2}=1\) and a discrete chiral symmetry, but the \(\mathpzc{C}\) mass term violates them both. It is known that this theory has a mixed anomaly between \(\mathsf{R}\) and \((-1)^{F}\) symmetry which can be detected in the continuum by computing a mod-2 index on a two-dimensional unorientable manifold [33]. In the gw formulation defined with \((\mathcal{K},\mathcal{H})=(\mathpzc{C},\mathpzc{C})\), this can again be obtained simply from the Jacobian of the exact reflection symmetry for the gw Majorana fermion. The Luscher symmetry is \(\eta\to\gamma_{1}\sqrt{-V_{\rm maj}}\eta(\tilde{x})\). By the same argument as before, the Jacobian for this symmetry reduces to \((-1)^{\nu_{-}/2}\). On a torus with periodic boundary conditions, we have two zero modes. Then \((-1)^{\nu_{-}/2}=-1\) and therefore the measure acquires a sign under the reflection symmetry.
#### iii.6.2 One dimension
In one dimension, Fermi statistics forbid any mass term for an \(N=1\) flavor 1-component Majorana. To apply the gw construction, we therefore need at least \(N=2\) flavors, which allows for the choice \((\mathcal{K},\mathcal{H})=(1,\tau_{2})\) with the continuum action \(S=\int\eta^{T}\partial_{0}\eta+\mu\ \eta^{T}\tau_{2}\eta\), where \(\eta^{T}=(\eta_{1},\,\eta_{2})\) and \(\eta_{1,2}\) are one-component Majoranas. Note that the kinetic term is invariant under a \(\mathsf{R}^{2}=(-1)^{F}\) reflection symmetry which acts as \(\mathsf{R}\eta(t)=i\eta(-t)\), but the mass term is odd under this symmetry. Indeed, this system corresponds to the edge modes of the Fidkowski-Kitaev chain and is afflicted by a well-known \(\mathbb{Z}_{8}\) anomaly between \(\mathsf{R}\) and \((-1)^{F}\)[35; 33].
With \((\mathcal{K},\mathcal{H})=(1,\tau_{2})\), we can proceed with the gw construction for \(N=2\) flavors. If \(n_{a}\) is the number of zero modes corresponding to flavor \(a\), the antisymmetric mass matrix \(\mathcal{H}=\tau_{2}\) ensures a doubling of spectrum and \(n_{1}=n_{2}\). As before, the Jacobian for the exact reflection symmetry produces a phase of \((-1)^{\nu_{-}/2}\) and \(\nu_{-}=2n_{1}\). Since \(n_{1}=n_{2}=1\) on a circle with periodic boundary conditions, this represents an anomaly. It is interesting to note that since for two flavors we find a \(\mathbb{Z}_{2}\) anomaly, the gw formulation implies a \(\mathbb{Z}_{4}\) anomaly for a single Majorana flavor, even though a mass term cannot be written in such a theory. The correct answer though is that there should be a \(\mathbb{Z}_{8}\) anomaly. See a discussion in Ref. [36], eq. (2.26), which suggests that the \(\mathbb{Z}_{4}\) follows from being insensitive to a bosonic anomaly.
## IV Conclusions
The early work on anomaly descent equations [37; 38; 39] and their embodiment in the bulk/boundary correspondence of gapped fermions [40] has been greatly expanded upon in recent years with the discussions about more
general classes of topological materials and a wider variety of anomalies (see, for example, [25]). A parallel development from lattice gauge theory had shown that for the case where the boundary theory is described by a Dirac fermion, one can describe the physics, including chiral anomalies, in terms of a theory that makes no reference to the bulk. Such a theory is governed by the Ginsparg-Wilson equation [1] which has an explicit solution in the form of the overlap operator [2]. In this paper we have shown how to generalize the GW analysis to encompass a wide range of topological materials that have been classified in the condensed matter literature, focusing on topological superconductors with Majorana edge states, which are less familiar to those working in lattice gauge theory. In each case we have generalized the notion of a Luscher symmetry: an exact symmetry of the lattice action which becomes identical to the continuum symmetry in the continuum limit, under which the lattice integration measure transforms by the appropriate phase to account for the anomaly. The class of theories for which we can derive GW relations contains only those for which a fermion mass term can be included, and therefore does not include chiral gauge theories, for example.
Open questions remain. In particular the Dai-Freed anomalies discussed in the literature [41; 33; 42] do not seem apparent in this approach. Thus, for example, one of the results in this work was the derivation of a \(\mathbb{Z}_{4}\) discrete time reversal anomaly for the Fidkowski-Kitaev Majorana chain, but not the full \(\mathbb{Z}_{8}\) anomaly known to be correct [42]. On the other hand, we know that the overlap operator which solves the GW equation is derived by integrating out bulk modes from a higher dimension theory [9; 10], which one would expect "knows" about such anomalies.
The solutions presented here for the generalized GW are all formulated in Euclidean spacetime, and are not amenable to a Hamiltonian description of the physics in continuous time. Furthermore, not being ultra-local in Euclidean time makes the derivation of a transfer matrix and Hamiltonian problematic. We note, though, that we defined the anomalous \(U(1)_{A}\) pseudo-Luscher symmetry that acts on \(\psi\) and \(\bar{\psi}\) in a way consistent with a Minkowski interpretation, and find that it is not analytic in momentum, and hence not a local operator in spacetime, evading the no-go theorem in Ref. [31]. Pursuing a Hamiltonian formulation of the ideas presented here in order to render the results more applicable to real condensed matter systems seems like another avenue to explore in the future.
Finally, while it has been assumed that the fermions we consider are propagating in smooth background gauge and gravitational fields, we have not examined in any detail the role played by unorientable manifolds, which are understood to play an important role in understanding the reflection (time reversal) anomalies [42].
## V Acknowledgements
We wish to thank L. Fidkowski and J. Kaidi for useful comments. HS would like to thank Hanqing Liu, Mendel Nguyen and Yi-Zhuang You for helpful conversations. This research is supported in part by DOE Grant No. DE-FG02-00ER41132. HS is also supported by the Department of Energy through the Fermilab QuantisED program in the area of "Intersections of QIS and Theoretical Particle Physics." Fermilab is operated by Fermi Research Alliance, LLC under contract number DE-AC02-07CH11359 with the United States Department of Energy. HS was also supported in part by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, InQubator for Quantum Simulation (IQuS) under Award No. DOE (NP) DE-SC0020970. DBK acknowledges the hospitality of both the CCPP at NYU and the IFT at UAM, Madrid, where parts of this work were completed. HS acknowledges the hospitality of Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452; his participation was supported in part by the Simons Foundation.
## Appendix A Derivation of path integral identities
Here we derive two identities used in this paper. For Dirac fermions and an invertible hermitian operator \(A\) we write
\[e^{-\bar{\chi}A\chi}=\det A\,\int d\psi d\bar{\psi}\,e^{\bar{\psi}A^{-1}\psi+ \bar{\psi}\chi+\bar{\chi}\psi}. \tag{A1}\]
It follows that
\[e^{\partial_{\chi}B\partial_{\bar{\chi}}}\,e^{-\bar{\chi}A\chi} =\det A\,\int d\psi d\bar{\psi}\,e^{-\bar{\psi}\left(A^{-1}-B \right)\psi+\bar{\psi}\chi+\bar{\chi}\psi}\] \[=\det\left(1-AB\right)\,e^{-\bar{\chi}\left(\frac{1}{1-AB}A\right) \chi}\] \[=e^{\mathrm{Tr}\,\log(1-AB)}\,e^{-\bar{\chi}\left(\frac{1}{1-AB}A \right)\chi}. \tag{A2}\]
The above result extends to non-invertible \(A\).
An analogous identity can be derived for Majorana fermions. Assuming an invertible imaginary antisymmetric operator \(\mathsf{A}\) we have
\[e^{\frac{1}{2}\eta\mathsf{A}\eta} =\frac{1}{\mathrm{Pf}\left(\mathsf{A}^{-1}\right)}\,\int d\nu\,e ^{\frac{1}{2}\nu\mathsf{A}^{-1}\nu+\nu\eta}. \tag{A3}\]
From this one derives for antisymmetric \(\mathsf{B}\)
\[e^{\frac{1}{2}\partial_{\eta}\mathsf{B}\partial_{\eta}}e^{- \frac{1}{2}\eta\mathsf{A}\eta} =\frac{1}{\mathrm{Pf}\left(-\mathsf{A}^{-1}\right)}\,\int d\nu\,e^{\frac{1}{2} \nu\left(-\mathsf{A}^{-1}+\mathsf{B}\right)\nu+\ \nu\eta}\] \[=\mathrm{Pf}\left(\mathsf{A}\right)\,\mathrm{Pf}\left(-\mathsf{A} ^{-1}+\mathsf{B}\right)\,e^{-\frac{1}{2}\eta\left(\frac{1}{1-AB}\mathsf{A} \right)\eta}\,\] \[=e^{\frac{1}{2}\mathrm{Tr}\,\ln(1-\mathsf{AB})}e^{-\frac{1}{2} \eta\left(\frac{1}{1-AB}\mathsf{A}\right)\eta}\, \tag{A4}\]
where for the last line we used the identity \(\mathrm{Pf}\left(\mathsf{A}\right)\,\mathrm{Pf}\left(\mathsf{B}\right)=\exp \frac{1}{2}\mathrm{Tr}\,\ln(-\mathsf{AB})\). The above result also extends to non-invertible \(\mathsf{A}\).
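As a minimal check of that Pfaffian identity, consider the \(2\times 2\) case with \(a,b>0\):
\[\mathsf{A}=\begin{pmatrix}0&a\\ -a&0\end{pmatrix},\qquad\mathsf{B}=\begin{pmatrix}0&b\\ -b&0\end{pmatrix},\qquad-\mathsf{A}\mathsf{B}=\begin{pmatrix}ab&0\\ 0&ab\end{pmatrix},\]
so that \(\exp\frac{1}{2}\mathrm{Tr}\,\ln(-\mathsf{A}\mathsf{B})=\exp(\ln ab)=ab=\mathrm{Pf}(\mathsf{A})\,\mathrm{Pf}(\mathsf{B})\).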
The Majorana result of Eq. (A4) can be seen to be consistent with the Dirac result of Eq. (A2) by writing a Dirac fermion as a Majorana one with
\[\eta=\begin{pmatrix}\chi\\ \bar{\chi}\end{pmatrix},\quad\mathsf{A}=\begin{pmatrix}0&-A^{T}\\ A&0\end{pmatrix},\quad\mathsf{B}=\begin{pmatrix}0&B\\ -B^{T}&0\end{pmatrix}. \tag{A5}\]
Then the left and right sides of Eq. (A4) are equal to
\[e^{\frac{1}{2}\partial_{\eta}\mathsf{B}\partial_{\eta}}e^{- \frac{1}{2}\eta\mathsf{A}\eta} =e^{\partial_{\chi}B\partial_{\bar{\chi}}}e^{-\bar{\chi}A\chi}\, \tag{A6}\] \[e^{\frac{1}{2}\mathrm{Tr}\,\ln(1-\mathsf{A}\mathsf{B})}e^{- \frac{1}{2}\eta(\frac{1}{1-\mathsf{A}\mathsf{B}}\mathsf{A})\eta} =e^{\mathrm{Tr}\,\log(1-AB)}\,e^{-\bar{\chi}\left(\frac{1}{1-AB}A\right)\chi}\, \tag{A7}\]
which match the two sides of Eq. (A2).
|
2309.16768 | Encountered-Type Haptic Display via Tracking Calibrated Robot | In the past decades, a variety of haptic devices have been developed to
facilitate high-fidelity human-computer interaction (HCI) in virtual reality
(VR). In particular, passive haptic feedback can create a compelling sensation
based on real objects spatially overlapping with their virtual counterparts.
However, these approaches require pre-deployment efforts, hindering their
democratizing use in practice. We propose the Tracking Calibrated Robot (TCR),
a novel and general haptic approach to free developers from deployment efforts,
which can be potentially deployed in any scenario. Specifically, we augment the
VR with a collaborative robot that renders haptic contact in the real world
while the user touches a virtual object in the virtual world. The distance
between the user's finger and the robot end-effector is controlled over time.
The distance starts to smoothly reduce to zero when the user intends to touch
the virtual object. A mock user study tested users' perception of three virtual
objects, and the result shows that TCR is effective in terms of conveying
discriminative shape information. | Chenxi Xiao, Yuan Tian | 2023-09-28T18:04:48Z | http://arxiv.org/abs/2309.16768v1 | # Encountered-Type Haptic Display _via_ Tracking Calibrated Robot
###### Abstract
In the past decades, a variety of haptic devices have been developed to facilitate high-fidelity human-computer interaction (HCI) in virtual reality (VR). In particular, passive haptic feedback can create a compelling sensation based on real objects spatially overlapping with their virtual counterparts. However, these approaches require pre-deployment efforts, hindering their democratizing use in practice. We propose _Tracking Calibrated Robot_ (TCR), a novel and general haptic approach to free developers from deployment efforts, which can be potentially deployed in any scenario. Specifically, we augment the VR with a collaborative robot that renders haptic contact in the real world while the user touches a virtual object in the virtual world. The distance between the user's finger and the robot end-effector is controlled over time. The distance starts to smoothly reduce to zero when the user intends to touch the virtual object. A mock user study tested users' perception of three virtual objects, and the result shows that TCR is effective in terms of conveying discriminative shape information.
## I Introduction
In virtual reality (VR), haptic feedback enables users to interact with the virtual world with augmented fidelity. Haptic feedback is used to provide tactile sensations and enhance the immersive experience of VR. By simulating touch, haptic feedback makes VR interactions feel more real and tangible to the user, supplementing visual and audible sensations. The aim of creating such feedback has motivated the development of a variety of haptic rendering technologies, including active haptic controllers [2] and passive feedback and retargeting techniques [3].
To generate haptic feedback, passive approaches have shown the effectiveness of providing compelling sensations. The haptic feedback is generated by making contact with a real scene, which has a spatial correlation with its virtual counterparts. However, such technology requires the corresponding real objects to be deployed ahead of time, which lacks the flexibility to be generalized to general scenes with complex object curvatures, and dynamic objects. For instance, changes made to virtual objects can require time-consuming changes to their physical passive-haptic counterparts [7]. While this limitation can be partially relieved by wrapping and retargeting, the inconsistency could still be perceivable if the difference is significant.
To extend the application of passive haptic feedback to general scenes, the proposed work aims to propose and implement a novel approach. To be specific, the haptic feedback is provided by physical contact with objects in the real world. However, the main difference is that the contact sensation is conveyed through a robot dynamically, which moves a prop towards the user's fingertip when emulating a contact. Compared to the existing passive approaches, we hypothesize that the proposed approach has the following advantages. First, the robot system could provide the flexibility for emulating versatile scenes. As a result, the deployment of stationary props may not be needed. Second, the proposed approach can simulate the interaction processing with dynamic objects, such as frictional and damping effects when pushing an object. Third, by reorientating a real prop (such as a board), different contact forces could be generated on the user's fingertip, which allows for the simulation of different surface curvatures. Last, the proposed approach can simulate large surfaces when transiting the end-effector with the user's hand motion.
To implement the above idea, the proposed work is divided into two submodules. The first module aims to control a collaborative robot that is used to convey skin contact. The second module is a virtual reality program that tracks the relative spatial relationship between the user's hand movement and an object to be interacted with. This will be explained in detail in the methodology section.
## II Related Work
### _Passive Haptic Feedback_
Passive haptic feedback is a feedback mechanism that utilizes physical objects to simulate interactions in a virtual environment. To be specific, this type of feedback is generated through a mapping proxy between virtual objects and tangible objects in the real world. For instance, haptic effects can be replicated when touching virtual objects on a desk [3]. Similarly, a handheld golf club prop can recreate the haptic experience in a virtual golf game [5].
The advantage of passive feedback lies in its ability to provide high-fidelity sensations derived from real props, which often feature intricate details. However, despite its effectiveness, obtaining the necessary real-world objects to replicate haptic cues may not always be straightforward. As these real-world objects must be arranged in advance by humans, it requires human effort to locate and create them. This is especially true when the simulated objects lack real-world counterparts (potentially require additional manufacturing work) or with large dimensions (making deployment challenging). Therefore, the utilization of passive haptic feedback remains restricted [3].
To enhance the versatility of passive haptic feedback, active haptic feedback has been adopted. This form of active haptic feedback is developed using devices such as gloves [2] and force feedback controllers [8]. However, such devices are usually based on specialized actuators. Given that these actuators are typically worn by the operator, their usage can be cumbersome and potentially limit mobility. As a solution,
there is a need for an active haptic rendering approach in which the devices are loosely coupled to human operators, such as Encountered-type haptic displays.
### _Encountered-type haptic displays_
Encountered-type haptic displays (ETHDs) have been developed to create haptic feedback; they provide haptic sensation by actively positioning encountered-type objects at the proper times and locations [10]. Compared to worn haptic devices (such as gloves) or passive haptic displays, this technology facilitates a natural way of interacting with objects while avoiding the complexity of scene deployment. The hardware scheme of the ETHD system is composed of an actuated haptic display module and the corresponding virtual reality device. When the user interacts with a virtual object, the actuator delivers the haptic display module to the corresponding location in the real world, which allows realistic touch events. This location usually changes according to the user's intention, which is predictable based on the user's hand or body tracking data.
Over the past decades, a wide variety of ETHD systems have been developed. Based on the application, the system design varies according to the type of surface display, actuator, and the type of body/hand tracking devices. For instance, the locomotion can be generated by a collaborative robot, or a drone [4, 6, 9, 1, 11]. Different props can be used to deliver different sensations of interaction. A board can be used to render the contact force with a wall [6]. Real props that are rich in features can render complex features such as object edges [9].
## III Method
In this section, we describe the approach and implementation of the tracking calibrated robot (TCR). The key idea is to generate contact with a user's hand in the real world using a collaborative robot when a virtual contact happens in the virtual world correspondingly. Here we leverage a robot as the haptic device, which can change the position of its end effector to interact with the user's hand, or to change the pose of the end effector to generate a variety of contact forces. This allows us to simulate a wide range of haptic applications including point contact, sliding motion, or force-based object interaction.
### _System architecture_
The system consists of the following components. First, a collaborative robot is used to provide the interaction events in the real world. Here we leverage the UR16e robot, a collaborative robot with a reach radius of 0.9m. Using a collaborative robot is beneficial in terms of safety because its moving velocity as well as the interaction force can be monitored and controlled in real time. Besides, having the force information enables it to provide haptic feedback for simulating object interaction, such as pushing an object or simulating the damping effect of a mass-spring system. The robot can be remotely controlled via the Real-Time Data Exchange (RTDE) protocol over a standard TCP/IP connection. Second, an Oculus Quest, a VR HMD device, is used to provide a virtual experience to the user. For simplicity, we assume the user's fingertip location can be substituted by the location of the controller plus a small offset. Besides, a desktop PC with an Ubuntu 18.04 system bridges the headset and the robot, providing flexibility for state control, data logging, and real-time processing. The system architecture is shown in Fig. 1.
### _User interaction with the robot_
In this section, we introduce how a robot generates haptic feedback by interacting with the user's hand. While interacting with the whole hand is desired, the current robot system does not have sufficient degrees of freedom to simulate a pressure distribution on the skin surface. Therefore, the process for generating such interaction is studied at a single point only, which is the user's fingertip in our case.
In the virtual world, we define the location of the user's fingertip as a point \(\mathbf{x}_{f}^{v}\in\mathbb{R}^{3}\), and a proximity point on the object surface as \(\mathbf{x}_{o}^{v}\in\mathbb{R}^{3}\). The Euclidean distance between the two points is expressed as \(||\mathbf{x}_{f}^{v}-\mathbf{x}_{o}^{v}||\). Correspondingly, in the real world, the user's fingertip is located at \(\mathbf{x}^{\prime}_{f}\in\mathbb{R}^{3}\), and the robot end effector is located at \(\mathbf{x}^{\prime}_{o}\in\mathbb{R}^{3}\). A sufficient condition for simulating the virtual contact is to find \(\mathbf{x}^{\prime}_{o}\) that satisfies \(||\mathbf{x}^{\prime}_{f}-\mathbf{x}^{\prime}_{o}||-||\mathbf{x}_{f}^{v}-\mathbf{x}_{o}^{v}||=0\), because a real contact is then generated exactly when \(||\mathbf{x}_{f}^{v}-\mathbf{x}_{o}^{v}||=0\) in the virtual world. Note that there is an infinite number of \(\mathbf{x}^{\prime}_{o}\) that satisfy this constraint; therefore, the selection of \(\mathbf{x}^{\prime}_{o}\) need not be aligned with \(\mathbf{x}_{o}^{v}\) exactly.
Fig. 1: The diagram of the system architecture.
Considering the reachability of the robot, \(\mathbf{x}^{\prime}_{o}\) is chosen to be in front of the user, i.e., it satisfies \((\mathbf{x}^{\prime}_{o}-\mathbf{x}^{\prime}_{f})\cdot(\mathbf{O}^{\prime}_{robot }-\mathbf{O}^{\prime}_{headset})>0\), where \(\mathbf{O}^{\prime}_{robot}\) and \(\mathbf{O}^{\prime}_{headset}\) are the origin points of the robot and the headset in the world frame, respectively. Keeping the robot end-effector always in front of the user avoids collisions with the user's arm. For simplicity, the degree of freedom for controlling \(\mathbf{x}^{\prime}_{o}\) is assigned to be along the vector \(\mathbf{O}^{\prime}_{robot}-\mathbf{O}^{\prime}_{headset}\); the other components follow the user's hand location, i.e., the end-effector always stays in front of the user's fingertip.
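A minimal sketch of this target selection is shown below (the function and variable names, as well as the numerical values, are illustrative and not taken from the actual implementation): the virtual fingertip-to-surface distance is reproduced along the headset-to-robot direction, in front of the fingertip, so that the real and virtual distances vanish together.

```python
import numpy as np

def end_effector_target(fingertip_real, virtual_distance, robot_origin, headset_origin):
    """Choose x'_o so that ||x'_f - x'_o|| equals the virtual distance
    ||x_f^v - x_o^v||, with the prop kept in front of the user along the
    headset-to-robot direction (illustrative sketch only)."""
    u = robot_origin - headset_origin
    u = u / np.linalg.norm(u)                       # unit vector from headset toward robot
    return fingertip_real + virtual_distance * u    # zero virtual distance => real contact

# Example with made-up coordinates (meters):
target = end_effector_target(np.array([0.4, 0.0, 1.2]), 0.10,
                             robot_origin=np.array([1.5, 0.0, 0.8]),
                             headset_origin=np.array([0.0, 0.0, 1.6]))
print(target)   # a point 0.10 m beyond the fingertip, toward the robot
```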
A calibration step is required to align the virtual world with the robot coordinate frame. This corresponds to the orthogonal Procrustes problem, given as Eq. (1):
\[\mathbf{R}=\arg\min\|\Omega\mathbf{A}-\mathbf{B}\|_{F}\quad\text{ subject to}\quad\Omega^{T}\Omega=I \tag{1}\]
Here \(\mathbf{A}\) and \(\mathbf{B}\) are centered point-cloud sets sampled from the VR controller and the robot end-effector, respectively. The orthogonal Procrustes problem can be solved by a singular value decomposition (SVD) of the matrix \(\mathbf{B}\mathbf{A}^{T}\), as given in Eq. (2):
\[\mathbf{M} =\mathbf{B}\mathbf{A}^{T} \tag{2}\] \[\mathbf{M} \rightarrow\mathbf{U}\mathbf{\Sigma}V^{T}\] \[\mathbf{R} =\mathbf{U}\mathbf{V}^{T}\]
Once the optimal rotation \(\mathbf{R}\) is obtained, the translation can be obtained as \(\mathbf{t}=\mathbf{\hat{B}}-\mathbf{R}\mathbf{\hat{A}}\), where \(\mathbf{\hat{A}}\) and \(\mathbf{\hat{B}}\) are the centroids of the raw (i.e., not centered) point clouds sampled from the VR controller and the robot end-effector, respectively.
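For concreteness, a standard numpy sketch of this calibration is shown below. It follows Eqs. (1)-(2) on centered clouds; the guard against an improper rotation (the determinant sign correction) is common practice and an addition beyond Eq. (2), and all names are illustrative.

```python
import numpy as np

def calibrate(controller_pts, robot_pts):
    """Rigid transform (R, t) mapping VR-controller samples onto paired
    robot end-effector samples; both inputs are (N, 3) arrays."""
    A = np.asarray(controller_pts, dtype=float)
    B = np.asarray(robot_pts, dtype=float)
    cA, cB = A.mean(axis=0), B.mean(axis=0)            # centroids
    M = (B - cB).T @ (A - cA)                          # 3x3 matrix B A^T of Eq. (2), centered clouds
    U, _, Vt = np.linalg.svd(M)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])   # avoid reflections
    R = U @ S @ Vt
    t = cB - R @ cA
    return R, t

# Alignment error (MSE) after mapping controller samples into robot coordinates:
# R, t = calibrate(A_pts, B_pts)
# mse = np.mean(np.sum((B_pts - (A_pts @ R.T + t)) ** 2, axis=1))
```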
### _Design of virtual reality environment_
A virtual environment is created in Unity to convey the visual effects to the user. The scene is composed of 1) a user, and 2) a virtual object to be interacted with. Since the IP address of the server may change over time, the user could modify the server's IP address inside the virtual environment through the text field. The connection is established when the user clicks on a button.
After connecting to the server, the user is prompted to interact with the red virtual object inside the blue frame, as shown in Fig. 3 (a). The user can touch the virtual object using the right hand with the Oculus controller. The client, which is the Oculus headset, will keep tracking the user's right controller as well as calculating the distance between it and the nearest front projected surface, and continuously send the distance as well as the controller's location to the server through a JSON format message. Based on the real-time data, the robot is able to track the user's hand and generate the corresponding contact events.
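A minimal sketch of the bridging server on the desktop PC is given below. The newline-delimited framing, the JSON field names `distance` and `position`, and the robot command are assumptions for illustration; the real system issues the final motion command to the UR16e through its RTDE interface.

```python
import json
import socket
import numpy as np

robot_origin = np.array([1.5, 0.0, 0.8])     # assumed values, set from calibration
headset_origin = np.array([0.0, 0.0, 1.6])   # assumed values, set from calibration

def end_effector_target(fingertip, distance):
    """Target in front of the fingertip along the headset-to-robot direction."""
    u = robot_origin - headset_origin
    u = u / np.linalg.norm(u)
    return fingertip + distance * u

def send_robot_target(position):
    """Placeholder for the RTDE motion command sent to the UR16e."""
    pass

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.bind(("0.0.0.0", 9000))
    srv.listen(1)
    conn, _ = srv.accept()
    buf = b""
    with conn:
        while True:
            data = conn.recv(4096)
            if not data:
                break
            buf += data
            while b"\n" in buf:                    # newline-delimited JSON (assumed framing)
                line, buf = buf.split(b"\n", 1)
                msg = json.loads(line)             # e.g. {"distance": 0.12, "position": [x, y, z]}
                fingertip = np.array(msg["position"], dtype=float)
                send_robot_target(end_effector_target(fingertip, msg["distance"]))
```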
In the virtual environment, we currently have included 3 potential objects that the user could interact with. They are: 1) a vertical plane, 2) a sphere, 3) a 45\({}^{\circ}\) rotated cube, as shown in Fig. 5. By pressing different controller buttons, the user has the ability to 1) randomly switch the virtual object, and 2) make the virtual object invisible, as shown in Fig. 3 (b).
## IV Experiment
### _Calibration_
The result of the calibration process is provided. To achieve this, 486 pairs of points were collected. The aligned point cloud is shown in Fig. 4. It can be observed that the two point clouds have been aligned by the transformation calculated. The alignment error, when being quantified by the Mean Squared Error (MSE), is 0.00485 m. This error value is relatively small compared to the motion range, which is usable in the following applications.
A limitation is that the user needs to recalibrate the system when the VR coordinate system loses track of the real world, which usually happens when the program restarts or when the user removes the headset. This induces increased human efforts in the following user studies.
### _Mock User Study_
Two mock user studies were conducted to test the fidelity and usability of our approach.
Fig. 3: (a) Interact with the virtual object inside a blue frame. (b) The virtual object is made to be invisible.
Fig. 2: Connect to the server in VE.
#### Iv-B1 Task I
In the first task, we put a virtual object in front of the user. There were three types of virtual objects in total, namely cube, sphere, and plane (refer to Fig. 5). The goal of this task is to recognize the object, in order to prove our assumption that the haptics interface could convey discriminative geometric features about the object. The ground truth object was randomly selected. In the experiment, this object was set to transparent, so that the user would not be able to recognize it through the visual feedback. To guess what object it is, the user needs to infer the shape of this object by touching it. The haptic feedback is conveyed by the approach through the robot, which is described in Sec. III. After making the guess, the ground truth object is shown to the user.
The mock user study was conducted with two participants, with 30 evaluation trials in total. An example photo for this experiment is shown in Fig. 6. We observed 26 successful trials, or around 86.7%. This accuracy indicates that the system can convey discriminative features about the object. We also visualize the confusion matrix, as shown in Fig. 7. Here we refer to the controlled condition as the plane case, which has 100% precision and \(\frac{11}{12}\) recall. The experimental conditions are the non-smooth surfaces, i.e., the cube and the sphere. It can be seen that complex object shapes led to confusion and misclassifications. The most common mistake is that the plane, or the cube, can be misrecognized as a sphere. We believe this is due to the shape similarity between the cube and the sphere.
#### Iv-B2 Task II
In the second task, we aim to test whether the user can slide on the object surface. In this task, the user will try to draw a trajectory on the object's surface. A trajectory segment will be recorded when a button in the controller is pressed, and visualized on the computer screen.
Without loss of generality, a sphere object in the virtual reality world is used for the experiment. This is shown in Fig. 8. The average error distance between the user's hand and the sphere is around 2.5 cm.
## V Limitations & Future work
### _Render contacts from arbitrary directions_
For simplicity and safety considerations, we currently only assume the virtual object is in front of the user. However, this is a limitation of the system since the motion range of the
Fig. 4: Result of aligning VR coordinate system with the robot coordinate system. Blue dots are the hand positions after being transformed by the calculated homography. Red dots are the positions of the robot end-effector
Fig. 5: 3 types of virtual objects that are placed in the virtual reality environment. (a) cube, (b) sphere, (c) plane.
Fig. 6: Experiment configuration of the mock user study. The virtual scene is shown in the top right corner.
Fig. 7: Confusion matrix of the virtual object recognition.
robot is limited because it is not possible to render objects behind the user.
As part of future work, we propose to make the robot contact the user from an arbitrary direction. The main challenge is how to move the robot efficiently while reducing the risk that it collides with the user's body. This remains challenging since full-body collision avoidance for a robotic system in a dynamic environment is still an open problem.
### _Render frictional force_
Another issue identified is that when rendering sliding effects onto the object surface, the relative velocity between the robot and the user's hand is not consistent with the friction force direction in the virtual world. This is because the robot needs to track the user's hand using its velocity. It is not clear so far whether it is possible to use the current system to achieve the subgoal of rendering frictional force, while at the same time satisfying the positional constraint.
### _Reducing latency & User intention prediction_
The current method is based on tracking the location of the Oculus controller, which induces latency between the robot and the hand. The latency of the haptic feedback is determined by the hardware. While this can be reduced by increasing the control gain or increasing the maximum velocity of the robot, latency always exists. Potential issues to this solution are that 1) a high locomotion speed could induce a high impact force when in contact due to inertia, which could be unnatural and reduce the fidelity, 2) potential safety issues.
A potential solution to this problem is to leverage user-intention prediction. For instance, using the user's hand pose, body gesture, and other optical information from the scene, it may be possible to determine the object that the user intends to interact with. This makes it possible to place the robot at the target location ahead of time, which could be an improvement compared to the current solution based on real-time hand tracking. We envision such system could be implemented by predicting a target location through machine learning models, based on processing the user's intention and other profiles collected ahead of time.
### _Haptic feedback through props_
Currently, we are using a 3D-printed plate (PLA material) as the prop. In future work, it is possible to add different kinds of props to the robot. To create a smooth sensation during sliding motion, it is possible to leverage a ball attached to a universal joint. Another idea is to further enhance the haptic feedback using a vibration motor.
### _More complicated scenario with multiple robots_
With multiple robots working cooperatively, we can fulfill more complicated scenarios. First, it enlarges the working area. In our current design, the workspace is limited by the moving range of the robot. If the virtual object is too large, it may lead to infeasible solutions that are not able to be reached. Second, multiple robots can simulate multiple contacts at the same time. Third, when collision avoidance with the user's body is needed, it reduces the complexity of robot planning by dividing the workspace into a few subregions, with each subregion assigned to a robot.
## VI Conclusions
In this report, we present TCR, a novel encountered-type haptic display system that can convey the object's shape. A virtual environment is created through an Oculus HMD headset, while the locomotion system is implemented based on a UR16e robot. We demonstrate that TCR could be used to convey object shape effectively. In our user study, the user can discriminate the object category by making discrete touches on a surface. We also demonstrate that the user could slide on the object's surface, allowing for potential applications such as drawing.
## VII Acknowledgement
Chenxi Xiao contributed to the idea, robotic control, the communication protocol, documents, and data analysis. Yuan Tian designed the VR scene for the user study, worked on the communication protocol and documents, and proposed future work. We believe the contribution is equally divided.
This project was originally developed in Purdue's CS 590 (Introduction to AR/VR) class. We sincerely thank Prof. Voicu Popescu for the valuable knowledge transfer.
|
2309.03403 | Sources of capital growth | Data from national accounts show no effect of change in net saving or
consumption, in ratio to market-value capital, on change in growth rate of
market-value capital (capital acceleration). Thus it appears that capital
growth and acceleration arrive without help from net saving or consumption
restraint. We explore ways in which this is possible, and discuss implications
for economic teaching and public policy | Gordon Getty, Nikita Tkachenko | 2023-09-06T23:38:23Z | http://arxiv.org/abs/2309.03403v7 | # Sources of capital growth
###### Abstract
Data from national accounts show no effect of change in net saving or consumption, in ratio to market-value capital, on change in growth rate of market-value capital (capital acceleration). Thus it appears that capital growth and acceleration arrive without help from net saving or consumption restraint. We explore ways in which this is possible, and discuss implications for economic teaching and public policy.
ARTICLE INFO
ASTRACT
Data from national accounts show no effect of change in net saving or consumption, in ratio to market-value capital, on change in growth rate of market-value capital (capital acceleration). Thus it appears that capital growth and acceleration arrive without help from net saving or consumption restraint. We explore ways in which this is possible, and discuss implications for economic teaching and public policy.
CABSTRE
## 1 Introduction and overview
Many economists over the centuries have reasoned that net saving, or equivalently net investment1, should tend to give equal capital growth. Economists since the early nineteenth century have added the proviso that net saving cannot safely outpace innovation; more capital must mean capital redesigned for greater productivity if economies are to escape risk of capital glut and diminishing returns (West (1815), Ricardo (1815), Malthus (1815)). Roy Harrod (1939) described that limit for safe net saving, meaning the rate of imagining and developing new ideas for more productive forms of capital, as the "warranted rate". Harrod, and many other economists of his time and since, have focused on growth of output rather than of capital, but have modeled growth of output by first assuming the equivalence of net saving and capital growth, within the warranted rate, and then looking for effects of that capital growth on later output growth.
Footnote 1: As reported in national accounts; they differ only by statistical discrepancy.
Some other economists, including John Rae (1834) and John Stuart Mill (1848), argued that capital growth might also be explained by a rise in productivity of capital and labor already in place. Ways might found for existing factors to produce more, that is, and so to allow more consumption, or more capital growth, or any mix of the two, without inputs of net saving. Robert Solow (1957), allowed that possibility for "disembodied" growth, where plant and products already existing are repurposed or redeployed in more productive ways.
We test between those two explanations of capital growth, by net saving or by increase in productivity of capital and labor already in place, by comparing net saving to concurrent change in market-value capital in 88 countries. As changes in net saving are expected to be associated with opposite changes in consumption, we also compare change in consumption to concurrent change in capital growth (capital acceleration). All data are drawn from national accounts of those countries as collated on the free website World Inequality Database.
Tests show no effect of net saving or of change in consumption on growth or acceleration of market-value capital. These findings support the views of Rae and Mill, and of Solow as to disembodied growth. They suggest that capital growth, even in acceleration, arrives without help from net saving or consumption restraint. Net saving, if so, raises the physical quantity of capital, but not the aggregate value, and so reduces the value per unit.
Our findings are most easily explained by the present value principle, and by production efficiencies enabled through innovation. Value is created in the mind of the market at the moment when prospective cash flows are discounted. It is created only if the market sees a path, step by step, from the start, to practical realization of those prospective cash flows. Then capital growth arrives when the market first evaluates prospective cash flows, and is realized eventually in physical outcomes insofar as the market has predicted correctly. Meanwhile the innovator acquires materials and plant capacity and labor skills at market prices determined by their uses in current technology, but applies them more productively until competition catches up. It is that temporary market advantage to the innovator which explains capital
growth without net saving in a practical and mechanical sense, while the present value principle gives the explanation in terms of market valuation. This idea will be called "free growth theory" for easy reference.
It predicts only at the largest scales, and only for the private sector. Individuals and groups and even small economies can grow through investment from outside. That possibility is foreclosed only at the scale of all capital and all economies together. The public sector, meanwhile, responds to political rather than market choices, and grows or shrinks accordingly.
If free growth theory is right, tax policy and other policy to encourage saving over consumption should be reconsidered. These policies include the higher tax on ordinary income than on capital gains, and the double tax on corporate dividends.
Inferences for economic teaching include the obvious ones for growth theory and for net saving in general. They include others as well. One of the central doctrines of the marginalist revolution has held that market realization converges to producer cost, when that cost includes imputed interest on assets owned. Net saving gives producer cost, and falls short of market realization in the presence of technological growth from new ideas. Meanwhile the doctrine that net income equals consumption plus net saving is put into question by evidence offered here suggesting that net saving increases the physical quantity of capital, but not the aggregate value. In general, economics might consider relying less on book value, and more on market value and on the power of ideas.
## 2 Net saving and capital growth
This study compares net saving \(S_{net}\) to concurrent growth in market-value capital \(K\) from data in national accounts. Capital growth \(\Delta K_{i}\) in each year \(i\) for each reporting country is found as \(K_{i}-K_{i-1}\), and compared to \(S_{net,i}\) reported for that year and country. As we will be testing for differences between \(\Delta K\) and \(S_{net,i}\), we begin by writing
\[\Delta K_{i}=S_{net,i}+Q_{i}\, \tag{1}\]
where \(Q_{i}\) will mean the sum of market noise, which may prove positive or negative or zero, plus any part of capital growth explained by concurrent productivity gain as described by Rae and Mill, and by Solow as to disembodied growth. We call this sum of noise and productivity gain "free growth" in that it costs no net saving.
The object of testing is to find effects of \(S_{net}\) on concurrent \(\Delta K\), and so to help evaluate historic and current teachings as to those effects. We submitted above that most teaching, with exceptions noted as to Rae, Mill, and Solow, and within the warranted rate, predicts net saving to differ from concurrent growth in market-value capital only by market noise which tends to balance out over scale and time. The residual term \(Q\) in Eq. (1), in that case, will give the market noise converging to zero. That consensus prediction, which we will challenge, will here be called "thrift theory"; \(S_{net}\), in thrift theory, is expected to converge to \(\Delta K\) if held within the warranted rate. That is,
\[E(\Delta K)=S_{net}\quad\text{or equivalently}\quad E(Q)=0,\quad\text{if} \quad\frac{\Sigma S_{net}}{\Sigma K}\leq u\,\quad\text{in thrift theory}, \tag{2}\]
where:
1. \(\Sigma S_{net}\) is collective net saving over the economy
2. \(\Sigma K\) is collective capital over the economy before current \(\Sigma S_{net}\), and
3. \(u\) is the warranted rate.
\(E(\Delta K)\) and \(E(Q)\) here give the expected values of \(\Delta K\) and \(Q\) respectively. Expected value means predicted average of outcomes over all observations. In this case, that will mean predicted average of yearly observations over all years reported. As secular economic growth has tended to make later stocks and flows larger than earlier ones, we first divide by \(K\) (normalize) to avoid overweighting of more recent years in finding that average. Division of Eq. (1) by \(K_{i-1}\) gives
\[\frac{\Delta K_{i}}{K_{i-1}}=\frac{S_{net,i}}{K_{i-1}}+\frac{Q_{i}}{K_{i-1}}. \tag{3}\]
The first term in Eq. (3) gives capital growth rate \(g(K)\). The second term is a variant of the Keynesian net saving rate \(s_{net}\) where capital rather than output becomes the denominator. This flow will here be called "thrift". It will show as
\(s(K)\), with the subscript "net" left implicit, and with the understanding that the denominator shows capital at market value, rather than at depreciated cost. The third term in Eq. (3) will be called free growth rate and shown as \(q(K)\). Then
\[g(K)_{i}=\frac{\Delta K_{i}}{K_{i-1}}\,\quad s(K)_{i}=\frac{S_{net,i}}{K_{i-1}}\, \quad\text{and}\quad q(K)_{i}=\frac{Q_{i}}{K_{i-1}}\,\]
so that Eq. (3) can be shown more compactly as
\[g(K)_{i}=s(K)_{i}+q(K)_{i}. \tag{4}\]
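For concreteness, these rates can be computed directly from the yearly national-accounts series. The following minimal sketch (in Python; the series values and column names are hypothetical, for illustration only) derives \(g(K)\), \(s(K)\), and the residual \(q(K)\) for one country:

```python
import pandas as pd

# Hypothetical yearly series for one country: market-value capital K and
# net saving S_net, in constant currency units (illustrative values only).
df = pd.DataFrame({
    "year":  [2015, 2016, 2017, 2018, 2019],
    "K":     [1000.0, 1060.0, 1135.0, 1190.0, 1270.0],
    "S_net": [20.0, 22.0, 21.0, 25.0, 24.0],
})

K_prev = df["K"].shift(1)                # K_{i-1}
df["g_K"] = (df["K"] - K_prev) / K_prev  # capital growth rate, Eqs. (3)-(4)
df["s_K"] = df["S_net"] / K_prev         # thrift: net saving over prior capital
df["q_K"] = df["g_K"] - df["s_K"]        # free growth rate, the residual in Eq. (4)

print(df.dropna())
```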
By the definition \(q(K)=\frac{Q}{K}\), an expected value \(E(Q)=0\) implies \(E\left(q(K)\right)=0\). Application of Eq. (2) to Eq. (4) now gives
\[E\left(g(K)\right)=s(K)\quad\text{and}\quad E(q(K))=0\,\quad\text{under \text{thrift assumptions}}, \tag{5}\]
where "thrift assumptions" are that thrift theory is correct and that the warranted rate is not exceeded. Meanwhile Eq. (4) allows
\[\Delta g(K)_{i}=\Delta s(K)_{i}+\Delta q(K)_{i}\,\quad\text{where} \tag{6}\] \[\Delta g(K)_{i}=g(K)_{i}-g(K)_{i-1}\,\quad\Delta s(K)_{i}=s(K)_{i}-s(K)_{i -1}\,\quad\text{and}\quad\Delta q(K)_{i}=q(K)_{i}-q(K)_{i-1}\.\]
For any variables \(a\) and \(b\), we may reason \(E(a-b)=E(a)-E(b)\). By this and by Eqs. (5) and (6), then,
\[E\left(\Delta g(K)\right)=E\left(\Delta s(K)\right)\quad\text{and}\quad E( \Delta q(K))=0\,\quad\text{under \text{thrift assumptions}}. \tag{7}\]
The first term in Eq. (6) may be called "capital acceleration". Division of Eq. (6) by capital acceleration, and rearrangement, gives
\[\frac{\Delta s(K)_{i}}{\Delta g(K)_{i}}+\frac{\Delta q(K)_{i}}{\Delta g(K)_{ i}}=1. \tag{8}\]
Define \(\theta_{s,i}=\frac{\Delta s(K)_{i}}{\Delta g(K)_{i}}\) and \(\varphi_{s,i}=\frac{\Delta q(K)_{i}}{\Delta g(K)_{i}}\) to restate Eq. (8) as
\[\theta_{s,i}+\varphi_{s,i}=1. \tag{9}\]
Next define \(\theta_{s}=E(\theta_{s,i})\) and \(\varphi_{s}=E(\varphi_{s,i})\). By Eq. (9), then,
\[\theta_{s}+\varphi_{s}=1\.\quad\text{Eq. (7) now implies} \tag{10}\] \[\theta_{s}=1\quad\text{and}\quad\varphi_{s}=0\,\quad\text{under \text{thrift assumptions}}. \tag{11}\]
\(\theta\) and \(\varphi\) will be called the "thrift index" and "free growth index" respectively. \(\theta_{s}\) and \(\varphi_{s}\) give their values as found from data for net saving. Expected values, again, are predicted averages of outcomes. Thus Eq. (11) and thrift theory can be tested by finding average values of \(\theta_{s,i}\) and comparing findings to the expected value \(\theta=1\). First we find
\[\overline{\theta_{s,i}}=\frac{1}{m}\sum^{m}\theta_{s,i}\quad\text{and}\quad \overline{\varphi_{s,i}}=\frac{1}{m}\sum^{m}\varphi_{s,i}\,\]
where \(m\) is the number of observed values of \(\theta_{s,i}\) and \(\varphi_{s,i}\), and test the predictions
\[\overline{\theta_{s,i}}\cong 1\quad\text{and}\quad\overline{\varphi_{s,i}}\cong 0\.\]
Calculations of \(\overline{\theta_{s,i}}\) and \(\overline{\varphi_{s,i}}\) are not expected to show 1 and 0 exactly, under thrift assumptions, because the number of samples \(m\) is finite.
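A minimal sketch of this test, continuing in the same illustrative style (the screen on \(|\Delta g(K)|\) anticipates Section 9; all values are hypothetical):

```python
import pandas as pd

# Yearly rates g(K)_i and s(K)_i for one country (illustrative values only).
rates = pd.DataFrame({
    "g_K": [0.060, 0.071, 0.048, 0.067],
    "s_K": [0.022, 0.021, 0.019, 0.021],
})

d = rates.diff().dropna().rename(columns={"g_K": "dg", "s_K": "ds"})
d["dq"] = d["dg"] - d["ds"]                  # Eq. (6)

screen = 0.01                                # drop years with small |capital acceleration|
kept = d[d["dg"].abs() >= screen]

theta_s = (kept["ds"] / kept["dg"]).mean()   # average thrift index
phi_s   = (kept["dq"] / kept["dg"]).mean()   # average free growth index
print(theta_s, phi_s, theta_s + phi_s)       # theta_s + phi_s = 1 by Eqs. (9)-(10)
```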
Fig. 1 shows average values of \(\theta_{s}\) and \(\varphi_{s}\) for 88 countries, both unweighted and weighted to GDP, over the period 1980-2022. To control distortions brought by small absolute denominators, years were screened out where \(|\Delta g(K)|\)
was found at less than 0.01 (see Section 9). Results show \(\varphi_{s}\cong 1\) and \(\theta_{s}\cong 0\). These findings appear to refute thrift theory, and to support free growth theory as defined earlier. \(Q\), predicted in thrift theory to describe effects of market noise converging to zero, is revealed to include also the effects of productivity gain as described by Rae, Mill and Solow. We will now test thrift theory from a different approach.
## 3 Consumption and capital growth
By analogy to Eq. (1), write
\[\Delta K_{i}+C_{i}=Z_{i}\quad\text{or equivalently}\quad\Delta K_{i}=Z_{i}-C_{i}\, \tag{12}\]
where \(C\) gives consumption.
In national accounts, which do not recognize human capital and which measure the worker's contribution to net output in pay received, \(Z\) as defined in Eq. (12) gives net output \(Y\). Net output is defined as value added. Reckoning in terms of human capital could suggest that the sum of \(C\) and \(\Delta K\) misses the contribution of self-invested work to value added, and forgets to subtract human depreciation2. An Appendix will present an argument that true net output is that
\begin{table}
\begin{tabular}{l c c} \hline & \(\theta_{s}\) & \(\varphi_{s}\) \\ \hline Regression of \(\Delta g(K)\) to value shown & \(0.3232^{***}\) & \(0.6768^{***}\) \\ & (0.1903) & (0.1903) \\ Observations & 1,414 & 1,414 \\ R\({}^{2}\) & 0.30255 & 0.65196 \\ Within R\({}^{2}\) & 0.28611 & 0.63734 \\ Year fixed effects & ✓ & ✓ \\ Country fixed effects & ✓ & ✓ \\ \hline \end{tabular}
\end{table}
Table 1: Regression of \(\Delta g(K)\) to value shown (Screen = 0.01). \(H_{0}\) per thrift theory: \(\theta_{s}=1\) & \(\varphi_{s}=0\)
Figure 1: Thrift and free growth indexes derived from net saving (88 countries).
sum with these two corrections. Meanwhile we may suspend judgment as to the meaning of \(Z\), and take it only as the sum of \(\Delta K\) and \(C\).
If net saving \(S_{net}\) is available only through equal decrease in consumption \(-C\), then (2) and (12) give
\[E(\Delta K)=-C\quad\text{and}\quad E(Z)=0\;,\quad\text{under thrift assumptions}. \tag{13}\]
We see that Eq. (13) might conflict with the teaching of the possibility of balanced growth, where capital, consumption, output and other flows all grow at the same constant rate3. Thus thrift theory, which is meant to reflect what is actually taught, does so with that reservation here and through Eq. (23) below. Continuing as before, anyhow, we divide Eq. (12) by market-value capital to get
Footnote 3: The ideas of free growth and balanced growth appear together in Mill (1848), book 1, chapter 5, section 4. Here he writes ”... whatever increases the productive power of labor... enables capital to be enlarged... concurrently with an increase of personal consumption”. The key word here is “concurrently”. Mill reasoned that concurrent growth in both capital and consumption, whether or not at the same rate, implies more production from capital and labor already existing, or equivalently free growth. Thus recognition of the possibility of balanced growth is recognition of the possibility of free growth if _concurrent_ growth of capital and consumption is meant, and not necessarily otherwise (see Section 10 below).
\[\frac{\Delta K_{i}}{K_{i-1}}=\frac{Z_{i}}{K_{i-1}}-\frac{C_{i}}{K_{i-1}}\;. \tag{14}\]
The expression \(C_{i}/K_{i-1}\) in Eq. (14) is a version of the Keynesian consumption rate \(c\), but again where market-value capital rather than output is the denominator. It can show as \(c(K)\). The second term in Eq. (14) can be notated \(z(K)\). By Eq. (14), then,
\[g(K)_{i}=z(K)_{i}-c(K)_{i}\;. \tag{15}\]
By Eqs. (13) and (15), also,
\[E\left(g(K)\right)=-c(K)\quad\text{and}\quad E\left(z(K)\right)=0\;,\quad \text{under thrift assumptions}. \tag{16}\]
Eq. (15) meanwhile allows
\[\Delta g(K)_{i}=\Delta z(K)_{i}-\Delta c(K)_{i}\;,\quad\text{where} \tag{17}\] \[\Delta z(K)_{i}=z(K)_{i}-z(K)_{i-1}\quad\text{and}\quad\Delta c(K)_{i}=c(K)_{i}-c(K)_{i-1}\;.\]
From Eqs. (16) and (17), then,
\[E\left(\Delta g(K)\right)=E\left(-\Delta c(K)\right)\quad\text{and}\quad E \left(\Delta z(K)\right)=0\;,\quad\text{under thrift assumptions}. \tag{18}\]
Now divide Eq. (17) by \(\Delta g(K)_{i}\), and rearrange as before, to reach
\[\frac{\Delta z(K)_{i}}{\Delta g(K)_{i}}+\frac{-\Delta c(K)_{i}}{\Delta g(K)_ {i}}=1\;. \tag{19}\]
Define \(\theta_{c,i}=\frac{-\Delta c(K)_{i}}{\Delta g(K)_{i}}\) and \(\varphi_{c,i}=\frac{\Delta z(K)_{i}}{\Delta g(K)_{i}}\) to re-express Eq. (19) as
\[\theta_{c,i}+\varphi_{c,i}=1\;. \tag{20}\]
By Eqs. (18), (19) and (20), further,
\[E\left(\theta_{c,i}\right)=1\quad\text{and}\quad E\left(\varphi_{c,i}\right)= 0\;,\quad\text{under thrift assumptions}. \tag{21}\]
Define \(\theta_{c}=E\left(\theta_{c,i}\right)\) and \(\varphi_{c}=E\left(\varphi_{c,i}\right)\) to re-express Eqs. (20) and (21) respectively as
\[\theta_{c}+\varphi_{c}=1\;,\quad\text{and} \tag{22}\]
\[\theta_{c}=1\quad\text{and}\quad\varphi_{c}=0\;,\quad\text{under thrift assumptions}. \tag{23}\]
We infer \(\theta_{c}\cong\overline{\theta_{c,i}}\) and \(\varphi_{c}\cong\overline{\varphi_{c,i}}\) as before, and test thrift theory by comparing average yearly values of \(\theta_{c,i}\) and \(\varphi_{c,i}\) to its predictions \(\overline{\theta_{c,i}}\cong 1\) and \(\overline{\varphi_{c,i}}\cong 0\).
Fig. 2 shows results of tests of these predictions from data for consumption reported in national accounts. Consumption was measured as the sum of personal consumption expenditure PCE and government consumption expenditure GCE. Capital K was again measured at market value. Again, years showing \(|\Delta g(K)|<.01\) were screened out to control small denominator effects. Test results show \(\varphi_{c}\cong 1\) and \(\theta_{c}\cong 0\), as with tests for \(\varphi_{s}\) and \(\theta_{s}\) from net saving. Table 3, which shows \(\varphi_{s}\) and \(\varphi_{c}\) for the same 88 countries separately, likewise finds \(\varphi_{s}\cong 1\) and \(\varphi_{c}\cong 1\). Thus it appears that capital acceleration arrives without help from either net saving or consumption restraint. Next we will see how these findings for \(\varphi_{s}\) and \(\varphi_{c}\) might be explained.
\begin{table}
\begin{tabular}{l c c c} \hline Country & Period & \(\overline{\varphi_{i,i}}\) & \(\overline{\varphi_{c,i}}\) \\ \hline Armenia & 1997 - 2018 (17) & 0.95 & 1.00 \\ Aruba & 1997 - 2001 (5) & 0.77 & 2.02 \\ Australia & 1962 - 2019 (43) & 0.97 & 1.01 \\ Austria & 1997 - 2017 (14) & 0.92 & 1.03 \\ Azerbaijan & 1997 - 2018 (20) & 0.67 & 1.08 \\ Bahrain & 2010 - 2013 (4) & 0.61 & 1.03 \\ Belgium & 1997 - 2011 (10) & 0.89 & 1.01 \\ Bolivia & 1998 - 2013 (13) & 1.00 & 1.03 \\ Botswana & 1997 - 2000 (4) & 0.75 & 1.10 \\ Brazil & 1998 - 2017 (16) & 0.86 & 1.02 \\ British Virgin Islands & 1997 - 1999 (3) & 0.11 & 1.62 \\ Bulgaria & 1997 - 2017 (15) & 0.99 & 1.03 \\ Burkina Faso & 2001 - 2018 (14) & 0.98 & 0.98 \\ Cabo Verde & 2009 - 2016 (8) & 0.52 & 0.95 \\ Cameroon & 1998 - 2003 (5) & 1.11 & 0.92 \\ Canada & 1974 - 2020 (37) & 0.89 & 0.97 \\ Chile & 1998 - 2018 (16) & 0.88 & 1.02 \\ China & 1993 - 2014 (20) & 0.97 & 1.03 \\ Colombia & 1997 - 2018 (19) & 0.91 & 1.10 \\ Costa Rica & 2014 - 2017 (4) & 0.97 & 1.04 \\ Cote d’Ivoire & 1997 - 2000 (4) & 0.98 & 1.59 \\ Croatia & 1997 - 2019 (18) & 0.90 & 1.05 \\ Curacao & 2002 - 2016 (12) & 0.94 & 1.11 \\ Cyprus & 1998 - 2019 (18) & 0.91 & 1.05 \\ Czech Republic & 1995 - 2015 (14) & 1.04 & 1.04 \\ Denmark & 1997 - 2020 (21) & 0.91 & 0.97 \\ Dominican Republic & 2007 - 2016 (9) & 0.77 & 1.10 \\ Ecuador & 2009 - 2018 (9) & 0.66 & 0.84 \\ Egypt & 1998 - 2015 (17) & 0.80 & 0.95 \\ Estonia & 1997 - 2019 (17) & 0.96 & 1.02 \\ Finland & 1998 - 2020 (18) & 0.97 & 1.03 \\ France & 1952 - 2018 (38) & 0.87 & 1.02 \\ Germany & 1972 - 2020 (26) & 0.89 & 1.01 \\ Greece & 1996 - 2019 (19) & 0.90 & 1.04 \\ Guatema & 2003 - 2019 (9) & 0.91 & 1.02 \\ Guinea & 2005 - 2010 (5) & 0.92 & 1.13 \\ Honduras & 2003 - 2015 (13) & 0.80 & 0.97 \\ Hong Kong & 1997 - 2020 (21) & 0.91 & 1.03 \\ Hungary & 1997 - 2019 (18) & 0.94 & 1.06 \\ Iceland & 2003 - 2014 (12) & 0.87 & 1.02 \\ India & 2000 - 2017 (13) & 0.84 & 0.98 \\ Iran & 1997 - 2018 (21) & 0.14 & 0.98 \\ Ireland & 1997 - 2019 (19) & 0.94 & 0.99 \\ Israel & 1997 - 2018 (19) & 0.98 & 1.17 \\ \hline \end{tabular}
\begin{tabular}{l c c c} \hline Country & Period & \(\overline{\varphi_{i,i}}\) & \(\overline{\varphi_{c,i}}\) \\ \hline Italy & 1981 - 2020 (28) & 0.93 & 1.01 \\ Japan & 1981 - 2018 (29) & 0.97 & 1.03 \\ Kazakhstan & 1997 - 2018 (19) & 0.97 & 1.03 \\ Korea & 1997 - 2018 (14) & 0.97 & 1.01 \\ Kuwait & 2005 - 2017 (11) & 1.07 & 1.07 \\ Kyrgyzstan & 1998 - 2019 (18) & 0.72 & 1.06 \\ Latvia & 1998 - 2015 (16) & 0.99 & 1.01 \\ Lithuania & 1997 - 2019 (16) & 0.87 & 0.99 \\ Luxembourg & 1997 - 2018 (21) & 0.97 & 1.01 \\ Malaysia & 2007 - 2015 (8) & 0.80 & 1.10 \\ Malta & 1997 - 2019 (21) & 0.77 & 0.99 \\ \hline \end{tabular}
\end{table}
Table 3: Average \(\varphi_{s,i}\) and \(\varphi_{c,i}\) in all countries (screen = 0.01). Number of years clearing screen shown in ()
## 4 Mechanics of free growth
Some growth is capital widening, where structures and implements increase in number but do not change in design. Capital widening, however, is practical only so far before glut and diminishing returns set in. Further growth from that point must come from capital deepening, meaning improvements in the design of capital. Solow (1956) noted a kind of middle ground between capital widening and capital deepening in the disembodied growth mentioned earlier; ships carrying coals to Newcastle can raise prospective cash flows, and hence present value, by reversing the business plan. But Solow, who came to conclusions similar to ours from different evidence, puzzled as to how capital growth without net saving could be possible for capital deepening through "embodied" growth, where products of new design are made from plant of new design.4
Footnote 4: The terms capital deepening, capital widening, embodied growth and disembodied growth are all Solow’s.
The solution, we suggest, is that embodied growth is disembodied growth on a finer scale. At each step toward realization of the new plant and products, raw materials and products and labor skills and plant capacity currently available on the market are adapted to new uses. The innovator pays for these inputs at a market price determined by their value in established productive uses, but applies them innovatively to realize higher prospective cash flows, and hence higher present values, to the innovator (Marshall (1890), Schumpeter and Opie (1934)). This difference in present value realized less price paid will here be called the "innovator's reserve", meaning reserve price for inputs of capital and labor.5 The innovator's reserve quantifies the part of free growth explained by productivity gain as distinct from random market noise. As such, it is the quantity added to depreciation saving to enable embodied growth, so that net saving is never needed.
Footnote 5: i.e., capital and labor inputs are worth more to the innovator in that the innovator applies them in ways to realize greater returns. The present value of additional cash flow enabled by this advantage in return quantifies the innovator’s reserve and equivalently the non-random component of free growth.
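One schematic way to write this down (the notation \(r\), \(CF^{new}_{t}\), \(P\), and \(T\) is introduced here purely for illustration): with discount rate \(r\), prospective cash flows \(CF^{new}_{t}\) from the innovative use of the acquired inputs over horizon \(T\), and price \(P\) paid for those inputs at their value in established uses, the innovator's reserve \(R\) is the excess of present value over price paid,

\[R\;=\;\sum_{t=1}^{T}\frac{CF^{new}_{t}}{(1+r)^{t}}\;-\;P\;.\]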
## 5 The predictions of free growth theory
We agree with West, Ricardo and Malthus, and most economists since, that innovation is a prerequisite of growth. We go further, and expect that substantially all capital growth is explained by the innovator's reserve, so that net saving is obviated. This prediction, however, cannot be tested ideally from equations for growth as distinct from acceleration. In Eq. (5), for example, where thrift theory predicts \(E(q(K)_{i})=0\), free growth theory does not predict the diametric opposite \(E(s(K)_{i})=0\). Free growth theory makes no prediction as to the amount of saving or investment, or of its ratio to capital or to capital growth, but rather questions its _effect_ on capital growth. Causality is more clearly revealed in capital acceleration, where _changes_ in thrift are compared to concurrent _changes_ in growth. Thus the only prediction of free growth theory which we find practical to test is
\[\overline{\varphi_{s,i}}\cong\overline{\varphi_{c,i}}\cong 1\,\quad\text{implying} \quad\overline{\theta_{s,i}}\cong\overline{\theta_{c,i}}\cong 0\,\quad\text{in free growth theory}, \tag{24}\]
with test results shown in Table 3.
Consequently, the only predictions of thrift theory refuted directly by data shown in Table 3 are Eqs. (11) and (23). It was argued, however, that Eqs. (5), (7), and (11) all follow necessarily from Eq. (2), while Eqs. (16), (18), (21) and (23) follow necessarily from Eq. (13), so that refutation of Eqs. (11) and (23) is implicit refutation of those others6. Table 4 illustrates this point by testing the predictions of thrift theory \(\overline{s(K)}\cong\overline{g(K)}\) and \(\overline{-c(K)}\cong\overline{g(K)}\). Test results clearly refute those predictions.
Footnote 6: i.e., if B is true in every case where A is true, then it does not follow that A is true in every case where B is true, but it does follow that A is not true in every case where B is not true.
Our findings support those of Piketty and Zucman (2014) and Kurz (2023) as to the market power of innovators to explain capital growth beyond net saving. Again, we go farther by questioning the assumption that net saving contributes even a part of capital growth. Data shown in the Tables and Figures here suggest that it does not. Hence we attribute all capital growth and acceleration to the innovator's reserve, aside from market noise, and none to net saving.
\begin{table}
\begin{tabular}{l c c c} \hline Country & Period & \(\overline{\left(\frac{s(K_{i})}{g(K_{i})}\right)}\) & \(\overline{\left(\frac{-c(K_{i})}{g(K_{i})}\right)}\) \\ \hline Armenia & 1996 - 2018 (22) & -0.54 & 0.62 \\ Aruba & 1996 - 2001 (6) & 1.61 & 3.90 \\ Australia & 1961 - 2018 (54) & 0.55 & 1.12 \\ Austria & 1996 - 2019 (21) & 0.93 & 1.75 \\ Azerbaijan & 1996 - 2018 (23) & 2.29 & 1.49 \\ Bahrain & 2009 - 2013 (4) & -1.94 & -1.05 \\ Belgium & 1996 - 2019 (19) & 0.62 & 1.70 \\ Bolivia & 1997 - 2015 (18) & 0.37 & 1.03 \\ Botswana & 1996 - 1999 (4) & 3.30 & 3.44 \\ Brazil & 1996 - 2018 (22) & 0.26 & 1.10 \\ British Virgin Islands & 1996 - 1999 (4) & 2.68 & 1.70 \\ Bulgaria & 1996 - 2016 (17) & 0.08 & 0.95 \\ Burkina Faso & 2000 - 2018 (19) & 0.22 & 0.92 \\ Cabo Verde & 2008 - 2017 (9) & 0.33 & 1.75 \\ Cameroon & 1997 - 2003 (7) & 0.95 & 0.99 \\ Canada & 1972 - 2020 (43) & 0.44 & 1.20 \\ Chile & 1997 - 2018 (20) & 0.61 & 0.79 \\ China & 1992 - 2016 (25) & 0.77 & 0.39 \\ Colombia & 1996 - 2019 (24) & 0.62 & 3.27 \\ Costa Rica & 2013 - 2017 (5) & 0.17 & 0.52 \\ Cote d’lvoire & 1996 - 2000 (5) & 0.02 & -1.32 \\ Croatia & 1996 - 2019 (19) & 0.23 & -0.03 \\ Curacao & 2001 - 2016 (16) & 1.14 & 1.82 \\ Cyprus & 1996 - 2019 (23) & 0.26 & 1.06 \\ Czech Republic & 1994 - 2019 (19) & 0.20 & 0.75 \\ Denmark & 1996 - 2020 (24) & 0.25 & 0.39 \\ Dominican Republic & 1996 - 2016 (11) & 1.29 & 1.50 \\ Ecuador & 2008 - 2018 (10) & 3.93 & 4.36 \\ Egypt & 1997 - 2015 (19) & 2.21 & 3.35 \\ Estonia & 1996 - 2019 (20) & 0.51 & 0.98 \\ Finland & 1996 - 2020 (20) & 0.53 & 1.37 \\ France & 1950 - 2019 (60) & 0.53 & 1.02 \\ Germany & 1970 - 2020 (46) & 1.00 & 1.89 \\ Greece & 1995 - 2019 (22) & 0.15 & 0.28 \\ Guatema & 2002 - 2019 (18) & -0.81 & 1.03 \\ Guinea & 2004 - 2010 (6) & 0.83 & 0.84 \\ Honduras & 2001 - 2015 (14) & 0.12 & 0.92 \\ Hong Kong & 1997 - 2020 (22) & 0.74 & 0.41 \\ Hungary & 1996 - 2019 (20) & 0.12 & 0.59 \\ Iceland & 2000 - 2014 (15) & 0.25 & 0.36 \\ India & 1999 - 2017 (19) & 0.64 & 0.36 \\ Iran & 1996 - 2018 (23) & 2.38 & 1.27 \\ Ireland & 1996 - 2019 (22) & 0.48 & 0.56 \\ Israel & 1996 - 2019 (24) & 0.48 & 2.70 \\ Italy & 1980 - 2020 (34) & 0.08 & 0.07 \\ \hline \end{tabular}
\end{table}
Table 4: Average \(s(K)_{i}/g(K)_{i}\), and \(-c(K)_{i}/g(K)_{i}\) in all countries (screen = 0.01). Number of years clearing screen shown in ()
## 6 Optimum investment policy
Data and arguments adduced suggest that the optimum amount of saving, at the global scale, is depreciation saving and nothing more. That would not mean book depreciation, as this study has stressed differences between book and market values. Up to a point, it should be possible to analyze the composition of market capital, and to model depreciation of the whole. A better plan, as Solow (1956) wrote in response to Harrod's knife edge argument (1939), is to trust the market to maximize rate of return, and to sense the point where glut begins and returns fall.7
Footnote 7: Harrod had argued that saving must hit the warranted rate exactly or risk positive feedback through the operation of the output/capital ratio (accelerator).
Markets do so imperfectly when tax and other public policy reward saving over distributions and consumption. Findings in this paper suggest review of such policies. These include the double tax on dividends, and the greater tax rate on ordinary income than on capital gains. Effects of removing the double tax, and removing the difference between tax rates on ordinary income and on capital gains, could be revenue-neutral and non-partisan if the corporate tax were raised to match, if the tax rates on ordinary income and on capital gains met somewhere between, and if thoughtful grandfathering eased the transition.
## 7 Data sources
All our data are drawn from Distributional National Accounts (DINA) from the free online database World Inequality Database (WID). This source collates data from national accounts and tax data of 105 countries in constant currency units, and adjusts them where needed to conform to current standards of the System of National Accounts (SNA) published by the United Nations. We show results for the 88 of those countries which report all three of the factors, namely net saving, consumption and market-value capital, needed for deriving the thrift and free growth indexes. WID's source for these data is national accounts.
Consumption \(C\) in our text and equations is reproduced from Final Consumption Expenditure (mcongo)8. This sums personal consumption expenditure PCE and government consumption expenditure GCE. Net saving \(S_{net}\) and market-value capital \(K\) are taken from net national saving (msavin) and market-value Capital Wealth (mnweal) respectively. GDP, which we use only for weighting purposes in Figs. 1 and 2, is reproduced from GDP (mgdpro).
Footnote 8: WID code
## 8 Accessing our results and methods
Tables and other displays of our findings for each country, and showing our methods of calculation, can be accessed at the web appendix ([https://3woilz-0-0.shinyapps.io/RhinoApplication/](https://3woilz-0-0.shinyapps.io/RhinoApplication/)).
## 9 Displays
Eqs. (10) and (22) show \(\theta_{s}+\varphi_{s}=1\) and \(\theta_{c}+\varphi_{c}=1\). All displays here and in the web appendix, except for Figs. 1 and 2 and Tables 1 and 2, save space by showing \(\varphi_{s}\) and \(\varphi_{c}\) only, leaving \(\theta_{s}\) and \(\theta_{c}\) implicit as their complements to unity. Tables 3 and 4 show \(\varphi_{s}\), \(\varphi_{c}\) and related variables for each of the 88 countries averaged over all years.
The web appendix includes displays of the \(\varphi_{s}\) and \(\varphi_{c}\) for each year in each country over the report period. These tend to show upward and downward spikes in values of \(\varphi_{s}\) and \(\varphi_{c}\) in some years. Those spikes tend to be associated with small absolute values of denominators, in these cases \(\Delta g(K)\), in those countries and years. Small denominators magnify errors in measurements of numerators. Worse, when \(\Delta g(K)_{i}\) is small, small mismeasurements of \(g(K)_{i}\) or \(g(K)_{i-1}\) might reverse \(\Delta g(K)_{i}\) in sign.
To maximize reliability of test results, we apply a range of screens to omit years where absolute denominators fall below a given threshold. Some displays show \(\varphi_{s}\) and \(\varphi_{c}\) for all years, regardless of denominator size. Others screen out all years where absolute denominators are less than 0.01, then 0.025, then continuing upward in increments of 0.025 to a maximum screen of 0.15. \(\varphi_{s}\) and \(\varphi_{c}\) are plotted for each country unscreened and at each of the seven successive levels of screening. Figs. 1 and 2, and all four Tables, applied a screen of 0.01. The denominator whose absolute value is screened is capital acceleration \(\Delta g(K)\) in all displays except Table 4, where it is capital growth rate \(g(K)\).
Screening out years where absolute \(\Delta g(K)\) or \(g(K)\) is small would cost little in informative value even if measurements were exact. In those years, there is little capital acceleration or capital growth, positive or negative,
for either thrift theory or free growth theory to explain. Market noise alone might account for \(\Delta g(K)\) or \(g(K)\) in such years. Screening reduces the number of observations, but increases the reliability and informative value of each.
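A compact sketch of this screening procedure (thresholds as just described; the yearly series are hypothetical):

```python
import pandas as pd

# Hypothetical per-year index components for one country.
d = pd.DataFrame({
    "dg": [0.004, 0.031, -0.052, 0.018, 0.044, -0.009, 0.027],
    "ds": [0.001, 0.002, -0.003, 0.001, 0.002,  0.000, 0.001],
})
d["dq"] = d["dg"] - d["ds"]

# Screens: none, then 0.01, then 0.025 up to 0.15 in steps of 0.025 (Section 9).
screens = [0.0, 0.01] + [round(0.025 * k, 3) for k in range(1, 7)]
for s in screens:
    kept = d[d["dg"].abs() >= s] if s > 0 else d
    if len(kept) == 0:
        print(f"screen={s}: no years clear the screen")
        continue
    phi_s = (kept["dq"] / kept["dg"]).mean()
    print(f"screen={s}: n={len(kept)}, phi_s={phi_s:.3f}")
```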
## 10 Disclaimers
Saving in the full sense includes retained output as well as insertion of value from outside. National accounts recognize retained output as "own use" output, measured at cost, in such forms as gain in inventory and production of plant and equipment to be used by the producer rather than sold. Free growth, or equivalently the innovator's reserve plus market noise, can be categorized as a third form of retained output which is costless, and thus is invisible to national accounts. In this sense, free growth is a component of net saving. When we say that net saving adds nothing to capital value, we mean only net saving in the at-cost sense reported in national accounts.
We follow Mill in reasoning that concurrent growth of both capital and consumption implies more production from factors (capital and labor) already existing, and so implies free growth. It is also possible for capital and consumption to grow by turns in alternating phases. Thrift theory allows capital growth through consumption restraint over a period, and then consumption recovery through suspension of capital growth over the period following. It is for this uncertainty as to which mechanism is meant, in the common doctrine that balanced growth is at least mathematically possible, that we withhold judgment as to whether Eqs. (13), (16), (18), (21) and (23) reflect actual teaching.
## 11 Discussion and conclusions
Capital glut is the condition warned against by West, Ricardo, Malthus and Harrod. It is loosely defined as oversupply of capital at the current state of technology. We will not attempt a more exact definition here. Findings shown in our displays, anyhow, suggest that net saving raises the physical quantity of capital, say in number of shops, manufacturing plants or finished goods of similar design, without raising aggregate value of capital, and so contributes to capital glut.
These findings challenge the teachings that capital growth is effected by net saving enabled by consumption restraint, and that producer cost, including imputed interest as the opportunity cost of capital, converges to market realization. Evidence showing \(\varphi_{s}\cong 1\) and \(\varphi_{c}\cong 1\) suggests that all capital growth is free, and consequently that market realization, in the presence of innovation, exceeds producer cost by the entirety of capital growth. Meanwhile the same evidence, which indicates that net saving adds no capital value, suggests a review of the teachings that consumption plus net saving gives net income, and equivalently that consumption plus net investment gives net output.
Embodied growth is disembodied growth on a finer scale. It redeploys or repurposes existing labor skills, raw materials, and plant capacity, as well as existing finished goods, to achieve higher returns than available from the customary uses which determine their prices. The present value of yields from this advantage in return, or equivalently the innovator's reserve, defines the non-random component in free growth.
## Appendix A Net output with human capital
Human capital is impractical to measure, as it leaves little market record other than for its rental income in pay and investment cost in schooling. Thus national accounts leave it implicit, and allow us to infer what we can from data for pay and schooling. Those accounts are founded on the principle, sound in itself, that net output, or value added, is expressed in the sum of capital growth and net outflow from the value-added chain. In national accounts, then, where physical capital is the whole of capital while net outflow of the chain is the whole of consumption, the reasoning is
\[Y=\Delta K+C\;,\quad\text{neglecting human capital}^{9},\] (A.1)
Footnote 9: Where \(\Delta K\), mistakenly, we argue, is measured as \(S_{net}\).
It is possible in principle to model a value-added chain which includes human capital, and to compare findings with those shown in Eq. (A.1). Let human capital \(H\), in that new model, stand as the last link in the value-added chain. Adapting the classic illustration of the value added principle, say that farms produce wheat, mills convert the wheat to flour, bakeries convert the flour into bread, and humans convert some of the bread, called invested consumption, into human capital. The net outflow from this extended value-added chain is not all of consumption, but only the part
remaining after the part invested in human capital is subtracted (Schultz's "pure consumption"; Schultz (1961)). By this reasoning, the principle that net output is expressed in capital growth plus net outflow gives
\[Y=\Delta K+\Delta H+C_{p}\;,\quad\text{allowing human capital},\] (A.2)
where \(C_{p}\) gives pure consumption.
Yoram Ben-Porath (1967) reasoned that growth in human capital equals invested consumption plus self-invested work less human depreciation10. Let \(C_{s},\ W_{s}\) and \(D(H)\) show these flows respectively. Thus the combined arguments of Schultz and Ben-Porath arrive at
Footnote 10: Equation 4 in Ben-Porath’s paper, summarizing his first three equations. His terms and notation differ from ours. The concept of invested consumption was also introduced by Schultz (1961).
\[C=C_{s}+C_{p}\quad\text{and}\quad\Delta H=C_{s}+W_{s}-D(H)\;,\quad\text{allowing human capital}.\] (A.3)
Substitution of these equations into Eq. (A.2) finds
\[Y =\Delta K+C_{s}+W_{s}-D(H)+C_{p}\quad\text{and consequently}\] \[Y =\Delta K+C+W_{s}-D(H)\;,\quad\text{allowing human capital},\] (A.4)
if Schultz and Ben-Porath are right.
It is beyond the scope of this paper to pass judgement on either interpretation of net output. That is the reason why \(Z\) in Eq. (12) was given no meaning other than the sum of \(\Delta K\) and \(C\). \(Z\) may be interpreted to mean net output under the reasoning followed in national accounts, or not if we reserve judgement on the grounds leading to Eq. (A.4), or on other grounds.
|
2309.13610 | VisionKG: Unleashing the Power of Visual Datasets via Knowledge Graph | The availability of vast amounts of visual data with heterogeneous features
is a key factor for developing, testing, and benchmarking of new computer
vision (CV) algorithms and architectures. Most visual datasets are created and
curated for specific tasks or with limited image data distribution for very
specific situations, and there is no unified approach to manage and access them
across diverse sources, tasks, and taxonomies. This not only creates
unnecessary overheads when building robust visual recognition systems, but also
introduces biases into learning systems and limits the capabilities of
data-centric AI. To address these problems, we propose the Vision Knowledge
Graph (VisionKG), a novel resource that interlinks, organizes and manages
visual datasets via knowledge graphs and Semantic Web technologies. It can
serve as a unified framework facilitating simple access and querying of
state-of-the-art visual datasets, regardless of their heterogeneous formats and
taxonomies. One of the key differences between our approach and existing
methods is that ours is knowledge-based rather than metadatabased. It enhances
the enrichment of the semantics at both image and instance levels and offers
various data retrieval and exploratory services via SPARQL. VisionKG currently
contains 519 million RDF triples that describe approximately 40 million
entities, and are accessible at https://vision.semkg.org and through APIs. With
the integration of 30 datasets and four popular CV tasks, we demonstrate its
usefulness across various scenarios when working with CV pipelines. | Jicheng Yuan, Anh Le-Tuan, Manh Nguyen-Duc, Trung-Kien Tran, Manfred Hauswirth, Danh Le-Phuoc | 2023-09-24T11:19:13Z | http://arxiv.org/abs/2309.13610v2 | # VisionKG: Unleashing the Power of Visual Datasets via Knowledge Graph
###### Abstract
The availability of vast amounts of visual data with heterogeneous features is a key factor for developing, testing, and benchmarking of new computer vision (CV) algorithms and architectures. Most visual datasets are created and curated for specific tasks or with limited image data distribution for very specific situations, and there is no unified approach to manage and access them across diverse sources, tasks, and taxonomies. This not only creates unnecessary overheads when building robust visual recognition systems, but also introduces biases into learning systems and limits the capabilities of data-centric AI. To address these problems, we propose the **Vision** Knowledge **G**raph (**VisionKG**), a novel resource that interlinks, organizes and manages visual datasets via knowledge graphs and Semantic Web technologies. It can serve as a unified framework facilitating simple access and querying of state-of-the-art visual datasets, regardless of their heterogeneous formats and taxonomies. One of the key differences between our approach and existing methods is that ours is knowledge-based rather than metadata-based. It enhances the enrichment of the semantics at both image and instance levels and offers various data retrieval and exploratory services via SPARQL. **VisionKG** currently contains 519 million RDF triples that describe approximately 40 million entities, and are accessible at [https://vision.semkg.org](https://vision.semkg.org) and through APIs. With the integration of 30 datasets and four popular CV tasks, we demonstrate its usefulness across various scenarios when working with CV pipelines.
## 1 Introduction
Computer vision has made significant advances and visual datasets have become a crucial component in building robust visual recognition systems. The performance of the underlying deep neural networks (DNNs) in the systems is influenced not only by advanced architectures but also significantly by the quality of training data [59]. There are many available visual datasets, e.g., ImageNet [9],
OpenImage [28], and MS-COCO [33], which offer a range of visual characteristics in different contexts to improve the generalization capabilities of advanced machine learning models.
However, these datasets are often published in different data formats, and the quality of taxonomies and annotations varies significantly. Furthermore, labels used to define objects are available in diverse lexical definitions, such as WordNet [34], Freebase [4], or even just plain text. As a result, there may be inconsistencies in semantics across multiple datasets [30]. Isolated and non-unified datasets not only create unnecessary overhead when building robust visual recognition systems, but they also introduce biases into learning systems and limit the capabilities of data-centric AI [46].
Although researchers and practitioners have made efforts to unify visual datasets [29, 19, 36], a systematic approach to understanding the features and annotations underlying visual datasets is still lacking. For example, DeepLake [19] can access data from multiple data sources in a unified manner; however, it does not bridge the gap in linking and managing these datasets. Fiftyone [36] can partially capture inconsistencies across multiple datasets by visualizing datasets and analyzing data pipeline failures. Although these works improve the performance of the learned model in a data-centric manner, training DNNs with high-quality data from multiple sources in a cost-effective way remains a formidable challenge for researchers and engineers [51].
Knowledge graph [20] offers a flexible and powerful way to organize and represent data that is comprehensible for both humans and machines. Thus, to systematically organize and manage data for computer vision, we built a knowledge graph of the visual data, named VisionKG. VisionKG is designed to provide unified and interoperable semantic representations of visual data that are used in computer vision pipelines. This knowledge graph captures the entities, attributes, relationships, and annotations of the image data, enabling advanced mechanisms to query training data and perform further analysis.
To address the data inconsistency problems mentioned above, VisionKG interlinks annotations across various datasets and diverse label spaces, promoting a shared semantic understanding and facilitating the retrieval of images that meet specific criteria and user requirements. For instance, for training and testing a specific system, developers may require images with specific types and attributes tailored to their particular scenarios across a range of visual tasks or sources. For example, pedestrian and vehicle detection in adverse weather conditions [42] or occlusion-aware pose estimation [23] both require such tailored image sets across multiple sources for training and testing. Our approach also enables users to better explore and understand relationships between entities using facet-based visualization and exploration powered by a graph data model. Graph queries powered by a graph storage can be employed to create declarative training pipelines from merged computer vision datasets, providing a convenient way to navigate and discover patterns among interlinked visual datasets such as KITTI [15], MS-COCO [33], and Cityscapes [7]. Additionally, VisionKG offers enhanced flexibility in terms of data representation and organization, enabling
faster and easier access to the necessary information, which supports developers in building training pipelines more conveniently and efficiently.
VisionKG is built based on the Linked Data principles [3], adhering to the FAIR [52] and open science guidelines [5], and encompasses various data sources. These sources have been defined and maintained by the research community, as they are widely used and have a significant impact on the development of computer vision algorithms and systems. Their popularity ensures that they will be regularly and frequently updated and extended. This makes VisionKG a valuable resource for researchers and developers who require access to the newest, high-quality image data. Our main contributions are summarized as follows:
* We provide a unified framework for representing, querying, and analysis of visual datasets. By aligning different taxonomies, we minimize the inconsistency between different datasets.
* We make these datasets accessible via standardized SPARQL queries. It is available in both web user interface and via APIs.
* We demonstrate the advantages of VisionKG via three use cases: composing visual datasets with unified access and taxonomy through SPARQL queries, automating training and testing pipelines, and expediting the development of robust visual recognition systems.
* Currently, VisionKG contains 519 million RDF triples that describe approximately 40 million entities from 30 datasets and four popular CV tasks.
The remainder of the paper is structured as follows. In Section 2, we present detailed steps that follow the Linked Data publishing practice [3] to enforce the FAIR principles [52] in VisionKG. Section 3 presents the infrastructure of our VisionKG framework. In Section 4, we demonstrate the MLOps use cases with VisionKG and how it promotes this process. Sections 5-6 discuss related works and conclusions, respectively.
## 2 Enforcing FAIR Principles for Visual Datasets
### Making Visual Data Assets _Findable_ and _Accessible_
To ensure the **findability** of visual data assets, VisionKG uses Uniform Resource Identifiers (URIs) to identify resources, including images and their associated metadata. These URIs provide a unique and persistent identifier for each resource, making it easy to find and access specific images or sets of images. Figure 1 illustrates an RDF data snippet linking images and their annotations in COCO [33], KITTI [15] and VisualGenome [25].
This paves the way for using standardized or popular vocabularies/ontologies, such as DCAT and Schema.org, to enrich metadata associated with the content and context of image data in Section 2.2. These metadata can be used to facilitate searching, filtering, and discovery of images based on specific criteria, such as object category or image resolution, as demonstrated later in Section 3.3. In particular, VisionKG links each piece of metadata to an image to ensure that metadata clearly and explicitly describe the image they refer to, e.g., 'containing' bounding boxes of 'person', 'pedestrian' or a 'man' in Figure 1 (1). This not only enables easy retrieval and exploration of images and their related ones based on their metadata but also ensures that more metadata can be incrementally enriched by simply adding more RDF triples linked to the corresponding image. Such desired features are powered by a triple store in terms of storing, indexing and querying (cf. Section 3).
In this context, VisionKG can greatly facilitate the **accessibility** of data and metadata by using standardized communication protocols and supporting the decoupling of metadata from data. Its publication practice makes it easier for targeted users to access and reuse relevant data and metadata, even when the original data are no longer available. For instance, since several images in ImageNet or MS-COCO were downloaded or extracted from web sources, the metadata can provide alternative sources even when the original sources are no longer accessible.
To push the **accessibility** of VisionKG's data assets even further, users can access VisionKG through a well-documented web interface and a Python API. Both interfaces allow users to explore different aspects of VisionKG, such as the included tasks, images, and annotations with diverse semantics. Additionally, many SPARQL query examples\({}^{4,5}\) enable users to explore the functionalities of VisionKG in detail and describe their requirements or specific criteria using RDF statements.
Footnote 4: [https://vision.semkg.org](https://vision.semkg.org)
Footnote 5: [https://github.com/cqels/vision](https://github.com/cqels/vision)
### Ensure _Interoperability_ across Datasets and Tasks
To make VisionKG **interoperable** across different datasets, computer vision tasks, and knowledge graph ecosystems, we designed its data schema as an
Figure 1: FAIR for Visual Data Assets
RDFS ontology as shown in Figure 2. This schema captures the semantics of the properties of visual data related to computer vision tasks. Our approach makes use of existing and well-developed vocabularies such as schema.org wherever possible. This ensures interoperability and backward compatibility with other systems that use these vocabularies and reduces the need for customized schema development.
The key concepts in the CV datasets include images, annotations, and labels. To define these concepts, we reuse the schema.org ontology by extending its existing classes such as <schema:ImageObject> and <schema:CreativeWork>. For example, we extend <schema:ImageObject> to create the <cv:Image> class, and <schema:Dataset> to create the <cv:Dataset> class. By doing so, we are able to inherit existing properties, such as <schema:hasPart> or <schema:isPartOf>, to describe the relationships between datasets and images (Figure 2). Our created vocabulary offers the descriptors to capture the attributes of images that are relevant for training a computer vision (CV) model (Section 3.3), such as the image dimensions, illumination conditions, or weather patterns depicted in Figure 2. The concept Annotation refers to the labeling and outlining of specific regions within an image. Each type of annotation is used for a particular computer vision task. For instance, bounding boxes are utilized to train object detection models. However, annotations are also reusable for various computer vision tasks. For example, the bounding boxes of object detection annotations can be cropped to train a classification model that doesn't require bounding boxes. In order to enable interoperability of annotations across different computer vision tasks, we developed a taxonomy for them using RDFS ontology, as illustrated in Figure 2. In particular, defining the object detection annota
Figure 2: VisionKG Data Schema
tion class as a sub-class of the classification annotation enables the machine to understand that object detection annotations can be returned when users query annotations for a classification task. The cropping process can be performed during the pre-processing step of the training pipeline.
Annotations are associated with labels that define the object or the relationship between two objects (visual relationship). However, labels are available in heterogeneous formats, and their semantics are not consistent across datasets. For instance, as shown in Figure 1 (1), an object annotated as a pedestrian in the KITTI dataset or as a man in the Visual Genome dataset is annotated as a person in the MS-COCO dataset. Furthermore, in the Visual Genome dataset, WordNet [34] identification is used to describe the label. Such inconsistencies make it unnecessarily challenging to combine different datasets for training or testing purposes. To tackle this issue, we assign a specific label type that indicates how to integrate with other existing knowledge graphs to facilitate the **semantic interoperability** across datasets. Figure 1 (2) and Figure 1 (3) exemplify how inconsistent labels from three datasets can be aligned using the RDFS taxonomies from WikiData.
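A minimal sketch of how such a subclass alignment can be exploited when matching labels (the namespace and triples below are toy placeholders, not the actual VisionKG vocabulary):

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/labels/")   # placeholder namespace
g = Graph()

# Toy taxonomy: dataset-specific labels aligned under a shared concept.
g.add((EX.pedestrian, RDFS.subClassOf, EX.person))   # KITTI-style label
g.add((EX.man,        RDFS.subClassOf, EX.person))   # Visual-Genome-style label

# All labels that should match a query for "person", via transitive subClassOf
# (the starting concept itself is included in the closure).
matches = set(g.transitive_subjects(RDFS.subClassOf, EX.person))
print(sorted(str(m) for m in matches))
```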
### Optimize _Reusability_ through SPARQL Endpoint
To optimize the reusability of visual data assets, VisionKG provides a SPARQL endpoint 6 that enables users to programmatically discover, combine and integrate visual data assets, along with semantic-rich metadata based on the common vocabularies provided in Section 2.2. In particular, users can use powerful SPARQL queries to automatically retrieve desired data across datasets for various computer vision tasks. We provide exemplar queries at [http://vision.semkg.org/](http://vision.semkg.org/).
Footnote 6: SPARQL Endpoint of VisionKG: [https://vision.semkg.org/sparql](https://vision.semkg.org/sparql)
Moreover, we annotated VisionKG with data usage licenses for more than ten types7 of licenses associated with datasets listed in Section 3.2. With this licensed data, users can filter datasets by their licenses to build their own custom datasets. For example, a user can pose a single SPARQL query to retrieve approximately 0.8 million training samples to train a classification model for cars with Creative Commons 4.0 license8.
Footnote 7: List of dataset licenses in VisionKG: [http://vision.semkg.org/licences.html](http://vision.semkg.org/licences.html)
Footnote 8: CC BY 4.0: [https://creativecommons.org/licenses/by/4.0/](https://creativecommons.org/licenses/by/4.0/)
By linking images and annotations with the original sources and related data curation processes, we captured and shared detailed provenance information for images and their annotations; thus, VisionKG enables users to understand the history and context of data and metadata. By providing such detailed provenance information, VisionKG can enable users to better evaluate the quality and reliability of image and video data and metadata, promoting their reuse.
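As a sketch of such programmatic reuse (the endpoint URL is the one given in this section; the graph pattern relies on the schema.org property mentioned in Section 2.2 and should be adapted if the published schema differs):

```python
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://vision.semkg.org/sparql")
sparql.setReturnFormat(JSON)

# Count images per dataset via schema:isPartOf (see Section 2.2); treat the
# property IRI as an assumption and adapt it to the published schema if needed.
sparql.setQuery("""
PREFIX schema: <http://schema.org/>
SELECT ?dataset (COUNT(?image) AS ?numImages)
WHERE {
  ?image schema:isPartOf ?dataset .
}
GROUP BY ?dataset
LIMIT 10
""")

for row in sparql.query().convert()["results"]["bindings"]:
    print(row["dataset"]["value"], row["numImages"]["value"])
```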
## 3 Unified Access for Integrated Visual Datasets
In this Section, we first provide a detailed overview of the architecture of VisionKG, and discuss how it supports access to various popular visual datasets and
computer vision tasks. We then demonstrate VisionKG's capabilities in providing unified access to integrated visual datasets via SPARQL queries, ultimately promoting and accelerating data streaming in CV pipelines. It shows the practical usefulness of our framework for MLOps [1] in Section 4 by exploiting knowledge graph features.
### VisionKG Architecture to Facilitate Unified Access
Figure 3 presents an overview of our VisionKG framework and the process of creating and enriching our unified knowledge graph for visual datasets. We start by collecting popular computer vision datasets from the PaperWithCode platform 10. Next, we extract their annotations and features across datasets using a _Visual Extractor_. We use the RDF Mapping Language (RML) [10] to map the extracted data into RDF. RDF data is generated using a _Semantic Annotator_ implemented using RDFizer [21]. To enhance interoperability and enrich semantics in VisionKG, we link the data with multiple knowledge bases, such as WordNet [34] and Wikidata [38]. The _Semantic Enrichment Reasoner_ expands the taxonomy by materializing the labels in each dataset using the ontology hierarchy. For instance, categories like pedestrian or man isSubClassOf person (Figure 1 (2)). Based on the interlinked datasets and the Semantic Enrichment Reasoner, users can access the data in VisionKG in a unified way (Figure 1 (3)). The SPARQL Engine maintains an endpoint for users to access VisionKG using the SPARQL query language.
Footnote 10: [https://paperswithcode.com/datasets](https://paperswithcode.com/datasets)
Moreover, VisionKG offers a front-end web interface that allows users to explore the queried datasets, for example by visualizing their data distribution and the corresponding annotations ([https://vision.semkg.org/statistics.html](https://vision.semkg.org/statistics.html)).
Figure 3: Overview of VisionKG Platform
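To illustrate the mapping step in spirit (VisionKG itself uses RML and RDFizer for this; the snippet below is a simplified, hand-written stand-in with placeholder IRIs rather than the real mapping rules):

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

CV = Namespace("http://example.org/cv/")       # placeholder for the cv: vocabulary
SCHEMA = Namespace("http://schema.org/")

# An annotation record as a visual extractor might emit it (values are illustrative).
record = {"image_id": "coco_000123", "dataset": "coco",
          "label": "person", "bbox": [12.0, 34.0, 56.0, 78.0]}

g = Graph()
img = URIRef(f"http://example.org/image/{record['image_id']}")
ann = URIRef(f"http://example.org/annotation/{record['image_id']}_0")
ds = URIRef(f"http://example.org/dataset/{record['dataset']}")

g.add((img, RDF.type, CV.Image))
g.add((img, SCHEMA.isPartOf, ds))
g.add((ann, RDF.type, CV.ObjectDetectionAnnotation))   # placeholder class name
g.add((ann, CV.annotatesImage, img))                   # placeholder property names
g.add((ann, CV.hasLabel, Literal(record["label"])))
g.add((ann, CV.boundingBox, Literal(",".join(map(str, record["bbox"])))))

print(g.serialize(format="turtle"))
```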
### Linked Datasets and Tasks in VisionKG
The current version of our framework (as of May 2023) integrates thirty of the most commonly used and popular visual datasets, covering the tasks of visual relationship detection, image classification, object detection, and instance segmentation. Table 1 gives an overview of the contained datasets, images, annotations, and triples in VisionKG. In total, it encompasses over 519 million triples distributed among these visual tasks.
To enhance the effectiveness of our framework for image classification, we have integrated both large benchmark datasets, such as ImageNet [9], and smaller commonly used datasets, such as CIFAR [26]. The diversity of the covered datasets enables users to quickly and conveniently validate model performance and thus avoid extra laborious work. Table 2 demonstrates that ImageNet comprises 1.2 million entities, dominating the distribution of the classification task in VisionKG. Thanks to the interlinked datasets and semantic-rich relationships across visual tasks, users can query different categories and the desired number of images to tailor training pipelines for specific scenarios.
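As a sketch of how such query results might feed a training pipeline (the list of image URLs and labels stands in for the output of a VisionKG query; none of the names below are part of the VisionKG API):

```python
import io

import requests
from PIL import Image
from torch.utils.data import DataLoader, Dataset


class QueryResultDataset(Dataset):
    """Wraps (image_url, label) pairs, e.g. as returned by a SPARQL query."""

    def __init__(self, samples, transform=None):
        self.samples = samples            # list of (url, class_index) tuples
        self.transform = transform

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        url, label = self.samples[idx]
        image = Image.open(io.BytesIO(requests.get(url, timeout=10).content)).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, label


# Hypothetical query results; in practice these would come from the endpoint.
# samples = [("https://example.org/img/000123.jpg", 0), ...]
# loader = DataLoader(QueryResultDataset(samples), batch_size=32)
```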
For object detection, Table 1 and Table 3 show that VisionKG comprises approximately 478 million triples for bounding boxes with dense annotations mainly contributed by large-scale datasets like OpenImages [28] and Objects365 [44]. The variety of visual features allows users to create diverse composite datasets based on their individual requirements for the size or the density of bounding boxes, which can be helpful to reduce biases solely introduced by a single dataset captured under specific conditions and scenarios, e.g., to enhance the model per
\begin{table}
\begin{tabular}{c c c c c} \hline \hline
**Visual Tasks** & **\#Datasets** & **\#Images** & **\#Annotations** & **\#Triples** \\ \hline Visual Relationship & 2 & 119K & 1.2M & 2.1M \\ Instance Segmentation & 7 & 300K & 3.9M & 22.4M \\ Image Classification & 9 & 1.7M & 1.7M & 16.6M \\ Object Detection & 12 & 4.3M & 50.8M & 478.7M \\ \hline Total & 30 & 6.4M & 57.6M & 519.8M \\ \hline \hline \end{tabular}
\end{table}
Table 1: Statistics across various Visual Tasks in VisionKG
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & \multicolumn{3}{c}{**IMN**} & **SOP** & **CIFAR** & **MNIST** & **CART** & **Cars196** & **CUB200** \\ \hline
**\#Entities** & 2.7M & 240K & 240K & 140K & 77K & 32.4K & 23.6K \\
**\#Triples** & 13.3M & 1.2M & 1.2M & 0.7M & 0.4M & 0.2M & 0.1M \\ \hline \hline \end{tabular}
\end{table}
Table 2: Statistics of Triples and Entities in VisionKG for Image Classification. IMN:ImageNet[9], SOP: Stanford Online Products [39], CIFAR:CIFAR10/100 [26], CART: Caltech-101/-256 [16], CUB200: Caltech-UCSD Birds-200-2011 [48].
For visual relationship detection, which aims to recognize relationships between objects in images, we have further integrated datasets such as VisualGenome [25] and SpatialScene [54], containing over 1.9 million triples for both bounding boxes and object-level relationships. Besides, VisionKG comprises 22.4 million triples for the instance segmentation task, allowing users to retrieve and reuse the masks of all instance-level objects for downstream scenarios, thus improving the pixel-level segmentation performance of models.
### Visual Dataset Explorer powered by SPARQL
Organizing training data, which may be in heterogeneous formats and have distinct taxonomies, into one pipeline can be a time-consuming task. To reduce this effort, our framework provides a SPARQL web interface that enables users to access, explore, and efficiently combine data by leveraging the rich semantics of SPARQL. This empowers users to describe their requirements or specific criteria using graph query patterns.
Figure 4: VisionKG Web Interface
\begin{table}
\begin{tabular}{l c c c c c c c c c} \hline \hline & **MSC** & **UAD** & **KIT** & **CAR** & **BDD** & **OID** & **O365** & **LVIS** & **MVD** & **VOC** \\ \hline
**\#**Entities & 1.0M & 678K & 47K & 32K & 1.5M & 14.3M & 28.5M & 1.6M & 1.2M & 138K \\
**\#**Triples & 9.7M & 6.4M & 0.4M & 0.3M & 15.1M & 135.8M & 277.4M & 15.9M & 11.9M & 1.0M \\ \hline \hline \end{tabular}
\end{table}
Table 3: Statistics of Triples and Entities in VisionKG for Object Detection. MSC: MS-COCO [33], UAD: UA-DETRAC [50], KIT: KITTI [15], CAR: StanfordCars196 [24], BDD: BDD100K [55], OID: OpenImages [28], O365: Objects365 [44], LVIS [17], MVD [37], VOC [12]
Figure 4 demonstrates our visual dataset explorer equipped with a live interactive SPARQL web interface. Users can initiate their exploration by selecting a desired task, such as Detection, Classification, Segmentation, or Visual Relationship, from a drop-down menu in Figure 4 1. Upon task selection, the system will promptly generate a list of all compatible datasets that support the chosen task, as illustrated in Figure 4 2.
Next, users may choose a dataset, such as COCO [33] or KITTI [15], from the list. This will prompt the system to display all available categories within that dataset in Figure 4 3. To filter or select the desired categories, users can simply enter a keyword into the text box depicted in Figure 4 4. This process is further facilitated by allowing users to drag and drop a category from Figure 4 3 to the query box in Figure 4 6. The system will then auto-generate a SPARQL query, accompanied by explanatory text in Figure 4 5, designed to select images containing the specified category. It is noteworthy that multiple categories from different datasets can be selected. Users may modify the query by removing categories or adjusting the query conditions by selecting available options from the boxes in Figure 4 5 or Figure 4 6. Users can also adjust the number of images to be retrieved.
Once the query is finalized, the user may click the "Query" button, and the results will be displayed in table format in Figure 4 7. Additionally, users may select the "Visualization" tab to view the results graphically, as shown in Figure 4 8. By clicking on an image, users may access additional information, such as annotations of that image and annotations generated from popular deep learning models shown in Figure 4 9. Overall, the platform offers an intuitive and efficient method for dataset selection and querying for machine learning tasks.
## 4 VisionKG for MLOps
The term _MLOps_ refers to the application of the DevOps workflow [11] specifically for machine learning (ML), where model performance is primarily influenced by the quality of the underlying data [1]. As demonstrated in Section 2 and Section 3, the detailed overview of our framework's architecture highlights its significant potential to boost the development of MLOps (e.g., data collection, preparation, and unified access to integrated data). In this section, we present three use cases that demonstrate how to carry out more complicated MLOps steps using our framework. These use cases demonstrate the ability to utilize VisionKG for composing visual datasets with unified access and taxonomy through SPARQL queries, automating training and testing pipelines, and expediting the development of robust visual recognition systems. VisionKG's features enable users to efficiently manage data from multiple sources, reduce overheads, and enhance the efficiency and reliability of machine learning models. More detailed features and tutorials about VisionKG can be found in our GitHub repository10.
### Composing Visual Datasets with a Unified Taxonomy and SPARQL
Data often comes in a variety of structures and schemas, and there is a need for consolidation of this information in a unified approach. Efficient data management with expressive query functionalities plays a pivotal role in MLOps. However, data from heterogeneous sources with inconsistent formats presents numerous challenges [19, 41] that must be addressed to ensure the efficiency and reliability of machine learning models under development. Additionally, the quality and consistency of data and unified access to data are paramount in developing visual recognition systems.
As discussed in Sections 2 and 3, VisionKG is equipped with a SPARQL engine that allows developers to programmatically build composite datasets (from diverse sources with different annotation formats), significantly reducing the effort spent in the data preparation phase of MLOps. For instance, as demonstrated in Figure 5, users can query for a subset of images or categories from one dataset, e.g., images containing both car and van from KITTI [15]. Besides, as desired, they can also query for images from multiple sources with heterogeneous formats, e.g., images containing car from MS-COCO [33] and sedan from UA-DETRAC [50], even though these datasets have very different annotation formats (i.e., the annotations of MS-COCO and UA-DETRAC are organized in JSON and XML format, respectively). Furthermore, benefiting from the Semantic Enrichment Reasoner described in Section 3.1 and the integrated knowledge bases (e.g., WordNet [34]), users can query for images containing person from MS-COCO, KITTI, and Visual Genome [25] (due to distinct taxonomies, person is annotated as pedestrian in KITTI and labeled as man in Visual Genome) using a simple
Figure 5: Dataset-Exploration with SPARQL under various Conditions in VisionKG.
query (Figure 1 3) rather than a more complex query (Figure 1 1) that covers all possible cases, e.g., images containing pedestrian in KITTI and/or man in the Visual Genome dataset, as desired.
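The difference between the two query styles can be sketched as follows; the cv:* terms are illustrative placeholders rather than the actual VisionKG vocabulary, while the rdfs:subClassOf* property path is standard SPARQL 1.1.

```python
# Two equivalent retrievals of "person" images; schema terms are placeholders.

# Without taxonomy alignment: every dataset-specific label must be listed.
EXPLICIT_QUERY = """
PREFIX cv: <http://example.org/visionkg/>
SELECT DISTINCT ?image WHERE {
  ?ann cv:annotatesImage ?image ;
       cv:hasLabel ?label .
  VALUES ?label { cv:Person cv:Pedestrian cv:Man }   # COCO / KITTI / Visual Genome
}
"""

# With the materialized taxonomy: one property path over rdfs:subClassOf
# retrieves pedestrian- and man-labelled images as person.
UNIFIED_QUERY = """
PREFIX cv:   <http://example.org/visionkg/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT DISTINCT ?image WHERE {
  ?ann cv:annotatesImage ?image ;
       cv:hasLabel ?label .
  ?label rdfs:subClassOf* cv:Person .
}
"""
```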
Thanks to the semantic interoperability (cf. Section 2.2) of interlinked annotations across diverse label spaces, users can create datasets from various sources with relevant definitions as desired. Along with the enrichment of semantic relationships, VisionKG provides users with composite visual datasets in a cost-efficient and data-centric manner and hence boosts the data flow in MLOps. Consider the Robust Vision Challenge11 (RVC) in the context of object detection, where participants need to download terabyte-level datasets from the web (from different sources, taxonomies, and with inconsistent formats), and then train a unified detector to classify and localize objects across all categories in these datasets. To accomplish this, one approach is to unify the taxonomies from these datasets and mitigate the bias introduced by specific domains or similar categories from different taxonomies (e.g., the stop sign in MS-COCO is a hyponym of traffic sign, which is annotated in MVD [37]). Although the organizers provide manually aligned annotations as a good starting point, unifying labels from distinct taxonomies can still be a time-consuming process. VisionKG brings users one step closer to achieving this: users can carry out this process with the assistance of external knowledge bases like Wikidata [47], thereby leveraging external knowledge and facts. Additionally, the unified data model that leverages RDF and knowledge graphs, along with the SPARQL endpoint, allows users to conveniently query specific parts of the datasets as desired without the extra effort of parsing and processing the entire large datasets. This constitutes part of how VisionKG accelerates the MLOps workflow.
Footnote 11: [http://www.robustvision.net/](http://www.robustvision.net/)
### Automating Training and Testing Pipelines
One of the primary goals of MLOps is to automate the training and testing pipelines to accelerate the development and deployment of ML models [1]. Automated workflows enable rapid iteration and experimentation, avoiding time-consuming manual steps during model development for both researchers and developers. However, despite the advancements of MLOps in terms of productivity and reproducibility of experiments, some limitations remain in current MLOps tools (e.g., Kubeflow [2] and MLflow [1]), such as limited support for complex data types and multi-modal data (e.g., images, videos, and audio). Besides, integrating these MLOps tools with existing diverse data infrastructures can be challenging and requires significant effort.
As described in Section 3.3, powered by SPARQL, VisionKG supports automated end-to-end pipelines for visual tasks. Users can start a training pipeline by writing queries to construct various composite visual datasets. As demonstrated in Figure 6 1, users can retrieve images and annotations with a few lines of SPARQL, using RDF-based descriptions to obtain the desired data, such as images containing box-level annotations of car and person from the interlinked datasets in VisionKG. In combination with current popular frameworks (e.g., PyTorch, TensorFlow) or toolboxes (e.g., MMDetection [6], Detectron2 [53]), users
can further utilize the retrieved data to construct their learning pipelines in just a few lines of Python code without extra effort, as demonstrated in Figure 6 2. Users only need to define the model they want to use and the hyperparameters they want to set.
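A minimal sketch of such a pipeline is given below. It assumes that a preceding query step has already produced (image URL, boxes, labels) records, that a load_image helper (hypothetical) turns a URL into an image tensor, and that the detector follows the torchvision-style training API; none of these names come from the VisionKG codebase.

```python
# Minimal sketch: feed query results into a PyTorch training loop.
# `records` is assumed to be a list of (image_url, boxes, labels) tuples
# produced by a preceding SPARQL query step (hypothetical helper).
import torch
from torch.utils.data import DataLoader, Dataset

class QueriedDetectionDataset(Dataset):
    def __init__(self, records, load_image):
        self.records = records        # output of the query step
        self.load_image = load_image  # e.g. download + decode to a CHW tensor

    def __len__(self):
        return len(self.records)

    def __getitem__(self, idx):
        url, boxes, labels = self.records[idx]
        target = {"boxes": torch.tensor(boxes, dtype=torch.float32),
                  "labels": torch.tensor(labels, dtype=torch.int64)}
        return self.load_image(url), target

def train_one_epoch(model, records, load_image, lr=1e-4):
    loader = DataLoader(QueriedDetectionDataset(records, load_image),
                        batch_size=4, collate_fn=lambda batch: tuple(zip(*batch)))
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for images, targets in loader:
        loss_dict = model(list(images), list(targets))  # torchvision-style detector
        loss = sum(loss_dict.values())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```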
Additionally, users can use VisionKG for their testing pipeline. The inference results can be integrated with data from VisionKG to provide quick insights about a candidate model for specific scenarios. Figure 6 3 demonstrates how one can gain a quick overview of how well a trained YOLO model [22] detects car on images containing car in crowded traffic scenes.
The features described above significantly reduce the workload during data collection, preparation, pre-processing, verification, and model selection for MLOps. Further features of automated pipelines using VisionKG can be found in our GitHub repository12.
Footnote 12: [https://github.com/cqels/vision](https://github.com/cqels/vision)
### Robust Visual Learning over Diverse Data-Sources
The increasing demand for robust visual learning systems has led to the need for efficient MLOps practices to handle large-scale heterogeneous data, maintain data quality, and ensure seamless integration between data flow and model development. Moreover, a robust learning system should perform consistently well under varying conditions, such as invariance to viewpoint and scale, stable performance under instance occlusion, and robustness to illumination changes. However, many existing visual datasets are specifically designed and curated for particular tasks, often resulting in a limited distribution of image data applicable only in narrowly defined situations [40]. This not only imposes unnecessary burdens when developing robust visual recognition systems but also introduces biases within learning systems and constrains the robustness of visual recognition systems.
As discussed in Section 4.1 and 4.2, users can use VisionKG to compose datasets across interlinked data sources and semantic-rich knowledge bases and
Figure 6: Construct CV Pipelines Employing VisionKG.
automatically build training and testing pipelines starting from SPARQL queries. This paves the way to support the construction of robust learning systems exploiting features from VisionKG. For instance, when users want to develop a robust object detector, besides bounding boxes and annotated categories, other environmental conditions, such as weather and illumination, should also be considered and incorporated as prior knowledge to boost the robustness of trained detectors. Using VisionKG, as demonstrated in Figure 5 2, users can also employ fine-grained criteria for retrieving images with annotations, such as querying for "images captured at night showing cars in rainy weather conditions." This extends VisionKG's functionalities further for exploring and constructing datasets, allowing users to explore rich visual features as desired and build models that cater more robustly to various scenarios, e.g., images captured during adverse weather conditions or at different times of the day. This can assist users in evaluating the domain-transfer capability of models (e.g., whether a detector trained on KITTI [15] is also robust enough to detect cars in snowy weather conditions) or in handling rare categories and the long-tail phenomenon [56] (e.g., querying for a composite dataset containing specific categories which are rare in the source dataset to balance the data distribution).
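A sketch of such a fine-grained query is shown below; the scene-metadata predicates (cv:weather, cv:timeOfDay) are assumptions introduced for illustration and may not match the deployed schema.

```python
# Scenario-specific retrieval ("cars at night in rain"); predicate names are
# illustrative assumptions rather than the actual VisionKG vocabulary.
RAINY_NIGHT_CARS = """
PREFIX cv: <http://example.org/visionkg/>
SELECT DISTINCT ?image WHERE {
  ?ann   cv:annotatesImage ?image ;
         cv:hasLabel cv:Car .
  ?image cv:weather   "rainy" ;
         cv:timeOfDay "night" .
}
LIMIT 500
"""
```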
These features reduce the bias arising from unrelated samples and also enable users to construct scenario-specific datasets covering rich semantics in a convenient fashion. In this way, it allows the users to build robust training pipelines in both data- and model-centric manners.
## 5 Related Work
**Limitations in Existing Computer Vision Datasets**
Modern computer vision models are data-intensive and rely heavily on available datasets to drive the learning process and update learnable parameters. However, the majority of visual datasets, such as KITTI [15] and MS-COCO [33], are limited to specific domains, with diverse taxonomies and an imbalanced class distribution. Model-centric approaches, like [49][57], have trained models to deal with those issues, but they require either a domain adapter or an additional model to learn the distribution of the unified datasets. However, both model-centric solutions demand extra computing power. Data-centric approaches such as MSeg [29] attempt to unify and interlink datasets manually, which is labor-intensive. Besides, existing data toolchains or data hubs like Deep Lake [19], Hugging Face 13 and OpenDataLab 14 are well-established data infrastructures for organizing datasets from distinct web sources. However, these toolchains are based solely on meta-data and do not interlink images and annotations across datasets. In contrast, our framework employs knowledge graphs and diverse external knowledge bases to achieve this and adheres to the FAIR principles [52], enabling VisionKG to interlink images and annotations across visual datasets and tasks with semantic-rich relationships.
**Knowledge Graph Technologies in Computer Vision**
Knowledge graphs can enhance the utilization of background knowledge about the real world and capture the semantic relationships in images and videos through external knowledge and facts [8, 58]. Approaches such as KG-CNet [13] integrate external knowledge sources like ConceptNet [45] to capture the semantic consistency between objects in images. KG-NN [35] facilitates the conversion of domain-agnostic knowledge, encapsulated within a knowledge graph, into a vector space representation via knowledge graph embedding algorithms. However, these methods leverage external knowledge only during or after the learning procedure, whereas our method utilizes not only external knowledge bases but also interlinked datasets. In this way, the enhanced semantics can serve to render rich features for the integrated datasets. The approaches presented in [14] and [38] use Wikidata [47] to enrich and interlink annotations for ImageNet [27]. Thanks to the knowledge and facts from the external knowledge base, the data quality has been improved, but both approaches are labor-intensive and mainly target that specific dataset. Besides, the re-usability of these two approaches for other large visual datasets, such as OpenImages [28] and Objects365 [44], and for other knowledge bases, e.g., Freebase, has not been investigated. [18] employed knowledge graphs to interlink datasets. However, this approach mainly focuses on three datasets in the context of autonomous driving scenarios. In contrast, our framework, VisionKG, utilizes diverse knowledge bases such as WordNet [34], Wikidata [47], and Freebase [4] to enhance the semantics at both the image and instance level. Additionally, KVQA [43] is a knowledge-based visual dataset employing Wikidata. It is restricted mainly to person entities. Different from it, our work interlinks various visual datasets and numerous entities across diverse taxonomies and domains.
## 6 Conclusions and Future Works
We presented VisionKG, a novel unified framework for accessing and querying state-of-the-art CV datasets, regardless of heterogeneous sources and inconsistent formats. With semantic-rich descriptions and high-quality, consistent visual data, it not only facilitates the automation of CV pipelines but is also beneficial for building robust visual recognition systems.
As new large-scale datasets emerge, there is an increasing need to develop more efficient methods for querying and managing such a huge amount of data. As future work, we will utilize advanced indexing techniques, query optimization, and leveraging distributed computing technologies to improve scalability and integrate further datasets.
## 7 Acknowledgements
This work was funded by the German Research Foundation (DFG) under the COSMO project (ref. 453130567), the German Ministry for Education and Research via The Berlin Institute for the Foundations of Learning and Data (BIFOLD, ref. 01IS18025A and ref. 01IS18037A), the European Union's Horizon WINDERA programme under grant agreement No. 101079214 (AIoTwin), and the RIA research and innovation programme under grant agreement No. 101092908 (SmartEdge). |
2310.00069 | Absorptive Effects and Classical Black Hole Scattering | We describe an approach to incorporating the physical effects of the
absorption of energy by the event horizon of black holes in the scattering
amplitudes based post-Minkowskian, point-particle effective description.
Absorptive dynamics are incorporated in a model-independent way by coupling the
usual point-particle description to an invisible sector of gapless internal
degrees-of-freedom. The leading order dynamics of this sector are encoded in
the low-energy expansion of a spectral density function obtained by matching an
absorption cross section in the ultraviolet description. This information is
then recycled using the scattering amplitudes based Kosower-Maybee-O'Connell
in-in formalism to calculate the leading absorptive contribution to the impulse
and change in rest mass of a Schwarzschild black hole scattering with a second
compact body sourcing a massless scalar, electromagnetic or gravitational
field. The results obtained are in complete agreement with previous worldline
Schwinger-Keldysh calculations and provide an alternative on-shell scattering
amplitudes approach to incorporating horizon absorption effects in the
gravitational two-body problem. | Callum R. T. Jones, Michael S. Ruf | 2023-09-29T18:19:44Z | http://arxiv.org/abs/2310.00069v2 | # Absorptive Effects and Classical Black Hole Scattering
###### Abstract
We describe an approach to incorporating the physical effects of the absorption of energy by the event horizon of black holes in the scattering amplitudes based post-Minkowskian, point-particle effective description. Absorptive dynamics are incorporated in a model-independent way by coupling the usual point-particle description to an invisible sector of gapless internal degrees-of-freedom. The leading order dynamics of this sector are encoded in the low-energy expansion of a spectral density function obtained by matching an absorption cross section in the ultraviolet description. This information is then recycled using the scattering amplitudes based Kosower-Maybee-O'Connell in-in formalism to calculate the leading absorptive contribution to the impulse and change in rest mass of a Schwarzschild black hole scattering with a second compact body sourcing a massless scalar, electromagnetic or gravitational field. The results obtained are in complete agreement with previous worldline Schwinger-Keldysh calculations and provide an alternative on-shell scattering amplitudes approach to incorporating horizon absorption effects in the gravitational two-body problem.
## 1 Introduction
The detection of gravitational waves by the LIGO and Virgo collaborations has inaugurated a new era of gravitational wave astrophysics [1; 2]. Beyond the celebrated initial discovery, the next generation of ground- and space-based gravitational wave detectors [3; 4; 5] are expected to increase sensitivity by two orders-of-magnitude, beginning an era of _precision_ gravitational wave science. The success of this experimental program will require commensurate advances in theoretical waveform predictions. A powerful approach to generating these predictions is given by the effective one-body (EOB) resummation [6; 7], taking as input information from a variety of different physical regimes including non-perturbative numerical relativity simulation [8; 9; 10; 11], self-force expansion [12; 13; 14; 15] and perturbative weak-field calculations based on the post-Newtonian (PN) [16; 17] and post-Minkowskian (PM) expansions [18; 19; 20; 21; 22; 23].
Recent years have seen tremendous advances in these weak-field perturbative approaches. In particular, new and powerful calculational methods based on relativistic scattering amplitudes [24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36] and worldline effective field theories (EFT) [37; 38; 39; 40; 41; 42; 43], have allowed for rapid progress in understanding the PM scattering regime of the gravitational two-body problem. These approaches share the common strategy of initially framing the problem in _quantum mechanical_ language, and then using the eventual _classical_ or \(\hbar\to 0\) limit as an additional expansion leading to vast simplifications in the required calculations. This quantum-first perspective leads to a natural synthesis with techniques developed in modern high-energy physics including the systematic use of EFTs [24; 44; 45; 46; 47], unitarity-based methods [48; 49; 50; 51; 52], double-copy [53; 54; 55; 56] and methods for advanced multi-loop integration [57; 58; 59; 60; 61; 62]. Further details can be found in the review [63].
Physically, the PM scattering regime is an expansion based on a separation of scales between the intrinsic size of the compact bodies \(R\) (\(=2G_{\rm N}M\) for a Schwarzschild black hole) and the impact parameter \(b\). In the limit \(R\ll b\) the problem admits an effective description as the scattering of point-particles interacting by exchanging gravitons (and possibly other massless force-mediating particles). At low orders in this long-distance expansion it is sufficient to calculate scattering observables from a model in which the compact bodies are mathematically represented as structureless elementary particles minimally coupled to the mediator fields. At higher-orders, finite size effects will begin to contribute; some of these effects can be captured by modifying the effective description with non-minimal couplings (permanent multipole moments or tidal Love numbers) [64; 65], _but some cannot_. As emphasized long-ago [66], even in the long-distance regime, classical macroscopic bodies, including black holes and neutron stars, differ qualitatively from elementary particles due to the existence of _gapless_ internal degrees-of-freedom [47; 67].
As a motivating illustration, consider a lump of some material modelled as an Einstein solid, where the internal degrees-of-freedom are approximated as a large number of independent quantum harmonic oscillators [68]. The quantum mechanical spectrum of this model is discrete with level spacing of \(\mathcal{O}(\hbar)\). In the classical limit \(\hbar\to 0\), this model has an effectively continuous spectrum of excited states extending all the way to the ground state threshold without a gap. Even though a precise microscopic description of a Schwarzschild black hole is not known, we expect that it shares these qualitative features; from explicit black hole perturbation theory calculations it is known that a Schwarzschild black hole can absorb radiation of arbitrarily low frequency [69; 70], necessitating the existence of (classically) gapless excited states.
In a two-body scattering event, no matter how small the momentum transfer between the bodies, there will necessarily exist near-threshold excited states that can go on-shell and therefore cannot be integrated out of the point-particle effective theory. Since we may have neither direct experimental access to these states, nor a precise microscopic theoretical description, our approach to incorporating their physical effects is to regard them as constituting an _invisible sector_ into which energy and other quantum numbers may be absorbed. As our choice of language suggests, this problem shares many formal similarities with model-independent approaches to describing interactions between the Standard Model and a hidden sector [71; 72; 73]. From this perspective, the problem of scattering black
holes and other compact macroscopic bodies is to be treated as the evolution of an open quantum system.
In this paper we develop a scattering amplitudes based formalism for incorporating the physical effects of these near-threshold excited states, closely modelled on the world-line formalism described in [74; 75; 76; 77]. At leading PM order the effects of absorption are parametrized by the low-energy expansion of a spectral density function obtained by explicitly matching with an absorption cross section calculated in black hole perturbation theory [69; 70]. We then, in an EFT sense, recycle this information to calculate distinct low-energy observables. In particular, in this paper we calculate the leading PM absorptive contributions to the impulse of a Schwarzschild black hole scattering with a second compact body sourcing a massless scalar (3.36), electromagnetic (3.38) or gravitational field (3.40). The contribution to the impulse from graviton absorption agrees perfectly with previous worldline calculations [76]; as far as we are aware the expressions for the impulse due to scalar and photon absorption are new.
Previous theoretical studies of horizon absorption effects include [74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85; 86; 87; 88; 89; 90; 91]. The problem of calculating the absorptive contribution to the impulse in two-body scattering was approached in [76] using off-shell worldline based methods, where in-in observables are naturally calculated using the Schwinger-Keldysh or closed-time-path formalism [92; 93; 94]. In this paper we approach this problem from an on-shell scattering amplitudes perspective, where the natural in-in formalism is given by Kosower, Maybee and O'Connell (KMOC) [95]. As emphasized recently [96] these formalisms are closely related, but organize the calculation in very different ways. Even though we are calculating an in-in observable, KMOC takes ordinary in-out scattering amplitudes as building blocks, allowing us to retain some of the power and simplicity of modern amplitudes methods and has a natural synthesis with the previously mentioned scattering amplitudes based approaches to the gravitational two-body problem.
This paper is organized as follows. In Section 2 we introduce a general framework for incorporating horizon absorption by coupling the point-particle effective description to an invisible sector. In Section 2.2 we demonstrate that the leading-order low-energy dynamics of this sector are encoded in the expansion of a spectral density function and obtain this information by explicitly matching with an absorption cross section. The absorptive contribution to the impulse of a black hole during a two-body scattering event is then obtained in Section 3. We explain how absorption of energy by the invisible sector degrees-of-freedom can be naturally incorporated in the KMOC formalism in Section 3.1. We then describe in Sections 3.2 and 3.3, how the "heavy" and "light" invisible sector degrees of freedom manifest as Love number type contact contributions and absorptive effects respectively. In 3.4 the necessary box diagrams are constructed using unitarity-based methods and the resulting impulses and mass-shifts calculated for the absorption of scalars, photons and gravitons. Finally, in Section 3.5 we generalize the discussion to generic compact bodies including neutron stars, parametrizing the leading-order impulse and mass-shifts in terms of dissipation numbers.
## 2 Classical Black Holes as Open Quantum Systems
### Visible and Invisible Degrees-of-Freedom
We will consider the scattering of a generic compact body \(\phi_{1}\) with a Schwarzschild black hole \(\phi_{2}\) with masses \(m_{1}\) and \(m_{2}\) respectively. In addition to the gravitational field, body-1 may source an electromagnetic field \(A_{\mu}\) with electric charge \(Q_{e}\) and a massless scalar field \(\psi\) with scalar charge \(Q_{s}\). These degrees-of-freedom constitute what we will call the _visible sector_, and are described by an action
\[S_{\rm vis}=\int{\rm d}^{4}x\sqrt{-g}\biggl{[}-\frac{2}{\kappa^ {2}}R-\frac{1}{4}F_{\mu\nu}^{2}+\frac{1}{2}(\nabla_{\mu}\psi)^{2}+|D_{\mu}\phi_ {1}|^{2}-m_{1}^{2}|\phi_{1}|^{2}\] \[+\frac{1}{2}(\nabla_{\mu}\phi_{2})^{2}-\frac{1}{2}m_{2}^{2}\phi_{ 2}^{2}+Q_{s}\psi|\phi_{1}|^{2}\biggr{]}+S_{\rm HD}+S_{\rm GF}, \tag{1}\]
where \(\kappa\coloneqq\sqrt{32\pi G_{\rm N}}\), \(D_{\mu}\coloneqq\nabla_{\mu}+{\rm i}Q_{e}A_{\mu}\) and \(F_{\mu\nu}\coloneqq\nabla_{\mu}A_{\nu}-\nabla_{\nu}A_{\mu}\); \(S_{\rm GF}\) are gauge fixing terms and \(S_{\rm HD}\) denotes non-minimal (tidal Love number) interactions [64; 65]. As discussed in Section 1, at low orders in the PM expansion this action captures the relevant physics of scattering compact bodies at long distances. At some order in \(G_{\rm N}\) (to be determined), the physical effects of horizon absorption become important; physical transitions between the black hole ground state and an excited state become possible and these cannot be described by an action of the form (1).
The Hilbert space in this problem takes a factored form \({\cal H}_{\rm vis}\otimes{\cal H}_{\rm inv}\), corresponding to visible states (ground state black holes and radiation) and invisible states (excited state black holes). There are many different approaches to describing the dynamics of this system: we could, for example, explicitly trace over \({\cal H}_{\rm inv}\) and describe the scattering problem as the non-unitary time evolution of a reduced density matrix in \({\cal H}_{\rm vis}\). To retain some of the simplicity of unitary time evolution we instead choose to work on the full Hilbert space, calculating in-in observables where the initial state is in the visible sector. The invisible states, which we will collectively denote as \(X\), appear only as internal states to be summed over.
Our formalism for incorporating these invisible sector states is modelled on the world-line formalism described in [74; 75; 76; 77; 66]. The complete action for the system consists of the visible sector (1), the unknown (possibly strongly coupled) self-interactions of the invisible \(X\)-states denoted \(S_{\rm inv}\) and _portal_ couplings between the visible and invisible sectors. For the latter, the \(X\)-states are encoded in the form of abstract, composite, local operators \({\cal O}_{i}(x)\), for which we assume the usual properties of Poincaré invariance, gauge/diffeomorphism invariance and locality. Without loss of generality we exclude quadratic portal couplings e.g. \(\phi_{2}{\cal O}\); such interactions can always be removed by redefining the visible sector field basis. To leading order in \(G_{\rm N}\) the portal couplings are given by
\[S_{\rm portal}=\int{\rm d}^{4}x\sqrt{-g}[\kappa\phi_{2}\psi{\cal O}_{0}+\kappa \phi_{2}F_{\mu\nu}{\cal O}_{1}^{\mu\nu}+\phi_{2}C_{\mu\nu\rho\sigma}{\cal O}_{ 2}^{\mu\nu\rho\sigma}+\cdots], \tag{2}\]
where \(C_{\mu\nu\rho\sigma}\) is the Weyl tensor. The idea is then to calculate observables by evaluating the path integral for the visible sector, treating the portal couplings perturbatively in \(G_{\rm N}\) and
leaving the invisible sector path integral unevaluated. As an illustrative example, consider the calculation of the off-shell Green's function
\[\langle\phi_{2}(x_{1})\psi(x_{2})\phi_{2}(x_{1}^{\prime})\psi(x_{2}^{\prime}) \rangle=\int[\mathcal{D}\phi_{2}][\mathcal{D}\psi][\mathcal{D}X]\phi_{2}(x_{1}) \psi(x_{2})\phi_{2}(x_{1}^{\prime})\psi(x_{2}^{\prime})e^{\mathrm{i}S_{\rm vis }+\mathrm{i}S_{\rm inv}+\mathrm{i}S_{\rm portal}}. \tag{3}\]
Diagrammatically the leading connected contribution from the \(X\)-states takes the form:
\[[\text{diagram: the external legs }x_{1},x_{2}\text{ and }x_{1}^{\prime},x_{2}^{\prime}\text{ attach to two portal vertices joined by an internal doubled line}]\qquad+\qquad(x_{2}\ \leftrightarrow\ x_{2}^{\prime}),\]
where the internal doubled line corresponds to the (non-perturbative) invisible sector 2-point function
\[\langle\mathcal{O}_{0}(x)\mathcal{O}_{0}(0)\rangle_{X}\coloneqq\int[\mathcal{ D}X]\mathcal{O}_{0}(x)\mathcal{O}_{0}(0)e^{\mathrm{i}S_{\rm inv}}. \tag{4}\]
It is natural to rewrite this in _Kallen-Lehmann_ (KL) form
\[\langle\mathcal{O}_{0}(x)\mathcal{O}_{0}(0)\rangle_{X}=\int\hat{\mathrm{d}}^ {D}k\;e^{\mathrm{i}k\cdot x}\int_{m_{2}^{2}}^{\infty}\mathrm{d}\mu^{2}\frac{ \mathrm{i}\rho_{0}(\mu^{2})}{k^{2}-\mu^{2}+\mathrm{i}0}, \tag{5}\]
where the unknown invisible sector physics is contained in the _spectral density_\(\rho_{0}(\mu^{2})\). In this form, momentum space calculations involving internal \(X\)-states can be performed using a small modification of standard Feynman rules, summarized in Appendix D, incorporating weighted integration over the spectral parameter \(\mu^{2}\).
### UV Matching: Absorption Cross Section
To determine the spectral density we match an _absorption cross section_ calculated in both the UV (full General Relativity) and the IR (the point-particle effective description). To leading order in \(\omega\to 0\), where \(\omega\) is the energy of the absorbed radiation, the necessary Schwarzschild black hole absorption cross sections were calculated long ago [69, 70]
\[\sigma_{\rm abs}(\omega)\sim\begin{cases}16\pi G_{\rm N}^{2}m_{2}^{2}&\text{ for scalars},\\ \frac{64\pi}{3}G_{\rm N}^{4}m_{2}^{4}\omega^{2}&\text{ for photons},\\ \frac{256\pi}{45}G_{\rm N}^{6}m_{2}^{6}\omega^{4}&\text{ for gravitons}.\end{cases} \tag{6}\]
For the photon and graviton cases the above cross sections correspond to an incoming polarized state with helicity \(\pm 1\) and \(\pm 2\) respectively; the explicit expression is independent of the polarization state. In the point-particle effective description, the corresponding absorption cross sections are calculated perturbatively; the general procedure is illustrated first for the absorption of a massless scalar, and then the particular complications of photon and graviton absorption are described separately.
#### Scalar Absorption Cross Section
For scalar absorption we first calculate the Compton amplitude
\[\mathcal{M}(\phi_{2}(-p_{2})\psi(-k_{1})\to\phi_{2}(p_{3})\psi(k_{4})), \tag{7}\]
where the external momenta are labelled in the all-outgoing convention. Similar to the off-shell Green's function (3), the contribution of the visible sector states is calculated perturbatively, while the invisible \(X\)-states contribute through the KL "propagator" (5). In the forward limit kinematics
\[-p_{2}^{\mu}=p_{3}^{\mu}=(m_{2},0,0,0),\hskip 28.452756pt-k_{1}^{\mu}=k_{4}^{ \mu}=(\omega,0,0,\omega), \tag{8}\]
the optical theorem relates the imaginary part of the Compton amplitude to the total cross section
\[\sigma_{\rm total}(\omega)=\frac{\mathrm{Im}\mathcal{M}^{\rm forward}(\omega )}{2m_{2}\omega}. \tag{9}\]
At leading order the absorptive part of the cross section can be disentangled from the elastic part by considering the contribution of the diagram
in the forward kinematics (8) this corresponds to
\[\mathcal{M}^{(\psi)}\bigg{|}_{\rm forward}=-\kappa^{2}\int_{m_{2}^{2}}^{ \infty}\mathrm{d}\mu^{2}\frac{\rho_{0}(\mu^{2})}{m_{2}^{2}+2m_{2}\omega-\mu^{ 2}+\mathrm{i}0}. \tag{10}\]
Using the distributional identity
\[\mathrm{Im}\bigg{(}\frac{1}{x+\mathrm{i}0}\bigg{)}=-\pi\delta(x), \tag{11}\]
we obtain the imaginary part
\[\mathrm{Im}\mathcal{M}^{(\psi)}\bigg{|}_{\rm forward}=\pi\kappa^{2}\int_{m_{2 }^{2}}^{\infty}\mathrm{d}\mu^{2}\rho_{0}(\mu^{2})\delta\big{(}m_{2}^{2}+2m_{2} \omega-\mu^{2}\big{)}=\pi\kappa^{2}\rho_{0}\big{(}m_{2}^{2}+2m_{2}\omega\big{)}, \tag{12}\]
and therefore the absorption cross section
\[\sigma_{\rm abs}^{(\psi)}(\omega)=\frac{\pi\kappa^{2}\rho_{0}\big{(}m_{2}^{2} +2m_{2}\omega\big{)}}{2m_{2}\omega}. \tag{13}\]
Matching this with the UV cross section (6) gives the near-threshold asymptotic expansion of the spectral density
\[\rho_{0}\big{(}\mu^{2}\big{)}\sim\frac{G_{\rm N}m_{2}^{2}}{2\pi}\big{(}\mu^{2 }-m_{2}^{2}\big{)},\quad\text{ as }\quad\mu^{2}\to m_{2}^{2}. \tag{14}\]
#### Photon Absorption Cross Section
For photon absorption, the composite operator \(\mathcal{O}_{1}^{\mu\nu}\) that appears in (2) transforms non-trivially under the Lorentz group and therefore has a more complicated KL form
\[\langle\mathcal{O}_{1}^{\mu\nu}(x)\mathcal{O}_{1}^{\rho\sigma}(0)\rangle_{X}= \int\hat{\mathrm{d}}^{D}k\;e^{\mathrm{i}k\cdot x}\int_{m_{2}^{2}}^{\infty} \mathrm{d}\mu^{2}\frac{\mathrm{i}\Pi_{1}^{\mu\nu\rho\sigma}\big{(}\mu^{2},k \big{)}}{k^{2}-\mu^{2}+\mathrm{i}0}. \tag{15}\]
In general the projector \(\Pi_{1}\) is constructed from the available Lorentz tensors \(k^{\mu}\), \(\eta^{\mu\nu}\) and \(\epsilon^{\mu\nu\rho\sigma}\) and constrained by the symmetry of the operators. Without loss of generality we can assume that \(\mathcal{O}_{1}^{\mu\nu}=-\mathcal{O}_{1}^{\nu\mu}\) and we will impose that the 2-point function is parity invariant. Together with the exchange symmetry
\[\Pi_{1}^{\mu\nu\rho\sigma}\big{(}\mu^{2},k\big{)}=\Pi_{1}^{\rho\sigma\mu\nu} \big{(}\mu^{2},k\big{)}, \tag{16}\]
we find there are two tensor structures compatible with these assumptions
\[\Pi_{1}^{\mu\nu\rho\sigma}\big{(}\mu^{2},k\big{)} =\rho_{1}^{(1)}\big{(}\mu^{2}\big{)}[\eta^{\mu\rho}\eta^{\nu\sigma }-\eta^{\mu\sigma}\eta^{\nu\rho}]\] \[\quad+\rho_{1}^{(2)}\big{(}\mu^{2}\big{)}\bigg{[}\frac{k^{\nu}k^ {\rho}\eta^{\mu\sigma}}{k^{2}}+\frac{k^{\mu}k^{\sigma}\eta^{\nu\rho}}{k^{2}}- \frac{k^{\mu}k^{\rho}\eta^{\nu\sigma}}{k^{2}}-\frac{k^{\nu}k^{\sigma}\eta^{\mu \rho}}{k^{2}}\bigg{]}. \tag{17}\]
As discussed further in Section 3.5 for a generic macroscopic body capable of absorbing electromagnetic radiation (e.g. a neutron star or a dielectric sphere with complex permittivity) there are two independent spectral functions that must be matched to the UV description. As described in [85; 87] the independent contributions can be obtained by separately matching contributions to the absorption cross section from incoming spinning partial waves. For Schwarzschild black holes in \(d=4\) this is actually unnecessary due to the presence of a hidden symmetry, the _electromagnetic self-duality_ of the Einstein-Maxwell equations [97]. Since this is a symmetry of the UV model we impose that it is also a symmetry of the effective point-particle description. Imposing duality invariance of the 2-point function
\[\Pi_{1}^{\mu\nu\rho\sigma}\big{(}\mu^{2},k\big{)}=\frac{1}{4}\epsilon^{\mu \nu}{}_{\alpha\beta}\epsilon^{\rho\sigma}{}_{\gamma\delta}\Pi_{1}^{\alpha \beta\gamma\delta}\big{(}\mu^{2},k\big{)}, \tag{18}\]
gives a relation between the spectral functions \(\rho_{1}^{(1)}\) and \(\rho_{1}^{(2)}\). The unique duality invariant tensor structure is found to be
\[\Pi_{1}^{\mu\nu\rho\sigma}\big{(}\mu^{2},k\big{)}=\rho_{1}\big{(}\mu^{2}\big{)} \ \hat{\Pi}_{1}^{\mu\nu\rho\sigma}(k), \tag{19}\]
where
\[\hat{\Pi}_{1}^{\mu\nu\rho\sigma}(k)\coloneqq\frac{1}{2}(\eta^{\mu\rho}\eta^{ \nu\sigma}-\eta^{\mu\sigma}\eta^{\nu\rho})+\frac{1}{k^{2}}(k^{\nu}k^{\rho}\eta ^{\mu\sigma}+k^{\mu}k^{\sigma}\eta^{\nu\rho}-k^{\mu}k^{\rho}\eta^{\nu\sigma}- k^{\nu}k^{\sigma}\eta^{\mu\rho}). \tag{20}\]
In the context of modern unitarity-based methods [48; 49; 50; 51; 52], the natural object to consider is not the off-shell 2-point function, but rather the on-shell Compton amplitude
\[\mathcal{M}_{+-}^{(\gamma)} =-\frac{4\kappa^{2}}{m_{2}^{2}}[1|p_{2}|4\rangle^{2}\int_{m_{2}^{2}}^{\infty}\mathrm{d}\mu^{2}\frac{\rho_{1}\big{(}\mu^{2}\big{)}}{(k_{1}+p_{2})^{2}-\mu^{2}+\mathrm{i}0}\quad+\quad(2\leftrightarrow 3),\] \[\mathcal{M}_{++}^{(\gamma)} =\mathcal{M}_{--}^{(\gamma)}=0. \tag{21}\]
From this on-shell perspective, we see that the somewhat abstruse electromagnetic self-duality constraint (18) manifests as the vanishing of helicity violating Compton amplitudes [98; 99; 100; 101; 102]. We then proceed as in the scalar case above, calculating the imaginary part of the forward Compton amplitude leading to the absorption cross section
\[\sigma_{\mathrm{abs}}^{(\gamma)}(\omega)=\frac{2\pi\kappa^{2}\omega}{m_{2}} \rho_{1}\big{(}m_{2}^{2}+2m_{2}\omega\big{)}. \tag{22}\]
Matching this with the UV cross section (6) gives the near-threshold spectral density
\[\rho_{1}\big{(}\mu^{2}\big{)}\sim\frac{G_{\mathrm{N}}^{3}m_{2}^{4}}{6\pi} \big{(}\mu^{2}-m_{2}^{2}\big{)},\quad\text{ as }\quad\mu^{2}\to m_{2}^{2}. \tag{23}\]
#### Graviton Absorption Cross Section
For graviton absorption, the analysis of the relevant 2-point function
\[\langle\mathcal{O}_{2}^{\mu_{1}\mu_{2}\mu_{3}\mu_{4}}(x)\mathcal{O}_{2}^{\nu_ {1}\nu_{2}\nu_{3}\nu_{4}}(0)\rangle_{X}=\int\hat{\mathrm{d}}^{D}k\ e^{\mathrm{i}k \cdot x}\int_{m_{2}^{2}}^{\infty}\mathrm{d}\mu^{2}\frac{\mathrm{i}\Pi_{2}^{ \mu_{1}\mu_{2}\mu_{3}\mu_{4}\nu_{1}\nu_{2}\nu_{3}\nu_{4}}(\mu^{2},k)}{k^{2}- \mu^{2}+\mathrm{i}0}. \tag{24}\]
is somewhat more involved due to the proliferation of Lorentz indices. Without loss of generality we can assume that this operator has the same symmetry and tracelessness properties as the Weyl tensor
\[\mathcal{O}_{2}^{\mu\nu\rho\sigma}=-\mathcal{O}_{2}^{\nu\mu\rho\sigma},\quad \mathcal{O}_{2}^{\mu\nu\rho\sigma}=\mathcal{O}_{2}^{\rho\sigma\mu\nu},\quad \mathcal{O}_{2}^{\mu\nu\rho\sigma}+\mathcal{O}_{2}^{\mu\rho\sigma\nu}+ \mathcal{O}_{2}^{\mu\sigma\nu\rho}=0,\quad\eta_{\mu\rho}\mathcal{O}_{2}^{\mu \nu\rho\sigma}=0. \tag{25}\]
In addition, as for photon absorption, the 2-point function is further constrained by a hidden _gravitational self-duality_ symmetry. The symmetry in this case is the duality invariance of the _linearized_ Einstein equations [103] on a Petrov type-D background [74]. The implications of this symmetry for black hole perturbation theory, in particular the quasi-normal mode isospectrality of axial and polar perturbations of Schwarzschild black holes was first pointed out by Chandrasekhar [104]. Following [66] we impose self-duality as a constraint on the 2-point function in the form
\[\Pi_{2}^{\mu_{1}\mu_{2}\mu_{3}\mu_{4}\nu_{1}\nu_{2}\nu_{3}\nu_{4}}\big{(}\mu^{ 2},k\big{)}=\frac{1}{4}\epsilon^{\mu_{1}\mu_{2}}{}_{\alpha_{1}\alpha_{2}} \epsilon^{\nu_{1}\nu_{2}}{}_{\beta_{1}\beta_{2}}\Pi_{2}^{\alpha_{1}\alpha_{2} \mu_{3}\mu_{4}\beta_{1}\beta_{2}\nu_{3}\nu_{4}}\big{(}\mu^{2},k\big{)}. \tag{26}\]
We find that there is a unique solution to the combined constraints (25) and (26); the somewhat complicated expression is given in (15) and (16). As for the case of photon absorption above, the natural on-shell object to consider is the graviton Compton amplitude
\[{\cal M}^{(h)}_{+-} =-\frac{64\kappa^{2}}{m_{2}^{4}}[1|p_{2}|4\rangle^{4}\int_{m_{2}^{2}}^ {\infty}{\rm d}\mu^{2}\frac{\rho_{2}\big{(}\mu^{2}\big{)}}{(k_{1}+p_{2})^{2}-\mu ^{2}+{\rm i}0}\quad+\quad(2\leftrightarrow 3),\] \[{\cal M}^{(h)}_{++} ={\cal M}^{(h)}_{--}=0, \tag{27}\]
where the vanishing of the helicity violating amplitudes is again the on-shell manifestation of self-duality. From this expression we calculate the absorption cross section
\[\sigma^{(h)}_{\rm abs}(\omega)=\frac{32\pi\kappa^{2}\omega^{3}}{m_{2}}\rho_{2}\big{(}m_{2}^{2}+2m_{2}\omega\big{)}, \tag{28}\]
matching this with the UV cross section (6) gives the near-threshold spectral density
\[\rho_{2}\big{(}\mu^{2}\big{)}\sim\frac{G_{\rm N}^{5}m_{2}^{6}}{360\pi}\big{(} \mu^{2}-m_{2}^{2}\big{)},\quad\text{ as }\quad\mu^{2}\to m_{2}^{2}. \tag{29}\]
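As a quick cross-check of the matching conditions above, the short script below verifies symbolically that the near-threshold spectral densities (14), (23) and (29), inserted into the effective-theory cross sections (13), (22) and (28), reproduce the low-frequency absorption cross sections (6); this is only a consistency check, not part of the derivation.

```python
# Symbolic consistency check of the UV matching with sympy.
import sympy as sp

G, m, w, mu2 = sp.symbols("G_N m_2 omega mu2", positive=True)
kappa2 = 32 * sp.pi * G          # kappa^2 = 32 pi G_N
arg = m**2 + 2 * m * w           # spectral argument mu^2 = m_2^2 + 2 m_2 omega

rho0 = G * m**2 * (mu2 - m**2) / (2 * sp.pi)        # eq. (14)
rho1 = G**3 * m**4 * (mu2 - m**2) / (6 * sp.pi)     # eq. (23)
rho2 = G**5 * m**6 * (mu2 - m**2) / (360 * sp.pi)   # eq. (29)

sigma_scalar = sp.pi * kappa2 * rho0.subs(mu2, arg) / (2 * m * w)       # eq. (13)
sigma_photon = 2 * sp.pi * kappa2 * w * rho1.subs(mu2, arg) / m         # eq. (22)
sigma_graviton = 32 * sp.pi * kappa2 * w**3 * rho2.subs(mu2, arg) / m   # eq. (28)

assert sp.simplify(sigma_scalar - 16 * sp.pi * G**2 * m**2) == 0
assert sp.simplify(sigma_photon - sp.Rational(64, 3) * sp.pi * G**4 * m**4 * w**2) == 0
assert sp.simplify(sigma_graviton - sp.Rational(256, 45) * sp.pi * G**6 * m**6 * w**4) == 0
print("Low-frequency cross sections (6) reproduced for scalars, photons and gravitons.")
```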
## 3 Absorptive Impulse
### In-In Scattering Observables and KMOC
Our goal is now, in the spirit of EFT, to recycle the information contained in the near-threshold expanded spectral functions (14), (23), (29) to calculate distinct low-energy observables. Specifically, we calculate the leading contribution to the impulse on a Schwarzschild black hole scattering with a second compact body sourcing a scalar, electromagnetic or gravitational field. In this paper we approach this problem from an on-shell scattering amplitudes perspective, where the natural in-in formalism was formulated by KMOC [95].
We begin with an executive summary of the KMOC formalism, see [95; 105] for a more comprehensive description. In a general quantum mechanical system admitting an S-matrix describing time evolution from \(t=-\infty\) to \(t=+\infty\), the asymptotic change in the expectation value of an observable \(\mathbb{O}\), initially in some state \(|{\rm in}\rangle\), is given by the formal expression
\[\Delta O=\langle{\rm in}|S^{\dagger}\mathbb{O}S|{\rm in}\rangle-\langle{\rm in }|\mathbb{O}|{\rm in}\rangle. \tag{30}\]
In the present context, \(S\) corresponds to the S-matrix on the _complete_ Hilbert space \({\cal H}_{\rm vis}\otimes{\cal H}_{\rm inv}\) and is therefore unitary: \(S^{\dagger}S=1\). By making the standard definition \(S\coloneqq 1+{\rm i}T\) and using the unitarity relation
\[T^{\dagger}=T-{\rm i}T^{\dagger}T, \tag{31}\]
the above can be rewritten as
\[\Delta O={\rm i}\langle{\rm in}|[\mathbb{O},T]|{\rm in}\rangle+\langle{\rm in }|T^{\dagger}[\mathbb{O},T]|{\rm in}\rangle, \tag{32}\]
where the two terms on the right-hand-side are sometimes referred to as the _virtual_ and _real_ contributions respectively. The different representations of the KMOC formula have complementary virtues: (30) manifests the fact that if \(\mathbb{O}\) is Hermitian then the change in the expectation value is real-valued, while (32) manifests the fact that if \(\mathbb{O}\) is a generator of a symmetry and commutes with \(T\) then the expectation value is time independent.
We will now apply this formalism to calculate the asymptotic _impulse_\(\Delta p_{2}^{\mu}\) of a black hole undergoing 2-to-2 scattering with a second compact body. The in-state is chosen to be
\[|\text{in}\rangle\coloneqq\int\hat{\mathrm{d}}^{D}p_{1}\hat{\mathrm{d}}^{D}p_{2 }\hat{\delta}^{(+)}\big{(}p_{1}^{2}-m_{1}^{2}\big{)}\hat{\delta}^{(+)}\big{(}p_ {2}^{2}-m_{2}^{2}\big{)}\psi_{1}(p_{1})\psi_{2}(p_{2})e^{-\mathrm{i}p_{1}\cdot b _{1}}e^{-\mathrm{i}p_{2}\cdot b_{2}}|p_{1},p_{2}\rangle, \tag{10}\]
where the wave-packets \(\psi_{i}\), defined explicitly in [95], are chosen to localize particle-\(i\) around the classical free particle trajectory with 4-velocity \(u_{i}^{\mu}\) and impact parameter \(b_{i}^{\mu}\). In the \(\hbar\to 0\) limit, the quantum mechanical uncertainty in position and momentum vanishes and the corresponding asymptotic change in the expectation value of the operator \(\mathbb{P}_{2}^{\mu}\) reduces to the corresponding _classical_ impulse. To extract the classical contributions efficiently, the resulting expressions are expanded before loop integration using the method of regions [106]. In particular we expand to leading non-trivial order in the _soft region_[105] defined by the scaling
\[q^{\mu}\sim\ell^{\mu}\ll u_{i}^{\mu}\sim m_{i}. \tag{11}\]
After simplifying [95], the KMOC formula for the classical impulse takes the form
\[\Delta p_{2}^{\mu}=\frac{1}{4m_{1}m_{2}}\int\hat{\mathrm{d}}^{D}q\;\hat{ \delta}(u_{1}\cdot q)\hat{\delta}(u_{2}\cdot q)e^{-\mathrm{i}q\cdot b}[ \mathcal{I}_{v}^{\mu}+\mathcal{I}_{r}^{\mu}], \tag{12}\]
where \(b^{\mu}\coloneqq b_{1}^{\mu}-b_{2}^{\mu}\) is the relative impact parameter, without loss of generality defined to satisfy \(u_{i}\cdot b=0\). The functions \(\mathcal{I}_{v,r}^{\mu}\) are referred to as the virtual/real KMOC _kernels_, corresponding to the respective terms in (10). The virtual kernel is straightforwardly related to an elastic scattering amplitude
\[\mathcal{I}_{v}^{\mu}=\mathrm{i}q^{\mu}\mathcal{M}(\phi_{1}(m_{1}u_{1})\phi_{ 2}(m_{2}u_{2})\to\phi_{1}(m_{1}u_{1}+q)\phi_{2}(m_{2}u_{2}-q)). \tag{13}\]
The real kernel is given by inserting a complete set of states in the second term of (10). Restricting to the black hole ground state
\[\mathds{1}\supset\int\hat{\mathrm{d}}^{D}r_{1}\hat{\mathrm{d}}^{D}r_{2}\;\hat{\delta}^{(+)}\big{(}r_{1}^{2}-m_{1}^{2}\big{)}\hat{\delta}^{(+)}\big{(}r_{2}^{2}-m_{2}^{2}\big{)}|\phi_{1}(r_{1})\phi_{2}(r_{2})\rangle\langle\phi_{1}(r_{1})\phi_{2}(r_{2})|, \tag{14}\]
gives the _conservative_ contribution. In this paper we are interested in the leading absorptive contribution, this corresponds to an insertion of the \(X\)-states
\[\mathds{1}\supset\sum_{i}\int_{m_{2}^{2}}^{\infty}\mathrm{d}\mu^{2}\rho_{i} \big{(}\mu^{2}\big{)}\int\hat{\mathrm{d}}^{D}r_{1}\hat{\mathrm{d}}^{D}r_{2}\; \hat{\delta}^{(+)}\big{(}r_{1}^{2}-m_{1}^{2}\big{)}\hat{\delta}^{(+)}\big{(}r _{2}^{2}-\mu^{2}\big{)}|\phi_{1}(r_{1})X(r_{2})\rangle\langle\phi_{1}(r_{1})X( r_{2})|, \tag{15}\]
where the sum over \(i\) includes all internal quantum numbers of the excited states including spin. The resulting contribution to the real kernel is then given by
\[\mathcal{I}_{r}^{\mu}=\sum_{i}\int_{m_{2}^{2}}^{\infty}\mathrm{d} \mu^{2}\rho_{i}\big{(}\mu^{2}\big{)}\int\hat{\mathrm{d}}^{D}\ell\;\hat{\delta }^{(+)}\big{(}(m_{1}u_{1}-\ell)^{2}-m_{1}^{2}\big{)}\hat{\delta}^{(+)}\big{(}( m_{2}u_{2}+\ell)^{2}-\mu^{2}\big{)} \tag{16}\] \[\times\ell^{\mu}\times\mathcal{M}(\phi_{1}(m_{1}u_{1})\phi_{2}(m _{2}u_{2})\to\phi_{1}(m_{1}u_{1}-\ell)X(m_{2}u_{2}+\ell))\] \[\times\mathcal{M}^{*}(\phi_{1}(m_{1}u_{1}+q)\phi_{2}(m_{2}u_{2} -q)\to\phi_{1}(m_{1}u_{1}-\ell)X(m_{2}u_{2}+\ell)).\]
Since we are interested in the leading-order PM contribution to the absorptive impulse we can make use of the unitarity relation (3.2) to replace the conjugated amplitude \(\mathcal{M}^{*}\) with the unconjugated time-reversed process. This expression can be rewritten in the compact diagrammatic weighted cut notation defined in Appendix C:
\[\mathcal{I}_{r}^{\mu}\quad=\quad[\text{weighted-cut diagram, in the notation of Appendix C, with an }\ell^{\mu}\text{ insertion on the cut}]\,. \tag{3.11}\]
Finally, at leading order there is no mixing between radiation and horizon absorption; the leading absorptive impulse on the small body can then be obtained trivially using conservation of momentum
\[\left(\Delta p_{1}^{\mu}\right)_{\text{abs}}=-\left(\Delta p_{2}^{\mu}\right) _{\text{abs}}. \tag{3.12}\]
### Heavy Modes and Love Numbers
By assumption the state created by \(\phi_{2}\) is a stable ground state, meaning \(\rho\big{(}\mu^{2}<m_{2}^{2}\big{)}=0\). Before proceeding it is therefore useful to define a shifted spectral parameter and spectral density
\[\rho\big{(}\mu^{2}\big{)}\coloneqq\tilde{\rho}(s),\quad\quad\ s\coloneqq \frac{\mu^{2}-m_{2}^{2}}{2m_{2}}, \tag{3.13}\]
where \(\tilde{\rho}(s<0)=0\). In this notation, both the virtual (3.7) and real (3.11) absorptive contributions to the KMOC formula (3.6) require evaluating a spectral integral from \(0<s<\infty\). Naively this is problematic as it would require detailed knowledge of the spectral function \(\tilde{\rho}(s)\) at all energy scales. However, there is clearly an important distinction between those \(X\)-states that can go on-shell during the scattering event and those that cannot. To illustrate the distinction in a physically transparent manner, we introduce a cutoff \(\Lambda\), splitting the spectral integral into two contributions:
\[\begin{array}{lclcl}\text{\em Light modes:}&0<s<\Lambda&\Leftrightarrow& \ s\sim q\\ \text{\em Heavy modes:}&\Lambda<s<\infty&\Leftrightarrow&\ s\gg q,\end{array} \tag{3.14}\]
where \(q\) is the momentum transfer, assumed to be small in the soft expansion (3.5). In the calculation of the virtual kernel (3.7), the contribution to the \(X\)-state propagator from the heavy modes can never be on-shell; expanding to leading order in the soft region
\[\int_{\Lambda}^{\infty}\text{d}s\ \frac{\tilde{\rho}(s)}{2m_{2}(u_{2}\cdot \ell)+\ell^{2}-2m_{2}s+\text{i}0}\approx-\frac{1}{2m_{2}}\int_{\Lambda}^{ \infty}\text{d}s\ \frac{\tilde{\rho}(s)}{s}. \tag{3.15}\]
That is, we find that for the heavy modes the \(X\)-state propagator _pinches_, generating an effective contact diagram shown in figure 1. The contact term can be interpreted as a new
effective operator contribution to the visible sector action (1). For example, for massless scalar absorption the leading-order operator takes the form
\[S_{\rm vis}\supset\int{\rm d}^{4}x\sqrt{-g}\big{[}c\psi^{2}\phi_{2}^{2}+...\big{]},\quad\text{ where }\quad c\sim\int_{\Lambda}^{\infty}{\rm d}s\ \frac{\tilde{\rho}(s)}{s}. \tag{23}\]
The Wilson coefficients of such higher-dimension operators are, in this context, usually referred to as _tidal Love numbers_[64, 65, 107]. Clearly, we should never have included the heavy modes to begin with; they are always off-shell and so can be consistently integrated out of the point-particle effective description. We will therefore assume that this has been done and that the associated Love number contributions to observables are calculated as part of the conservative contribution [64, 65].
By contrast, the light modes (gapless modes in the language of Section 1) cannot be integrated out since they can go on-shell during the scattering event. To calculate their contribution to the impulse we only need the spectral function in a small neighbourhood of the threshold value \(s\to 0\), and as demonstrated in Section 2.2, this information can obtained by matching the absorption cross section expanded around \(\omega\to 0\).
In practice the crude (though physically transparent) cutoff \(\Lambda\) can be replaced by an analytic regularization of the spectral integral for the light modes
\[\int_{0}^{\Lambda}{\rm d}s\ \tilde{\rho}(s)\quad\Rightarrow\quad\int_{0}^{ \infty}{\rm d}s\ s^{\alpha}\tilde{\rho}(s). \tag{24}\]
In this perspective the spectral integral should be taken together with the dimensionally regularized loop integral as the object to be expanded using the method of regions. The soft region is then defined by
\[s\sim q^{\mu}\sim\ell^{\mu}\ll u_{i}^{\mu}\sim m_{i}, \tag{25}\]
and therefore only the leading-order term in the Taylor expansion of \(\tilde{\rho}(s)\) around \(s=0\) contributes to leading-order in the soft expansion. By analogy with dimensional regularization, after expansion in the soft region the analytically regularized \(s\)-integration domain is extended to cover the entire range \(0<s<\infty\), evaluated for convergent values of \(\alpha\) and then analytically continued to \(\alpha=0\). This form of regularization has many familiar advantages, scaleless \(s\)-integrals evaluate to zero, power-law UV divergences are absent and logarithmic UV divergences show up as finite order poles at \(\alpha=0\).
Figure 1: Effective contact vertex obtained from the heavy modes in the KL representation.
### 3.3 Light Modes and Absorption
To simplify the calculation of the absorptive impulse, it is useful to first decompose the kernel into scalar contributions
\[\mathcal{I}^{\mu}=q^{\mu}\mathcal{I}_{q}+\tilde{u}_{1}^{\mu}\mathcal{I}_{\tilde{u }_{1}}+\tilde{u}_{2}^{\mu}\mathcal{I}_{\tilde{u}_{2}}, \tag{3.19}\]
where our kinematic conventions and notation are defined in Appendix A. To calculate these contributions to the real kernel we decompose the loop momentum insertion in (3.11) as1
Footnote 1: The component perpendicular to the scattering plane integrates to zero.
\[\ell^{\mu}\rightarrow\bigg{(}\frac{\ell\cdot q}{q^{2}}\bigg{)}q^{\mu}+(u_{1} \cdot\ell)\tilde{u}_{1}^{\mu}+(u_{2}\cdot\ell)\tilde{u}_{2}^{\mu}. \tag{3.20}\]
Because of the cut condition for particle-1, \(u_{1}\cdot\ell=0\), we trivially find that
\[\mathcal{I}_{r,\tilde{u}_{1}}=0. \tag{3.21}\]
The remaining non-vanishing contributions \(\mathcal{I}_{r,q}\) and \(\mathcal{I}_{r,\tilde{u}_{2}}\) will be denoted _transverse_ and _longitudinal_ respectively. The transverse contribution can be simplified using
\[\ell\cdot q=\frac{1}{2}(\ell+q)^{2}-\frac{1}{2}\ell^{2}-\frac{1}{2}q^{2}, \tag{3.22}\]
the first two terms pinch mediator propagators and therefore produce vanishing scaleless integrals. For the longitudinal contribution we use the cut condition for particle-2, \(u_{2}\cdot\ell=s\), with the shifted spectral parameter \(s\) defined in (3.13). Altogether we find the effective decomposition
\[\ell^{\mu}\rightarrow\frac{1}{2}q^{\mu}+s\tilde{u}_{2}^{\mu}, \tag{3.23}\]
or diagrammatically:
[The diagrammatic representations of \(\mathcal{I}_{r,q}\) and \(\mathcal{I}_{r,\tilde{u}_{2}}\) as cut diagrams are not recoverable from the source.]
We can re-express these in a compact form by defining a _partial_ amplitude \(\mathcal{M}_{4}\big{(}s,q^{2}\big{)}\) corresponding to the contributions of \(X\)-states with fixed invariant mass to the elastic amplitude \(\mathcal{M}\big{(}q^{2}\big{)}\); by definition
\[\mathcal{M}_{4}\big{(}q^{2}\big{)}=\int_{m_{2}^{2}}^{\infty}\mathrm{d}\mu^{2}\; \rho\big{(}\mu^{2}\big{)}\mathcal{M}_{4}\big{(}\mu^{2},q^{2}\big{)}=2m_{2}\int_ {0}^{\infty}\mathrm{d}s\;\tilde{\rho}(s)\mathcal{M}_{4}\big{(}s,q^{2}\big{)}. \tag{3.26}\]
The transverse and longitudinal contributions to the real kernel then take the form
\[\mathcal{I}_{r,q}=\mathrm{Im}\mathcal{M}_{4}\big{(}q^{2}\big{)},\qquad \quad\mathcal{I}_{r,\hat{u}_{2}}=4m_{2}\int_{0}^{\infty}\mathrm{d}s\;s\;\tilde {\rho}(s)\;\mathrm{Im}\mathcal{M}_{4}\big{(}s,q^{2}\big{)}. \tag{3.27}\]
The amplitudes appearing in these expressions are assumed to be expanded to leading order in the generalized soft limit (3.18). In the corresponding one-loop contribution to the conservative impulse, the leading soft contribution is _super-classical_ and cancels when the real and virtual kernels are summed [95]. In the absorptive case however, the leading soft contribution is the classical contribution due to an \(\hbar/\hbar\) cancellation; the super-classical scaling of the box diagram is compensated by the "quantum" scaling of the \(X\)-state effective propagator.
Next we consider the virtual kernel (3.7), which has only a transverse contribution. Decomposing this into real and imaginary parts
\[\mathcal{I}_{v,q}=-\mathrm{Im}\mathcal{M}_{4}\big{(}q^{2}\big{)}+\mathrm{i} \,\mathrm{Re}\mathcal{M}_{4}\big{(}q^{2}\big{)}, \tag{3.28}\]
we find that the transverse part of the real kernel is exactly cancelled by a corresponding contribution from the virtual kernel2.
Footnote 2: Each of these pieces individually produce _imaginary_ contributions to the impulse after Fourier transforming; this cancellation is therefore a consequence of the non-manifest reality of the observable.
Combining (3.27) and (3.28) we find that the transverse impulse (proportional to \(b^{\mu}\)) receives contributions only from the real part of the elastic amplitude. In general this takes the form
\[\mathcal{M}_{4}\big{(}q^{2}\big{)}=\int\hat{\mathrm{d}}^{D}\ell\;\mathcal{N} \big{[}(u_{2}\cdot\ell),q^{2},y\big{]}\frac{\hat{\delta}(u_{1}\cdot\ell)}{ \ell^{2}(\ell+q)^{2}}\int_{0}^{\infty}\mathrm{d}s\;\frac{s^{1+\alpha}}{u_{2} \cdot\ell-s+\mathrm{i}0}, \tag{3.29}\]
where \(\mathcal{N}\) is a model dependent polynomial and as above we have dropped scaleless contributions that pinch mediator propagators. The cut propagator for particle-1 arises from interference between the box and crossed-box diagrams in the soft limit, which are shown in Figure 2, using the distributional identity
\[\frac{1}{u_{1}\cdot\ell-\mathrm{i}0}-\frac{1}{u_{1}\cdot\ell+\mathrm{i}0}= \mathrm{i}\hat{\delta}(u_{1}\cdot\ell). \tag{3.30}\]
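As a reminder (our addition, not part of the original argument), this identity follows from the Sokhotski–Plemelj decomposition \(1/(x\mp\mathrm{i}0)=\mathrm{PV}(1/x)\pm\mathrm{i}\pi\delta(x)\) together with the convention \(\hat{\delta}(x)=2\pi\delta(x)\) of Appendix A:

\[\frac{1}{u_{1}\cdot\ell-\mathrm{i}0}-\frac{1}{u_{1}\cdot\ell+\mathrm{i}0}=2\pi\mathrm{i}\,\delta(u_{1}\cdot\ell)=\mathrm{i}\hat{\delta}(u_{1}\cdot\ell).\]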
Importantly, since we have expanded to leading-order in the soft limit (3.18) the \(\mathcal{N}\) polynomial is homogeneous
\[\mathcal{N}\big{[}\lambda(u_{2}\cdot\ell),\lambda^{2}q^{2},y\big{]}=\lambda^{ k}\mathcal{N}\big{[}(u_{2}\cdot\ell),q^{2},y\big{]}, \tag{3.31}\]
where \(k=0\) for scalars, \(k=2\) for photons and \(k=4\) for gravitons. Since \(k\) is even in each case, \(\mathcal{N}\) must only contain even powers of \((u_{2}\cdot\ell)\) and is therefore _symmetric_ under \(\ell\to-\ell-q\). This becomes significant once we restrict to the real part of the spectral integral given by the Cauchy principal value (PV); in the analytic regularization described in Section 3.2 we calculate3
Footnote 3: We can alternatively calculate this integral using the cutoff regularization described in Section 3.2. This leads to an additional _power law_ divergent contribution that is even under \(l\to-\ell-q\) and therefore non-vanishing after loop integration. Physically, we interpret this as an ambiguity in the definition of the scale \(\Lambda\) separating the light and heavy modes in the spectral integral. Importantly, since the _logarithmically_ divergent contributions vanish after loop integration, there is no associated classical RG running of the leading (static) Love numbers and therefore no tension with their observed vanishing in \(d=4\) and recently discovered associated symmetries [108, 109, 110, 111, 112, 113].
\[\text{PV}\int_{0}^{\infty}\text{d}s\ \frac{s^{1+\alpha}}{u_{2}\cdot\ell-s}= \frac{u_{2}\cdot\ell}{\alpha}+(u_{2}\cdot\ell)\log\lvert u_{2}\cdot\ell\rvert+ \mathcal{O}(\alpha). \tag{3.32}\]
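A short way to reproduce this expansion (our sketch, written for \(u_{2}\cdot\ell>0\); the absolute value in the result restores the general case) is to rescale \(s=(u_{2}\cdot\ell)\,t\) and use the standard principal-value integral \(\mathrm{PV}\int_{0}^{\infty}\mathrm{d}t\ t^{\mu-1}/(1-t)=\pi\cot(\pi\mu)\), analytically continued to \(\mu=2+\alpha\) and using \(\cot(\pi(2+\alpha))=\cot(\pi\alpha)\):

\[\text{PV}\int_{0}^{\infty}\text{d}s\ \frac{s^{1+\alpha}}{u_{2}\cdot\ell-s}=(u_{2}\cdot\ell)^{1+\alpha}\,\pi\cot(\pi\alpha)=(u_{2}\cdot\ell)^{1+\alpha}\Big{[}\frac{1}{\alpha}+\mathcal{O}(\alpha)\Big{]}=\frac{u_{2}\cdot\ell}{\alpha}+(u_{2}\cdot\ell)\log(u_{2}\cdot\ell)+\mathcal{O}(\alpha).\]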
We therefore find that, for the contribution of the light modes, the real part of the integrand (3.29) is odd under the change of variables \(\ell\to-\ell-q\) and integrates to zero. Note that, as discussed in Section 3.2, the contributions of the heavy modes pinch the propagator (3.15), giving an expression that is even under this change of variables and therefore a non-vanishing real part.
Altogether we conclude that, to leading non-trivial PM order, the absorptive contribution to the impulse is purely longitudinal, while the conservative contribution (including Love number operators) is purely transverse. The leading absorptive impulse is therefore completely determined by the _mass-shift_, the change in the black hole rest mass during scattering, explicitly
\[\left(\Delta p_{2}^{\mu}\right)_{\text{abs}}=(\Delta m_{2})\tilde{u}_{2}^{\mu}, \tag{3.33}\]
where
\[\Delta m_{2}=\frac{1}{m_{1}}\int\hat{\text{d}}^{D}q\;\hat{\delta}(u_{1}\cdot q )\hat{\delta}(u_{2}\cdot q)e^{-\text{i}q\cdot b}\int_{0}^{\infty}\text{d}s\;s \;\tilde{\rho}(s)\;\text{Im}\mathcal{M}_{4}\big{(}s,q^{2}\big{)}. \tag{3.34}\]
In [76] it was noted that, since for Schwarzschild black holes the area of the event horizon is \(A=16\pi G_{\text{N}}^{2}m_{2}^{2}\), a decrease in mass during scattering would imply a violation of the
Figure 2: The box and the crossed-box diagram are the two Feynman diagrams necessary for the computation of the leading order mass-shift. Wiggly lines represent massless force carriers: scalars, photons or gravitons.
Hawking area theorem [114]. An appealing feature of the above simple formula is that it manifests the positivity of the mass-shift. Unitarity relates the imaginary part of the partial amplitude to the strictly positive absorptive cross section where the final \(X\)-state has fixed invariant mass. Together with the fact that the spectral density is non-negative and, crucially, vanishes for \(\mu^{2}<m_{2}^{2}\), positivity of the mass-shift is a trivial corollary.
### 3.4 Calculation of the Mass-Shift
In the explicit cases of scalar, photon and graviton absorption, the mass-shift can now be straightforwardly calculated from the imaginary parts of the relevant box diagrams depicted in Figure 2. These can be obtained without difficulty either using the Feynman rules in Appendix D or, as discussed in more detail in Section 3.5, by directly sewing together the triangle cut using the Compton amplitudes calculated in Section 2.2. There are then three remaining integrals to calculate (\(\int\mathrm{d}s\), \(\int\mathrm{d}^{D}\ell\) and \(\int\mathrm{d}^{D}q\)); the necessary master integrals are collected in Appendix E. The final results for the mass-shifts in each case are summarized in the following paragraphs.
#### Scalar Absorption
The imaginary part of the box diagram for scalar absorption
\[\mathrm{Im}\mathcal{M}_{4}^{(\psi)}\big{(}s,q^{2}\big{)}=\frac{4\pi G_{\rm N} Q_{s}^{2}}{m_{1}m_{2}}\int\hat{\mathrm{d}}^{D}\ell\frac{\hat{\delta}(u_{1} \cdot\ell)\hat{\delta}(u_{2}\cdot\ell-s)}{\ell^{2}(\ell+q)^{2}}, \tag{3.35}\]
together with the near-threshold spectral density (2.14) gives the corresponding mass-shift
\[\boxed{(\Delta m_{2})^{(\psi)}=\frac{G_{\rm N}^{2}Q_{s}^{2}m_{2}^{2}}{32m_{1}^ {2}|b|^{3}}\,\sqrt{y^{2}-1}.} \tag{3.36}\]
#### Photon Absorption
The imaginary part of the box diagram for photon absorption
\[\mathrm{Im}\mathcal{M}_{4}^{(\gamma)}\big{(}s,q^{2}\big{)}=-\frac{16\pi G_{\rm N }Q_{e}^{2}m_{1}}{m_{2}}\int\hat{\mathrm{d}}^{D}\ell\big{[}4s^{2}+\big{(}2y^{2} -1\big{)}|q|^{2}\big{]}\frac{\hat{\delta}(u_{1}\cdot\ell)\hat{\delta}(u_{2} \cdot\ell-s)}{\ell^{2}(\ell+q)^{2}}, \tag{3.37}\]
together with the near-threshold spectral density (2.23) gives the corresponding mass-shift
\[\boxed{(\Delta m_{2})^{(\gamma)}=\frac{3G_{\rm N}^{4}Q_{e}^{2}m_{2}^{4}}{32|b| ^{5}}\big{(}5y^{2}-1\big{)}\sqrt{y^{2}-1}.} \tag{3.38}\]
#### Graviton Absorption
The imaginary part of the box diagram for graviton absorption
\[\mathrm{Im}\mathcal{M}_{4}^{(h)}\big{(}s,q^{2}\big{)}=\frac{256 \pi^{2}G_{\rm N}^{2}m_{1}^{3}}{m_{2}}\int\hat{\mathrm{d}}^{D}\ell\big{[}16s^{ 4}+8\big{(}4y^{2}-1\big{)}s^{2}|q|^{2}+\big{(}8y^{4}-8y^{2}+1\big{)}|q|^{4} \big{]}\] \[\times\,\frac{\hat{\delta}(u_{1}\cdot\ell)\hat{\delta}(u_{2}\cdot \ell-s)}{\ell^{2}(\ell+q)^{2}}, \tag{3.39}\]
together with the near-threshold spectral density (2.29) gives the corresponding mass-shift
\[\boxed{(\Delta m_{2})^{(h)}=\frac{5\pi G_{\rm N}^{7}m_{1}^{2}m_{2}^{6}}{16|b|^{7}}\big{(}21y^{4}-14y^{2}+1\big{)}\sqrt{y^{2}-1}.} \tag{3.40}\]
### 3.5 Neutron Stars and Dissipation Numbers
The results of the previous section for Schwarzschild black hole absorption were obtained after explicitly matching with a UV cross section and exploiting the additional simplification of duality invariance as explained in Section 2.2. For generic compact bodies, such as neutron stars, this UV information may not be available. In such a case this formalism is still predictive: since the leading-order absorption depends only on the near-threshold Taylor series coefficients of the spectral density, we can proceed in the spirit of _bottom-up_ EFT and use these unknown coefficients as a parametrization of our ignorance. As usual, at higher orders finding a non-redundant parametrization is highly non-trivial; on-shell amplitudes based approaches are very well-suited to this problem, directly providing gauge invariant and non-redundant expressions by construction (for a relevant example in the context of Standard Model EFT see e.g. [115]).
As we have seen explicitly, the input required to compute the mass-shift is the absorptive contribution to the 2-to-2 elastic amplitude. Since we are only interested in long-distance effects, this quantity can be determined using standard on-shell unitarity-based methods, by computing the triangle cut of the amplitude depicted in Figure 3.
Absorptive effects are parametrized by the most general allowed form of the Compton amplitude. This task is somewhat complicated by the fact that, due to the unfamiliar \(s\)-integrals, the absorptive amplitudes are non-analytic functions of the external kinematics.
Based on the general discussion in Section 2.2, it is convenient to define a _Compton kernel_\(\mathcal{M}_{c}(s)\) as the integrand of the (low-energy) spectral integral, where this quantity is expanded in the soft limit (3.18). Let us first consider the case of the scalar-force model. The Compton kernel for scattering a scalar off a black hole takes the general form
\[\mathcal{M}^{(\psi)}(s)=-\frac{1}{u_{2}\cdot k_{1}-s+\mathrm{i}0} \Big{[}\kappa^{2}s\,c^{(\psi)}+\kappa^{4}s^{2}c_{2}^{(\psi)}+\kappa^{4}(k_{1} \cdot k_{4})c_{3}^{(\psi)}+\dots\Big{]}+(1\leftrightarrow 4)\,, \tag{3.41}\]
where the ellipsis denotes terms with higher orders in \(G_{\mathrm{N}}\). Notice that terms including \(u_{2}\cdot k_{1}\) and \(u_{2}\cdot k_{4}\) are absent, since \(u_{2}\cdot(k_{1}+k_{4})=-\frac{k_{1}\cdot k_{4}}{m_{2}}\) and pinch terms correspond to vanishing scaleless \(s\)-integrals. The coefficients \(c_{i}^{(\psi)}\) provide an on-shell, gauge invariant,
Figure 3: The Compton cut is the only unitary cut needed in order to compute leading-order classical observables. In the explicit Compton amplitudes given below the mediator momenta are labelled as shown, after sewing we identify \(k_{1}=-l\), \(k_{4}=l+q\).
definition of so-called _dissipation numbers_ [91]. In the following we work to leading order and keep only the term proportional to \(c^{(\psi)}\); higher-order terms may be important for sub-leading absorptive effects.
For gravity and electrodynamics there are two independent Compton amplitudes corresponding to helicity conserving and helicity violating interactions; accordingly we have to parametrize two kernels \(\mathcal{M}_{+\mp}(s)\) independently. For electrodynamics the most general form is
\[\mathcal{M}_{+-}^{(\gamma)}(s)=-\frac{[1|u_{2}|4)^{2}}{u_{2}\cdot k_{1}-s+\mathrm{i}0}\Big{[}\kappa^{2}s\,c_{+-}^{(\gamma)}+\dots\Big{]}+(1\leftrightarrow 4)\,, \tag{3.42}\] \[\mathcal{M}_{++}^{(\gamma)}(s)=-\frac{[14]^{2}}{u_{2}\cdot k_{1}-s+\mathrm{i}0}\Big{[}\kappa^{2}s\,c_{++}^{(\gamma)}+\dots\Big{]}+(1\leftrightarrow 4)\,. \tag{3.43}\]
The spinor prefactors are fixed by the little-group covariance of the amplitudes. Again, the ellipsis denotes higher orders in \(G_{\mathrm{N}}\) and we will focus on the leading-order term. The analysis for the gravitational case is similar
\[\mathcal{M}_{+-}^{(h)}(s)=-\frac{[1|u_{2}|4)^{4}}{u_{2}\cdot k_{1}-s+\mathrm{i}0}\Big{[}\kappa^{2}s\,c_{+-}^{(h)}+\dots\Big{]}+(1\leftrightarrow 4)\,, \tag{3.44}\] \[\mathcal{M}_{++}^{(h)}(s)=-\frac{[14]^{4}}{u_{2}\cdot k_{1}-s+\mathrm{i}0}\Big{[}\kappa^{2}s\,c_{++}^{(h)}+\dots\Big{]}+(1\leftrightarrow 4)\,. \tag{3.45}\]
We see the clear simplification compared with the off-shell construction of \(\mathcal{O}\) operators. For completeness, these on-shell expressions can be mapped onto a specific set of operators that are explicitly given in Appendix B.
The helicity violating Compton amplitudes \(\mathcal{M}_{++}\) vanish in the forward limit and therefore matching against a cross section fixes the coefficient \(c_{+-}\) uniquely, while leaving \(c_{++}\) undetermined. In order to determine all parameters another observable is required. In [116; 117] the Compton amplitudes themselves were directly matched with black hole perturbation theory, while in [87] the absorption cross sections for individual spinning partial waves were used. It might also be possible to distinguish the contributions by matching the forward/backward asymmetry of the Compton differential cross section [118]. Explicitly performing this matching for generic compact bodies is an interesting open problem; for the relevant case of a neutron star this information is ultimately a prediction of nuclear theory.
Starting from the Compton amplitudes, we construct the 2-to-2 elastic amplitude by sewing generalized unitarity cuts. In our conventions the necessary 3-particle amplitudes are: for the scalar
\[\mathcal{M}_{3}\Big{(}1_{\psi},2_{\phi_{1}},3_{\overline{\phi}_{1}}\Big{)}=Q_{s}\,, \tag{3.46}\]
for electrodynamics
\[\mathcal{M}_{3}\Big{(}1_{\gamma}^{+1},2_{\phi_{1}},3_{\overline{\phi}_{1}}\Big{)}=-\sqrt{2}Q_{e}\frac{[1|p_{2}|\xi)}{\langle 1\xi\rangle},\qquad\mathcal{M}_{3}\Big{(}1_{\gamma}^{-1},2_{\phi_{1}},3_{\overline{\phi}_{1}}\Big{)}=\sqrt{2}Q_{e}\frac{\langle 1|p_{2}|\xi]}{[1\xi]}, \tag{3.47}\]
and for gravity
\[\mathcal{M}_{3}\Big{(}1_{h}^{+2},2_{\phi_{1}},3_{\overline{\phi}_{1}}\Big{)}=-\frac{\kappa}{2}\frac{[1|p_{2}|\xi)^{2}}{\langle 1\xi\rangle^{2}},\qquad\mathcal{M}_{3}\Big{(}1_{h}^{-2},2_{\phi_{1}},3_{\overline{\phi}_{1}}\Big{)}=-\frac{\kappa}{2}\frac{\langle 1|p_{2}|\xi]^{2}}{[1\xi]^{2}}, \tag{3.48}\]
where \(|\xi\rangle\) and \(|\xi]\) are arbitrary auxiliary spinors. The case of the scalar is trivial, and the numerator defined in (3.29) resulting from the sewing procedure is
\[\mathcal{N}_{4}^{(\psi)}(s)=\frac{\kappa^{2}Q_{s}^{2}c^{(\psi)}}{2m_{1}}\,. \tag{3.49}\]
For photons and gravitons, there are two different types of cuts contributing:
[The diagrammatic definition of the cut \(\mathcal{C}_{++}\), eq. (3.50), is not recoverable from the source.]
the two remaining cuts \(\mathcal{C}_{--}\) and \(\mathcal{C}_{+-}\) are related by conjugation. Using the Compton amplitudes in (3.42) and (3.43) as well as the three-point amplitudes in (3.47) and making use of spinor identities listed in Appendix A, for electrodynamics we calculate
\[\mathcal{C}_{++}^{(\gamma)}(s) = -\frac{2\kappa^{2}Q_{e}^{2}c_{++}^{(\gamma)}m_{1}^{2}s|q|^{2}}{u_ {2}\cdot\ell-s+\mathrm{i}0}+(u_{2}\to-u_{2})\,, \tag{3.51}\] \[\mathcal{C}_{-+}^{(\gamma)}(s) = -\frac{2\kappa^{2}Q_{e}^{2}c_{-+}^{(\gamma)}m_{1}^{2}s}{u_{2} \cdot\ell-s+\mathrm{i}0}\frac{\left(y|q|^{2}+2\mathrm{i}\varepsilon(\ell,q,u_ {1},u_{2})\right)^{2}}{|q|^{2}}+(u_{2}\to-u_{2})\,, \tag{3.52}\]
with \(\varepsilon(\ell,q,u_{1},u_{2})\coloneqq\varepsilon_{\mu\nu\rho\sigma}\ell^{ \mu}q^{\nu}u_{1}^{\rho}u_{2}^{\sigma}\). For gravity we find a simple _double-copy_-like form
\[\mathcal{C}_{++}^{(h)}(s) = -\frac{1}{4}\frac{\kappa^{4}c_{++}^{(h)}m_{1}^{4}s|q|^{4}}{u_{2} \cdot\ell-s+\mathrm{i}0}+(u_{2}\to-u_{2})\,, \tag{3.53}\] \[\mathcal{C}_{-+}^{(h)}(s) = \frac{\kappa^{4}c_{-+}^{(h)}m_{1}^{4}s}{u_{2}\cdot\ell-s+\mathrm{ i}0}\frac{\left(y|q|^{2}+2\mathrm{i}\varepsilon(\ell,q,u_{1},u_{2})\right)^{4}}{|q|^{ 4}}+(u_{2}\to-u_{2})\,. \tag{3.54}\]
Summing over the four different cuts yields the following numerators, up to pinch terms,
\[\mathcal{N}_{4}^{(\gamma)}(s) =2\kappa^{2}Q_{e}^{2}m_{1}\Big{\{}\big{[}4s^{2}+(2y^{2}-1)|q|^{2} \big{]}c_{+-}^{(\gamma)}-|q|^{2}c_{++}^{(\gamma)}\Big{\}}\,, \tag{3.55}\] \[\mathcal{N}_{4}^{(h)}(s) = -\frac{\kappa^{4}m_{1}^{3}}{4}\Big{\{}\big{[}16s^{4}{+}8(4y^{2}- 1)s^{2}|q|^{2}{+}(8y^{4}{-}8y^{2}{+}1)|q|^{4}\big{]}c_{+-}^{(h)}+|q|^{4}c_{++ }^{(h)}\Big{\}}\,. \tag{3.56}\]
Starting from these expressions and using the mass-shift formula (3.34) and the master integrals (E.9)-(E.14), it is straightforward to derive the leading-order mass-shifts for generic bodies
\[(\Delta m_{2})^{(\psi)} =\frac{\pi G_{\rm N}Q_{s}^{2}c^{(\psi)}}{32m_{1}^{2}m_{2}|b|^{3}} \sqrt{y^{2}-1} \tag{3.57}\] \[(\Delta m_{2})^{(\gamma)} =\frac{9\pi G_{\rm N}Q_{e}^{2}}{32m_{2}|b|^{5}}\Big{[}c_{+-}^{( \gamma)}\big{(}5y^{2}-1\big{)}-4c_{++}^{(\gamma)}\Big{]}\sqrt{y^{2}-1}\,,\] (3.58) \[(\Delta m_{2})^{(h)} =\frac{225\pi^{2}G_{\rm N}^{2}m_{1}^{2}}{16m_{2}|b|^{7}}\Big{[}c _{+-}^{(h)}\big{(}21y^{4}-14y^{2}+1\big{)}+8c_{++}^{(h)}\Big{]}\sqrt{y^{2}-1}\,. \tag{3.59}\]
These expressions reduce to the Schwarzschild mass-shifts (3.36), (3.38) and (3.40) for the choice of dissipation numbers
\[c^{(\psi)}=\frac{G_{\rm N}m_{2}^{3}}{\pi},\quad c^{(\gamma)}_{+-}=\frac{G_{\rm N }^{3}m_{2}^{5}}{3\pi},\quad c^{(\gamma)}_{++}=0,\quad c^{(h)}_{+-}=\frac{G_{\rm N }^{5}m_{2}^{7}}{45\pi},\quad c^{(h)}_{++}=0, \tag{3.60}\]
where as discussed in Section 2.2, the vanishing of the helicity violating amplitudes is a manifestation of (linearized) self-duality.
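As a quick consistency check (ours, not part of the original text), substituting these values back into the general expressions indeed reproduces the Schwarzschild results; for the scalar and the photon,

\[\frac{\pi G_{\rm N}Q_{s}^{2}}{32m_{1}^{2}m_{2}|b|^{3}}\cdot\frac{G_{\rm N}m_{2}^{3}}{\pi}=\frac{G_{\rm N}^{2}Q_{s}^{2}m_{2}^{2}}{32m_{1}^{2}|b|^{3}},\qquad\frac{9\pi G_{\rm N}Q_{e}^{2}}{32m_{2}|b|^{5}}\cdot\frac{G_{\rm N}^{3}m_{2}^{5}}{3\pi}=\frac{3G_{\rm N}^{4}Q_{e}^{2}m_{2}^{4}}{32|b|^{5}},\]

matching the prefactors of (3.36) and (3.38), and the graviton case works in the same way.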
The general result (3.57) agrees with the contribution of the electric and magnetic components that can be read off from Eq. (3.20) in [76]. Let us also point out that, given that we have computed the absorption for models with scalar, vector and tensor exchange, we can also make predictions for more general theories including supergravity.4 In this context it is interesting to note that no matter which mediator fields are present in the theory, the helicity-conserving contribution of the graviton will always be dominant in the high-energy limit. This implies a universal high-energy behaviour for the mass-shift, which closely mirrors similar findings for the scattering angle discussed in [119; 120].
Footnote 4: The contributions from fermionic exchanges are a bit more complicated and necessitate a process where the light particle transitions from a spin 0 to a higher-spin state.
## 4 Discussion
The main results of this paper are the expressions (3.36), (3.38) and (3.40) for the change of rest mass of a Schwarzschild black hole scattering with a second compact body sourcing a massless scalar, electromagnetic or gravitational field. From these expressions we trivially obtain the 4-momentum impulse on either body using (3.33) and (3.12). The gravitational mass-shift (3.40) was previously calculated in [76] using a Schwinger-Keldysh in-in path integral based on the worldline formalism introduced in [66; 44]; our result obtained using the scattering amplitudes based KMOC formalism [95] is in complete agreement. As far as we are aware, the expressions for the impulse due to scalar (3.36) and photon (3.38) absorption are new.
The case of massless scalar absorption is of some importance for recent comparisons between PM methods and (a simplified scalar toy-model of) the self-force expansion [121; 122]. In the model considered in [121; 122], beginning at \(\mathcal{O}\big{(}G_{\rm N}^{2}Q_{s}^{2}\big{)}\), there is energy loss in the form of both radiation and horizon absorption. Combining the previous PM calculation of the two-loop radiative energy loss [122] with the scalar absorption result of this paper (3.36), we find excellent numerical agreement with the predicted energy loss from scalar self-force calculations at this order. Details of this comparison will be presented in a forthcoming paper.
Beyond these specific results, in this paper we have developed an approach to incorporating horizon absorption based on familiar in-out scattering amplitudes. As explained in Section 3.4, prior to integration, the non-trivial part of the calculation reduces to an application of standard unitarity methods for constructing integrands [48; 49; 50; 51; 52]. Furthermore, as emphasized in Section 3.2, the near-threshold expansion of the spectral integrals is most naturally treated as part of the expansion of the loop integrand in the soft region (3.5).
Taken together, the approach described in this paper can be incorporated quite naturally into the existing workflow for scattering amplitudes based methods applied to the PM gravitational two-body problem.
An important application of these methods will be the incorporation of spin degrees-of-freedom and the calculation of absorptive effects in the scattering of Kerr black holes [47; 74]. As observed in [123; 124], the static Kerr solution corresponds to a 3-particle on-shell amplitude previously identified as possessing uniquely soft UV behavior [125]. There has been considerable effort in attempting to generalize this correspondence to 4-particle Compton amplitudes [126; 127; 128; 129; 130; 131; 132; 133; 134; 135]. As emphasized recently [116; 117], direct extraction of the Kerr Compton amplitude from the Teukolsky equation is complicated at higher-orders in spin by non-trivial absorption effects. We may hope that an intrinsically amplitudes based approach to horizon absorption will be useful in clarifying the situation.
While the discussion in this paper is presented having in mind the case of compact astronomical objects such as black holes and neutron stars, we note that the discussion should equally apply to elementary processes, in particular the low energy scattering of photons off heavy ions and pions. Compton scattering serves as an important probe in low energy QCD and the polarizabilities, e.g. of pions constitute key quantities to be computed from nuclear models (see e.g. [136; 137; 138; 139]) and are measured e.g. at the COMPASS experiment at CERN [140].
Finally, in this paper we have described the calculation of the _leading_ absorptive effects in two-body scattering. A significant simplification in this case was the fact that only 2-point functions of the \(\mathcal{O}\) operators contributed, and therefore, as discussed in Section 3.5, our ignorance of the invisible sector could be parametrized in terms of a finite set of dissipation numbers arising from the near-threshold expansion of the spectral density. At higher-orders we would naively expect higher-point functions to contribute, and it is less clear how we can treat these quantities in a model-independent way. In [75] it was conjectured that the hidden sector dynamics is approximately Gaussian and that higher-point correlators factor into products of 2-point functions. To test this conjecture it may be useful to perform the EFT matching in the context of a toy model where the quantum mechanics of the black hole horizon is explicitly calculable [141]. We leave this and other important open questions to future work.
## Acknowledgements
We would like to thank Leor Barack, Zvi Bern, Enrico Herrmann, Dimitrios Kosmopoulos, Oliver Long and Chia-Hsien Shen for useful discussions and comments on the draft. CRTJ and MR are supported by the Department of Energy under Award Number DE-SC0009937, and gratefully acknowledge the continued support of the Mani L. Bhaumik Institute for Theoretical Physics. We would also like to acknowledge the hospitality of Nordita during the program _Amplifying Gravity at All Scales_ where part of this work was completed.
## Appendix A Conventions
We will use the mostly-minus metric convention, in \(D\) spacetime dimensions
\[\eta_{\mu\nu}=\text{diag}(+1,\underbrace{-1,...,-1}_{D-1}). \tag{A.1}\]
Loop and Fourier integrals are evaluated using dimensional regularization with \(D=4-2\epsilon\).
Our kinematic conventions for elastic scattering amplitudes will be to denote the incoming (outgoing) momenta \(p_{i}\) (\(p^{\prime}_{i}\)) using the 4-velocities \(u_{i}\) and momentum transfer \(q\) as
\[p_{1}^{\mu}=m_{1}u_{1}^{\mu},\qquad p_{2}^{\mu}=m_{2}u_{2}^{\mu},\qquad p_{1}^{\prime\,\mu}=m_{1}u_{1}^{\mu}+q^{\mu},\qquad p_{2}^{\prime\,\mu}=m_{2}u_{2}^{\mu}-q^{\mu}. \tag{A.2}\]
It is also convenient to introduce the _dual_ vectors
\[\check{u}_{1}^{\mu}\coloneqq\frac{u_{1}^{\mu}-yu_{2}^{\mu}}{1-y^{2}},\qquad\check{u}_{2}^{\mu}\coloneqq\frac{u_{2}^{\mu}-yu_{1}^{\mu}}{1-y^{2}},\qquad y\coloneqq u_{1}\cdot u_{2}, \tag{A.3}\]
defined to satisfy \(u_{i}\cdot\check{u}_{j}=\delta_{ij}\).
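For instance (our check, using \(u_{i}\cdot u_{i}=1\) for the 4-velocities),

\[\check{u}_{1}\cdot u_{1}=\frac{u_{1}\cdot u_{1}-y\,u_{2}\cdot u_{1}}{1-y^{2}}=\frac{1-y^{2}}{1-y^{2}}=1,\qquad\check{u}_{1}\cdot u_{2}=\frac{u_{1}\cdot u_{2}-y\,u_{2}\cdot u_{2}}{1-y^{2}}=\frac{y-y}{1-y^{2}}=0,\]

and similarly for \(\check{u}_{2}\).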
Following [95] we absorb many factors of \(2\pi\) by defining
\[\hat{\mathrm{d}}x\coloneqq\frac{\mathrm{d}x}{2\pi},\qquad\hat{\delta}(x)\coloneqq 2\pi\delta(x). \tag{A.4}\]
Our scattering conventions mostly follow the conventions of Peskin-Schroeder [142]. Scattering amplitudes are related to the formal S-matrix operator by
\[\langle\text{out}|T|\text{in}\rangle\coloneqq\hat{\delta}^{(D)}(p_{\text{out}}-p_{\text{in}})\mathcal{M}(\text{in}\to\text{out}), \tag{A.5}\]
where \(S\coloneqq 1+\mathrm{i}T\). All of the amplitudes needed in the paper can be calculated using the Feynman rules enumerated in Appendix D. We use the standard phase convention; these rules, applied directly, calculate \(\mathrm{i}\mathcal{M}\).
Our spinor-helicity conventions mostly follow Elvang-Huang [143], modified to align with the choice of metric signature. All Pauli matrix identities and definitions are chosen to align with [144]. We choose \(|p]_{\alpha}\) and \(\langle p|_{\dot{\alpha}}\) to carry (outgoing) helicity weights \(+1/2\) and \(-1/2\) respectively, and for real momenta \((|p]_{\alpha})^{*}=\langle p|_{\dot{\alpha}}\). We use the convention in which the helicity spinors and bispinor momenta are related as
\[p_{\mu}\sigma^{\mu}_{\alpha\dot{\alpha}}\coloneqq p_{\alpha\dot{\alpha}}\coloneqq-|p]_{\alpha}\langle p|_{\dot{\alpha}}. \tag{A.6}\]
A consistent choice of (outgoing) spin-1 polarization vectors is given by
\[\varepsilon_{+}^{*\mu}(p)\coloneqq\frac{[p|\sigma^{\mu}|\xi)}{\sqrt{2}\langle p\xi\rangle},\qquad\varepsilon_{-}^{*\mu}(p)\coloneqq-\frac{\langle p|\overline{\sigma}^{\mu}|\xi]}{\sqrt{2}[p\xi]}, \tag{A.7}\]
where \(|\xi\rangle\) and \(|\xi]\) are arbitrary auxiliary spinors; this choice is consistent with the property \(\left(\varepsilon_{+}^{\mu}\right)^{*}=\varepsilon_{-}^{\mu}\) as well as the normalization and completeness conditions
\[\varepsilon_{\lambda}^{*}(p)\cdot\varepsilon_{\lambda^{\prime}}(p)=-\delta_{\lambda\lambda^{\prime}},\qquad\sum_{\lambda=\pm}(\varepsilon_{\lambda}^{\mu}(p))^{*}\varepsilon_{\lambda}^{\nu}(p)=-\eta^{\mu\nu}+\frac{p^{\mu}\xi^{\nu}+p^{\nu}\xi^{\mu}}{p\cdot\xi}. \tag{A.8}\]
## Appendix B Duality-Organized Operator Basis
By adopting an on-shell approach as we have done in Section 3.5, we never have to introduce an off-shell effective action. However for the convenience of the reader and in order to facilitate comparison with the literature, we explicitly give the operators that when added to the minimal action give the corresponding Compton amplitudes. The photon Compton amplitudes in (3.42) and (3.43) can be obtained from
\[S_{\rm portal}^{(\gamma)}=2\kappa\int\mathrm{d}^{4}x\left[\frac{\partial_{\mu} \phi_{2}}{m_{2}}F_{\rm SD}^{\mu\nu}\mathcal{O}_{\nu}^{\rm SD}+\frac{\partial_ {\mu}\phi_{2}}{m_{2}}F_{\rm ASD}^{\mu\nu}\mathcal{O}_{\nu}^{\rm ASD}\right],\] (B.1)
where the self-dual and anti-self-dual field strengths are given by
\[F_{\rm SD}^{\mu\nu}=\frac{1}{2}\bigg{(}F^{\mu\nu}+\frac{\mathrm{i}}{2} \epsilon^{\mu\nu}_{\phantom{\mu\nu}\alpha\beta}F^{\alpha\beta}\bigg{)}\,, \quad F_{\rm ASD}^{\mu\nu}=\frac{1}{2}\bigg{(}F^{\mu\nu}-\frac{\mathrm{i}}{2} \epsilon^{\mu\nu}_{\phantom{\mu\nu}\alpha\beta}F^{\alpha\beta}\bigg{)}\,.\] (B.2)
The fact that this basis is particularly amplitude friendly is well-known and frequently used in the context of EFT (see e.g. [145]). The correlation functions are given by
\[\langle\mathcal{O}_{\rm SD}^{\mu}(x)\mathcal{O}_{\rm ASD}^{\nu} (0)\rangle= -i\int\mathrm{d}^{D}p\,e^{\mathrm{i}p\cdot x}\int_{m_{2}^{2}}^{ \infty}\mathrm{d}\mu^{2}\frac{c_{+-}^{(\gamma)}(\mu^{2}-m_{2}^{2})}{p^{2}-\mu ^{2}+\mathrm{i}0}\Pi_{1}^{\mu\nu}(p)+\ldots\] (B.3) \[\langle\mathcal{O}_{\rm SD}^{\mu}(x)\mathcal{O}_{\rm SD}^{\nu}(0)\rangle= -i\int\mathrm{d}^{D}p\,e^{\mathrm{i}p\cdot x}\int_{m_{2}^{2}}^{ \infty}\mathrm{d}\mu^{2}\frac{c_{++}^{(\gamma)}(\mu^{2}-m_{2}^{2})}{p^{2}-\mu ^{2}+\mathrm{i}0}\Pi_{1}^{\mu\nu}(p)+\ldots\] (B.4)
The other mixed correlators are related by parity. Here we have introduced the projector
\[\Pi_{1}^{\mu\nu}(p)=\eta^{\mu\nu}-\frac{p^{\mu}p^{\nu}}{p^{2}}\,.\] (B.5)
As before, the ellipses denote higher-order-in-\(G_{\rm N}\) and quantum terms. The operators for the gravitational case look very similar and are explicitly
\[S_{\rm portal}^{(h)}= \int\mathrm{d}^{4}x\bigg{[}\frac{\partial^{\alpha}\partial^{ \beta}\phi_{2}}{m_{2}^{2}}C_{\mu\alpha\nu\beta}^{\rm SD}\mathcal{O}_{\rm SD}^ {\mu\nu}+\frac{\partial^{\alpha}\partial^{\beta}\phi_{2}}{m_{2}^{2}}C_{\mu \alpha\nu\beta}^{\rm ASD}\mathcal{O}_{\rm ASD}^{\mu\nu}\bigg{]}\,.\] (B.6)
The self-and anti-self-dual projections of the Weyl tensor are
\[C_{\mu\nu\rho\sigma}^{\rm SD}=\frac{1}{2}\bigg{(}C_{\mu\nu\rho\sigma}+\frac{ \mathrm{i}}{2}\epsilon_{\mu\nu\alpha\beta}C^{\alpha\beta}_{\phantom{\alpha \beta}\rho\sigma}\bigg{)}\,,\quad C_{\mu\nu\rho\sigma}^{\rm ASD}=\frac{1}{2} \bigg{(}C_{\mu\nu\rho\sigma}-\frac{\mathrm{i}}{2}\epsilon_{\mu\nu\alpha\beta} C^{\alpha\beta}_{\phantom{\alpha\beta}\rho\sigma}\bigg{)}\,.\] (B.7)
The correlators are
\[\langle\mathcal{O}_{\rm SD}^{\mu\nu}(x)\mathcal{O}_{\rm ASD}^{ \alpha\beta}(0)\rangle= -\mathrm{i}\int\mathrm{d}^{D}p\,e^{\mathrm{i}p\cdot x}\int_{m_{2}^ {2}}^{\infty}\mathrm{d}\mu^{2}\frac{c_{+-}^{(h)}(\mu^{2}-m_{2}^{2})}{p^{2}-\mu ^{2}+\mathrm{i}0}\Pi_{2}^{\mu\nu\alpha\beta}(p)+\ldots\,,\] (B.8) \[\langle\mathcal{O}_{\rm SD}^{\mu\nu}(x)\mathcal{O}_{\rm SD}^{ \alpha\beta}(0)\rangle= -\mathrm{i}\int\mathrm{d}^{D}p\,e^{\mathrm{i}p\cdot x}\int_{m_{2}^ {2}}^{\infty}\mathrm{d}\mu^{2}\frac{c_{++}^{(h)}(\mu^{2}-m_{2}^{2})}{p^{2}-\mu ^{2}+\mathrm{i}0}\Pi_{2}^{\mu\nu\alpha\beta}(p)+\ldots\,,\] (B.9)
once again the remaining correlators are related by parity; the projector is given by5
Footnote 5: In practice the correlator will always contract into a conserved current, so we can replace \(\Pi_{1}^{\mu\nu}\to\eta^{\mu\nu}\) and \(\Pi_{2}^{\mu\nu\alpha\beta}\to\eta^{\mu\alpha}\eta^{\nu\beta}\).
\[\Pi_{2}^{\mu\nu\alpha\beta}(p)=\frac{1}{2}\Pi_{1}^{\mu\alpha}(p)\Pi_{1}^{\nu \beta}(p)+\frac{1}{2}\Pi_{1}^{\mu\beta}(p)\Pi_{1}^{\nu\alpha}(p)-\frac{1}{D-1} \Pi_{1}^{\mu\nu}(p)\Pi_{1}^{\alpha\beta}(p)\,.\] (B.10)
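As a quick check (ours) that \(\Pi_{2}\) has the expected properties of a transverse-traceless projector, note that \(\Pi_{1}^{\mu\alpha}(p)\Pi_{1\,\mu}{}^{\beta}(p)=\Pi_{1}^{\alpha\beta}(p)\) and \(\Pi_{1}^{\mu}{}_{\mu}(p)=D-1\), so

\[\eta_{\mu\nu}\Pi_{2}^{\mu\nu\alpha\beta}(p)=\frac{1}{2}\Pi_{1}^{\alpha\beta}(p)+\frac{1}{2}\Pi_{1}^{\alpha\beta}(p)-\frac{D-1}{D-1}\Pi_{1}^{\alpha\beta}(p)=0,\]

while transversality, \(p_{\mu}\Pi_{2}^{\mu\nu\alpha\beta}(p)=0\), follows directly from \(p_{\mu}\Pi_{1}^{\mu\nu}(p)=0\).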
## Appendix C Weighted Cuts
It is convenient to define a diagrammatic _weighted_ cut as a generalization of the standard Cutkosky cutting rules [146]:
[The diagrammatic definition of the weighted cut, which generalizes the standard cutting rules, is not recoverable from the source.]
## Appendix D Feynman Rules

[The explicit Feynman rules collected in Appendix D are not recoverable from the source; only the heading "Scalar QED" and the fragment below, defining the tensor \(\hat{\Pi}_{2}\) used there, survive.]
where \(\hat{\Pi}_{2}\coloneqq\mathcal{P}\cdot\Sigma(k)\), with
\[\mathcal{P}^{\mu_{1}\mu_{2}\mu_{3}\mu_{4}\nu_{1}\nu_{2}\nu_{3}\nu_{4 }}_{\rho_{1}\rho_{2}\rho_{3}\rho_{4}\sigma_{1}\sigma_{2}\sigma_{3}\sigma_{4}}\] \[\coloneqq\frac{1}{8}\Big{[}\Big{(}\delta^{[\mu_{1}}_{\rho_{1}} \delta^{[\mu_{2}]}_{\rho_{2}}\delta^{[\mu_{3}}_{\rho_{3}}\delta^{\mu_{4}]}_{ \rho_{4}}+\delta^{[\mu_{3}}_{\rho_{1}}\delta^{\mu_{4}]}_{\rho_{2}}\delta^{[\mu_ {1}}_{\rho_{3}}\delta^{\mu_{2}]}_{\rho_{4}}\Big{)}\Big{(}\delta^{[\nu_{1}}_{ \sigma_{1}}\delta^{\nu_{2}]}_{\sigma_{2}}\delta^{[\nu_{3}}_{\sigma_{3}}\delta^{ \nu_{4}]}_{\sigma_{4}}+\delta^{[\mu_{3}}_{\sigma_{1}}\delta^{\nu_{4}]}_{\sigma _{2}}\delta^{[\nu_{1}}_{\sigma_{3}}\delta^{\nu_{2}]}_{\sigma_{3}}\Big{)}\] \[\qquad\qquad+\Big{(}\delta^{[\nu_{1}}_{\rho_{1}}\delta^{[\nu_{3}}_ {\rho_{2}}\delta^{[\nu_{3}}_{\rho_{3}}\delta^{\nu_{4}]}_{\rho_{4}}+\delta^{[ \nu_{3}}_{\rho_{1}}\delta^{\nu_{4}]}_{\rho_{2}}\delta^{[\nu_{1}}_{\rho_{3}} \delta^{\nu_{2}]}_{\rho_{4}}\Big{)}\Big{(}\delta^{[\mu_{1}}_{\sigma_{1}}\delta ^{\mu_{2}]}_{\sigma_{2}}\delta^{[\mu_{3}}_{\sigma_{3}}\delta^{\mu_{4}]}_{ \sigma_{4}}+\delta^{[\mu_{3}}_{\sigma_{1}}\delta^{\mu_{4}]}_{\sigma_{2}}\delta^ {[\mu_{1}}_{\sigma_{3}}\delta^{\mu_{2}]}_{\sigma_{3}}\Big{)}\Big{]}, \tag{115}\]
and
\[\Sigma^{\mu_{1}\mu_{2}\mu_{3}\mu_{4}\nu_{1}\nu_{2}\nu_{3}\nu_{4}} (k)\] \[=\frac{8}{3}(2\eta^{\mu_{1}\nu_{4}}\eta^{\mu_{2}\nu_{3}}\eta^{ \mu_{3}\nu_{2}}\eta^{\mu_{4}\nu_{1}}+2\eta^{\mu_{1}\nu_{4}}\eta^{\mu_{2}\nu_{2 }}\eta^{\mu_{3}\nu_{3}}\eta^{\mu_{4}\nu_{1}}-\eta^{\mu_{1}\mu_{4}}\eta^{\mu_{2 }\mu_{3}}\eta^{\nu_{1}\nu_{4}}\eta^{\nu_{2}\nu_{3}})\] \[\quad-\frac{32}{k^{2}}(2k^{\mu_{1}}k^{\mu_{3}}\eta^{\mu_{2}\nu_{4 }}\eta^{\mu_{4}\nu_{2}}\eta^{\nu_{1}\nu_{3}}+2k^{\mu_{1}}k^{\nu_{1}}\eta^{\mu_ {2}\nu_{4}}\eta^{\mu_{3}\nu_{3}}\eta^{\mu_{4}\nu_{2}}-2k^{\mu_{3}}k^{\nu_{1}} \eta^{\mu_{1}\nu_{4}}\eta^{\mu_{2}\nu_{3}}\eta^{\mu_{4}\nu_{2}}\] \[\qquad\qquad+k^{\mu_{1}}k^{\mu_{3}}\eta^{\mu_{2}\mu_{4}}\eta^{ \nu_{1}\nu_{4}}\eta^{\nu_{2}\nu_{3}}+2k^{\mu_{1}}k^{\nu_{1}}\eta^{\mu_{2}\mu _{4}}\eta^{\mu_{3}\nu_{4}}\eta^{\nu_{2}\nu_{3}})\] \[\quad+\frac{128k^{\mu_{1}}k^{\mu_{3}}k^{\nu_{1}}k^{\nu_{3}}}{(k^ {2})^{2}}(2\eta^{\mu_{2}\nu_{4}}\eta^{\mu_{4}\nu_{2}}-\eta^{\mu_{2}\mu_{4}} \eta^{\nu_{2}\nu_{4}}). \tag{116}\]
## Appendix E Master Integrals
To calculate the mass-shift (3.34) there are three integrals to evaluate, which can in principle be performed in any order. The integrals in question, in general take the form
\[I_{k,n}=\int_{0}^{\infty}\mathrm{d}s\,s^{k}\int\hat{\mathrm{d}}^{4}q\,\hat{\delta}(u_{2}\cdot q)\hat{\delta}(u_{1}\cdot q)e^{\mathrm{i}q\cdot b}|q|^{2n}\int\hat{\mathrm{d}}^{D}\ell\,\frac{\hat{\delta}(u_{2}\cdot\ell-s)\hat{\delta}(u_{1}\cdot\ell)}{\ell^{2}(\ell+q)^{2}}\,, \tag{E.1}\]
where we set \(D=4\) since all integrals that are required in the main text are finite. Furthermore we will only need the case where \(k=2r\), \(r\in\mathds{Z}_{\geq 0}\). The approach we will take is to first evaluate the spectral integral using
\[\int_{0}^{\infty}\mathrm{d}s\;s^{2r}\hat{\delta}(u_{2}\cdot\ell-s)=2\pi(u_{2}\cdot\ell)^{2r}\theta(u_{2}\cdot\ell)\,. \tag{E.2}\]
By making a change of variables \(\ell\to-\ell-q\) (recalling that \(u_{2}\cdot q=0\) on the support of the \(\hat{\delta}(u_{2}\cdot q)\) in (E.1)) and using the trivial identity \(\theta(x)+\theta(-x)=1\), the resulting loop integrals can then be simplified using
\[\int\hat{\mathrm{d}}^{4}\ell\,\big{[}(u_{2}\cdot\ell)^{2r}\theta(u_{2}\cdot\ell)\big{]}\frac{\hat{\delta}(u_{1}\cdot\ell)}{\ell^{2}(\ell+q)^{2}}=\frac{1}{2}\int\hat{\mathrm{d}}^{D}\ell\,\frac{(u_{2}\cdot\ell)^{2r}\hat{\delta}(u_{1}\cdot\ell)}{\ell^{2}(\ell+q)^{2}}. \tag{E.3}\]
Next we need scalar loop integrals of the form
\[\int\hat{\mathrm{d}}^{4}\ell\ \frac{(u_{2}\cdot\ell)^{2r}\hat{\delta}(u_{1}\cdot\ell)}{\ell^{2}(\ell+q)^{2}}\ =\ \frac{(-1)^{r}(2r)!}{2^{4r+3}(r!)^{2}}\big{(}y^{2}-1\big{)}^{r}\big{(}-q^{2}\big{)}^{\frac{2r-1}{2}},\qquad r\in\mathds{Z}_{\geq 0}. \tag{E.4}\]
This formula is derived by specializing to the rest frame of particle-1
\[u_{1}^{\mu}=(1,0,\mathbf{0}),\qquad u_{2}^{\mu}=(y,\sqrt{y^{2}-1},\mathbf{0}),\qquad q^{\mu}=(0,0,\mathbf{q}), \tag{E.5}\]
where
\[y=u_{1}\cdot u_{2},\qquad|\mathbf{q}|=\big{(}-q^{2}\big{)}^{1/2}, \tag{E.6}\]
giving
\[\int\hat{\mathrm{d}}^{4}\ell\ \frac{(u_{2}\cdot\ell)^{2r}\hat{\delta}(u_{1}\cdot\ell)}{\ell^{2}(\ell+q)^{2}} =\int\hat{\mathrm{d}}\ell_{0}\hat{\mathrm{d}}\ell_{1}\hat{\mathrm{d}}^{2}\mathbf{\ell}\ \frac{\Big{(}y\ell_{0}-\sqrt{y^{2}-1}\ell_{1}\Big{)}^{2r}\hat{\delta}(\ell_{0})}{[\ell_{0}^{2}-\ell_{1}^{2}-\mathbf{\ell}^{2}][\ell_{0}^{2}-\ell_{1}^{2}-(\mathbf{\ell}+\mathbf{q})^{2}]}\] \[=\frac{\big{(}y^{2}-1\big{)}^{r}}{\pi}\int_{0}^{\infty}\mathrm{d}\ell_{1}\ \ell_{1}^{2r}\int\hat{\mathrm{d}}^{2}\mathbf{\ell}\ \frac{1}{[\mathbf{\ell}^{2}+\ell_{1}^{2}][(\mathbf{\ell}+\mathbf{q})^{2}+\ell_{1}^{2}]}\] \[=\frac{\big{(}y^{2}-1\big{)}^{r}}{\pi}\int_{0}^{\infty}\mathrm{d}\ell_{1}\ \ell_{1}^{2r}\Bigg{[}\frac{\operatorname{arcsinh}\Big{(}\frac{|\mathbf{q}|}{2\ell_{1}}\Big{)}}{\pi|\mathbf{q}|\sqrt{\mathbf{q}^{2}+4\ell_{1}^{2}}}\Bigg{]}\] \[=\big{(}y^{2}-1\big{)}^{r}\big{(}-q^{2}\big{)}^{\frac{2r-1}{2}}\times\frac{1}{\pi^{2}}\int_{0}^{\infty}\mathrm{d}x\ x^{2r}\Bigg{[}\frac{\operatorname{arcsinh}\big{(}\frac{1}{2x}\big{)}}{\sqrt{1+4x^{2}}}\Bigg{]}. \tag{E.7}\]
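As a numerical sanity check of the \(r=0\) case of the final line above (our addition; for \(r>0\) the integral diverges and requires the analytic continuation described next), the remaining dimensionless integral equals \(\pi^{2}/8\), so the overall coefficient is \(1/8\), in agreement with (E.4) at \(r=0\). A minimal check using the mpmath library:

```python
# Numerical check (ours): the r = 0 integral equals pi^2/8, so (1/pi^2) * integral = 1/8,
# matching the r = 0 value of (-1)^r (2r)!/(2^{4r+3} (r!)^2) in (E.4).
from mpmath import mp, quad, asinh, sqrt, pi, inf

mp.dps = 30
integral = quad(lambda x: asinh(1 / (2 * x)) / sqrt(1 + 4 * x**2), [0, inf])
print(integral)       # 1.2337005501...
print(pi**2 / 8)      # 1.2337005501...
```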
The remaining integral is just a number, but diverges for \(r\in\mathds{Z}_{>0}\); we evaluate it for \(r\in\mathds{R}\) in the range \(-1/2<r<1/2\) and analytically continue the result. Finally we take the Fourier transform to impact parameter space. The derivation of the general result is well-known, for example Appendix A of [105], and is repeated here for convenience
\[\int\hat{\mathrm{d}}^{4}q\ \hat{\delta}(u_{1}\cdot q)\hat{\delta}(u_{2}\cdot q)e^{-\mathrm{i}q\cdot b}\big{(}-q^{2}\big{)}^{-\alpha}=\frac{1}{\sqrt{y^{2}-1}}\frac{\Gamma(1-\alpha)}{2^{2\alpha}\pi\Gamma(\alpha)}|b|^{2\alpha-2}, \tag{E.8}\]
where by assumption \(u_{1}\cdot b=u_{2}\cdot b=0\). For completeness, we list the cases needed in the main text:
\[I_{2,0}=\frac{1}{128}\frac{\sqrt{y^{2}-1}}{|b|^{3}}\,, \tag{E.9}\] \[I_{4,0}=\frac{27}{2048}\frac{\big{(}y^{2}-1\big{)}^{3/2}}{|b|^{5}}\,, \tag{E.10}\] \[I_{2,1}=-\frac{9}{128}\frac{\sqrt{y^{2}-1}}{|b|^{5}}\,, \tag{E.11}\] \[I_{6,0}=\frac{1125}{16384}\frac{\big{(}y^{2}-1\big{)}^{5/2}}{|b|^{7}}\,, \tag{E.12}\] \[I_{4,1}=-\frac{675}{2048}\frac{\big{(}y^{2}-1\big{)}^{3/2}}{|b|^{7}}\,, \tag{E.13}\] \[I_{2,2}=\frac{225}{128}\frac{\sqrt{y^{2}-1}}{|b|^{7}}\,. \tag{E.14}\]
The integral \(I_{2,0}\) is needed in the scalar computation, \(I_{4,0},I_{2,1}\) are needed for the photon and \(I_{6,0},I_{4,1},I_{2,2}\) are needed for the graviton.
Let us also give an alternative derivation. First note that we can set \(n=0\); all other integrals are obtained through the usual trick of taking derivatives with respect to \(b\). Specifying to the rest frame of particle-1 in (E.5) and performing the \(\ell_{0}\) and \(\ell_{1}\) integrations, which sets \(\ell_{0}=0\) and \(\ell_{1}=-\frac{s}{\sqrt{y^{2}-1}}\), we find
\[I_{k,0}=\frac{1}{(y^{2}-1)}\int_{0}^{\infty}\mathrm{d}s\,s^{k}\int\hat{\mathrm{d}}^{2}\mathbf{q}\,e^{\mathrm{i}\mathbf{q}\cdot\mathbf{b}}\int\frac{\hat{\mathrm{d}}^{2}\mathbf{\ell}}{[\mathbf{\ell}^{2}+s^{2}/(y^{2}-1)][(\mathbf{\ell}+\mathbf{q})^{2}+s^{2}/(y^{2}-1)]}. \tag{E.15}\]
We notice that this is the Fourier transform of a convolution, such that
\[I_{k,0}=\frac{1}{(y^{2}-1)}\int_{0}^{\infty}\mathrm{d}s\,s^{k}\Bigg{[}\int\frac{\hat{\mathrm{d}}^{2}\mathbf{q}\,e^{\mathrm{i}\mathbf{q}\cdot\mathbf{b}}}{[\mathbf{q}^{2}+s^{2}/(y^{2}-1)]}\Bigg{]}^{2}\,. \tag{E.16}\]
The Fourier transform is well-known,
\[\int\frac{\hat{\mathrm{d}}^{2}\mathbf{q}\,e^{\mathrm{i}\mathbf{q}\cdot\mathbf{b}}}{\mathbf{q}^{2}+a^{2}}=\frac{1}{2\pi}K_{0}(a|b|)\,, \tag{E.17}\]
where \(K_{0}\) is a modified Bessel function of the second kind. It is perhaps interesting to note that this implies that the Fourier transform of \(\mathrm{Im}\,\mathcal{M}(s,q^{2})\) decays exponentially as \(e^{-2s|b|}\), thus rendering the integrals finite assuming \(\tilde{\rho}(s)\) grows at most polynomially, which can be arranged using proper subtraction terms in the KL representation. Performing the Fourier integral, we find
\[I_{k,0}=\frac{1}{4\pi^{2}(y^{2}-1)}\int_{0}^{\infty}\mathrm{d}s\,s^{k}\,K_{0}\Bigg{[}\frac{s|b|}{\sqrt{y^{2}-1}}\Bigg{]}^{2}=\frac{\left(y^{2}-1\right)^{\frac{k-1}{2}}\Gamma\big{(}\frac{k+1}{2}\big{)}^{3}}{16\pi^{3/2}\Gamma\big{(}\frac{k}{2}+1\big{)}}\frac{1}{|b|^{k+1}}\,. \tag{E.18}\]
Taking repeated derivatives yields the desired result
\[I_{k,n}=\frac{(-1)^{n}4^{n-2}\Gamma\big{(}\frac{k+1}{2}\big{)}^{3}\big{(}\frac{k+1}{2}\big{)}^{2}_{n}}{\pi^{3/2}\Gamma\big{(}\frac{k}{2}+1\big{)}}\frac{\left(y^{2}-1\right)^{\frac{k-1}{2}}}{|b|^{2n+1+k}}, \tag{E.19}\]
where \((x)_{n}\) is the rising factorial.
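As a further cross-check (ours, not part of the original text), the closed form (E.19) reproduces the tabulated values (E.9)-(E.14); a minimal script using only the Python standard library:

```python
# Check (ours) that the closed form for I_{k,n} reproduces the table (E.9)-(E.14).
from math import gamma, pi

def coeff(k, n):
    """Coefficient of (y^2 - 1)^((k-1)/2) / |b|^(2n+1+k) in I_{k,n}."""
    rising = 1.0
    for j in range(n):                       # rising factorial ((k+1)/2)_n
        rising *= (k + 1) / 2 + j
    return (-1)**n * 4**(n - 2) * gamma((k + 1) / 2)**3 * rising**2 / (pi**1.5 * gamma(k / 2 + 1))

table = {(2, 0): 1 / 128, (4, 0): 27 / 2048, (2, 1): -9 / 128,
         (6, 0): 1125 / 16384, (4, 1): -675 / 2048, (2, 2): 225 / 128}
for (k, n), expected in table.items():
    assert abs(coeff(k, n) - expected) < 1e-12
print("all tabulated master integrals reproduced")
```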
|
2309.15507 | Approximate Message Passing with Rigorous Guarantees for Pooled Data and
Quantitative Group Testing | In the pooled data problem, the goal is to identify the categories associated
with a large collection of items via a sequence of pooled tests. Each pooled
test reveals the number of items of each category within the pool. We study an
approximate message passing (AMP) algorithm for estimating the categories and
rigorously characterize its performance, in both the noiseless and noisy
settings. For the noiseless setting, we show that the AMP algorithm is
equivalent to one recently proposed by El Alaoui et al. Our results provide a
rigorous version of their performance guarantees, previously obtained via
non-rigorous techniques. For the case of pooled data with two categories, known
as quantitative group testing (QGT), we use the AMP guarantees to compute
precise limiting values of the false positive rate and the false negative rate.
Though the pooled data problem and QGT are both instances of estimation in a
linear model, existing AMP theory cannot be directly applied since the design
matrices are binary valued. The key technical ingredient in our analysis is a
rigorous asymptotic characterization of AMP for generalized linear models
defined via generalized white noise design matrices. This result, established
using a recent universality result of Wang et al., is of independent interest.
Our theoretical results are validated by numerical simulations. For comparison,
we propose estimators based on convex relaxation and iterative thresholding,
without providing theoretical guarantees. The simulations indicate that AMP
outperforms the convex estimator for noiseless pooled data and QGT, but the
convex estimator performs slightly better for noisy pooled data with three
categories when the number of observations is small. | Nelvin Tan, Pablo Pascual Cobo, Jonathan Scarlett, Ramji Venkataramanan | 2023-09-27T09:20:57Z | http://arxiv.org/abs/2309.15507v3 | # Approximate Message Passing with Rigorous Guarantees for Pooled Data and Quantitative Group Testing
###### Abstract
In the _pooled data_ problem, the goal is to identify the categories associated with a large collection of items via a sequence of pooled tests. Each pooled test reveals the number of items of each category within the pool. We study an approximate message passing (AMP) algorithm for estimating the categories and rigorously characterize its performance, in both the noiseless and noisy settings. For the noiseless setting, we show that the AMP algorithm is equivalent to one recently proposed by El Alaoui et al. Our results provide a rigorous version of their performance guarantees, previously obtained via non-rigorous techniques. For the case of pooled data with two categories, known as _quantitative group testing_ (QGT), we use the AMP guarantees to compute precise limiting values of the false positive rate and the false negative rate. Though the pooled data problem and QGT are both instances of estimation in a linear model, existing AMP theory cannot be directly applied since the design matrices are binary valued. The key technical ingredient in our result is a rigorous analysis of AMP for generalized linear models defined via generalized white noise design matrices. This result, established using a recent universality result of Wang et al., is of independent interest. Our theoretical results are validated by numerical simulations. For comparison, we propose estimators based on convex relaxation and iterative thresholding, without providing theoretical guarantees. Our simulations indicate that AMP outperforms the convex programming estimator for a range of QGT scenarios, but the convex program performs better for pooled data with three categories.
test outcome from the pooled sub-population \(S_{i}\) is denoted by
\[Y_{i}=\big{(}|\tau^{-1}(1)\cap S_{i}|,\ldots,|\tau^{-1}(L)\cap S_{i}|\big{)},\]
where \(\tau^{-1}(1)\) represents the set of items in category 1, \(|\tau^{-1}(1)\cap S_{i}|\) represents the number of items in \(S_{i}\) of category 1, and so on. We denote the vector of proportions of assigned values (i.e., the empirical distribution of the categories) by
\[\hat{\pi}=\frac{1}{p}\big{(}|\tau^{-1}(1)|,\ldots,|\tau^{-1}(L)|\big{)}, \tag{1.1}\]
The signal to be estimated is denoted as \(B\in\mathbb{R}^{p\times L}\), with \(j\)th row \(B_{j}=e_{\tau(j)}\in\mathbb{R}^{L}\) for \(j\in[p]\). Here \(e_{\tau(j)}\) is the vector with a 1 in position \(\tau(j)\) and 0 elsewhere. For example, if \(\tau(j)=2\), then \(B_{j}=[0,1,0,\ldots,0]^{\top}\).
An equivalent linear-algebraic formulation will turn out to be more convenient to work with. In this formulation, we have a design matrix \(X\in\mathbb{R}^{n\times p}\), where \(X_{ij}=\mathds{1}\{j\in S_{i}\}\), for \(i\in[n]\), \(j\in[p]\). Then our histogram query can be succinctly rewritten as
\[Y_{i,:}=\sum_{j=1}^{p}X_{ij}B_{j}=B^{\top}X_{i,:}\in\mathbb{R}^{L},\quad i\in[ n], \tag{1.2}\]
where \(Y_{i,:}\) is the \(i\)th row of \(Y\) represented as a column vector, and \(X_{i,:}\) is the \(i\)th row of \(X\) represented as a column vector. More generally, the model can be written as \(Y=XB\in\mathbb{R}^{n\times L}\).
For the noisy setting, we study the situation where the observed pooled measurements are corrupted by additive independent noise. Specifically, we have
\[Y_{i,:}=B^{\top}X_{i,:}+\Psi_{i,:}\in\mathbb{R}^{L},\quad i\in[n], \tag{1.3}\]
where \(\Psi_{i,:}\) is the zero-mean noise (e.g., Gaussian or uniform) for the \(i\)th test. From (1.2) and (1.3), we observe that the pooled data problem can be viewed as an instance of compressed sensing, with additional constraints on the design matrix and the signal. Specifically, in the pooled data model \(Y=XB+\Psi\), the \(n\times p\) design matrix \(X\) is binary-valued, as is the \(p\times L\) signal matrix \(B\), which has exactly one non-zero value in each row.
The special case of pooled data with two categories, called _quantitative group testing_ (QGT), has been studied in a number of recent works [33, 39, 40, 46, 56]. QGT is typically represented using a binary signal vector \(\beta\in\mathbb{R}^{p}\), with ones in positions (items) corresponding to the first category, and zeros in positions corresponding to the second category. Therefore, the goal is to recover \(\beta\) from the observed vector \(Y=X\beta+\Psi\).
In this paper, we study the natural 'linear category' regime [2], where no category is dominant or negligible, i.e., as \(p\) grows, the fraction of items assigned to each of the \(L\) categories is \(\Theta(1)\). We consider random design matrices, specifically, the _random dense_ setting, where \(X_{ij}\stackrel{{\text{i.i.d.}}}{{\sim}}\text{Bernoulli}(\alpha)\) for some fixed \(\alpha\in(0,1)\), for \(i\in[n],j\in[p]\). We consider the high-dimensional regime where both \(n,p\to\infty\) with \(n/p\to\delta\). Note that the noiseless version of the problem becomes trivial if \(n=p\) and \(\alpha\in(0,1/2]\), since the random binary square matrix \(X\) will be invertible with high probability [31, Theorem 4.8]. Hence, we assume \(\delta<1\) for the noiseless setting.
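To make the setup concrete, the following Python sketch draws a synthetic instance of the model \(Y=XB+\Psi\) with a Bernoulli(\(\alpha\)) design and one-hot signal rows; the parameter values are illustrative defaults rather than choices prescribed by the analysis.

```python
import numpy as np

def generate_pooled_data(p=500, delta=0.5, alpha=0.5, pi=(0.5, 0.5), sigma=0.0, seed=0):
    """Draw a pooled-data instance Y = X B + Psi (noiseless when sigma = 0)."""
    rng = np.random.default_rng(seed)
    n, L = int(delta * p), len(pi)
    X = rng.binomial(1, alpha, size=(n, p))                   # Bernoulli(alpha) design
    categories = rng.choice(L, size=p, p=pi)                  # category of each item
    B = np.eye(L)[categories]                                 # one-hot rows of the signal
    Psi = sigma * np.sqrt(p) * rng.standard_normal((n, L))    # rows ~ N(0, p*sigma^2*I_L)
    Y = X @ B + Psi
    return X, B, Y
```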
### Approximate Message Passing
In this paper, we consider Approximate Message Passing (AMP) techniques to recover the categories of each item (or equivalently, the signal matrix \(B\in\mathbb{R}^{p\times L}\)) from \(Y\in\mathbb{R}^{n\times L}\) produced according to (1.2) or (1.3). AMP is a family of iterative algorithms that can be tailored to take advantage of structural information about the signals and the model, e.g., a known prior on the signal matrix, or on the proportion of observations that come from each signal. AMP algorithms were first proposed for the standard linear model (compressed sensing) [9, 20, 37, 41, 51], but have since been applied to a range of statistical problems, including estimation in generalized linear models and their variants [7, 44, 45, 48, 50, 55, 57, 62], and low-rank matrix estimation [18, 22, 25, 42, 47]. In all these settings, under suitable model assumptions, the performance of AMP in the high-dimensional limit is characterized by a succinct deterministic recursion called _state evolution_. Furthermore, the state evolution characterization has been used to show that AMP achieves Bayes-optimal performance for some models [16, 18, 19, 47]. The monograph [24] contains a survey of AMP algorithms for various models defined via Gaussian matrices.
Though AMP for compressed sensing has been widely studied, including for matrix-valued signals [36, 68], these results assume a Gaussian design matrix, and therefore cannot be applied to the pooled data problem. We address this issue in this paper, and study an AMP algorithm for the pooled data problem with rigorous guarantees. We do this by first centering and rescaling the data to cast it as a special case of a matrix generalized linear model (_matrix GLM_). In a matrix GLM, the goal is to estimate a signal matrix \(B\in\mathbb{R}^{p\times L}\) from an observed matrix \(\widetilde{Y}:=(\widetilde{Y}_{1},\ldots,\widetilde{Y}_{n})^{\top}\in\mathbb{R }^{n\times L_{\text{out}}}\), whose \(i\)th row \(\widetilde{Y}_{i,:}\) is generated as:
\[\widetilde{Y}_{i,:}=q(B^{\top}\widetilde{X}_{i,:}\,,\,\widetilde{\Psi}_{i,:})\in\mathbb{R}^{L_{\text{out}}},\quad i\in[n]. \tag{1.4}\]
Here \(\widetilde{\Psi}\in\mathbb{R}^{n\times L_{\Psi}}\) is a matrix of unobserved auxiliary variables (with \(i\)th row \(\widetilde{\Psi}_{i,:}\)), \(\widetilde{X}\) is a design matrix (with \(i\)th row \(\widetilde{X}_{i,:}\)), and \(q:\mathbb{R}^{L}\times\mathbb{R}^{L_{\Psi}}\to\mathbb{R}^{L_{\text{out}}}\) is a known function.
An AMP algorithm for estimation in the matrix GLM was recently studied in [61], for i.i.d. Gaussian design matrices. In the pooled data setting, the modified design matrix \(\widetilde{X}\) is not Gaussian, but is an instance of a _generalized white noise_ matrix [67]. Wang et al. [67] analyzed an abstract AMP recursion for generalized white noise matrices and proved that the state evolution remains valid in this setting. We show that the AMP algorithm in [61] for the matrix GLM can be reduced to the abstract AMP in [67], using which we establish a rigorous state evolution for the pooled data problem.
### Main Contributions
#### AMP universality
We establish a rigorous state evolution result for the AMP algorithm applied to the matrix GLM in (1.4), where \(\widetilde{X}\) is a generalized white noise design matrix (see Definition 3.1). Theorem 3.2 gives a rigorous characterization of the joint empirical distribution of the AMP iterates in the high-dimensional limit as \(n,p\to\infty\) with \(n/p\to\delta\), for a constant \(\delta>0\). This allows us to compute exact asymptotic formulas for performance measures such as the mean-squared error and correlation between the signals and their estimates. Theorem 3.2 generalizes the state evolution result in [61, Theorem 1] for i.i.d. Gaussian designs, guaranteeing that the same AMP algorithm and state evolution remain valid for a much broader class of GLMs.
#### Pooled data
We show that after centering and suitable rescaling, the pooled data problem is a special case of the matrix-GLM with a generalized white noise design matrix. Therefore, rigorous performance guarantees for AMP can be readily obtained from Theorem 3.2. Furthermore, we show that in the noiseless setting, our AMP state evolution is equivalent to the one given by El Alaoui et al. [2], thereby making their performance guarantees rigorous; see Proposition 4.1 and Appendix B.
In Section 4.1, we provide numerical simulations to validate the theoretical results. For comparison, we propose alternative estimators based on convex relaxation and iterative thresholding, without providing theoretical guarantees. Our simulations indicate that the optimization based method achieves the best performance for the pooled data problem with three categories, indicating the sub-optimality of AMP in this setting.
#### Quantitative group testing
In Theorem 5.2, we provide rigorous guarantees for the limiting false positive rate (FPR) and false negative rate (FNR) achieved by AMP. We also show that, with a simple modification to the AMP algorithm, it can be used to vary the trade-off between the FPR and the FNR. Numerical simulations show that AMP outperforms the other estimators for a wide range of QGT scenarios.
### Related Work
#### Information-theoretic results for pooled data
We first summarize works that also study the linear category regime, where each category constitutes a \(\Theta(1)\) proportion of items.
* For the noiseless setting, Grebinski and Kucherov [30] proved that with exponential time algorithms, the pooled data problem can be solved with \(\Theta(\frac{p}{\log p})\) tests. More recently, under the condition that \(\pi\) is the uniform distribution and \(X_{ij}\sim\text{Bernoulli}(\alpha)\) for \(\alpha\in(0,1)\), Wang et al. [66] presented a lower bound on the minimum number of tests \(n\) for reliable recovery; they showed that if \(n<\frac{\log L}{L-1}\frac{p}{\log p}\), then the signal \(B\) cannot be uniquely determined. These results were later generalized and sharpened in [3], where it was shown that \(\gamma_{\text{low}}\frac{p}{\log p}<n<\gamma_{\text{up}}\frac{p}{\log p}\) tests are necessary and sufficient for \(B\) to be uniquely determined, where \(\gamma_{\text{low}}\) and \(\gamma_{\text{up}}\) are constants that depend on \(\pi\) and \(L\). This gap was closed by Scarlett and Cevher [53], who showed that \(\gamma_{\text{up}}\frac{p}{\log p}\) tests are also necessary for \(B\) to be unique.
* For the noisy setting, Scarlett and Cevher [53] developed a general framework for understanding variations of the pooled data problem with random noise. Specifically, under the noise model (3) with Gaussian noise \(\Psi_{i,:}\overset{\text{i.i.d.}}{\sim}\mathcal{N}_{L}(0,p\sigma^{2}I_{L})\), where \(I_{L}\) is an \(L\times L\) identity matrix, they showed that we require \(n=\Omega(p\log p)\) for exact recovery, which is super-linear in the number of items \(p\), in contrast with the sub-linear \(\Theta\big{(}\frac{p}{\log p}\big{)}\) behaviour observed in the noiseless case.
#### Algorithmic results for pooled data
Wang et al. [66] proposed a deterministic design matrix and an algorithm that recovers \(B\) with \(n=\Omega\big{(}\frac{p}{\log p}\big{)}\) tests. In the sublinear category regime, there is one dominant category with \(p-o(p)\) items, and the remaining categories have \(k=o(p)\) items. For random designs, an efficient algorithm for the sublinear category regime was recently developed in [34], and shown to achieve exact recovery with \(O(k)\) tests when \(k=\Theta\big{(}p^{\kappa}\big{)}\), for a constant \(\kappa\in(0,1)\). For the linear category regime, El Alaoui et al. [2] proposed an AMP algorithm for a random dense design, and characterized the asymptotic behaviour of the algorithm in the limit as \(n,p\to\infty\) with \(n/p\to\delta\). As mentioned above, rigorous performance guarantees were not provided.
_Algorithmic results for QGT_. For the case of \(L=2\), the existing work has largely focused on the sublinear category regime. Algorithms inspired from coding theory [39, 40, 46, 56] and thresholding [32] require \(\Omega(k\log p)\) tests for exact recovery. The number of tests can be reduced to \(O(k)\) by specializing the algorithm from [34] (for general \(L\)) to the case of \(L=2\) when \(k=\Theta\big{(}p^{\Omega(1)}\big{)}\). Recently, Li and Wang [43] and Hahn-Klimroth et al. [33] studied noisy versions of QGT.
_Boolean group testing_. In Boolean Group Testing (BGT), there are two categories of items, usually referred to as defectives and non-defectives, and the outcome of the pooled test is \(1\) if it contains at least one defective, and zero otherwise. With the defectives corresponding to ones in the binary signal vector \(\beta\), the testing model is:
\[Y_{i}=\mathds{1}\{X_{i,:}^{\top}\beta>0\},\quad i\in[n]. \tag{1.5}\]
BGT can be viewed as a less informative version of QGT, where each test yields at most one bit of information. BGT is of interest in a range of applications including medical testing, DNA sequencing, and communication protocols [6, Section 1.7], and more recently, in testing for COVID-19 [5, 65]. BGT has been widely studied, including variants of the model (1.5) under practical constraints [27, 28, 49, 58, 59, 60], and with noise [29, 49, 52, 54]. We refer the interested reader to the survey by Aldridge et al. [6]. Belief propagation and AMP algorithms for noisy BGT with side information were recently proposed in [38] and [11], without theoretical guarantees. In Section 6 we discuss the challenges in extending the AMP guarantees for pooled data and QGT to the BGT setting.
_AMP Universality_. In addition to AMP universality results for generalized white noise matrices, Wang et al. [67] also gave similar results for generalized invariant ensembles. Other AMP universality results, for sub-Gaussian matrices and semi-random matrices, were recently established in [14] and [21], respectively.
## 2 Preliminaries
_Notation_. We write \([n:m]\) for \(\{n,n+1,\ldots,m\}\), where \(n<m\). All vectors (including those corresponding to rows of matrices) are assumed to be column vectors unless otherwise stated. For vectors \(a,b\in\mathbb{R}^{n}\), \(\langle a,b\rangle=a^{\top}b\in\mathbb{R}\) is the inner product, \(a\odot b\in\mathbb{R}^{n}\) is their entry-wise product, and \(\langle a\rangle=\frac{1}{n}\sum_{i=1}^{n}a_{i}\) denotes the empirical average of the entries of \(a\). Matrices are denoted by upper case letters, and given a matrix \(A\), we write \(A_{i,:}\) for its \(i\)th row and \(A_{:,j}\) for its \(j\)th column. The operator norm is denoted by \(\|A\|_{\mathrm{op}}\). For \(r\in[1,\infty)\) and a vector \(a=(a_{1},\ldots,a_{n})\in\mathbb{R}^{n}\), we write \(\|a\|_{r}\) for the \(\ell_{r}\)-norm, so that \(\|a\|_{r}=\big{(}\sum_{i=1}^{n}|a_{i}|^{r}\big{)}^{1/r}\). We use \(e_{l}\) to denote the one-hot vector with a \(1\) in position \(l\), \(1_{p}\) for the vector of \(p\) ones, \(0_{p}\) for the vector of \(p\) zeros, and \(I_{p}\) for the \(p\times p\) identity matrix. Given random variables \(U,V\), we write \(U\stackrel{{ d}}{{=}}V\) to denote equality in distribution. Throughout the paper, the function \(\log(\cdot)\) has base \(e\), and we use Bachmann-Landau asymptotic notation (i.e., \(O\), \(o\), \(\Omega\), \(\omega\), \(\Theta\)).
_Almost sure convergence_. This is denoted using the symbol \(\stackrel{{ a.s.}}{{\to}}\). Let \(\{A^{n}\}\) be a sequence of random elements taking values in a Euclidean space \(E\). We say that \(A^{n}\) converges almost surely to a deterministic limit \(a\in E\), and write \(A^{n}\stackrel{{ a.s.}}{{\rightarrow}}a\), if \(\mathbb{P}[\lim_{n\rightarrow\infty}A^{n}=a]=1\).
_Wasserstein convergence_. We review the definition of [67]. For a vector \(a\in\mathbb{R}^{n}\) and a random variable \(A\in\mathbb{R}\), we write \(a\stackrel{{ W_{r}}}{{\rightarrow}}A\) as \(n\rightarrow\infty\), for the Wasserstein-\(r\) convergence of
the empirical distribution of the entries of \(a\) to the law of \(A\). More generally, for vectors \(a^{1},\ldots,a^{k}\in\mathbb{R}^{n}\) and a random vector \((A^{1},\ldots,A^{k})\in\mathbb{R}^{k}\), we write
\[(a^{1},\ldots,a^{k})\,\stackrel{{ W_{r}}}{{\to}}\,(A^{1},\ldots,A^{k})\text{ as }n\to\infty,\]
for the Wasserstein-\(r\) convergence of the empirical distribution of rows of \((a^{1},\ldots,a^{k})\in\mathbb{R}^{n\times k}\) to the joint law of \((A^{1},\ldots,A^{k})\). This means that for any continuous function \(\phi:\mathbb{R}^{k}\to\mathbb{R}\) satisfying the _polynomial growth_ condition
\[|\phi(a^{1}_{i},\ldots,a^{k}_{i})|\leq C\big{(}1+\|(a^{1}_{i},\ldots,a^{k}_{i})\|^{r}_{2}\big{)}\text{ for a constant }C>0\text{ and all inputs }(a^{1}_{i},\ldots,a^{k}_{i})\in\mathbb{R}^{k}, \tag{2.1}\]
we have, as \(n\to\infty\),
\[\frac{1}{n}\sum_{i=1}^{n}\phi(a^{1}_{i},\ldots,a^{k}_{i})\to\mathbb{E}\big{[}\phi(A^{1},\ldots,A^{k})\big{]}. \tag{2.2}\]
We write \((a^{1},\ldots,a^{k})\,\stackrel{{ W}}{{\to}}\,(A^{1},\ldots,A^{k})\) as \(n\to\infty\) to mean that the above Wasserstein-\(r\) convergences hold for every order \(r\geq 1\).
## 3 AMP for Matrix GLM with Generalized White Noise Design
We begin with the definition of a generalized white noise matrix.
**Definition 3.1** ([67, Definition 2.15]). A generalized white noise matrix \(\widetilde{X}\in\mathbb{R}^{n\times p}\) with a (deterministic) variance profile \(S\in\mathbb{R}^{n\times p}\) is one satisfying the following conditions:
* All entries \(\widetilde{X}_{ij}\) are independent.
* Each entry \(\widetilde{X}_{ij}\) has mean 0, variance \(n^{-1}S_{ij}\), and higher moments satisfying, for each integer \(m\geq 3\), \[\lim_{n,p\to\infty}p\cdot\max_{i\in[n]}\max_{j\in[p]}\mathbb{E}\Big{[}| \widetilde{X}_{ij}|^{m}\Big{]}=0.\]
* For a constant \(C>0\), \[\max_{i\in[n]}\max_{j\in[p]}S_{ij}\leq C,\quad\lim_{n,p\to\infty}\max_{i\in[n ]}\Big{|}\frac{1}{p}\sum_{j=1}^{p}S_{ij}-1\Big{|}=0,\quad\lim_{n,p\to\infty} \max_{j\in[p]}\Big{|}\frac{1}{n}\sum_{i=1}^{n}S_{ij}-1\Big{|}=0.\]
Note that Definition 3.1 simplifies greatly for the case where \(S_{ij}=1\), for all \((i,j)\in[n]\times[p]\). In this case, the entries are all i.i.d. with variance \(1/n\), the third condition in the definition is trivially satisfied, and the second condition requires moments of order 3 and higher to decay faster than \(1/p\).
_Model assumptions._ Consider the matrix GLM model (1.4) defined via a generalized white noise design matrix \(\widetilde{X}\). The signal matrix \(B\in\mathbb{R}^{p\times L}\) and the auxiliary variable matrix \(\widetilde{\Psi}\in\mathbb{R}^{n\times L_{\Psi}}\) are both independent of \(\widetilde{X}\). As \(p\to\infty\), we assume that \(n/p\to\delta\), for some positive constant \(\delta\). As \(p\to\infty\), the empirical distributions of the rows of the signal matrix and the auxiliary variable matrix both converge in Wasserstein distance to well-defined limits. More precisely, there exist random variables \(\bar{B}\sim P_{\bar{B}}\) (where \(\bar{B}\in\mathbb{R}^{L}\)) and \(\bar{\Psi}\sim P_{\bar{\Psi}}\) (where \(\bar{\Psi}\in\mathbb{R}^{L_{\Psi}}\)) with \(B\stackrel{{ W}}{{\to}}\bar{B}\) and \(\widetilde{\Psi}\stackrel{{ W}}{{\to}}\bar{\Psi}\), respectively.
In this section, we allow general priors \(P_{\bar{B}}\) and \(P_{\bar{\Psi}}\) for the signal and auxiliary matrices, before specializing to the pooled data setting in the following section.
### Algorithm
The AMP algorithm and the associated state evolution for estimating \(B\) in the matrix GLM model (1.4) are the same as those in [61] for i.i.d. Gaussian designs. For the rest of this paper, we call this algorithm the _matrix-AMP_ algorithm. We present these below for completeness, before stating the main result in the next subsection.
In each iteration \(k\geq 1\), the matrix-AMP algorithm iteratively produces estimates \(\widehat{B}^{k}\) and \(\Theta^{k}\) of \(B\in\mathbb{R}^{p\times L}\) and \(\Theta:=\widetilde{X}B\in\mathbb{R}^{n\times L}\), respectively. Starting with an initialization \(\widehat{B}^{0}\in\mathbb{R}^{p\times L}\) and defining \(\widehat{R}^{-1}:=0\in\mathbb{R}^{n\times L}\), for iteration \(k\geq 0\) the algorithm computes:
\[\Theta^{k} =\widetilde{X}\widehat{B}^{k}-\widehat{R}^{k-1}(F^{k})^{\top},\quad\widehat{R}^{k}=g_{k}(\Theta^{k},\widetilde{Y}), \tag{3.1}\] \[B^{k+1} =\widetilde{X}^{\top}\widehat{R}^{k}-\widehat{B}^{k}(C^{k})^{\top},\quad\widehat{B}^{k+1}=f_{k+1}(B^{k+1}).\]
Here the functions \(g_{k}:\mathbb{R}^{L}\times\mathbb{R}^{L_{\rm out}}\to\mathbb{R}^{L}\) and \(f_{k+1}:\mathbb{R}^{L}\to\mathbb{R}^{L}\) act row-wise on their matrix inputs, and the matrices \(C^{k},F^{k+1}\in\mathbb{R}^{L\times L}\) are defined as
\[C^{k}=\frac{1}{n}\sum_{i=1}^{n}g^{\prime}_{k}(\Theta^{k}_{i, \cdot},\widetilde{Y}_{i,\cdot}),\ \ F^{k+1}=\frac{1}{n}\sum_{j=1}^{p}f^{\prime}_{k+1}(B^{k+1}_{j, \cdot}),\]
where \(g^{\prime}_{k},f^{\prime}_{k+1}\) denote the Jacobians of \(g_{k},f_{k+1}\), respectively, with respect to their first arguments. We note that the time complexity of each iteration of (3.1) is \(O(npL)\).
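For concreteness, here is a minimal Python sketch of the iteration (3.1). It uses deliberately simple denoisers, \(g_{k}(u,y)=y-u\) (so that \(C^{k}=-I_{L}\)) and a row-wise softmax for \(f_{k}\), which are not the Bayes-optimal choices discussed later; the point is only to show how the Onsager correction terms and the matrices \(C^{k},F^{k}\) enter the updates.

```python
import numpy as np

def softmax_rows(S):
    """Row-wise softmax; a simple stand-in for the denoiser f_k."""
    E = np.exp(S - S.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def softmax_jacobian_sum(S):
    """Sum over rows of the L x L Jacobian of the row-wise softmax:
    sum_j [diag(p_j) - p_j p_j^T] = diag(colsums(P)) - P^T P."""
    P = softmax_rows(S)
    return np.diag(P.sum(axis=0)) - P.T @ P

def matrix_amp(X_tilde, Y_tilde, B_hat0, num_iters=10):
    """Sketch of the matrix-AMP iteration (3.1) with the simple denoisers above."""
    n = X_tilde.shape[0]
    L = Y_tilde.shape[1]
    B_hat = B_hat0.copy()
    R_hat = np.zeros((n, L))       # plays the role of R_hat^{-1} = 0
    F = np.zeros((L, L))           # F^0 is irrelevant since R_hat^{-1} = 0
    for _ in range(num_iters):
        Theta = X_tilde @ B_hat - R_hat @ F.T       # Onsager-corrected estimate of X_tilde B
        R_hat = Y_tilde - Theta                     # g_k(Theta, Y_tilde) = Y_tilde - Theta
        C = -np.eye(L)                              # averaged Jacobian of g_k w.r.t. Theta
        B_iter = X_tilde.T @ R_hat - B_hat @ C.T    # effective observation B^{k+1}
        F = softmax_jacobian_sum(B_iter) / n        # F^{k+1} = (1/n) sum_j f'_{k+1}(B^{k+1}_j)
        B_hat = softmax_rows(B_iter)                # B_hat^{k+1} = f_{k+1}(B^{k+1})
    return B_hat
```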
_State evolution._ The 'memory' terms \(-\widehat{R}^{k-1}(F^{k})^{\top}\) and \(-\widehat{B}^{k}(C^{k})^{\top}\) in (3.1) debias the iterates \(\Theta^{k}\) and \(B^{k+1}\), ensuring that their joint empirical distributions are accurately captured by state evolution (SE) in the high-dimensional limit. Theorem 3.2 below shows that for each \(k\geq 1\), the empirical distribution of the rows of \(B^{k}\) converges to the distribution of \(\mathrm{M}^{k}_{B}\bar{B}+G^{k}_{B}\in\mathbb{R}^{L}\), where \(G^{k}_{B}\sim\mathcal{N}(0,\mathrm{T}^{k}_{B})\) is independent of \(\bar{B}\), the random variable representing the limiting distribution of the rows of the signal matrix \(B\). The deterministic matrices \(\mathrm{M}^{k}_{B},\mathrm{T}^{k}_{B}\in\mathbb{R}^{L\times L}\) are recursively defined below. The result implies that the empirical distribution of the rows of \(\widehat{B}^{k}\) converges to the distribution of \(f_{k}(\mathrm{M}^{k}_{B}\bar{B}+G^{k}_{B})\). Thus, \(f_{k}\) can be viewed as a denoising function that can be tailored to take advantage of the prior on \(\bar{B}\). Theorem 3.2 also shows that the empirical distribution of the rows of \(\Theta^{k}\) converges to the distribution of \(\mathrm{M}^{k}_{\Theta}Z+G^{k}_{\Theta}\in\mathbb{R}^{L}\), where \(Z\sim\mathcal{N}(0,\frac{1}{\delta}\mathbb{E}[\bar{B}\bar{B}^{\top}])\) and \(G^{k}_{\Theta}\sim\mathcal{N}(0,\mathrm{T}^{k}_{\Theta})\) are independent.
We now describe the state evolution recursion defining the matrices \(\mathrm{M}^{k}_{\Theta},\mathrm{T}^{k}_{\Theta},\mathrm{M}^{k}_{B},\mathrm{T}^{k}_{B}\in\mathbb{R}^{L\times L}\). Recalling that the observation \(\widetilde{Y}\) is generated via the function \(q\) according to (1.4), it is convenient to rewrite \(g_{k}\) in (3.1) in terms of another function \(\tilde{g}_{k}:\mathbb{R}^{L}\times\mathbb{R}^{L}\times\mathbb{R}^{L_{\Psi}}\to\mathbb{R}^{L}\) defined as:
\[\tilde{g}_{k}(z,u,v):=g_{k}(u,\,q(z,v)). \tag{3.2}\]
Then, for \(k\geq 0\), given \(\Sigma^{k}\in\mathbb{R}^{2L\times 2L}\), take \((Z,Z^{k})^{\top}\sim\mathcal{N}(0,\Sigma^{k})\) to be independent of \(\bar{\Psi}\sim P_{\bar{\Psi}}\) and compute:
\[\mathrm{M}^{k+1}_{B} =\mathbb{E}[\partial_{Z}\tilde{g}_{k}(Z,Z^{k},\bar{\Psi})], \tag{3.3}\] \[\mathrm{T}^{k+1}_{B} =\mathbb{E}[\tilde{g}_{k}(Z,Z^{k},\bar{\Psi})\tilde{g}_{k}(Z,Z^{k},\bar{\Psi})^{\top}], \tag{3.4}\] \[\Sigma^{k+1} =\begin{bmatrix}\Sigma^{k+1}_{(11)}&\Sigma^{k+1}_{(12)}\\ \Sigma^{k+1}_{(21)}&\Sigma^{k+1}_{(22)}\end{bmatrix}, \tag{3.5}\]
where the four \(L\times L\) matrices constituting \(\Sigma^{k+1}\in\mathbb{R}^{2L\times 2L}\) are given by:
\[\Sigma^{k+1}_{(11)} =\frac{1}{\delta}\mathbb{E}[\bar{B}\bar{B}^{\top}], \tag{3.6}\] \[\Sigma^{k+1}_{(12)} =\Big{(}\Sigma^{k+1}_{(21)}\Big{)}^{\top}=\frac{1}{\delta}\mathbb{E}[\bar{B}f_{k+1}(\mathrm{M}^{k+1}_{B}\bar{B}+G^{k+1}_{B})^{\top}], \tag{3.7}\] \[\Sigma^{k+1}_{(22)} =\frac{1}{\delta}\mathbb{E}[f_{k+1}(\mathrm{M}^{k+1}_{B}\bar{B}+G^{k+1}_{B})f_{k+1}(\mathrm{M}^{k+1}_{B}\bar{B}+G^{k+1}_{B})^{\top}]. \tag{3.8}\]
Here \(G^{k+1}_{B}\sim\mathcal{N}(0,\mathrm{T}^{k+1}_{B})\) is independent of \(\bar{B}\sim P_{\bar{B}}\). In (3.3), \(\partial_{Z}\tilde{g}_{k}\) is the partial derivative (Jacobian) of \(\tilde{g}_{k}\) with respect to its first argument \(Z\in\mathbb{R}^{L}\), so it is an \(L\times L\) matrix. The state evolution recursion (3.3)-(3.5) is initialized with \(\Sigma^{0}\in\mathbb{R}^{2L\times 2L}\), defined below in (3.11).
For \((Z,Z^{k})^{\top}\sim\mathcal{N}(0,\Sigma^{k})\), using standard properties of Gaussian random vectors, we have
\[(Z,Z^{k},\bar{\Psi})\stackrel{{ d}}{{=}}(Z,\mathrm{M}^{k}_{\Theta} Z+G^{k}_{\Theta},\bar{\Psi}), \tag{3.9}\]
where \(G^{k}_{\Theta}\sim\mathcal{N}(0,\mathrm{T}^{k}_{\Theta})\),
\[\mathrm{M}^{k}_{\Theta}=\Sigma^{k}_{(21)}\big{(}\Sigma^{k}_{(11)}\big{)}^{-1},\quad\mathrm{T}^{k}_{\Theta}=\Sigma^{k}_{(22)}\,-\,\Sigma^{k}_{(21)}\big{(} \Sigma^{k}_{(11)}\big{)}^{-1}\,\Sigma^{k}_{(12)}. \tag{3.10}\]
### Main result
We begin with the assumptions required for the main result. The first is on the matrix-AMP initializer \(\widehat{B}^{0}\in\mathbb{R}^{p\times L}\), the second is on the functions \(g_{k},f_{k\!+\!1}\) used to define the matrix-AMP algorithm in (3.1), and the third is on the design matrix \(\tilde{X}\in\mathbb{R}^{n\times p}\).
**(A1)** Almost surely as \(n,p\to\infty\), \((B,\widehat{B}^{0})\stackrel{{ W}}{{\to}}(\bar{B},\bar{B}^{0})\), with the joint law of \([\bar{B}^{\top},(\bar{B}^{0})^{\top}]^{\top}\) having finite moments of all orders. Moreover, there exists \(\Sigma^{0}\in\mathbb{R}^{2L\times 2L}\) such that
\[\frac{1}{n}\begin{bmatrix}B^{\top}B&B^{\top}\widehat{B}^{0}\\ (\widehat{B}^{0})^{\top}B&(\widehat{B}^{0})^{\top}\widehat{B}^{0}\end{bmatrix} \stackrel{{ a.s.}}{{\to}}\Sigma^{0}, \tag{3.11}\]
and multivariate polynomials are dense in the real \(L^{2}\)-spaces of functions \(f:\mathbb{R}^{L_{\Psi}}\to\mathbb{R}\), \(g:\mathbb{R}^{2L}\to\mathbb{R}\) with the inner-products
\[\big{\langle}f,\tilde{f}\big{\rangle}:=\mathbb{E}\big{[}f\big{(}\bar{\Psi} \big{)}\tilde{f}\big{(}\bar{\Psi}\big{)}\big{]}\text{ and }\big{\langle}g,\tilde{g}\big{\rangle}:=\mathbb{E}\big{[}g\big{(}\bar{B}, \bar{B}^{0}\big{)}\tilde{g}\big{(}\bar{B},\bar{B}^{0}\big{)}\big{]}.\]
**(A2)** For \(k\geq 0\), the functions \(f_{k+1}\) and \(\tilde{g}_{k}\) (defined in (3.2)) are continuous, Lipschitz with respect to their first argument, and satisfy the polynomial growth condition in (2.1) for some order \(r\geq 1\).
**(A3)** \(\|\widetilde{X}\|_{\mathrm{op}}<C\) almost surely for sufficiently large \(n,p\), for some constant \(C\). Furthermore, for any fixed polynomial functions \(f^{\dagger}:\mathbb{R}^{L_{\Psi}}\to\mathbb{R}\) and \(f^{\ddagger}:\mathbb{R}^{2L}\to\mathbb{R}\), as \(n,p\to\infty\),
\[\max_{i\in[n]}\Big{|}\langle f^{\ddagger}(B,\widehat{B}^{0})\odot S_{i,\cdot}\rangle-\langle f^{\ddagger}(B,\widehat{B}^{0})\rangle\cdot\langle S_{i,\cdot}\rangle\Big{|}\stackrel{{ a.s.}}{{\to}}0,\] \[\max_{j\in[p]}\Big{|}\langle f^{\dagger}(\widetilde{\Psi})\odot S_{\cdot,j}\rangle-\langle f^{\dagger}(\widetilde{\Psi})\rangle\cdot\langle S_{\cdot,j}\rangle\Big{|}\stackrel{{ a.s.}}{{\to}}0,\]
where \(S\) is the variance profile of \(\widetilde{X}\) (see Definition 3.1).
**Theorem 3.2**. _Consider the matrix-AMP algorithm in (3.1) for the matrix GLM model in (1.4), defined via a generalized white noise matrix. Suppose that the model assumptions stated at the start of this section and assumptions **(A1)**, **(A2)**, and **(A3)** are satisfied, and that \(\mathrm{T}^{1}_{B}\) is positive definite. Then for each \(k\geq 0\), we have_
\[\big{(}B,\widehat{B}^{0},B^{k+1}\big{)} \stackrel{{ W_{2}}}{{\to}}\big{(}\bar{B},\bar{B}^{0}, \mathrm{M}^{k+1}_{B}\bar{B}+G^{k+1}_{B}\big{)}, \tag{3.12}\] \[\big{(}\widetilde{\Psi},\Theta,\Theta^{k}\big{)} \stackrel{{ W_{2}}}{{\to}}\big{(}\bar{\Psi},Z,\mathrm{M}^{k}_{ \Theta}\,Z+G^{k}_{\Theta}\big{)}, \tag{3.13}\]
_almost surely as \(n,p\to\infty\) with \(n/p\to\delta\). In the above, \(G^{k+1}_{B}\sim\mathcal{N}(0,\mathrm{T}^{k+1}_{B})\) is independent of \(\bar{B}\), and \(G^{k}_{\Theta}\sim\mathcal{N}(0,\mathrm{T}^{k}_{\Theta})\) is independent of \((Z,\bar{\Psi})\)._
The proof of the theorem is given in Appendix A. We provide a brief outline here. We start with an abstract AMP iteration (with vector-valued iterates) for generalized white noise matrices. A state evolution result for the abstract AMP iteration was recently established by Wang et al. [67]. Using induction, we show that the matrix-AMP algorithm in (3.1) can be reduced to the abstract AMP iteration via a suitable change of variables and iteration indices. This allows us to translate the state evolution of the abstract AMP iteration to the matrix-AMP algorithm. One challenge in establishing the reduction is that the abstract AMP iteration is defined for vector-valued iterates, whereas the matrix-AMP algorithm in (3.1) has matrix-valued iterates, with \(L\) columns per iterate. We handle this mismatch by using \(L\) iterations of the abstract AMP to produce each matrix-valued iterate of the matrix-AMP.
Theorem 3.2 can be generalized to characterize the joint distributions of the iterates \(\big{(}B,\widehat{B}^{0},B^{1},\ldots,B^{k+1}\big{)}\) and \(\big{(}\widetilde{\Psi},\Theta,\Theta^{1},\ldots,\Theta^{k}\big{)}\). We omit this as the state evolution recursion becomes cumbersome, but the result can be obtained using essentially the same proof.
_Performance measures._ Theorem 3.2 allows us to compute the limiting values of performance measures, such as the mean-squared error (MSE) and the normalized correlation, via the laws of the random vectors on the RHS of (3.12) and (3.13). Let \(\bar{B}^{k}:=\mathrm{M}^{k}_{B}\bar{B}+G^{k}_{B}\), and recall that \(Z^{k}:=\mathrm{M}^{k}_{\Theta}Z+G^{k}_{\Theta}\). Then, for \(k\geq 1\), Theorem 3.2 and (2.2) together imply that the limiting MSE of the matrix-AMP estimates \(\widehat{B}^{k}\in\mathbb{R}^{p\times L}\) is given by
\[\frac{1}{p}\sum_{j=1}^{p}\|\widehat{B}^{k}_{j,:}-B_{j,:}\|_{2}^{2}\ \stackrel{{ a.s.}}{{\to}}\mathbb{E}\Big{[}\|f_{k}(\bar{B}^{k})-\bar{B}\|_{2}^{2}\Big{]}. \tag{3.14}\]
Similarly, the limiting normalized correlation of the matrix-AMP estimates is
\[\frac{1}{p}\sum_{j=1}^{p}\langle\widehat{B}^{k}_{j,:},B_{j,:}\rangle\ \stackrel{{ a.s.}}{{\to}}\mathbb{E}\Big{[}\langle f_{k}(\bar{B}^{k}), \bar{B}\rangle\Big{]}. \tag{3.15}\]
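In simulations, the empirical counterparts of (3.14) and (3.15) are straightforward to compute; a minimal sketch:

```python
import numpy as np

def empirical_mse_and_correlation(B_hat, B):
    """Empirical versions of the per-item MSE (3.14) and normalized correlation (3.15)."""
    p = B.shape[0]
    mse = np.sum((B_hat - B) ** 2) / p       # (1/p) sum_j ||B_hat_j - B_j||_2^2
    corr = np.sum(B_hat * B) / p             # (1/p) sum_j <B_hat_j, B_j>
    return mse, corr
```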
_Choosing the AMP denoising functions._ Theorem 3.2 shows that the effective noise covariance matrices of the random vectors \(\big{(}\mathrm{M}^{k}_{\Theta}\big{)}^{-1}\,Z^{k}\) and \(\Big{(}\mathrm{M}^{k+1}_{B}\Big{)}^{-1}\,\bar{B}^{k+1}\) are
\[N^{k}_{\Theta}:=\Big{(}\mathrm{M}^{k}_{\Theta}\Big{)}^{-1}\,\mathrm{T}^{k}_{ \Theta}\left(\big{(}\mathrm{M}^{k}_{\Theta}\big{)}^{-1}\right)^{\top}\ \text{and}\ N^{k+1}_{B}:=\Big{(}\mathrm{M}^{k+1}_{B}\Big{)}^{-1}\,\mathrm{T}^{k+ 1}_{B}\left(\big{(}\mathrm{M}^{k+1}_{B}\big{)}^{-1}\right)^{\top}, \tag{3.16}\]
respectively. A reasonable approach is to choose \(f_{k}\) and \(g_{k}\) such that the trace of each effective noise covariance matrix is minimized. Assuming that the signal prior \(P_{\bar{B}}\) and the distribution of auxiliary variables \(P_{\Psi}\) are known, the optimal choices for \(f_{k},g_{k}\) were derived in [61, Section 3.1]. Specifically, it was shown that for \(k\geq 1\), the following statements hold:
1) Given \(\mathrm{M}_{B}^{k}\), \(\mathrm{T}_{B}^{k}\), the quantity \(\mathrm{Tr}(N_{\Theta}^{k})\) is minimized when \(f_{k}=f_{k}^{*}\), where
\[f_{k}^{*}(s)=\mathbb{E}[\bar{B}\mid\mathrm{M}_{B}^{k}\bar{B}+G_{B}^{k}=s], \tag{3.17}\]
where \(G_{B}^{k}\sim\mathcal{N}(0,\mathrm{T}_{B}^{k})\) and \(\bar{B}\sim P_{\bar{B}}\) are independent.
2) Given \(\mathrm{M}_{\Theta}^{k}\), \(\mathrm{T}_{\Theta}^{k}\), the quantity \(\mathrm{Tr}(N_{B}^{k+1})\) is minimized when \(g_{k}=g_{k}^{*}\), where
\[g_{k}^{*}(u,\,y)=\mathrm{Cov}[Z\mid Z^{k}=u]^{-1}\big{(}\mathbb{E}[Z\mid Z^{k }=u,\bar{Y}=y]-\,\mathbb{E}[Z\mid Z^{k}=u]\big{)}. \tag{3.18}\]
Here \((Z,Z^{k})^{\top}\sim\mathcal{N}(0,\Sigma^{k})\) and \(\bar{Y}=q(Z,\bar{\Psi})\), with \(\bar{\Psi}\sim P_{\Psi}\) independent of \(Z\).
## 4 AMP for Pooled Data
We now consider the pooled data problem described in Section 1.1. We use a random dense design matrix \(X\in\mathbb{R}^{n\times p}\), where \(X_{ij}\overset{\text{i.i.d.}}{\sim}\text{Bernoulli}(\alpha)\) for some fixed \(\alpha\in(0,1)\), for \(i\in[n],j\in[p]\). Since the entries of \(X\) have non-zero mean, they need to be recentered and scaled to obtain a generalized white noise matrix \(\widetilde{X}\). Specifically, let
\[\widetilde{X}_{ij}=\frac{(X_{ij}-\alpha)}{\sqrt{n\alpha(1-\alpha)}}\,,\quad i \in[n],\,j\in[p]. \tag{4.1}\]
The entries of \(\widetilde{X}\) are i.i.d., with
\[\widetilde{X}_{ij}=\begin{cases}-\alpha\sqrt{\frac{1}{n\alpha(1-\alpha)}}\text { with probability }1-\alpha\\ (1-\alpha)\sqrt{\frac{1}{n\alpha(1-\alpha)}}\text{ with probability }\alpha,\end{cases} i\in[n],\,j\in[p]. \tag{4.2}\]
It is easy to check that \(\widetilde{X}\) satisfies the conditions in Definition 3.1, with \(S_{ij}=1\), for all \((i,j)\). Hence, \(\widetilde{X}\) is a generalized white noise matrix.
From (4.1), we have
\[X=\alpha\mathbf{1}_{n}\mathbf{1}_{p}^{\top}+\sqrt{n\alpha(1-\alpha)}\, \widetilde{X}. \tag{4.3}\]
Therefore, the test outcome matrix \(Y=XB+\Psi\) in (1.3) can be rewritten as \(Y=\alpha\mathbf{1}_{n}\mathbf{1}_{p}^{\top}B+\sqrt{n\alpha(1-\alpha)}\widetilde {X}B+\Psi\), which gives
\[\frac{Y-\alpha\mathbf{1}_{n}\mathbf{1}_{p}^{\top}B}{\sqrt{n\alpha(1-\alpha)}} =\widetilde{X}B+\frac{\Psi}{\sqrt{n\alpha(1-\alpha)}}. \tag{4.4}\]
Observe that
\[\alpha\mathbf{1}_{n}\mathbf{1}_{p}^{\top}B=\alpha p\,\left(\frac{1}{p} \mathbf{1}_{n}\mathbf{1}_{p}^{\top}B\right)=\alpha p[\hat{\pi},\dots,\hat{\pi }]^{\top},\]
where \(\hat{\pi}\in\mathbb{R}^{L}\) is the vector of proportions defined in (1.1). This allows us to rewrite (4.4) row-wise as follows:
\[\frac{Y_{i,:}-\alpha p\hat{\pi}}{\sqrt{n\alpha(1-\alpha)}}=B^{\top}\widetilde{X }_{i,:}\,+\,\frac{\Psi_{i,:}}{\sqrt{n\alpha(1-\alpha)}},\qquad i\in[n].\]
Defining
\[\widetilde{Y}_{i,:}=\frac{Y_{i,:}-\alpha p\hat{\pi}}{\sqrt{n\alpha(1-\alpha)} }\,,\quad\widetilde{\Psi}_{i,:}=\frac{\Psi_{i,:}}{\sqrt{n\alpha(1-\alpha)}}, \tag{4.5}\]
we have
\[\widetilde{Y}_{i,:}=B^{\top}\widetilde{X}_{i,:}\,+\,\widetilde{\Psi}_{i,:},\; \;\;i\in[n],\quad\text{or}\;\;\widetilde{Y}=\widetilde{X}B\,+\,\widetilde{ \Psi}, \tag{4.6}\]
where \(\widetilde{Y},\widetilde{\Psi}\in\mathbb{R}^{n\times L}\) are matrices whose \(i\)th rows are \(\widetilde{Y}_{i,:}\) and \(\widetilde{\Psi}_{i,:}\). The modified model (4.6) is an instance of the matrix GLM in (1.4) (with \(\Psi=\widetilde{\Psi}=0_{n\times L}\) for the noiseless case).
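As a small illustration, the centering and rescaling in (4.1) and (4.5) take only a few lines; this sketch assumes \(\hat{\pi}\) is available, a point discussed below.

```python
import numpy as np

def center_and_rescale(X, Y, alpha, pi_hat):
    """Map the pooled-data pair (X, Y) to (X_tilde, Y_tilde) as in (4.1) and (4.5)."""
    n, p = X.shape
    scale = np.sqrt(n * alpha * (1 - alpha))
    X_tilde = (X - alpha) / scale                           # generalized white noise design
    Y_tilde = (Y - alpha * p * np.asarray(pi_hat)) / scale  # subtract alpha*p*pi_hat from each row
    return X_tilde, Y_tilde
```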
If the vector of proportions \(\hat{\pi}\) is known, we can compute the modified test outcome matrix \(\widetilde{Y}\) and run the matrix-AMP algorithm in (3.1) using the modified design \(\widetilde{X}\). Theorem 3.2 then directly gives a rigorous asymptotic characterization for the matrix-AMP algorithm. We now describe how the assumptions for Theorem 3.2 can be satisfied for the pooled data problem.
_Model Assumptions._ We require that there exist random variables \(\bar{B}\sim P_{\bar{B}}\) and \(\bar{\Psi}\sim P_{\bar{\Psi}}\) (where \(\bar{B},\bar{\Psi}\in\mathbb{R}^{L}\)) with \(B\stackrel{{ W}}{{\to}}\bar{B}\) and \(\widetilde{\Psi}\stackrel{{ W}}{{\to}}\bar{\Psi}\), respectively. For \(B\), this means that the empirical distribution of its rows converges (in Wasserstein distance of all orders) to a well-defined categorical distribution \(\pi\) on the set \([L]\); equivalently, the vector of proportions \(\hat{\pi}\) in (1.1) converges to \(\pi\). For the limiting noise distribution \(\bar{\Psi}\) to be well-defined, we need each entry of \(\widetilde{\Psi}\) to be of constant order with high probability. (Equivalently, from (4.5), each entry of \(\Psi\) needs to be of order \(\sqrt{p}\) with high probability.) This means that \(\bar{\Psi}\) has to follow a distribution with constant order variance (e.g., any sub-Gaussian distribution). This is consistent with the fact that the entries of \(\widetilde{X}B\) are of constant order with high probability, as shown in Appendix F.
_Verifying Assumption (A1)._ We can initialize the matrix-AMP algorithm with \(\widehat{B}^{0}\in\mathbb{R}^{p\times L}\) whose rows are chosen i.i.d. according to the limiting signal distribution \(\pi\). With this initialization, the initial covariance in (3.11) becomes:
\[\frac{1}{n}\begin{bmatrix}B^{\top}B&B^{\top}\widehat{B}^{0}\\ (\widehat{B}^{0})^{\top}B&(\widehat{B}^{0})^{\top}\widehat{B}^{0}\end{bmatrix} \stackrel{{ a.s.}}{{\to}}\frac{1}{\delta}\begin{bmatrix}\text{ Diag}(\pi)&\pi\pi^{\top}\\ \pi\pi^{\top}&\text{Diag}(\pi)\end{bmatrix},\]
where the convergence holds since \(\hat{\pi}\to\pi\) as \(p\to\infty\), by the assumption above.
_Verifying Assumption (A2)._ The optimal choices for the matrix-AMP denoising functions, given by \(f_{k}^{*}\) and \(g_{k}^{*}\) in (3.17) and (3.18), can be explicitly computed for noiseless pooled data or the noisy setting with Gaussian noise; the expressions are given in (B.6)-(B.8) in Appendix B. It can be verified from the expressions that these functions are continuous, Lipschitz, and satisfy the polynomial growth condition.
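As one concrete illustration (a direct calculation from (3.17) under the categorical prior, not the expressions of Appendix B), the posterior-mean denoiser weights each one-hot vector \(e_{l}\) by its posterior probability, which is proportional to \(\pi_{l}\exp\big(-\tfrac{1}{2}(s-\mathrm{M}_{B}^{k}e_{l})^{\top}(\mathrm{T}_{B}^{k})^{-1}(s-\mathrm{M}_{B}^{k}e_{l})\big)\). A Python sketch, with \(\mathrm{M}_{B}^{k}\) and \(\mathrm{T}_{B}^{k}\) supplied by state evolution:

```python
import numpy as np

def f_star_categorical(S, M, T, pi):
    """Posterior-mean denoiser (3.17) under a categorical prior: each row of S is
    modeled as M e_l + Gaussian(0, T) with l drawn according to pi; the output rows
    are the posterior probabilities of the L categories, i.e., E[B_bar | observation]."""
    L = len(pi)
    T_inv = np.linalg.inv(T)
    log_post = np.empty((S.shape[0], L))
    for l in range(L):
        diff = S - M[:, l]                                   # M[:, l] equals M e_l
        quad = np.einsum("ij,jk,ik->i", diff, T_inv, diff)   # (s - M e_l)^T T^{-1} (s - M e_l)
        log_post[:, l] = np.log(pi[l]) - 0.5 * quad
    log_post -= log_post.max(axis=1, keepdims=True)          # numerical stability
    post = np.exp(log_post)
    return post / post.sum(axis=1, keepdims=True)
```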
_Verifying Assumption (A3)._ Since the entries of \(\widetilde{X}\) are i.i.d. according to (4.2), the matrix \(\sqrt{n}\widetilde{X}\) has i.i.d. sub-Gaussian entries. Using a concentration inequality for the operator norm of sub-Gaussian matrices [63, Theorem 4.4.5] together with the Borel-Cantelli lemma, we obtain that \(\|\widetilde{X}\|_{\text{op}}<C\) almost surely for sufficiently large \(n\). Since the variance profile \(S_{ij}=1\) for all \((i,j)\), the second condition in (A3) is trivially satisfied.
_Knowledge of \(\hat{\pi}\)._ The matrix-AMP algorithm requires knowledge of the vector of proportions \(\hat{\pi}\) to compute \(\widetilde{Y}\), as given in (4.5). While the assumption of knowing \(\hat{\pi}\) may seem strong, it can be easily obtained in the noiseless setting by using an additional (non-random) test, where we include every item. To obtain \(\hat{\pi}\), take the output of this non-random test and divide it by \(p\). For the noisy setting, one can estimate \(\hat{\pi}\), either via a known prior \(\pi\), or by averaging the outcomes of multiple non-random tests including every item. We explore the effect of mismatch in estimating \(\hat{\pi}\) via simulations in the next subsection.
_Comparison with El Alaoui et al. [2]._ El Alaoui et al. [2] proposed an AMP algorithm for the noiseless setting, and characterized its performance via a state evolution, without providing rigorous guarantees. To connect our results with theirs, we prove the following proposition regarding the state evolution recursion.
**Proposition 4.1**: _Under the Bayes-optimal choice of AMP denoising functions given by (3.17)-(3.18), the state evolution recursion in (3.3)-(3.5) is equivalent to the state evolution recursion presented in [2]._
The proposition is proved in Appendix B (Sections B.1-B.3). Furthermore, we show in Section B.4 that the AMP in (3.1) can be obtained from the one in El Alaoui et al. [2] using large-sample approximations that are standard in the AMP literature. Theorem 3.2 and Proposition 4.1 together make the AMP performance guarantees in [2] rigorous.
### Numerical Simulations
We present numerical simulation results for the pooled data model given by (1.3) (or equivalently (4.6)), in both the noiseless and noisy settings. We take \(X_{ij}\stackrel{{\text{i.i.d.}}}{{\sim}}\text{Bernoulli}(\alpha)\) for \((i,j)\in[n]\times[p]\), and the rows of the signal \(B_{j,:}\stackrel{{\text{i.i.d.}}}{{\sim}}\text{Categorical}(\pi)\) for \(j\in[p]\), where \(\pi\) is a vector of probabilities that sum to one. We use \(\alpha=0.5\), since we find that the choice of \(\alpha\) has very little effect on the performance of each algorithm. The rows of the initializer \(\widehat{B}^{0}\in\mathbb{R}^{p\times L}\) are chosen i.i.d. according to the same distribution \(\pi\), independently of the signal. We set the signal dimension \(p=500\) and vary the value of \(n\) in our experiments. In all our plots, curves labeled 'AMP' refer to the empirical performance of the matrix-AMP algorithm, and points labeled 'SE' refer to the theoretical performance of the matrix-AMP algorithm, calculated from the state evolution parameters.
In the noisy setting, we consider Gaussian noise, with \(\Psi_{i,:}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{N}(0,p \sigma^{2}I_{L})\) for \(i\in[n]\). While it may seem unusual to add continuous noise to discrete observations, this still captures the essence of the noisy pooled data problem, and simplifies the implementation of our algorithm. Furthermore, since the noise standard deviation is of order \(\sqrt{p}\), rounding the noise values to the nearest integer does not noticeably affect performance. This noisy model has been previously considered in [53, Section 2.2., Example 3].
The matrix-AMP algorithm in (3.1) is implemented with \(g_{k}=g_{k}^{*}\) and \(f_{k}=f_{k}^{*}\), the optimal choices given by (3.18) and (3.17) respectively - see Appendix D.1 for details of the implementation. The performance in all the plots is measured via the normalized correlation between the matrix-AMP estimate and the signal (see (3.15)). Each point on the plots is obtained from \(10\) independent runs, where in each run, the matrix-AMP algorithm is executed for \(10\) iterations. We report the average and error bars at \(1\) standard deviation of the final iteration. In our implementation, in the final iteration of the matrix-AMP algorithm, we quantize the estimates of the signal \(B\), e.g., a row estimate of \([0.9,0.1]\) would get quantized
to \([1,0]\).
We also compare the matrix-AMP algorithm with two popular methods for compressed sensing: iterative hard thresholding and convex programming. Due to the additional constraints in the pooled data problem, some work is required to extend these algorithms to pooled data. The algorithms are defined as follows.
_Optimization-based methods._ We obtain these from the convex relaxation of the maximum a posteriori (MAP) estimator for pooled data. We assume that the categorical prior \(\pi\) is known, i.e., the probability of each item belonging to category \(l\in[L]\) is \(\pi_{l}\). We first reformulate our variables to ones that are more suitable for optimization formulations. We define
\[Y_{\text{opt}} =\begin{bmatrix}Y_{:,1}\\ \vdots\\ Y_{:,L}\end{bmatrix}\in\mathbb{R}^{nL};\quad B_{\text{opt}}=\begin{bmatrix}B_{:,1}\\ \vdots\\ B_{:,L}\end{bmatrix}\in\{0,1\}^{pL}; \tag{4.7}\] \[X_{\text{opt}} =\text{diag}(X,\ldots,X)\in\{0,1\}^{nL\times pL};\quad C_{\text{ opt}}=[I_{p},\ldots,I_{p}]\in\{0,1\}^{p\times pL}.\]
Write \(I_{pL}^{(pl)}\in\mathbb{R}^{p\times pL}\) for the sub-matrix of \(I_{pL}\) obtained by taking the \((p(l-1)+1)\)-th to \(pl\)-th rows of \(I_{pL}\). The convex program for the noisy setting (CVX), with noise \(\Psi_{i,:}\stackrel{{\text{i.i.d.}}}{{\sim}}\mathcal{N}(0,p\sigma^{2}I_{L})\), is
minimize \[\frac{1}{2p\sigma^{2}}\|Y_{\text{opt}}-X_{\text{opt}}B_{\text{ opt}}\|_{2}^{2}-\sum_{l=1}^{L}(\log\pi_{l})1_{p}^{\top}I_{pL}^{(pl)}B_{\text{ opt}}\quad\text{(w.r.t.}\ \ B_{\text{opt}})\] subject to \[0_{p}\leq B_{\text{opt}}\leq 1_{p},\text{ and }C_{\text{opt}}B_{ \text{opt}}=1_{p}.\]
For the noiseless pooled data setting, the convex program above can be simplified to the following linear program (LP):
minimize \[-\sum_{l=1}^{L}(\log\pi_{l})1_{p}^{\top}I_{pL}^{(pl)}B_{\text{ opt}}\quad\text{(w.r.t.}\ \ B_{\text{opt}})\] subject to \[0_{p}\leq B_{\text{opt}}\leq 1_{p},\,Y_{\text{opt}}=X_{\text{ opt}}B_{\text{opt}},\,C_{\text{opt}}B_{\text{opt}}=1_{p}.\]
The derivations of the convex and linear programs are given in Appendix E.
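A minimal sketch of the noiseless LP, assuming the cvxpy package and working directly with the \(p\times L\) matrix variable (equivalent to the stacked formulation above):

```python
import numpy as np
import cvxpy as cp

def pooled_data_lp(X, Y, pi):
    """Noiseless LP for pooled data: maximize the log-prior subject to data-consistency
    and row-sum constraints, then round each row to the most likely category."""
    p = X.shape[1]
    L = Y.shape[1]
    B = cp.Variable((p, L))
    objective = cp.Minimize(-cp.sum(B @ np.log(np.asarray(pi, dtype=float))))
    constraints = [B >= 0, B <= 1, X @ B == Y, cp.sum(B, axis=1) == 1]
    cp.Problem(objective, constraints).solve()
    return np.eye(L)[np.argmax(B.value, axis=1)]
```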
_Iterative hard thresholding._ We extend the IHT algorithm [26, Section 3.3] for compressed sensing to pooled data. The algorithm is as follows:
* **Input.**\(\widetilde{Y}\), \(\widetilde{X}\), and the sparsity levels of the columns of \(B\), namely \(\{\pi_{1}^{*}p,\ldots,\pi_{L}^{*}p\}\).
* **Initialization.**\(B^{0}=0\in\mathbb{R}^{p\times L}\).
* For iteration \(k=1,\ldots,k_{\text{max}}\): Execute \[B^{k+1}=H_{\{\pi_{1}^{*}p,\ldots,\pi_{L}^{*}p\}}(B^{k}+\widetilde{X}^{\top}( \widetilde{Y}-\widetilde{X}B^{k})),\] where \(H_{\{\pi_{1}^{*}p,\ldots,\pi_{L}^{*}p\}}(\cdot)\) is the hard thresholding function adapted to matrix signals, and is defined to perform the following:
* Repeatedly pick the largest remaining entry of the input matrix and make a hard decision on the corresponding (item, category) pair. Once an item is allocated, it is removed from further consideration, and once a category \(l\) has been allocated \(\pi_{l}^{*}p\) items, that category is also removed from further consideration.
* Return \(B^{k_{\max}}\).
We found that using \(\widetilde{Y}\) and \(\widetilde{X}\) instead of \(Y\) and \(X\) leads to better performance.
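A Python sketch of this procedure (illustrative only; `budgets` holds the per-category item counts \(\pi_{1}^{*}p,\ldots,\pi_{L}^{*}p\)):

```python
import numpy as np

def hard_threshold(M, budgets):
    """Greedy allocation: repeatedly pick the largest remaining entry of M, assign the
    corresponding item to that category, and respect the per-category budgets."""
    p, L = M.shape
    B = np.zeros((p, L))
    remaining = np.array(budgets, dtype=int)
    assigned = np.zeros(p, dtype=bool)
    for idx in np.argsort(-M, axis=None):    # flattened entries in decreasing order
        j, l = divmod(idx, L)
        if assigned[j] or remaining[l] == 0:
            continue
        B[j, l] = 1
        assigned[j] = True
        remaining[l] -= 1
        if assigned.all():
            break
    return B

def iht_pooled_data(X_tilde, Y_tilde, budgets, num_iters=50):
    """Iterative hard thresholding for pooled data, run on the centered data."""
    p = X_tilde.shape[1]
    L = Y_tilde.shape[1]
    B = np.zeros((p, L))
    for _ in range(num_iters):
        B = hard_threshold(B + X_tilde.T @ (Y_tilde - X_tilde @ B), budgets)
    return B
```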
_Simulation results._ Figure 1 shows the performance of the matrix-AMP algorithm in the noiseless setting, for various \(\pi\). The normalized correlation is plotted as a function of the sampling ratio \(\delta=n/p\), for different categorical priors. The state evolution predictions closely match the performance of the matrix-AMP algorithm for practical values of \(n\) and \(p\), validating the result of Theorem 3.2. As expected, the correlation improves with increasing \(\delta\). Furthermore, we observe that for \(L=2\), the more non-uniform the \(\pi\), the better the performance of the matrix-AMP algorithm. However, this is not the case for \(L=3\). Figure 2 shows how the matrix-AMP algorithm compares with LP and IHT for different noise levels. We observe that the optimization-based methods (LP for noiseless, CVX for noisy) outperform the matrix-AMP algorithm, indicating the suboptimality of AMP in this setting. However, we note that there are currently no theoretical guarantees for LP or CVX applied to pooled data. Figure 3 shows the performance curves of the matrix-AMP algorithm for different values of the noise level \(\sigma\); as expected, correlation improves as \(\sigma\) decreases.
So far, we have assumed that the matrix-AMP algorithm has access to the empirical category proportions \(\hat{\pi}\). We now study the performance when we do not know \(\hat{\pi}\) exactly, but only have an estimate of it. For simplicity, we consider \(L=2\) with estimated proportions \(\pi_{\mathrm{est}}=[\pi_{1}^{*}+\epsilon,\pi_{2}^{*}-\epsilon]\), for some \(\epsilon>0\). Figure 4 shows the performance for \(\epsilon=0.01\) and \(\epsilon=0.05\). We observe that the performance of the matrix-AMP algorithm is very sensitive to the accuracy of the estimate \(\pi_{\mathrm{est}}\), i.e., a small increase in \(\epsilon\) can substantially degrade the performance. The underlying reason is that AMP requires each entry of \(\widetilde{Y}\) to be of constant order (as \(p\) grows) with high probability. This is true if \(\hat{\pi}\) is known exactly; we show in Appendix F that \(\mathbb{E}[\widetilde{Y}_{il}]=0\) and \(\mathrm{Var}[\widetilde{Y}_{il}]=\frac{p\pi_{l}}{n}=\Theta(1)\), for \(i\in[n],l\in[L]\). However, a constant shift of \(\epsilon\) in the entries of \(\hat{\pi}\) causes the entries of \(\widetilde{Y}\) to jump from \(\Theta(1)\) to \(\Theta(\sqrt{p})\) - see Appendix F for details. This indicates that it is worth performing additional (non-random) test(s) to obtain an accurate estimate of \(\hat{\pi}\) before running the matrix-AMP algorithm.
Figure 1: AMP for pooled data: normalized correlation vs. \(\delta\), with \(\sigma=0\) and varying \(\hat{\pi}\).
## 5 AMP for Quantitative Group Testing
Recall that Quantitative Group Testing (QGT) is a special case of the pooled data problem with \(L=2\), where items are either defective or non-defective. As mentioned in Section 1.1, in QGT the signal is often represented by a vector \(\beta\in\{0,1\}^{p}\), where \(1\) corresponds to a 'defective' item and \(0\) corresponds to a non-defective item. Using a vector instead of a \(p\times 2\) matrix to represent the QGT signal has two major benefits: (i) it allows us to present guarantees for the false positive rate (FPR) and false negative rate (FNR) in a simpler way, and (ii) it provides a simpler and more efficient AMP algorithm where matrix operations can be omitted.
Figure 2: AMP vs. other algorithms for pooled data: normalized correlation vs. \(\delta\), with \(L=3\) and \(\hat{\pi}=[1/3,1/3,1/3]\). The plots are similar for the case of non-uniform priors.
Figure 3: AMP for pooled data: normalized correlation vs. \(\delta\) for varying \(\sigma\) with uniform prior \(\pi\). The plots are similar for the case of non-uniform priors.
We use \(\beta\) (instead of \(B\)) for the signal vector to avoid confusion between the pooled data problem and QGT. The design matrix is the same as that for pooled data, i.e., \(X_{ij}\sim_{\text{i.i.d.}}\) Bernoulli(\(\alpha\)) where \(\alpha\in(0,1)\). The vector of test outcomes \(Y\in\mathbb{R}^{n}\) is given by
\[Y=X\beta\,+\,\Psi, \tag{5.1}\]
where \(\Psi\in\mathbb{R}^{n}\) is the additive noise. Recall that \(\hat{\pi}\) is the proportion of defective items in the population. Substituting the decomposition of \(X\) in (4.3) into (5.1) and rearranging, we get
\[\frac{Y-\alpha\mathbf{1}_{n}\mathbf{1}_{p}^{\top}\beta}{\sqrt{n\alpha(1-\alpha )}}=\widetilde{X}\beta+\frac{\Psi}{\sqrt{n\alpha(1-\alpha)}}.\]
Since \(\alpha\mathbf{1}_{n}\mathbf{1}_{p}^{\top}\beta=\alpha p\left(\frac{1}{p} \mathbf{1}_{n}\mathbf{1}_{p}^{\top}\beta\right)=\alpha p[\hat{\pi},\dots,\hat {\pi}]^{\top}\), we can rewrite the noisy model as
\[\frac{Y_{i}-\alpha p\hat{\pi}}{\sqrt{n\alpha(1-\alpha)}}=\widetilde{X}_{i,:}^{\top}\beta+\frac{\Psi_{i}}{\sqrt{n\alpha(1-\alpha)}}, \tag{5.2}\]
which can be further rewritten as
\[\widetilde{Y}_{i}=\widetilde{X}_{i,:}^{\top}\beta+\widetilde{\Psi}_{i},\quad i\in[n],\quad\text{ or }\quad\widetilde{Y}=\widetilde{X}\beta+\widetilde{\Psi}. \tag{5.3}\]
The noiseless QGT model can be obtained by setting \(\Psi=0_{n}\).
### Algorithm
As in the pooled data problem, we assume knowledge of \(\hat{\pi}\) and apply the AMP algorithm with the modified data \(\widetilde{X}\) and \(\widetilde{Y}\). The AMP algorithm in (3.1) is used, noting that the iterates are vectors. To differentiate the AMP algorithm for QGT from the pooled
data setting, we change notation, and replace \(B,B^{k}\), and \(\widehat{B}^{k}\), with \(\beta,\beta^{k}\), and \(\hat{\beta}^{k}\), respectively, for \(k\geq 1\). The notation for state evolution parameters is also changed to emphasize that in QGT, they are scalars. We let \(\mu_{\beta}^{k}:=\mathrm{M}_{B}^{k}\), \((\sigma_{\beta}^{k})^{2}:=\mathrm{T}_{B}^{k}\), \(\mu_{\Theta}^{k}:=\mathrm{M}_{\Theta}^{k}\), and \((\sigma_{\Theta}^{k})^{2}:=\mathrm{T}_{\Theta}^{k}\).
We assume that the empirical distributions of \(\beta\in\mathbb{R}^{p}\) and \(\widetilde{\Psi}\in\mathbb{R}^{n}\) converge to well-defined limits as \(p,n\to\infty\). Specifically, assume that \(\beta\overset{W}{\to}\bar{\beta}\) and \(\widetilde{\Psi}\overset{W}{\to}\bar{\Psi}\). Here \(\bar{\beta}\) is a binary random variable whose law represents the limiting proportion of \(1\)s in \(\beta\).
The AMP algorithm is initialized with a random \(\hat{\beta}^{0}\) whose components are generated i.i.d. according to the law of \(\bar{\beta}\). Then the initial covariance \(\Sigma^{0}\) in (3.11) can be computed as:
\[\frac{1}{n}\begin{bmatrix}\beta^{\top}\beta&\beta^{\top}\hat{\beta}^{0}\\ (\hat{\beta}^{0})^{\top}\beta&(\hat{\beta}^{0})^{\top}\hat{\beta}^{0}\end{bmatrix} \overset{a.s.}{\to}\ \frac{1}{\delta}\begin{bmatrix}\mathbb{E}\{\bar{\beta}^{2}\}&(\mathbb{E}\{\bar{\beta}\})^{2}\\ (\mathbb{E}\{\bar{\beta}\})^{2}&\mathbb{E}\{\bar{\beta}^{2}\}\end{bmatrix}=:\Sigma^{0}.\]
With these assumptions, Theorem 3.2 directly gives the following state evolution result.
**Theorem 5.1**.: _Consider the AMP algorithm in (3.1) for the QGT model in (5.3), with the notational changes, assumptions, and initialization described above. Assume that the denoising functions \(f_{k+1},g_{k}\) used in the AMP algorithm satisfy Assumption **(A2)** in Section 3.1. Then for each \(k\geq 0\),_
\[\big{(}\beta,\beta^{0},\beta^{1},\ldots,\beta^{k+1}\big{)} \overset{W_{2}}{\to}\big{(}\bar{\beta},\bar{\beta}^{0},\mu_{\beta}^{1}\bar{\beta}+G_{\beta}^{1},\ldots,\mu_{\beta}^{k+1}\bar{\beta}+G_{\beta}^{k+1}\big{)}, \tag{5.4}\] \[\big{(}\widetilde{\Psi},\Theta,\Theta^{0},\ldots,\Theta^{k}\big{)} \overset{W_{2}}{\to}\big{(}\bar{\Psi},Z,\mu_{\Theta}^{0}\,Z+G_{\Theta}^{0},\ldots,\mu_{\Theta}^{k}\,Z+G_{\Theta}^{k}\big{)}, \tag{5.5}\]
_almost surely as \(n,p\to\infty\) with \(n/p\to\delta\). Here \(G_{\beta}^{k+1}\sim\mathcal{N}\big{(}0,\big{(}\sigma_{\beta}^{k+1}\big{)}^{2} \big{)}\) is independent of \(\bar{\beta}\), and \(G_{\Theta}^{k}\sim\mathcal{N}\big{(}0,\big{(}\sigma_{\Theta}^{k}\big{)}^{2} \big{)}\) is independent of \((Z,\bar{\Psi})\)._
The optimal choices for the AMP denoising functions, given by \(f_{k}^{*}\) and \(g_{k}^{*}\) in (3.17) and (3.18), can be explicitly computed; see Appendix D.2. From the expressions, it is easy to verify that these functions satisfy Assumption (A2), as required by the theorem.
_Performance measures._ Theorem 5.1 allows us to compute the limiting values of performance measures such as the normalized squared correlation between the estimate \(\hat{\beta}^{k}\) and the signal \(\beta\). We have
\[\frac{\langle\hat{\beta}^{k},\beta\rangle^{2}}{\|\hat{\beta}^{k}\|_{2}^{2}\| \beta\|_{2}^{2}}\overset{a.s.}{\to}\frac{(\mathbb{E}[f_{k}(\mu_{\beta}^{k} \bar{\beta}+G_{\beta}^{k})\cdot\bar{\beta}])^{2}}{\mathbb{E}[f_{k}(\mu_{\beta}^ {k}\bar{\beta}+G_{\beta}^{k})^{2}]\cdot\mathbb{E}[\bar{\beta}^{2}]}. \tag{5.6}\]
In practical applications of QGT, we might want to understand the false positive and false negative rates (FPR and FNR) separately, or weigh one more than the other based on the practical needs. We proceed to investigate the FPR and FNR of AMP estimates for QGT. To get a final estimate in the signal domain \(\{0,1\}^{p}\), we can choose the denoiser \(f_{k}\) in the final iteration \(k=K\) to output a hard decision (whereas the \(f_{k}\) in earlier iterations \(1,\ldots,K-1\) will have no such restriction). Theorem 5.1 guarantees that the empirical distribution of \(\beta^{K}\) converges to the law of \(\mu_{\beta}^{K}\bar{\beta}+G_{\beta}^{K}\). Therefore, for some chosen threshold \(\zeta>0\), we define
\[f_{K}(\beta^{K})=\mathds{1}\bigg{\{}\frac{\beta^{K}}{\mu_{\beta}^{K}}>\zeta \bigg{\}}, \tag{5.7}\]
where the indicator function is applied component-wise to \(\beta^{K}\). That is, the large entries of the input \(\beta^{K}\) are declared to be one (i.e., defective) and small entries of \(\beta^{K}\) to be zero (i.e., non-defective). Based on the above function, let us denote the estimated defective set by
\[\widehat{\mathcal{S}}=\bigg{\{}j:\frac{\beta^{K}_{j}}{\mu^{K}_{\beta}}>\zeta\bigg{\}}. \tag{5.8}\]
The FPR and the FNR are defined as follows:
\[\text{FPR}=\frac{\sum_{j=1}^{p}\mathds{1}\{\beta_{j}=0\cap j\in\widehat{\mathcal{S}}\}}{p-\sum_{j=1}^{p}\beta_{j}},\quad\text{ and }\quad\text{FNR}=\frac{\sum_{j=1}^{p}\mathds{1}\{\beta_{j}=1\cap j\notin\widehat{\mathcal{S}}\}}{\sum_{j=1}^{p}\beta_{j}}. \tag{5.9}\]
**Theorem 5.2**. _Under the same assumptions as Theorem 5.1, applying the thresholding function in (5.7) in the final iteration \(K>1\), we have_
\[\text{FPR}\stackrel{{\text{a.s.}}}{{\rightarrow}}1-\Phi\bigg{(}\frac{\mu^{K}_{\beta}}{\sigma^{K}_{\beta}}\zeta\bigg{)}\quad\text{ and }\quad\text{FNR}\stackrel{{\text{a.s.}}}{{\rightarrow}}1-\Phi\bigg{(}\frac{\mu^{K}_{\beta}}{\sigma^{K}_{\beta}}(1-\zeta)\bigg{)}, \tag{5.10}\]
where \(\Phi\) is the cumulative distribution function of the standard Gaussian distribution.
The proof of this result is given in Appendix C.
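As a small illustration, the limiting rates in (5.10) can be evaluated directly from the final state evolution parameters (a sketch assuming SciPy is available); sweeping the threshold \(\zeta\) over \([0,1]\) then traces out an FPR vs. FNR trade-off curve.

```python
from scipy.stats import norm

def limiting_fpr_fnr(mu_K, sigma_K, zeta):
    """Limiting FPR and FNR from (5.10), given mu_beta^K, sigma_beta^K and threshold zeta."""
    ratio = mu_K / sigma_K
    fpr = 1.0 - norm.cdf(ratio * zeta)
    fnr = 1.0 - norm.cdf(ratio * (1.0 - zeta))
    return fpr, fnr
```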
### Numerical Simulations
We present numerical simulation results for the QGT model in (5.1) (and equivalently (5.3)). We take \(X_{ij}\stackrel{{\text{i.i.d.}}}{{\sim}}\text{Bernoulli}(0.5)\) for \((i,j)\in[n]\times[p]\) and \(\beta_{j}\stackrel{{\text{i.i.d.}}}{{\sim}}\text{Bernoulli}(\pi)\) for \(\pi\in(0,1)\) and \(j\in[p]\); the initializer \(\hat{\beta}^{0}\in\mathbb{R}^{p}\) is chosen randomly according to the same distribution as the signal vector \(\beta\), but independent of it. For all our plots, the AMP algorithm in (3.1) is implemented with \(g_{k}=g_{k}^{*}\) and \(f_{k}=f_{k}^{*}\), with \(f_{K}\) in (5.7) used in the final iteration \(K\). The optimal choices \(g_{k}^{*}\) and \(f_{k}^{*}\) are given by (3.18) and (3.17); see Appendix D.2 for details of the implementation. The performance in all the plots is either measured via the normalized squared correlation between the AMP estimate and the signal (see (5.6)) or via the FPR and FNR (see (5.9)). We set the number of items to be \(p=500\) and vary the value of the number of tests \(n\leq p\) in our experiments. Each point on the plots is obtained from 10 independent runs, where in each run, the AMP algorithm is executed for 10 iterations. In the normalized squared correlation plots, we also report the average and error bars at 1 standard deviation of the final iteration.
Similar to the AMP algorithm, we can obtain an FPR vs. FNR trade-off curve for LP by using a thresholding function, with a varying threshold, to get the estimate of the signal. Specifically, after running the linear program to obtain \(\beta^{\text{(lp)}}\), we can set \(\hat{\beta}_{j}=\mathds{1}\big{\{}\beta_{j}^{\text{(lp)}}>\zeta\big{\}}\). Several other QGT algorithms have been proposed (some examples are [35, 39, 40, 23]) for the noiseless sublinear category regime. We omit comparisons with them because they do not appear to offer a simple mechanism for controlling the trade-off between FPR and FNR. Figure 6 shows that LP outperforms the AMP algorithm for larger values of \(\delta\): after \(\delta=0.25\) for \(\pi=0.1\), and after around \(\delta=0.55\) for \(\pi=0.3\). Figure 7 shows the FPR vs. FNR tradeoff of the AMP algorithm and LP for \(\pi=0.1\) and \(\pi=0.3\), with sampling ratio \(\delta=0.2\). These plots indicate that the AMP algorithm is better than LP when the sampling ratio \(\delta\) is small and the defective probability \(\pi\) is large.
For noisy QGT, we consider uniform noise (instead of the Gaussian noise used for the pooled data simulations). Specifically, \(\Psi_{i}\stackrel{{\text{i.i.d.}}}{{\sim}}\text{Uniform}[-\lambda\sqrt{p},\lambda\sqrt{p}]\) for some constant \(\lambda\). After rescaling as in (5.3), we get \(\widetilde{\Psi}_{i}=\Psi_{i}/\sqrt{n\alpha(1-\alpha)}\), whose empirical distribution converges to \(\bar{\Psi}\sim\text{Uniform}[-\tilde{\lambda},\tilde{\lambda}]\), where \(\tilde{\lambda}=\lambda/\sqrt{\delta\alpha(1-\alpha)}\). As expected, Figure 8 shows that the performance of the AMP algorithm improves as the noise level \(\lambda\) decreases.
Figure 7: QGT, FPR vs. FNR: \(\alpha=0.5\), \(\delta=0.2\), \(p=500\), and threshold \(\zeta\in\{0,0.1,\ldots,1.0\}\) for both AMP and LP.
Our benchmark for AMP will be a convex programming estimator that is a variant of basis pursuit denoising (BPDN) [13]. The convex program for BPDN can be written as
\[\text{minimize} \|\beta\|_{1}\] subject to \[\|y-X\beta\|_{\infty}\leq\lambda\sqrt{p},\text{ and }0\leq\beta_{j} \leq 1\text{ for }j\in[p],\]
where the \(\ell_{\infty}\)-constraint is tailored to uniform noise. This BPDN-type algorithm is most commonly used in noisy compressed sensing. Similar to the AMP algorithm, we can obtain an FPR vs. FNR trade-off curve by using a thresholding function, with a varying threshold. Specifically, after running the convex program to obtain \(\beta^{\text{(bpdn)}}\), we can set \(\hat{\beta}_{j}=\mathds{1}\big{\{}\beta^{\text{(bpdn)}}_{j}>\zeta\big{\}}\). Figure 9 shows that the AMP algorithm outperforms BPDN for all values of \(\delta\), and the performance gap is larger for smaller \(\delta\). Figure 10 backs this up, showing that the FPR vs. FNR curve for the AMP algorithm generally lies below that for BPDN, and the gap between the curves increases when \(\pi\) is increased from \(0.1\) to \(0.3\).
## 6 Discussion and Future Directions
In this paper, we have obtained rigorous performance guarantees for the AMP algorithm applied to pooled data decoding and QGT, with i.i.d. Bernoulli design matrices. We expect that our analysis, based on the universality result of [67], can be extended to Bernoulli designs with additional structure (such as spatial coupling) that might help improve performance. The numerical results in Section 4.1 show that the convex programming estimator outperforms the AMP algorithm for pooled data with \(L=3\). This is surprising, since it has been shown that for linear regression with i.i.d. Gaussian designs (and vector-valued signal), AMP with Bayes-optimal denoisers is superior to convex procedures for a large class of priors, and moreover, conjectured to be optimal among polynomial-time algorithms [12]. Obtaining rigorous theoretical guarantees for the convex pooled data estimator is an interesting direction for future work.
**AMP for Boolean Group Testing.** Recall that in noiseless BGT, the outcome of test \(i\in[n]\) is \(Y_{i}=\mathds{1}\{X_{i,:}^{\top}\beta>0\}\). As in QGT, the signal \(\beta\in\mathbb{R}^{p}\) is a binary vector, with ones in the
entries corresponding to defective items. In the linear regime, where the number of defectives \(d\) is proportional to \(p\), it is known that individual testing is optimal for _exact_ recovery [4, 8]. On the other hand, precisely characterizing the performance under an approximate recovery criterion, specifically the trade-off between FPR and FNR in the linear regime, is an open question [60]. The similarity between BGT and QGT suggests that the AMP analysis for the latter can be generalized to provide rigorous guarantees for BGT with i.i.d. Bernoulli matrices, similar to those of Theorem 5.1 for QGT. However, this is not the case. We briefly describe the challenges in extending our results to BGT.
Consider noiseless BGT with an i.i.d. Bernoulli(\(\alpha\)) design matrix \(X\). Using the decomposition in (4.1), we can rewrite (1.5) as the following re-centered and rescaled model:
\[Y_{i}=\mathds{1}\left\{\widetilde{X}_{i,:}^{\top}\beta>\frac{-\alpha p\hat{ \pi}}{\sqrt{n\alpha(1-\alpha)}}\right\}=q(\widetilde{X}_{i,:}^{\top}\beta).\]
With \(\widetilde{Y}_{i}:=Y_{i}\), this is a special case of the matrix-GLM model in (1.4). For each test in BGT to be informative (i.e., have a non-vanishing probability of being either positive or negative), we need \(\alpha=\Theta\big{(}\frac{1}{p}\big{)}\), since the number of defectives is a constant fraction of \(p\)[6, Section 2.1]. However, with this choice of \(\alpha\), the re-centered and rescaled matrix \(\widetilde{X}\) does not satisfy the second condition in Definition 3.1, which is required for rigorous AMP guarantees. In contrast, for QGT, the choice of \(\alpha=\Theta(1)\) both leads to informative tests and gives an \(\widetilde{X}\) satisfying all the conditions in Definition 3.1. We can attempt to overcome this issue by choosing \(\alpha=\Theta\big{(}\frac{\log n}{n}\big{)}\), which ensures that \(\widetilde{X}\) satisifes Definition 3.1, but this choice leads to uninformative tests. Intuitively, when \(\alpha=\Theta\big{(}\frac{\log n}{n}\big{)}\), the average number of defective items in each test is of order \(\frac{p\log n}{n}=\Theta(\log n)\) (where we used \(n=\Theta(p)\)), causing the test outcome to be positive with probability tending to \(1\). That is, each individual test provides almost no information as \(p\) grows large.
A potential solution would be to shift away from the linear regime where the number of defectives \(d=\Theta(p)\), and instead apply the AMP algorithm to the mildly sublinear regime \(d=\frac{p}{\log p}\). This would lead to each test being informative with \(\alpha=\Theta\big{(}\frac{\log n}{n}\big{)}\), but does not satisfy the AMP assumption that requires the empirical distribution of the signal \(\beta\) to converge to a well-defined limiting distribution. As a result, we would need to develop a new AMP algorithm that works for signals with sublinear sparsity - we leave this as an open problem.
|
2309.14619 | Field-induced quantum phase in a frustrated zigzag-square lattice | This study presents the experimental realization of a spin-1/2 zigzag-square
lattice in a verdazyl-based complex, namely
($m$-Py-V-2,6-F$_2$)$[$Cu(hfac)$_2]$. Molecular orbital calculations suggest
the presence of five types of frustrated exchange couplings. Our observations
reveal an incremental increase in the magnetization curve beyond a critical
field, signifying a phase transition from the antiferromagnetic ordered state
to a quantum state characterized by a 1/2 plateau. This intriguing behavior
arises from the effective stabilization of a zigzag chain by the external
fields. These results provide evidence for field-induced dimensional reduction
in a zigzag-square lattice attributed to the effects of frustration. | Hironori Yamaguchi, Kazutoshi Shimamura, Yasuo Yoshida, Akira Matsuo, Koichi Kindo, Kiichi Nakano, Satoshi Morota, Yuko Hosokoshi, Takanori Kida, Yoshiki Iwasaki, Seiya Shimono, Koji Araki, Masayuki Hagiwara | 2023-09-26T02:19:19Z | http://arxiv.org/abs/2309.14619v1 | # Field-induced quantum phase in a frustrated zigzag-square lattice
###### Abstract
This study presents the experimental realization of a spin-1/2 zigzag-square lattice in a verdazyl-based complex, namely (\(m\)-Py-V-2,6-F\({}_{2}\))[Cu(hfac)\({}_{2}\)]. Molecular orbital calculations suggest the presence of five types of frustrated exchange couplings. Our observations reveal an incremental increase in the magnetization curve beyond a critical field, signifying a phase transition from the antiferromagnetic ordered state to a quantum state characterized by a 1/2 plateau. This intriguing behavior arises from the effective stabilization of a zigzag chain by the external fields. These results provide evidence for field-induced dimensional reduction in a zigzag-square lattice attributed to the effects of frustration.
pacs: 75.10.Jm
[MISSING_PAGE_POST]
frustration and quantum fluctuations within this effective zigzag chain disrupts the conventional ordered state, demonstrating the emergence of a field-induced quantum phase in a zigzag-square lattice.
We synthesized (\(m\)-Py-V-2,6-F\({}_{2}\))[Cu(hfac)\({}_{2}\)] by initially preparing \(m\)-Py-V-2,6-F\({}_{2}\) using the conventional procedure [32]. The subsequent synthesis of (\(m\)-Py-V-2,6-F\({}_{2}\))[Cu(hfac)\({}_{2}\)] was accomplished following a previously reported procedure for verdazyl-based complexes [33; 34; 35; 36]. Dark-green crystals of (\(m\)-Py-V-2,6-F\({}_{2}\))[Cu(hfac)\({}_{2}\)] were obtained through recrystallization from a mixed solvent of CH\({}_{2}\)Cl\({}_{2}\) and \(n\)-heptane. Single crystal X-ray diffraction was performed using a Rigaku XtaLAB Synergy-S instrument. To measure the magnetic susceptibility, we utilized a commercial SQUID magnetometer (MPMS, Quantum Design) in conjunction with a handmade \({}^{3}\)He refrigerator, enabling measurements down to 0.59 K [37; 38]. The experimental results were corrected by considering the diamagnetic contributions calculated using Pascal's method. For specific heat measurements, a commercial calorimeter (PPMS, Quantum Design) based on a thermal relaxation method was employed. High-field magnetization measurements were conducted using a non-destructive pulse magnet under pulsed magnetic fields. All experiments were performed using small, randomly oriented single crystals with some polycrystalline samples.
The molecular structure of (\(m\)-Py-V-2,6-F\({}_{2}\))[Cu(hfac)\({}_{2}\)] is depicted in Fig. 1(a), where the Cu atom is coordinated by a radical, resulting in a 5-coordinate environment [39]. The spin of \(m\)-Py-V-2,6-F\({}_{2}\) and Cu\({}^{2+}\) is 1/2. The crystallographic parameters at 100 K are as follows: triclinic, space group \(P\bar{1}\), \(a\) = 9.6769(4) A, \(b\) = 9.9412(4) A, \(c\) = 16.7749(8) A, \(\alpha\) = 103.868(4)\({}^{\circ}\), \(\beta\) = 91.103(4)\({}^{\circ}\), \(\gamma\) = 90.358(4)\({}^{\circ}\), \(V\) = 1566.32(12) A\({}^{3}\), \(Z\) = 2, \(R\) = 0.0539, and \(R_{\text{w}}\) = 0.1469. MO calculations revealed three predominant intermolecular interactions between the radicals, as shown in Fig. 1(b) [40]. These interactions are quantified as \(J_{\text{V1}}/k_{\text{B}}\) = 3.4 K, \(J_{\text{V2}}/k_{\text{B}}\) = \(-\)1.7 K, and \(J_{\text{V3}}/k_{\text{B}}\) = \(-\)1.2 K, which are defined within the Heisenberg spin Hamiltonian, given by \(\mathcal{H}\) = \(J_{n}\)\(\sum_{<i,j>}\)\(\mathcal{S}_{i}\)\(\cdot\)\(\mathcal{S}_{j}\). The resulting spin-lattice corresponds to a spin-1/2 zigzag chain composed of a F-AF alternating chain, with F next-nearest-neighbor interactions. Additionally, we found that there is not only intramolecular coupling but also a close contact between the radical and copper atom [39], forming a square unit, as depicted in Fig. 1(c). The MO calculation indicated that the intramolecular and intermolecular exchange interactions, \(J_{\text{Cu1}}\) and \(J_{\text{Cu2}}\), are F and AF, respectively. Since the MO estimates tend to overestimate the interactions between verdazyl radicals and transition metals [35; 36], it is difficult to evaluate their absolute values. Consequently, assuming all expected interactions, the zigzag chains formed by \(J_{\text{V1}}\), \(J_{\text{V2}}\), and \(J_{\text{V3}}\) are coupled via a square unit formed by \(J_{\text{Cu1}}\) and \(J_{\text{Cu2}}\), resulting in a spin-1/2 zigzag-square lattice, as illustrated in Fig. 1(d).
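As an illustration of the spin Hamiltonian defined above, the following Python sketch exactly diagonalizes a small 6-site zigzag fragment built from the MO-estimated couplings \(J_{\rm V1}\), \(J_{\rm V2}\), and \(J_{\rm V3}\); the cluster geometry is a toy choice and does not include the \(J_{\rm Cu1}\), \(J_{\rm Cu2}\) square units.

```python
import numpy as np

# Minimal exact-diagonalization sketch of H = sum_n J_n sum_<i,j> S_i.S_j for a
# toy open 6-site zigzag fragment, using the MO-estimated couplings quoted above
# (J_V1/kB = 3.4 K AF, J_V2/kB = -1.7 K F, J_V3/kB = -1.2 K F); this is an
# illustrative cluster, not the full zigzag-square lattice.
sx = np.array([[0, 0.5], [0.5, 0]])
sy = np.array([[0, -0.5j], [0.5j, 0]])
sz = np.array([[0.5, 0], [0, -0.5]])

def two_site(op_a, op_b, i, j, n_sites):
    """Embed op_a (site i) * op_b (site j) into the 2**n_sites Hilbert space."""
    ops = [np.eye(2)] * n_sites
    ops[i], ops[j] = op_a, op_b
    full = ops[0]
    for o in ops[1:]:
        full = np.kron(full, o)
    return full

def heisenberg(bonds, n_sites):
    dim = 2 ** n_sites
    H = np.zeros((dim, dim), dtype=complex)
    for (i, j, J) in bonds:
        for s in (sx, sy, sz):
            H += J * two_site(s, s, i, j, n_sites)
    return H

# Nearest-neighbour bonds alternate J_V1/J_V2 along the chain; J_V3 couples
# next-nearest neighbours (all couplings in kelvin).
bonds = [(0, 1, 3.4), (1, 2, -1.7), (2, 3, 3.4), (3, 4, -1.7), (4, 5, 3.4),
         (0, 2, -1.2), (1, 3, -1.2), (2, 4, -1.2), (3, 5, -1.2)]
H = heisenberg(bonds, 6)
E = np.linalg.eigvalsh(H)
print("ground-state energy per site (K):", E[0] / 6)
```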
Figure 2(a) shows the temperature dependence of the magnetic susceptibility (\(\chi\) = \(M/H\)) at 0.1 T. The \(\chi T\) value exhibits a pronounced decrease as the temperature decreases, indicating the development of AF correlations, as shown in Fig. 2(b). In the 10-300 K temperature range, the \(\chi\) follows the Curie-Weiss law, with an estimated Weiss temperature of \(\theta_{\text{W}}\) = -1.79 (3) K, indicating dominant AF interactions. Additionally, a shoulder-like behavior is observed in \(\chi\) below approximately 2 K. At this temperature, the temperature derivative of \(\chi\) exhibits a discontinuous change at \(T_{\text{N}}\)\(\simeq\) 0.9 K, as shown in the inset of Fig. 2(a), which can be attributed to a phase transition to an AF ordered state. If we assume a significant difference in the magnitude of exchange interactions, an energy separation occurs due to the difference in temperature region where the correlation becomes dominant, yielding a multistep change in \(\chi T\)[42; 43; 36]. Because the observed \(\chi T\) exhibits a monotonic decrease with decreasing temperature down to \(T_{\text{N}}\) in the present system, we can expect that the magnitudes of the exchange interactions are sufficiently comparable, preventing the energy separation and resulting in an AF ordered state composed of both \(S_{\text{V}}\) and \(S_{\text{Cu}}\) spins. The specific heat at zero field exhibits a distinct peak at \(T_{\text{N}}\), indicating a phase transition to the AF ordered state, as shown in Fig. 3. When magnetic fields are applied, the peak signal disappears above 3 T.
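The Curie-Weiss analysis quoted above can be sketched as follows; the synthetic susceptibility values below (generated with \(\theta_{\rm W}=-1.79\) K and an assumed Curie constant) merely stand in for the measured data in the 10-300 K window.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of a Curie-Weiss fit, chi(T) = C/(T - theta_W), over 10-300 K.
# The "data" below are synthetic placeholders, not the measured susceptibility.
def curie_weiss(T, C, theta_W):
    return C / (T - theta_W)

rng = np.random.default_rng(0)
T = np.linspace(10, 300, 100)                                   # K
chi_obs = curie_weiss(T, 0.55, -1.79) * (1 + 0.01 * rng.standard_normal(T.size))

(C_fit, theta_fit), _ = curve_fit(curie_weiss, T, chi_obs, p0=(0.5, 0.0))
print(f"C = {C_fit:.3f} emu K mol^-1, theta_W = {theta_fit:.2f} K")
```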
Figure 4(a) shows the magnetization curve at 1.5 K above \(T_{\text{N}}\), demonstrating paramagnetic behavior that becomes nearly saturated at approximately 15 T. Based on the isotropic \(g\) value of 2.0 for the organic radicals, the saturation value of 2.1 \(\mu_{\text{B}}\)/f.u. suggests that the average \(g\) value of \(S_{\text{Cu}}\) is approximately 2.2. The temperature dependence of the magnetization curve in the low-field region is shown in Fig. 4(b). Notably, a gradual change is observed above \(H_{\text{c}}\)\(\simeq\)3 T for \(T<T_{\text{N}}\), indicating the presence of a 1/2 plateau. The slight increase in the plateau phase is considered to be dominated by the thermal excitation effect. Furthermore, the field derivative of the magnetization curve (\(dM/dH\)) exhibits an inflection point at \(H_{\text{c}}\), as shown in the inset of Fig. 4(b). Considering that the phase transition signal in the specific heat disappears above \(H_{\text{c}}\), it can be inferred that a phase transition from the AF ordered state to a quantum state accompanied by the 1/2 plateau occurs at \(H_{\text{c}}\).
Our analysis focused on the ground state of the anticipated spin-1/2 zigzag-square lattice. Experimental observations do not indicate any energy separation associated with significant lattice distortion (the differences in the exchange interactions) that could lead to the formation of nonmagnetic singlet dimers in the system. Therefore, the ground state at zero field is expected to be an AF ordered state encompassing all spin sites. Figure 1(d) highlights the notable difference in coordination numbers between \(S_{\text{V}}\) (6) and \(S_{\text{Cu}}\) (2). The lower coordination number of \(S_{\text{Cu}}\) enhances the effect of polarization by the external magnetic fields. Consequently, in the low-field region, \(S_{\text{Cu}}\) gradually aligns with the field direc
tion as the field increases, eventually reaching an almost fully polarized state at \(H_{\rm c}\). The magnetic moment of 1.1 \(\mu_{\rm B}\)/f.u. at \(H_{\rm c}\) is consistent with the anticipated value for fully polarized \(S_{\rm Cu}\) with \(g\)=2.2 along the field direction. As a result, the predominantly polarized \(S_{\rm Cu}\) lacks sufficient degrees of freedom to alter the ground state, effectively leading to a field-induced dimensional reduction where the \(S_{\rm V}\) chain dominates. Furthermore, the frustration and quantum fluctuations within the zigzag chain disrupt the conventional ordered state, giving rise to a field-induced quantum phase accompanied by the 1/2 plateau. Theoretical studies suggest that the corresponding zigzag chain, composed of F-AF alternating chain with F next-nearest-neighbor interactions, exhibits varieties of gapped quantum phases depending on the exchange parameters [29; 30; 31]. If we assume a singlet state formed by \(J_{\rm V1}\), the plateau phase observed up to 5 T indicates \(J_{\rm V1}/k_{\rm B}>6.7\) K. Considering that the evaluated Weiss temperature indicates dominant AF correlations, the actual value of \(J_{\rm V1}\) is expected to be larger than the MO evaluation, which is consistent with the above prediction. Furthermore, the internal fields attributed to the coupling with the \(S_{\rm Cu}\) through \(J_{\rm Cu1}\) and \(J_{\rm Cu2}\) can modify the effective field on \(S_{\rm V}\), leading to the modulation of the plateau region. Since \(J_{\rm Cu1}\) and \(J_{\rm Cu2}\) have opposite signs, the internal fields caused by their couplings are considered to cancel each other. Although the precise nature of the quantum state in our system remains uncertain based on our current findings, we expect that the gapped quantum state exhibiting the 1/2 plateau can be attributed to exchange interactions forming the effective zigzag chain. Figure 4(c) presents a valence bond picture of the quantum state, assuming the presence of a Haldane phase as one of the anticipated quantum phases within the corresponding zigzag chain. In this representation, two spins coupled by the F interaction \(J_{\rm V2}\) are considered as effective spin-1, while two spin-1/2 particles on different spin-1 sites form a singlet dimer through the AF interaction \(J_{\rm V1}\).
To summarize, we successfully synthesized single crystals of (\(m\)-Py-V-2,6-F\({}_{2}\))[Cu(hfac)\({}_{2}\)], a verdazyl-based complex. MO calculations suggested the presence of three types of intermolecular exchange couplings between the radicals and two types of exchange couplings between the radical and Cu atom. The magnetic susceptibility and specific heat indicated a phase transition to an AF ordered state. The magnetization curve in the ordered phase exhibited a gradual increase above the critical field, indicating a phase transition from the AF ordered state to a quantum state characterized by a 1/2 plateau. By applying magnetic fields, the spins on the Cu atoms gradually aligned with the field direction, causing the effective zigzag chain composed of radical spins to become prominent.
Figure 2: (color online) Temperature dependence of (a) magnetic susceptibility (\(\chi=M/H\)) and (b) \(\chi T\) of (\(m\)-Py-V-2,6-F\({}_{2}\))[Cu(hfac)\({}_{2}\)] at 0.1 T. The inset shows the temperature derivative of \(\chi\). The arrows indicate the phase transition temperature \(T_{\rm N}\).
Figure 1: (color online) (a) Molecular structure of (\(m\)-Py-V-2,6-F\({}_{2}\))[Cu(hfac)\({}_{2}\)]. Hydrogen atoms are omitted for clarity. Crystal structure of (\(m\)-Py-V-2,6-F\({}_{2}\))[Cu(hfac)\({}_{2}\)] forming (b) the zigzag chain along the \(b\)-axis and (c) the square unit. The blue and brown nodes represent the spin-1/2 of the radicals and Cu atoms, respectively. The thick lines represent the exchange interactions. (d) Spin-1/2 zigzag-square lattice composed of \(J_{\rm V1}\), \(J_{\rm V2}\), \(J_{\rm V3}\), \(J_{\rm Cu1}\), and \(J_{\rm Cu2}\).
This field-induced dimensional reduction destabilized the conventional ordered state, leading to the emergence of a field-induced quantum phase. We expect that the gapped quantum state featuring the 1/2 plateau is attributed to the effective zigzag chain based on the zigzag-square lattice. The material studied here provides a platform for investigating a frustrated spin-1/2 zigzag-square lattice with a field-induced quantum phase, which will inspire further numerical studies on its ground state. It opens up a new research avenue centered on the zigzag-square topology in the field of condensed-matter physics.
This research was partly supported by KAKENHI (Grants No. 23K13065 and No. 23H01127). A part of this work was performed under the interuniversity cooperative research program of the joint-research program of ISSP, the University of Tokyo.
|
2301.13754 | CMB signature of non-thermal Dark Matter produced from self-interacting
dark sector | The basic idea of this work is to achieve the observed relic density of a
non-thermal dark matter(DM) and its connection with Cosmic Microwave Background
(CMB) via additional relativistic degrees of freedom which are simultaneously
generated during the period $T_{\rm BBN}~{\rm to}~T_{\rm CMB}$ from a
long-lived dark sector particle. To realize this phenomena we minimally extend
the type-I seesaw scenario with a Dirac fermion singlet($\chi$) and a complex
scalar singlet ($\varphi$) which transform non-trivially under an unbroken
symmetry $\mathcal{Z}_3$. $\chi$ being the lightest particle in the dark sector
acts as a stable dark matter candidate while the next to lightest state
$\varphi$ operates like a long lived dark scalar particle. The initial density
of $\varphi$ can be thermally produced through either self-interacting number
changing processes ($3 \varphi \to 2 \varphi$) within dark sector or the
standard annihilation to SM particles ($2 \varphi \to 2~ {\rm SM}$). The late
time (after neutrino decoupling) non-thermal decay of $\varphi$ can produce
dark matter in association with active neutrinos. The presence of extra
relativistic neutrino degrees of freedom at the time of CMB can have a
significant impact on $\Delta \rm N_{eff}$. Thus the precise measurement of
$\Delta \rm N_{ eff}$ by current PLANCK 2018 collaboration and future
experiments like SPT-3G and CMB-S4 can indirectly probe this non-thermal dark
matter scenario which is otherwise completely secluded due to its tiny coupling
with the standard model. | Dilip Kumar Ghosh, Purusottam Ghosh, Sk Jeesun | 2023-01-31T16:41:14Z | http://arxiv.org/abs/2301.13754v2 | # CMB signature of non-thermal Dark Matter produced from self-interacting dark sector
###### Abstract
The basic idea of this work is to achieve the observed relic density of a non-thermal dark matter(DM) and its connection with Cosmic Microwave Background (CMB) via additional relativistic degrees of freedom which are simultaneously generated during the period \(T_{\rm BBN}\) to \(T_{\rm CMB}\) from a long-lived dark sector particle. To realize this phenomena we minimally extend the type-I seesaw scenario with a Dirac fermion singlet(\(\chi\)) and a complex scalar singlet (\(\phi\)) which transform non-trivially under an unbroken symmetry \({\cal Z}_{3}\). \(\chi\) being the lightest particle in the dark sector acts as a stable dark matter candidate while the next to lightest state \(\phi\) operates like a long lived dark scalar particle. The initial density of \(\phi\) can be thermally produced through either self-interacting number changing processes (\(3\phi\to 2\phi\)) within dark sector or the standard annihilation to SM particles (\(2\phi\to 2\) SM). The late time (after neutrino decoupling) non-thermal decay of \(\phi\) can produce dark matter in association with active neutrinos. The presence of extra relativistic neutrino degrees of freedom at the time of CMB can have a significant impact on \(\Delta{\rm N}_{\rm eff}\). Thus the precise measurement of \(\Delta{\rm N}_{\rm eff}\) by current PLANCK 2018 collaboration and future experiments like SPT-3G and CMB-IV can indirectly probe this non-thermal dark matter scenario which is otherwise completely secluded due its tiny coupling with the standard model.
## I Introduction
The standard model (SM) of particle physics has been extraordinarily victorious in explaining properties of elementary particles of the universe and their interactions through strong, electromagnetic and weak forces. The SM seems complete after the discovery of Higgs-like particle with mass \(M_{h}=125\) GeV at the Large Hadron Collider (LHC)[1; 2], which is responsible for mass generation mechanism through electroweak symmetry breaking in the SM. Inspite of the great triumph of the SM, several theoretical and experimental issues still persist, that demands physics beyond the framework of the Standard Model. Based on numerous astrophysical and cosmological observations at a wide range of length scales, it is now well established fact that about 80% of total mass of the universe consists of Dark matter (DM)[3; 4; 5; 6] with relic density (\(\Omega_{\rm DM}h^{2}=0.120\pm 0.001\)) [6]. Another astonishing experimental evidence is the observation of tiny but non-zero neutrino masses (\(m_{\nu}\lesssim{\cal O}(10^{-10})\) GeV) and neutrino flavour oscillations [7; 8; 9; 10; 11; 12]. To address these issues, various theoretical as well as phenomenological ideas have been proposed. The issue of neutrino masses and their mixing angles can be resolved by the Seesaw mechanisms [13; 14; 15; 16; 17; 18; 19; 20; 21; 22]. However, any direct experimental verification of these ideas are yet to be confirmed. While in the dark matter sector, weakly interacting massive particles (WIMP) [23; 24; 25; 26; 27; 28] is the most popular and widely studied thermal DM candidate whose interaction strength with SM particles is of the order of electroweak interactions and via freeze-out mechanism it fits nicely the observed relic density of the universe. Nevertheless, null measurements from various dark matter detection experiments [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41] severely restricts the WIMP freeze-out mechanism and forcing us to think if the standard WIMP paradigm is just waning or it is already deceased. To bypass this deadlock, an alternative framework, coined as _freeze-in_ mechanism has been proposed. In this framework DM is a feebly interacting massive particle (FIMP) whose interactions with SM plasma is too small \(\lesssim{\cal O}(10^{-10})\) to keep them in thermal bath [42; 43; 44; 45; 46; 47]. Rather FIMPs are produced non-thermally either from decay or annihilation of bath particles in the early universe. The FIMP freezes in once the temperature of the universe becomes lower than the FIMP mass and produces DM relic abundance in the correct ball-park as observed today. Moreover, FIMPs having such a petite coupling with SM particles can easily accommodate various non-observational signature of DM in different detection experiments like Panda[29], XENON[30], LUX[31]. However, some attempts have been made to test the
FIMP scenario indirectly using observational data from big bang nucleosynthesis (BBN) or the cosmic microwave background (CMB) [48; 49; 50; 51; 52; 53; 54]. Furthermore, non-thermal production of DM from the decay of heavier dark sector particles has also been studied in the literature [55; 56; 57; 58].
Apart from FIMP, the strongly interacting massive particle (SIMP) is another alternative paradigm to explain the DM abundance [59; 60; 61] as well as the structure formation of the universe [62; 63; 64]. SIMPs are produced thermally in the early universe by number changing processes within the dark sector itself. Contrary to WIMPs, the SIMP scenario requires strong self interactions and a very small annihilation rate to SM particles to successfully reproduce the correct DM relic density [65; 66; 67].
On the other hand Cosmic Microwave Background (CMB) is an ideal probe of the physics in the early universe. The very precise measurement of anisotropies in the temperature of photons which dissociate from visible sector in the recombination phase of the thermal evolution of our universe, leads to the determination of the energy density in that particular era. From this one can estimate the number of light species in the universe and in the massless limit this is provided by the relativistic degrees of freedom \(g_{*}\)[68; 69]. On the other hand after neutrino decoupling, one recasts the number of light degrees of freedom associated with neutrino bath as \(\rm N_{eff}\) and in the SM it is roughly number of active neutrinos (\(N_{\nu}=3\)). Thus any physics scenarios beyond the SM (BSM) with new light degrees of freedom with masses \(\mathcal{O}\) (eV) or less can subscribe to \(\rm N_{eff}\). We have very precise information of \(\rm N_{eff}\) from recent Planck 2018 [6], which suggests \(\rm N_{eff}\) at the time of CMB formation to be \(\rm N_{eff}^{CMB}=2.99^{+0.34}_{-0.33}\) at 95% confidence level (C.L), whereas in the SM, \(\rm N_{eff}^{SM}=3.045\). The quantity \(\rm N_{eff}\) is parameterized as \(\rm N_{eff}\equiv(\rho_{rad}-\rho_{\gamma})/\rho_{\nu}\), where, \(\rho_{\gamma}\), \(\rho_{\nu}\), and \(\rho_{\rm rad}\) denote the photon energy density, active neutrino energy density and total radiation energy density of the universe respectively [70]. The deviation from 3, the number of active neutrinos can be attributed to various non-trivial effects like non-instantaneous neutrino decoupling, finite temperature QED corrections to the electromagnetic plasma, flavour oscillations of neutrinos [70; 71; 72; 73]. Multiple upcoming experiments like SPT-3G[74], CMB-IV[75] are going to be extremely sensitive to the presence of any new radiation /light degrees of freedom and will put stringent bound on \(\rm\Delta N_{eff}=N_{eff}-N_{eff}^{SM}\approx 0.06\) at 95% confidence level. Various BSM scenarios that entail additional entropy injection to the neutrino sector can face a tough challange from the measurement of \(\rm\Delta N_{eff}\) by both the present and future generation CMB experiments [76; 77; 78; 79; 80; 81]. This precise measurement of \(\rm\Delta N_{eff}\) has also non-trivial implications
on various new physics models that produce dark matter in associated with the injection of additional light degrees of freedom [82; 83; 84; 85].
In this work, we are interested in non thermal production of dark matter from heavier dark sector, where the dark sector may or may not have sizeable interaction with the SM bath. To realize this picture we extend the SM by one complex SM gauge singlet scalar (\(\phi\)), one gauge singlet Dirac fermion (\(\chi\)) and 3 right handed neutrinos (RHN)(\(N_{1,2,3}\)). The three RHNs are responsible for neutrino mass generation through well known Type-I seesaw mechanism[86; 14]. \(\phi\) and \(\chi\) are dark sector particles and an additional discrete \(\mathcal{Z}_{3}\) symmetry has been imposed under which they transform non trivially while the rest of the particles transform trivially. In our analysis lightest dark sector particle \(\chi\) can play the role of DM whereas the heavy dark sector particle (\(\phi\)) is a long lived owing to its very small coupling (\(\lesssim 10^{-12}\)), which will eventually allows \(\phi\rightarrow\chi\nu\) decay at temperature below neutrino decoupling temperature (\(\sim 1\) MeV). Non thermal decay of \(\phi\) is the only source of DM(\(\chi\)) production whereas \(\phi\) freezes out thermally and gains non-zero number density via either of these two mechanisms : (\(i\)) the number changing self interactions (\(3\phi\to 2\phi\)), (\(ii\)) annihilation to SM particles (\(2\phi\to 2\)SM). In this work we emphasise on the first scenario where \(\phi\) has strong self interactions but very weak interaction with the SM bath. The implication of this particular scenario has been so far overlooked. Through our detailed numerical analysis we will highlight the importance of this mechanism in both DM phenomenology and its footprint on CMB. For the shake of completeness of the analysis, we will also consider the second process as well to showcase region of parameter space where these two scenarios are relevant.
It should be noted that in both cases the \(\phi\) particle maintains kinetic equilibrium with the SM bath via elastic scattering processes and shares a common temperature with the SM bath, contrary to studies that deal with secluded or decoupled dark sector scenarios [87; 88]. If the decay of \(\phi\) happens after neutrino decoupling, it will increase the neutrino bath entropy and contribute to \(\Delta\text{N}_{\text{eff}}\). If the decay is completed before CMB, we can trace the signature of DM from \(\Delta\text{N}_{\text{eff}}\) at the time of CMB and find an interesting correlation between freeze-in DM and \(\Delta\text{N}_{\text{eff}}\) in our proposed setup.
The rest of the paper is structured as follows: In section II we introduce the model. The possible dynamics of DM production are discussed in section III. In section IV we discuss the light neutrino production from the late time decay of \(\phi\). The resulting DM relic density, together with the contribution to \(\Delta\text{N}_{\text{eff}}\) at CMB for both scenarios I and II, is discussed in section V. Finally, we summarize our results in section VI. We show the relevant theoretical constraints and the limit from the SM Higgs invisible decay width in Appendix A and Appendix B respectively. Feynman diagrams and the corresponding thermal averaged cross-sections for the \(3\phi\to 2\phi\) and \(2\phi\to 2\text{SM}\) processes are explicitly demonstrated in Appendix C and Appendix D respectively.
## II The model
In order to explain DM production from dark sector and its cosmological imprints in CMB, we extend the SM by a complex scalar \(\phi\), one Dirac fermion \(\chi\) and three neutral Majorana fermions, \(N_{1,2,3}\) which are singlet under the SM gauge group. An additional \(\mathcal{Z}_{3}\) symmetry provides the stability of the lightest dark sector particle, under which the field \(\phi\) and \(\chi\) transform non-trivially i.e. \(\{\phi,\ \chi\}\rightarrow\{e^{i\frac{2\pi}{3}}\phi,\ e^{i\frac{2\pi}{3}}\chi\}\) while all the SM fields including \(N_{1,2,3}\) transform trivially i.e. \(\{N_{1,2,3},\ \text{SM}\}\rightarrow\{N_{1,2,3},\ \text{SM}\}\)1. The lightest dark state, \(\chi\) acts as a stable DM candidate which is produced from the late time decay of the other dark sector particle, \(\phi\). The right handed neutrinos (RHN) i.e. \(N_{1,2,3}\) which do not transform under \(\mathcal{Z}_{3}\), will be responsible for light neutrino mass via Type-I seesaw mechanism [14]. All the BSM fields and their corresponding charge assignments under the extended SM Electroweak (EW) gauge group are tabulated in table-1.
Footnote 1: In general any \(\mathcal{Z}_{N}\) symmetry can serve similar kind of scenario with different self interacting number changing processes, \(m\)\(\phi\to 2\)\(\phi\) (\(m\geq 3\)), as well as the standard annihilation to SM particles, \(2\phi\to 2\text{SM}\). For example, \(\mathcal{Z}_{2}\) will provide \(4\phi\to 2\phi\) interactions which are more phase space suppressed for \(M_{\phi}\sim\mathcal{O}(\text{MeV})\) compare to \(3\phi\to 2\phi\) interactions realised in \(\mathcal{Z}_{3}\) symmetry [59].
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \hline \multicolumn{2}{|c|}{BSM Fields} & \multicolumn{1}{c|}{\(SU(2)_{L}\)} & \(U(1)_{Y}\) & \(\mathcal{Z}_{3}\) \\ \hline \hline Dark scalar (DS) & \(\phi\) & 1 & 0 & \(\omega(\equiv e^{i\frac{2\pi}{3}})\) \\ \hline DM & \(\chi\) & 1 & 0 & \(\omega(\equiv e^{i\frac{2\pi}{3}})\) \\ \hline \hline RHN & \(N_{1,2,3}\) & 1 & 0 & 1 \\ \hline \hline \end{tabular}
\end{table}
Table 1: _Charge assignment of BSM fields under the extended SM EW gauge group, \(\mathcal{G}_{\text{SM}}^{\text{EW}}\otimes\mathcal{Z}_{3}\)._
The Lagrangian of this model takes the following form :
\[{\cal L}\,=\,\underbrace{{\cal L}_{\rm SM}^{\rm K+Y}-V(H)}_{\rm SM}+{\cal L}_{\rm N }+{\cal L}_{\rm DS}+{\cal L}_{\rm DS-H}+{\cal L}_{\rm DS-\nu}. \tag{1}\]
Here, \(V(H)\) represents the SM Higgs potential which is given by
\[V(H)\,=\,-\mu_{H}^{2}|H|^{2}+\lambda_{H}|H|^{4}. \tag{2}\]
The BSM part encapsulate interactions of heavy RHN sector (\({\cal L}_{\rm N}\)), dark sector(\({\cal L}_{\rm DS}\)) as well as their connection with the SM. The interaction of heavy RHN sector is described by,
\[{\cal L}_{\rm N}= \sum_{i}i\bar{N}_{i}\gamma^{\mu}\partial_{\mu}N_{i}-\sum_{i,j} \frac{1}{2}M_{N_{ij}}\bar{N}_{i}^{c}N_{j}-\sum_{\ell,j}Y_{\ell j}\bar{L}_{\ell }\tilde{H}N_{j}+h.c. \tag{3}\]
where \(i,j=1,2,3\) and \(\ell=e,\mu,\tau\) are lepton flavour indices. \(L_{\ell}=(\nu_{\ell}\ \ \ell)^{T}\) are the left handed SM lepton doublets and \(H\) is the SM scalar doublet with \(\tilde{H}=i\sigma_{2}H^{*}\). The second term in eq.(3) is the Majorana mass term associated with \(N_{1,2,3}\) and the last term contains the Dirac Yukawa interactions with \(N_{1,2,3}\). After electroweak symmetry breaking (EWSB), the SM scalar doublet \(H\) can be expressed in unitary gauge as \(H=\begin{pmatrix}0&\frac{h+v}{\sqrt{2}}\end{pmatrix}^{T}\), where \(v=246\) GeV is the vacuum expectation value (VEV) of the SM Higgs. Active neutrino masses can be generated via the Type-I seesaw mechanism following from eq.(3) as \(\big{(}m_{\nu}\big{)}_{3\times 3}\approx\big{(}Yv/\sqrt{2}\big{)}\big{(}M_{N}\big{)}^{-1}\big{(}Y^{T}v/\sqrt{2}\big{)}\), and the mixing angle between the active neutrinos and the RHNs is then \(\theta_{\rm mix}\sim\big{(}Yv/\sqrt{2}\big{)}\big{(}M_{N}\big{)}^{-1}\), where \(M_{N}\approx\big{(}M_{N}\big{)}_{3\times 3}\)[14].
The dark sector of this model consists of a complex scalar (\(\phi\)) and a Dirac fermion (\(\chi\)) with the same transformation property under \({\cal Z}_{3}\). The lightest state behaves as a stable DM particle. The Lagrangian of the dark sector is described as follows:
\[{\cal L}_{\rm BSM} \supset {\cal L}_{\rm DS}+{\cal L}_{\rm DS-H}+{\cal L}_{\rm DS-\nu} \tag{4}\] \[= \Big{(}|\partial_{\mu}\phi|^{2}-\mu^{2}|\phi|^{2}+i\bar{\chi} \gamma^{\mu}\partial_{\mu}\chi-M_{\rm DM}\bar{\chi}\chi-\lambda_{\phi}|\phi|^{ 4}-\frac{\mu_{\phi}}{3!}(\phi^{3}+{\phi^{*}}^{3})-y_{\phi_{\chi}}\overline{ \chi^{c}}\chi\phi\Big{)}\] \[+\Big{(}-\lambda_{\phi H}|H|^{2}|\phi|^{2}\Big{)}+\Big{(}-\sum_{i }y_{\phi_{N}}\bar{\chi}\phi N_{i}+h.c.\Big{)}\ \,\]
where, \(i=1,2,3\). In the above equation \(\mu\) is the bare mass term of \(\phi\) and \(M_{\rm DM}\) is the mass of dark fermion \(\chi\). For simplicity, in this work we consider all parameters to be real. In the dark scalar sector, we assume \(\mu>0\) and \(\lambda_{\phi}>0\) so that \(\langle\phi\rangle=0\) which implies unbroken \({\cal Z}_{3}\) symmetry. After EWSB the physical mass of \(\phi\) can be expressed as,
\[M_{\phi}^{2}=\mu^{2}+\frac{\lambda_{\phi H}\,v^{2}}{2}. \tag{5}\]
The most important interaction as far as our analysis is concerned, is given by the Yukawa interaction involving the dark scalar (\(\phi\)), the DM (\(\chi\)) and SM neutrinos (\(\nu\)):
\[{\cal L}_{\rm DS-\nu}^{\rm int}=y_{{}_{1}}\overline{\chi}\nu\phi+h.c. \tag{6}\]
This Lagrangian can be realized from the last term in braces in eq.(4) via small mixing angles(\(\theta_{\rm mix}\)) with RHNs(\(N_{1,2,3}\)). The effective Yukawa coupling, \(y_{1}\) can be understood as \(\sum_{i}y_{{}_{\phi N_{i}}}\theta_{\rm mix}^{i}\), where \(i=1,2,3\).
We choose the dark sector parameters of our model in such a manner that we always get \(\chi\) as the lightest dark sector particle. This mass pattern and the underlying discrete symmetry ensure that the Dirac fermion (\(\chi\)) with mass \(M_{\rm DM}\) is the DM particle and \(\phi\) with mass \(M_{\phi}\) is the next to lightest particle (NLP) in this framework. The DM interacts with the SM bath only through \(\phi\) via the Yukawa interaction shown in eq.(6). Thus for a given mass hierarchy between \(\phi\) and \(\chi\), the life-time of \(\phi\) is determined by the strength of the Yukawa coupling \(y_{{}_{1}}\). For our analysis, we assume the NLP (\(\phi\)) to be a long-lived (\(\tau_{\phi}>\tau_{\rm BBN}\)) particle, and for this to happen one requires a very tiny Yukawa coupling \(y_{1}\lesssim 10^{-12}\) (for \(M_{\phi}\sim{\cal O}({\rm GeV})\)). The NLP \(\phi\) can be thermally produced via the sizable Higgs-portal interaction or through number changing self interaction processes. The production of the DM in the thermal bath through scattering processes is highly suppressed because of its feeble coupling (\(y_{1}\)). However, it can be produced non-thermally from the decay of the long-lived \(\phi\) as shown in Fig.1.
The decay width of \(\phi\) to DM and a light neutrino is given by,
\[\Gamma_{\phi\rightarrow\overline{\chi}\nu}=\frac{y_{1}^{2}\,M_{\phi}}{16\,\pi }\left(1-\frac{M_{\rm DM}^{2}}{M_{\phi}^{2}}\right)^{2}. \tag{7}\]
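A quick numerical check of eq.(7) illustrates why \(y_{1}\lesssim 10^{-12}\) makes \(\phi\) long-lived on the relevant cosmological time scales; the masses below are assumed example values, not benchmark points of this work.

```python
import numpy as np

# Lifetime of phi from eq. (7), to check that y1 ~ 1e-12 - 1e-15 puts the decay
# after BBN (t ~ 1 s) but well before CMB (t ~ 1e13 s). M_phi and M_DM are
# illustrative placeholders.
hbar  = 6.582e-25          # GeV s
M_phi = 20.0               # GeV
M_DM  = 1.0                # GeV
for y1 in (1e-12, 1e-15):
    Gamma = y1**2 * M_phi / (16 * np.pi) * (1 - M_DM**2 / M_phi**2) ** 2
    print(f"y1 = {y1:.0e}:  Gamma = {Gamma:.2e} GeV,  tau = {hbar / Gamma:.2e} s")
```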
Besides this, there are two more production channels of the DM \(\chi\): (\(a\)) \(N_{1,2,3}\rightarrow\chi\phi\) and (\(b\)) \(\phi\rightarrow\bar{\chi}^{c}\chi\). The main aim of this work is to connect non-thermal DM and \(\Delta N_{\rm eff}\) produced
Figure 1: _Diagram of DM production with active neutrinos from NLP \(\phi\)_
from self-interacting dark sector (NLP) which is achievable via the decay \(\phi\to\chi\nu\). But the presence of those new channels (\(a\)\(\&b\)) will dilute the effect of the late time decay of \(\phi\) in \(\Delta N_{\rm eff}\) and may even completely imperil our non-thermal dark matter scenario by thermalizing the dark sector. To avoid DM production from RHNs we set \(M_{N_{1,2,3}}\gg T_{\rm RH}\) so that their number densities get Boltzmann suppressed(\(e^{-M_{N}/T}\)) [85]. Therefore for our discussion, we choose the following hierarchy
\[M_{N_{1,2,3}}\gg T_{\rm RH}>M_{\phi}>M_{\rm DM}. \tag{8}\]
In order to get an active neutrino mass of the order \(\sim 0.1\) eV, we require \(M_{N_{1,2,3}}\sim{\cal O}(10^{10})\) GeV and \(\theta_{\rm mix}\sim{\cal O}(10^{-10})\)[89], and to satisfy the criteria of eq.8 we set \(T_{\rm RH}=10^{3}\) GeV, which is consistent with the bound obtained from BBN [90]. Following this argument and the masses of the relevant particles of this model, in the rest of our analysis we can safely ignore the production of DM from RHN decay in the computation of \(Y_{\phi}\) and \(Y_{\chi}\). Moreover, to suppress the process (\(b\)) we consider \(y_{{}_{\phi\chi}}\ll y_{{}_{1}}\); this is necessary for the \(\phi\to\chi\nu\) decay to dominate, so that \(\phi\) can have the maximal contribution to \(\Delta\)N\({}_{\rm eff}\).
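The quoted seesaw numbers can be checked with a one-generation estimate, \(m_{\nu}\approx(Yv/\sqrt{2})^{2}/M_{N}\) and \(\theta_{\rm mix}\approx(Yv/\sqrt{2})/M_{N}\), as in the short sketch below.

```python
import numpy as np

# One-generation seesaw estimate using the benchmark values quoted in the text.
v    = 246.0        # GeV
M_N  = 1e10         # GeV
m_nu = 0.1e-9       # GeV (~0.1 eV target)

m_D = np.sqrt(m_nu * M_N)          # Dirac mass Y v / sqrt(2), in GeV
Y = np.sqrt(2) * m_D / v
theta_mix = m_D / M_N
print(f"Y ~ {Y:.1e}, theta_mix ~ {theta_mix:.1e}")   # ~6e-3 and ~1e-10
```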
Interestingly, active neutrinos(\(\nu\)) produced from the decay of NLP \(\phi\) along with DM(\(\chi\)) as shown in Fig.1 can have very intriguing consequences in the observation of CMB. We assume the value of Yukawa coupling (\(y_{{}_{1}}\)) such that \(\phi\to\chi\nu\) decay mostly happens between neutrino decoupling temperature (T \(<\) 2 MeV) and CMB formation (T \(\approx\) 0.1eV). This promptly opens up the possibility of probing the impact of extra neutrino production from CMB radiation. And this can be achieved if \(y_{{}_{1}}\) varies in the range (\(10^{-12}-10^{-15}\)) and for such a tiny coupling \(\phi\) becomes a long-lived particle (\(\tau_{\phi}>\tau_{\rm BBN}\)). Thus the aforementioned supplementary active neutrino (\(\nu\)) injection in our proposed scenario increases neutrino sector entropy and which in turn contribute significantly to additional neutrino degrees of freedom or \(\Delta\)N\({}_{\rm eff}\) which is very precisely measured at the time of CMB. Thus any experimental observation on \(\Delta\)N\({}_{\rm eff}\) can have very intriguing impact on the dynamics of dark scalar \(\phi\) which in turn can influence the dark matter (\(\chi\)) abundance via \(\phi\to\chi\nu\) decay process, thus affecting two disjoint (FIMP dark matter \(\&\)\(N_{\rm eff}\)) sectors simultaneously. To explore this phenomenology, we perform a detailed numerical scan over model parameters to show that the precise measurement of \(\Delta\)N\({}_{\rm eff}\) at CMB can indeed restrict certain region of parameter space of non-thermal DM production which is otherwise remains elusive to visible sector due to extremely tiny strength of interactions involved in such non-thermal DM production process.
While doing our numerical analysis, we use the following model parameters:
\[\{M_{\rm DM},\ M_{\phi},\ \lambda_{\phi H},\ \lambda_{\phi},\ \mu_{\phi},\ y_{ {}_{1}}\}, \tag{9}\]
Here, the Higgs portal coupling \(\lambda_{\phi H}\), which decides the interaction between \(\phi\) and the SM, plays a significant role in deciding \(\phi\)'s number density through \(2\phi\to 2\) SM annihilation and also in the (\(\phi\) SM \(\rightarrow\phi\) SM) elastic scattering processes. On the other hand, the scalar sector parameters \(\lambda_{\phi}\) and \(\mu_{\phi}\) decide the self interactions of \(\phi\), which are relevant for the number changing processes like \(3\phi\to 2\phi\). Finally, the effective Yukawa coupling \(y_{1}\) dictates both the DM abundance and the additional contribution to \(N_{\rm eff}\).
## III Dynamics of dark sector
In this section, we discuss the dynamics of the dark sector that leads to the early time production of the heavy NLP dark scalar (\(\phi\)) followed by the late time non-thermal production of DM (\(\chi\)) from the decay of \(\phi\). The number density of DM will be generated at some later epoch (after the neutrino decoupling temperature) of the Universe via the following two steps :
* Step I: _thermal production_ of heavy dark scalar \(\phi\) at the early time of Universe (\(\tau<\tau_{\rm BBN}\)).
* Step II: _non-thermal production_ of DM, \(\chi\) from the late time decay of \(\phi\) (\(\tau>\tau_{\rm BBN}\)).
Figure 2: _A cartoon diagram of DM production(left) and the impact in \(\Delta\)\(\rm N_{eff}\) at the time of CMB(right)._
A cartoon of our proposed setup is shown in Fig.2. In the left panel, we show the variation of co-moving density as a function of temperature. The purple and red solid lines correspond to the thermal production of \(\phi\) (Step I) and the non thermal production of DM (\(\chi\)) (Step II) respectively. We also show two important temperatures, namely, the BBN and CMB that play crucial role in our analysis. Active neutrinos produced in the aforementioned decay of \(\phi\) make substantial contributions to N\({}_{\rm eff}\), which can attract severe constraints from various observational limits on \(\Delta\)N\({}_{\rm eff}\), as shown in the right panel of Fig.2. The gray rectangular band is excluded by the Planck 2018 data at \(1\sigma\)[6]. Having this broad picture in mind we now provide details of the thermal production of NLP followed by non-thermal production of DM in the rest of this section.
**Step-I: Thermal production of \(\phi\)**
We consider a scenario in the early universe, when the interaction rate (\(\Gamma_{\phi}^{\rm int}\)) of the NLP (\(\phi\)) dominates over the expansion rate (\(\mathcal{H}\)) of the Universe, (\(\Gamma_{\phi}^{\rm int}>>\mathcal{H}\)) so that \(\phi\) remains in thermal and chemical equilibrium. As the temperature of the universe cools down, the interaction rate of \(\phi\) falls below the expansion rate of universe (\(\Gamma_{\phi}^{int}<\mathcal{H}\)), thus the system departs from thermal equilibrium and the number density of \(\phi\) freezes out. The number density of \(\phi\) is mainly provided by the following two types of number changing processes: (\(i\)) \(3\phi\leftrightarrow 2\phi\) via \(\phi\) self interactions (shown in Fig.3(a)) and (\(ii\)) \(2\phi\leftrightarrow 2\) SM via the SM Higgs portal interactions (shown in Fig.3(b)). As a result of these two number changing processes, the NLP (\(\phi\)) keeps its chemical equilibrium. On the other hand, the kinetic equilibrium is maintained between \(\phi\) and the SM bath via elastic scatterings, generically expressed as \(\phi\) SM \(\leftrightarrow\)\(\phi\) SM which help \(\phi\) to keep same temperature with SM bath till freeze out takes place.
The complete dynamics of thermal production of \(\phi\) can be described by the following Boltzmann equation(BEQ):
\[\frac{dY_{\phi}}{dx} = -0.116\frac{g_{s}^{2}}{\sqrt{g_{\rho}}}\frac{M_{\phi}^{4}}{x^{5}} M_{pl}\left\langle\sigma v^{2}\right\rangle_{3\phi\to 2\phi}(Y_{\phi}^{3}-Y_{\phi}^{2}Y_ {\phi}^{eq}) \tag{10}\] \[-0.264\frac{g_{s}}{\sqrt{g_{\rho}}}\frac{M_{\phi}}{x^{2}}M_{pl} \left\langle\sigma v\right\rangle_{2\phi\to 2{\rm SM}}(Y_{\phi}^{2}-Y_{ \phi}^{eq}{}^{2})\] \[-\sqrt{\frac{45}{4\pi^{3}}}\left\langle\Gamma_{\phi\to\chi\nu} \right\rangle\frac{x}{M_{\phi}^{2}}\frac{M_{pl}}{\sqrt{g_{\rho}}}Y_{\phi}\.\]
Let us first describe various notations used in eq.(10). \(Y_{\phi}(=\frac{n_{\phi}}{s})\) is the co-moving number density of \(\phi\) where \(s\) is the entropy density and \(x\) is the dimensionless parameter defined as \(x=\frac{M_{\phi}}{T}\). \(Y_{\phi}^{eq}\) is the equilibrium co-moving number density of \(\phi\). \(g_{s}(x)\) and \(g_{\rho}(x)\) are the effective relativistic degrees of freedom associated with entropy density and the energy density respectively and finally \(M_{pl}\) is the Planck mass\(\left(M_{pl}=1.22\times 10^{19}\text{GeV}\right)\). The thermal averaged cross-section of \(2\phi\to 2\) SM process is denoted by \(\left\langle\sigma v\right\rangle_{2\phi\to 2\text{SM}}\) and for self-interacting number changing process (\(3\phi\to 2\phi\)), it is defined as \(\left\langle\sigma v^{2}\right\rangle_{3\phi\to 2\phi}\). The first two terms in eq.(10) lead to non zero density of \(\phi\) via thermal freeze-out mechanism and it occurs at \(x=x_{F}^{\text{tot.}}\), where **tot** in the superscript implies that both number changing processes i.e. \(3\phi\to 2\phi\) and \(2\phi\to 2\text{SM}\) are involved in \(\phi\) freeze-out process. The last term in eq.(10) provides the late time (after BBN) decay of \(\phi\) into DM (\(\chi\)) and SM neutrinos, resulting the dilution of number density of \(\phi\) into \(\chi\) and \(\nu\).
From eq.(10) it is clear that two number changing processes of NLP (\(\phi\)) as discussed above are present to keep \(\phi\) in the thermal bath. However, depending upon the mass and couplings of NLP, it can be shown very easily that one of those two number changing processes is infact sufficient for the freeze-out and the final yield of NLP (\(\phi\)). To justify our argument quantitatively we define the interaction rate of \(3\phi\to 2\phi\) process as: \(\Gamma_{3\phi\to 2\phi}=n_{\phi}^{2}\langle\sigma v^{2}\rangle_{3\phi\to 2\phi}\) and of \(2\phi\to 2\text{SM}\) as: \(\Gamma_{2\phi\to 2\text{SM}}=n_{\phi}\langle\sigma v\rangle_{2\phi\to 2\text{SM}}\). In addition to these number changing processes, \(\phi\) SM \(\rightarrow\phi\) SM number preserving scattering process is also present to keep \(\phi\) in kinetic equilibrium with SM bath. The interaction rate of this process is defined as: \(\Gamma_{[\phi\text{ SM}\rightarrow\phi\text{ SM}]}=n_{\text{SM}}\langle \sigma v\rangle_{[\phi\text{ SM}\rightarrow\phi\text{ SM}]}\). Depending on the relative interaction strength between
Figure 3: _A cartoon of number changing process of \(\phi\): \((a)\) three \(\phi\) annihilate to two \(\phi\) (\(3\phi\to 2\phi\)). and \((b)\) two \(\phi\) annihilate to two SM particles (\(2\phi\to 2\text{SM}\))._
two number changing processes of \(\phi\), we are interested in the following two production modes of \(\phi\):
\[\text{Scenario I}\ : \Gamma_{[\phi\text{ SM}\to\phi\text{ SM}]}\ >\ \Gamma_{3\phi\to 2\phi}\ \gg\ \Gamma_{2\phi\to 2\text{ SM}}\ \,\] \[\text{Scenario II}\ : \Gamma_{[\phi\text{ SM}\to\phi\text{ SM}]}\ >\ \Gamma_{2\phi\to 2\text{ SM}}\gg\ \Gamma_{3\phi\to 2\phi}\ \.\]
In the above hierarchy of scattering processes, \(\Gamma_{[\phi\text{ SM}\to\phi\text{ SM}]}\) plays a decisive part in maintaining kinetic equilibrium of \(\phi\). During the freeze-out of \(\phi\) through processes like: \(n\phi\to 2\phi\), (for \(n>2\)) the rest mass energy of initial state particles can significantly enhance the kinetic energy of final state particles, which in turn can heat up the dark sector [66], leading to an imbalance between the dark sector temperature (\(T_{\phi}\)) and SM bath temperature (\(T\)). Thus, in general, to take into account this temperature imbalance one should consider a new parameter (\(T_{\phi}\)) in the evolution equation of NLP number density (\(Y_{\phi}\)) [85]. However, in our study we can avoid this paradigm by considering kinetic equilibrium between \(\phi\) and SM bath, i.e. by taking \(T_{\phi}=T\) at least upto the temperature at which \(\phi\) freezes out from the thermal bath. And to achieve this, \(\Gamma_{[\phi\text{ SM}\to\phi\text{ SM}]}\) must be larger than interaction rate of the other processes as well as the expansion rate \(\mathcal{H}\) of the universe (i.e \(\Gamma_{[\phi\text{ SM}\to\phi\text{ SM}]}|_{x_{F}}\gtrsim\mathcal{H}(x_{F})\)) [65]. Most importantly this condition must be satisfied in both Scenario I and II. The relevant Feynman diagrams and thermal averaged cross-sections for \(3\phi\to 2\phi\), \(2\phi\to 2\text{SM}\) and \(\phi\text{ SM}\to\phi\text{ SM}\) processes are shown in Appendices C and D.
* **Scenario I**: In this scenario we consider the interaction rate of \(3\phi\to 2\phi\) number changing process (\(\Gamma_{3\phi\to 2\phi}\)) is significantly higher than \(2\phi\to 2\text{SM}\) process (\(\Gamma_{2\phi\to 2\text{ SM}}\)). Thus \(3\phi\to 2\phi\) process successfully keeps \(\phi\) in thermal bath for longer duration in comparison to the process \(2\phi\to 2\text{ SM}\). Hence freeze-out of \(\phi\) is mainly governed by the \(3\phi\to 2\phi\) process and it occurs at \(x=x_{F}^{3\phi\to 2\phi}\approx x_{F}^{\text{tot}}>x_{F}^{2\phi\to 2\text{ SM}}\). Here \(x_{F}^{3\phi\to 2\phi}\) (\(x_{F}^{2\phi\to 2\text{ SM}}\)) signifies the inverse freeze out temperature of \(\phi\) when only \(3\phi\to 2\phi\) (\(2\phi\to 2\text{SM}\)) is considered. In our model, the interaction rate of \(3\phi\to 2\phi\) (\(2\phi\to 2\text{SM}\)) process depends on the couplings \(\lambda_{\phi},\mu_{\phi}/M_{\phi}\) (\(\lambda_{\phi H}\) ) and mass \(M_{\phi}\). To demonstrate the dynamics (where \(\Gamma_{3\phi\to 2\phi}\gg\Gamma_{2\phi\to 2\text{ SM}}\)), we show the variation of the co-moving number density \(Y_{\phi}\) as a function of \(x(=M_{\phi}/T)\) in Fig.4(a) for a sample Benchmark point: \(\{M_{\phi},\ \mu_{\phi}/M_{\phi},\ \lambda_{\phi H},\ \lambda_{\phi}\}=\{20\text{ GeV},\ 0.1,\ 10^{-2},\ 1\}\). The black solid line corresponds t
to the equilibrium co-moving density of \(\phi\) (\(Y_{\phi}^{eq}\)) and the blue dashed line corresponds to the co-moving number density of \(\phi\) considering contributions from both the number changing processes: \(3\phi\to 2\phi\) and \(2\phi\to 2\) SM in eq.(10). The brown solid line (red dotted line) depicts the variation of number density of \(\phi\) when only \(3\phi\to 2\phi\) (\(2\phi\to 2\) SM) process is present in eq.(10). The relative contribution of these two processes in the evolution of \(Y_{\phi}\) is clearly seen in this figure. If we consider only the sub-dominant \(2\phi\to 2\) SM process, \(\phi\) freezes-out earlier (red dotted line) due to small \(\Gamma_{2\phi\to 2\text{ SM}}\), whereas, the dominant \(3\phi\to 2\phi\) process maintains \(\phi\) in thermal bath for longer duration (brown solid line). Thus the freeze-out abundance of \(\phi\) (blue dashed line) is governed mainly by the dominant \(3\phi\to 2\phi\) process due to larger \(\Gamma_{3\phi\to 2\phi}\) for our choice of model parameters. Therefore, we can safely ignore the second term in eq.(10) and the modified BEQ takes the following form:
\[\frac{dY_{\phi}}{dx} = -0.116\frac{g_{s}^{2}}{\sqrt{g_{\rho}}}\frac{M_{\phi}^{4}}{x^{5} }M_{pl}\left<\sigma v^{2}\right>_{3\phi\to 2\phi}(Y_{\phi}^{3}-Y_{\phi}^{2}Y_{ \phi}^{eq}) \tag{11}\] \[-\sqrt{\frac{45}{4\pi^{3}}}\left<\Gamma_{\phi\to\chi\nu}\right> \frac{x}{M_{\phi}^{2}}\frac{M_{pl}}{\sqrt{g_{\rho}}}Y_{\phi}.\]
Based on the above argument we can identify the parameter space for scenario-I satisfying the criteria: \(x_{F}^{3\phi\to 2\phi}>x_{F}^{2\phi\to 2\text{SM}}\). In Fig.4(b) we display the parameter space for this scenario in \(M_{\phi}\) vs. \(\lambda_{\phi}\) plane with \(\mu_{\phi}/M_{\phi}=0.1\) for two different values \(\lambda_{\phi H}=\{10^{-2},10^{-4}\}\) depicted by the blue and red shaded region respectively. The
Figure 4: _(a)Thermal freeze-out of \(\phi\) governed by \(3\phi\to 2\phi\) and (b)Parameter space for scenario-I. For other details see the text._
criteria for scenario-I holds only for the region left to individual lines. With an increase in \(M_{\phi}\),\(\Gamma_{3\phi\to 2\phi}\) becomes more mass suppressed compared to \(\Gamma_{2\phi\to 2\rm SM}\). Hence for fixed values of \(\lambda_{\phi}\), and \(\lambda_{\phi H}\) with increasing \(M_{\phi}\), \(\Gamma_{3\phi\to 2\phi}\) falls below \(\Gamma_{2\phi\to 2\rm SM}\) and scenario-I doesn't hold anymore for the parameter space right to the colored lines. With an increase in \(\lambda_{\phi H}\), \(\Gamma_{2\phi\to 2\rm SM}\) increases and eventually \(\Gamma_{3\phi\to 2\phi}\) falls below \(\Gamma_{2\phi\to 2\rm SM}\) even with lower \(M_{\phi}\). For that reason we see the shaded region moves toward lower \(M_{\phi}\) (towards left) with an increase in \(\lambda_{\phi H}\). The regions right to the colored lines demand a different treatment which will be discussed shortly. Before we conclude this part of our analysis, it is worth noting that the present dark sector dynamics also allows \(4\phi\to 2\phi\) number changing process involving the same \(\lambda_{\phi}\) coupling that is responsible for \(3\phi\to 2\phi\) process. Inspite of the same interaction strength (\(\lambda_{\phi}\)), \(4\phi\to 2\phi\) process is more phase space suppressed compared to that of \(3\phi\to 2\phi\), hence, we neglect it in our numerical calculation of \(Y_{\phi}\).
* **Scenario II**: In this picture we consider \(\Gamma_{2\phi\to 2\rm SM}\gg\Gamma_{3\phi\to 2\phi}\), which is contrary to the previous scenario. In this case freeze-out of \(\phi\) is dictated by \(2\phi\to 2\rm SM\) annihilation process that keeps \(\phi\) in thermal bath for a longer period compared to \(3\phi\to 2\phi\) process. Hence the freeze-out of \(\phi\) occurs at \(x=x_{F}^{\rm tot}\approx x_{F}^{2\phi\to 2\rm SM}>x_{F}^{3\phi\to 2\phi}\).
In Fig.5(a) we report the evolution of \(Y_{\phi}\) as a function of \(x=\frac{M_{\phi}}{T}\) for \(\lambda_{\phi}=0.01\) keeping other parameters same as in Scenario-I. From this figure it is evident that \(Y_{\phi}\) is entirely
Figure 5: _(a)Thermal freeze-out of \(\phi\) governed by \(2\phi\to 2\rm SM\) and (b)Parameter space for scenario-II. For other details see the text._
decided by \(2\phi\to 2\)SM number changing processes contrary to the previous scenario where \(3\phi\to 2\phi\) process was controlling the dynamics. Thus eq.(10) can be simplified by neglecting the sub-dominant \(3\phi\to 2\phi\) process:
\[\frac{dY_{\phi}}{dx} = -0.264\frac{g_{s}}{\sqrt{g_{\rho}}}\frac{M_{\phi}}{x^{2}}M_{pl}\left\langle\sigma v\right\rangle_{2\phi\to 2\text{SM}}(Y_{\phi}^{2}-(Y_{\phi}^{eq})^{2}) \tag{12}\] \[-\sqrt{\frac{45}{4\pi^{3}}}\left\langle\Gamma_{\phi\to\chi\nu}\right\rangle\frac{x}{M_{\phi}^{2}}\frac{M_{pl}}{\sqrt{g_{\rho}}}Y_{\phi}\.\]
In Fig.5(b) we display parameter space for this scenario in \(M_{\phi}\) vs. \(\lambda_{\phi H}\) plane with \(\mu_{\phi}/M_{\phi}=0.1\) for two different values \(\lambda_{\phi}=(10^{-1}\) & \(10^{-2})\) depicted by the blue and red shaded region respectively. For the same reason discussed in the context of scenario-I, in this case also \(\Gamma_{3\phi\to 2\phi}\) decreases with decrease in \(\lambda_{\phi}\) and finally falls below \(\Gamma_{2\phi\to 2\text{SM}}\). And this phenomena is true even for lower \(M_{\phi}\). For this reason here also we see that the shaded region shifts towards lower \(M_{\phi}\) (left) with decrease in \(\lambda_{\phi}\).
In summary, the main observations of this subsection are the following:
\[\text{Scenario I}\,:\quad\Gamma_{3\phi\to 2\phi}\ \gg\ \Gamma_{2\phi\to 2 \text{SM}}\quad\Longrightarrow\ x_{F}^{\text{tot}}\ \ \approx\ x_{F}^{3\phi\to 2\phi}>x_{F}^{2\phi\to 2 \text{SM}}\,\] \[\text{Scenario II}\,:\quad\Gamma_{3\phi\to 2\phi}\ \ll\ \Gamma_{2\phi\to 2 \text{SM}}\quad\Longrightarrow\ x_{F}^{\text{tot}}\ \ \approx\ x_{F}^{2\phi\to 2\text{SM}}>x_{F}^{3\phi\to 2 \phi}. \tag{13}\]
It is worth mentioning that scenario-II is more common and has already been studied in the literature [57; 84], where the mother particles are considered to have a sizable annihilation cross-section with the SM bath. In this work our main focus is on scenario-I, although for the sake of completeness we also discuss scenario-II.
**Step-II: Non-thermal DM production**
Following our previous discussion we now focus on the non-thermal production of DM (\(\chi\)) from the dilution of \(\phi\) density described by the last term in the R.H.S of eq.(10). We solve the following Boltzmann equation to get the evolution of DM(\(\chi\)) abundance,
\[\frac{dY_{\chi}}{dx}=\sqrt{\frac{45}{4\pi^{3}}}\left\langle\Gamma\right\rangle_ {\phi\to\chi\nu}\frac{x}{M_{sc}^{2}}\frac{M_{pl}}{\sqrt{g_{\rho}}}Y_{\phi}, \tag{14}\]
where \(Y_{\chi}\) is the co-moving number density of the DM \(\chi\). In general the solution for \(Y_{\phi}\) comes from the BEQ in eq.(11) for scenario-I and in eq.(12) for scenario-II, respectively. In the calculation of \(\Gamma(\phi\to\chi\nu)\) we consider the Yukawa coupling \(y_{1}\) in the range (\(\sim 10^{-12}-10^{-15}\))
so that the decay \(\phi\to\chi+\nu\) happens in the post-BBN and pre-CMB era. At this stage, it is worth discussing one subtle issue regarding the thermally averaged decay width \(\left\langle\Gamma\right\rangle_{\phi\to\chi\nu}\). As pointed out before, at the time when \(\phi\) freezes out it maintains the same temperature as the SM bath via the elastic scattering processes. However, this may not be true at the time of decay (\(<T_{\rm BBN}\)) if \(\Gamma_{\left[\phi~{\rm SM}\to\phi~{\rm SM}\right]}<{\cal H}\) at that time. This results in the dark sector acquiring a temperature \(T^{\prime}\) (\(\neq T\)) different from that of the thermal bath, which must be evaluated in order to obtain \(\left\langle\Gamma\right\rangle_{\phi\to\chi\nu}\). In this work we study the dark sector dynamics at low temperature (\(T^{\prime}\ll M_{\phi}\)), and in this limit the thermally averaged decay width can simply be approximated as \(\left\langle\Gamma\right\rangle_{\phi\to\chi\nu}(T^{\prime})\approx\Gamma_{\phi\to\chi\nu}\)[42; 91], thus reducing the complication of tracking the temperature dependence of the evolution of \(\phi\). After solving eq.(14) we get the complete picture of DM production, as shown by the red solid line in the left panel of Fig.2.
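For concreteness, the sketch below shows one way the coupled equations eq.(12) and eq.(14) could be integrated numerically. It is an illustration only (not the code used for the figures), with constant \(g_{s}\), \(g_{\rho}\), a constant \(\left\langle\sigma v\right\rangle_{2\phi\to 2\rm SM}\) and a constant decay width taken as placeholder values.

```python
# Minimal sketch (not the code used for the figures): integrate eq.(12) and eq.(14)
# for scenario-II with scipy, treating g_s, g_rho, <sigma v> and the decay width as
# constant placeholder values.
import numpy as np
from scipy.integrate import solve_ivp

Mpl = 1.22e19            # Planck mass [GeV]
Mphi = 10.0              # dark scalar mass [GeV] (illustrative)
gs, grho = 100.0, 100.0  # effective degrees of freedom, assumed constant
sigv = 1e-12             # <sigma v>_{2phi->2SM} [GeV^-2], placeholder
Gamma = 1e-25            # decay width Gamma_{phi->chi nu} [GeV], placeholder
gphi = 2.0               # internal dof of the complex scalar

def Yeq(x):
    # non-relativistic equilibrium yield: Y_eq ~ 0.145 (g/g_s) x^{3/2} e^{-x}
    return 0.145 * (gphi / gs) * x**1.5 * np.exp(-x)

def rhs(x, Y):
    Yphi, Ychi = Y
    ann = 0.264 * gs / np.sqrt(grho) * Mphi / x**2 * Mpl * sigv * (Yphi**2 - Yeq(x)**2)
    dec = np.sqrt(45.0 / (4.0 * np.pi**3)) * Gamma * x / Mphi**2 * Mpl / np.sqrt(grho) * Yphi
    return [-ann - dec, dec]          # eq.(12) and eq.(14)

sol = solve_ivp(rhs, (1.0, 1e6), [Yeq(1.0), 0.0], method="LSODA",
                rtol=1e-8, atol=1e-20)
print("Y_phi, Y_chi at x = 1e6:", sol.y[0, -1], sol.y[1, -1])
```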
## IV Light neutrino production before CMB
Now we discuss the production of supplementary light neutrinos from the late-time decay of \(\phi\) and the mechanism by which those extra light degrees of freedom can be probed at the CMB. As revealed earlier, neutrinos that are produced after neutrino decoupling (\(T\lesssim 2\) MeV) inject entropy into the neutrino bath. At the time of CMB, the number of relativistic neutrino degrees of freedom is expressed as,
\[N_{\rm eff}^{\rm CMB}=\frac{8}{7}\left(\frac{11}{4}\right)^{4/3}\left.\frac{ \rho_{\nu}^{\rm SM}}{\rho_{\gamma}}\right|_{\rm T=T_{CMB}}, \tag{15}\]
where, \(\rho_{\nu}^{\rm SM}=3\times 2\times\frac{7}{8}\times\frac{\pi^{2}}{30}(T_{\nu} ^{\rm SM})^{4}\) and \(\rho_{\gamma}=2\times\frac{\pi^{2}}{30}T^{4}\) are energy densities of neutrino and photon respectively. Due to the extra neutrino injection from the non-thermal decay of \(\phi\), the energy density of the neutrino bath increases to \(\rho_{\nu}^{\prime}\) (\(\rho_{\nu}^{\prime}>\rho_{\nu}^{\rm SM}\)). In this case, the relativistic neutrino degrees of freedom(\(N_{\rm eff}^{\prime}\)) also differs from the prediction of SM at the time of CMB. We parameterise this deviation at the time of CMB in the following manner,
\[\Delta{\rm N}_{\rm eff}=\left(\frac{\rho_{\nu}^{\prime}}{\rho_{\nu}^{\rm SM}}- 1\right)\,{\rm N}_{\rm eff}^{\rm SM}\Bigg{|}_{\rm T=T_{CMB}}. \tag{16}\]
We now solve the following Boltzmann equation to estimate the evolution of \(\rho_{\nu}^{\prime}\) with temperature,
\[\frac{d\rho_{\nu}^{\prime}}{dx}=-\frac{4\,\beta\,\rho_{\nu}^{\prime}}{x}+\frac {1}{xH(x)}\left\langle E\Gamma\right\rangle_{\phi\to\chi\nu}Y_{\phi}~{}s~{}, \tag{17}\]
where the term \(\beta\) indicates the variation of \(g_{s}(T)\) with T and is defined as
\[\beta(T)=1+\frac{1}{3}\frac{T}{g_{s}(T)}\frac{d\,g_{s}(T)}{dT}. \tag{18}\]
where, \(x\) is the dimensionless variable as mentioned earlier in context of equation (10). \(Y_{\phi}\) is the co-moving number density of \(\phi\) which is computed by solving eq.(11) or eq.(12) depending on the scenario we consider. \(g_{s}(x)\) is the number of effective degrees of freedom related to the entropy density and \(s\) is the co-moving entropy density. The term \(\left\langle E\Gamma\right\rangle_{\phi\rightarrow\chi\nu}\) in eq.(17) is the most crucial ingredient in this analysis which represents the thermal averaged energy density transferred to neutrino sector and is defined as
\[\left\langle E\Gamma\right\rangle_{\phi\rightarrow\chi\nu}=\frac{|\mathcal{M} |^{2}_{\phi\rightarrow\chi\nu}}{32\pi}\frac{\left(M_{\phi}^{2}-M_{\text{DM}}^ {2}\right)}{M_{\phi}^{2}}\left(1-\frac{M_{\text{DM}}^{2}}{M_{\phi}^{2}}\right).\]
The first term on the R.H.S of eq.(17) is responsible for the dilution of \(\rho^{\prime}_{\nu}\) due to the expansion of the universe, while the second term governs the augmented contribution to \(\rho^{\prime}_{\nu}\) from \(\phi\) decay. The evolution of \(\rho^{\text{SM}}_{\nu}\) after the decoupling of neutrinos is governed only by the expansion effect. Thus, in the absence of any new source, \(\rho^{\text{SM}}_{\nu}\) can be computed by setting the second term on the R.H.S of eq.(17) equal to zero and considering only the dilution of the energy density.
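As a rough illustration of how eqs.(16) and (18) enter a numerical treatment, the following sketch computes \(\Delta{\rm N}_{\rm eff}\) from the ratio \(\rho^{\prime}_{\nu}/\rho^{\rm SM}_{\nu}\) and evaluates \(\beta(T)\) from a tabulated \(g_{s}(T)\) by finite differences; the value \({\rm N}_{\rm eff}^{\rm SM}\approx 3.045\) and the crude \(g_{s}\) table are our own placeholder assumptions, not values taken from this paper.

```python
# Illustrative sketch of eq.(16) and eq.(18): Delta N_eff from rho'_nu / rho^SM_nu,
# and beta(T) from a tabulated g_s(T) by finite differences.
import numpy as np

NEFF_SM = 3.045   # assumed SM value of N_eff

def delta_neff(rho_nu_prime, rho_nu_sm):
    """eq.(16): Delta N_eff = (rho'_nu / rho^SM_nu - 1) * N_eff^SM at T = T_CMB."""
    return (rho_nu_prime / rho_nu_sm - 1.0) * NEFF_SM

# crude placeholder table of g_s(T) with T in GeV; a real analysis uses full tabulations
T_tab = np.array([1e-4, 1e-3, 1e-2, 0.1, 1.0, 10.0, 100.0])
gs_tab = np.array([3.91, 10.73, 10.75, 17.6, 75.0, 86.0, 106.75])
dgs_dT = np.gradient(gs_tab, T_tab)

def beta(T):
    """eq.(18): beta(T) = 1 + (T / 3 g_s) dg_s/dT, interpolated from the table."""
    gs = np.interp(T, T_tab, gs_tab)
    return 1.0 + (T / (3.0 * gs)) * np.interp(T, T_tab, dgs_dT)

print(delta_neff(1.1, 1.0), beta(0.5))
```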
## V Relic density and \(\Delta\text{N}_{\text{eff}}\)
So far we have built up the basic framework of the underlying dynamics of the dark sector particles (\(\phi\) and \(\chi\)), which provides freeze-in DM as well as extra active light neutrinos with possible footprints in \(\Delta\text{N}_{\text{eff}}\). In this section, we perform an exhaustive numerical analysis of scenario-I and scenario-II to quantitatively estimate the phenomenological consequences of the late-time decay of \(\phi\) in the light of current and future measurements of \(\Delta\text{N}_{\text{eff}}\). For this we first scrutinize the dependence of the DM relic density and \(\Delta\text{N}_{\text{eff}}\) on the various model parameters, as elaborated in sec-III and sec-IV respectively.
### Relic density
To calculate the DM relic density, we numerically solve eq.(14) along with either eq.(11) (for scenario-I) or eq.(12) (for scenario-II). The solution of the coupled BEQs for each scenario yields \(Y_{\phi}\) and \(Y_{\chi}\) as a function of \(x(=M_{\phi}/T)\). Using the co-moving density of DM,
at \(x\rightarrow\infty\), one finds the DM relic density as [23]:
\[\Omega_{\chi}h^{2}=2.755\times 10^{8}\times\left(\frac{M_{\rm DM}}{\rm GeV} \right)\times Y_{\chi}^{\rm today}, \tag{19}\]
where \(Y_{\chi}^{\rm today}=Y_{\chi}(x\rightarrow\infty)\). The precise determination of \(Y_{\chi}^{\rm today}\) is highly model dependent. In the following two sub-sections we pin down \(Y_{\chi}^{\rm today}\) for scenario-I and II and corresponding relic densities.
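Eq.(19) itself is a simple conversion; a short numerical check (with illustrative numbers only) is:

```python
# Quick numerical check of eq.(19); the example numbers are illustrative only.
def omega_h2(m_dm_gev, y_today):
    """Relic density from the asymptotic DM yield, eq.(19)."""
    return 2.755e8 * m_dm_gev * y_today

# yield needed for a 400 keV candidate to give Omega h^2 = 0.12:
y_needed = 0.12 / (2.755e8 * 400e-6)
print(y_needed)                      # ~1.1e-6
print(omega_h2(400e-6, y_needed))    # recovers 0.12
```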
**Scenario-I**: As shown before, the densities of \(\phi\) and \(\chi\) in scenario-I are mainly driven by the \(3\phi\to 2\phi\) number changing process in the dark scalar sector. Based on this number changing process, we calculate the co-moving abundances of \(\phi\) and \(\chi\) and show their evolution with \(x(=M_{\phi}/T)\) in Fig.6. The solid, dashed and dotted lines signify \(Y_{\phi}\), \(Y_{\phi}^{eq}\) and \(Y_{\chi}\) respectively. It can be seen from these figures that the late-time decay of \(\phi\) (solid lines) produces the abundance of \(\chi\). As the \(\phi\rightarrow\chi+\nu\) decay proceeds, the number density of \(\phi\) slowly converts into the \(\chi\) number density and eventually, at the end of the decay, the density of \(\phi\) completely dilutes into the \(\chi\) number density (\(Y_{\chi}\equiv Y_{\phi}\) at \(\tau\gg\tau_{\phi}\)). This is easily understood from the fact that \(\phi\rightarrow\chi\nu\) is the only possible decay mode of the NLP (\(\phi\)) [92]. Thus, in the generation of \(Y_{\chi}\) from \(Y_{\phi}\), the magnitude of the Yukawa coupling (\(y_{1}>0\)) has hardly any role to play except for setting the lifetime of \(\phi\), and this yields \(Y_{\phi}\big{(}x_{F}^{3\phi\to 2\phi}\big{)}\simeq Y_{\chi}^{\rm today}\). Therefore the relic density of DM given in eq.(19) turns out to be \(\Omega_{\chi}h^{2}\propto M_{\rm DM}\times Y_{\phi}\big{(}x_{F}^{3\phi\to 2\phi}\big{)}\). Consequently, in order to obtain a fixed \(\Omega_{\chi}h^{2}\), any increase in \(M_{\rm DM}\) demands a decrease in \(Y_{\phi}\big{(}x_{F}^{3\phi\to 2\phi}\big{)}\) and vice versa. We have pointed out before that for the \(\phi\rightarrow\chi\nu\) decay to happen between BBN and CMB, the value of \(y_{1}\) should lie in the range \(\{10^{-12}-10^{-15}\}\). As a representative value we set \(y_{1}=10^{-12}\) throughout our numerical analysis. To show the evolution of \(Y_{\phi}\) and \(Y_{\chi}\) with temperature, we fix \(\mu_{\phi}/M_{\phi}=0.1\) and \(M_{\rm DM}=400\) keV for both plots. We consider \(\lambda_{\phi H}=10^{-4}\) to realize scenario-I.
In Fig.6(a) we present the evolution of the densities \(Y_{\phi}\) (solid line) and \(Y_{\chi}\) (dotted line) as a function of the dimensionless parameter \(x(=M_{\phi}/T)\) for two different values of \(M_{\phi}\) and a fixed self-interaction coupling \(\lambda_{\phi}=1.0\). The dynamics of the dark sector particles for \(M_{\phi}=1\) GeV and \(M_{\phi}=10\) GeV are depicted by red and blue colors respectively. With the increase of \(M_{\phi}\), \(\left\langle\sigma v^{2}\right\rangle_{3\phi\to 2\phi}\) encounters phase-space and propagator suppression, which is also understood from the expression given in appendix C. As \(Y_{\phi}\) scales as \(Y_{\phi}\propto 1/\left\langle\sigma v^{2}\right\rangle_{3\phi\to 2\phi}\) (using the analytical solution [61]), for a smaller \(\left\langle\sigma v^{2}\right\rangle_{3\phi\to 2\phi}\) the thermal freeze-out of \(\phi\) happens at an earlier time with a higher abundance \(Y_{\phi}\), and eventually this \(Y_{\phi}\) is transferred to \(Y_{\chi}\). As a result,
\(Y_{\chi}\) is also higher for higher \(M_{\phi}\). This feature is portrayed in Fig.6(a), where a higher (lower) value of \(M_{\phi}\) leads to a higher (lower) abundance \(Y_{\chi}\), represented by the red (blue) dotted line.
To study the role of the dark scalar self-coupling \(\lambda_{\phi}\) on the DM abundance, in Fig.6(b) we show the variation in \(Y_{\chi}\) for two different values of \(\lambda_{\phi}=0.1\) (red line) and \(1.0\) (blue line), keeping \(M_{\phi}=1\) GeV. It is obvious that as the value of \(\lambda_{\phi}\) increases, the thermally averaged cross-section \(\langle\sigma v^{2}\rangle_{3\phi\to 2\phi}\) also increases, which reduces the abundance \(Y_{\phi}\), and finally this reduced \(Y_{\phi}\) generates a lower \(Y_{\chi}\). This is elucidated in Fig.6(b), where a higher (lower) value of \(\lambda_{\phi}\) gives a lower (higher) \(Y_{\chi}\), as shown by the blue (red) dotted line.
The other parameter \(\mu_{\phi}\), which carries mass dimension, also enters the \(3\phi\to 2\phi\) processes, since \(\langle\sigma v^{2}\rangle_{3\phi\to 2\phi}\propto(\mu_{\phi}/M_{\phi})^{2}\) (see Appendix C). With an increase in the ratio \(\mu_{\phi}/M_{\phi}\), the cross-section is enhanced, leading to a decrease in \(Y_{\phi}\) as well as \(Y_{\chi}\). For simplicity, we consider the ratio \(\mu_{\phi}/M_{\phi}=0.1\) throughout our analysis, which is consistent with the theoretical upper bound on \(\mu_{\phi}\) coming from the requirement of a stable vacuum, as discussed in Appendix A.
After describing the dependence of the relic abundance on the different model parameters, we now present the region of dark sector parameter space allowed by the observed DM density (\(\Omega_{\rm DM}h^{2}=0.120\pm 0.001\)) given by PLANCK [6]. We perform a numerical scan on the model
Figure 6: _Evolution of the co-moving abundances of \(\phi\) (solid line) and DM (\(\chi\)) (dotted line) with \(x(\equiv 1/T)\) (\(T\) in GeV) for scenario-I. In (a), a fixed \(\lambda_{\phi}=1.0\) with two different values of \(M_{\phi}\) is shown, and in (b), a fixed \(M_{\phi}=1\) GeV with two different values of \(\lambda_{\phi}\). The Higgs portal coupling is taken to be small, \(\lambda_{\phi H}=10^{-4}\), in order to realise the scenario. The other parameters, \(y_{1}=10^{-12}\), \(M_{\rm DM}=400\) keV and \(\mu_{\phi}/M_{\phi}=0.1\), are kept the same for both plots._
parameters in the following range
\[M_{\phi}\ :\ \{0.1\ -100\ {\rm GeV}\},\quad\lambda_{\phi}\ :\ \ \{0.001-1\}\ ; \tag{20}\]
to calculate \(\Omega_{\chi}h^{2}\) using eq.(19). We keep the other parameters fixed as in Fig.6, and to allow the on-shell decay \(\phi\to\chi+\nu\) we set \(M_{\phi}>M_{\rm DM}\).
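Schematically, the scan can be organized as below; `yield_from_beq` is a placeholder for the full Boltzmann solve of eqs.(11) and (14) and is not specified here, so the sketch only illustrates how the grid and eq.(19) are combined.

```python
# Schematic of the scenario-I scan (illustrative structure only, not the authors' code).
import numpy as np

OMEGA_H2_OBS = 0.120

def dm_mass_for_relic(y_today, omega_h2=OMEGA_H2_OBS):
    # invert eq.(19): M_DM [GeV] = Omega h^2 / (2.755e8 * Y_today)
    return omega_h2 / (2.755e8 * y_today)

def scan(yield_from_beq):
    points = []
    for m_phi in np.logspace(np.log10(0.1), 2.0, 30):        # 0.1 - 100 GeV
        for lam_phi in np.logspace(-3, 0, 30):               # 0.001 - 1
            y_today = yield_from_beq(m_phi, lam_phi)         # user-supplied BEQ solve
            m_dm = dm_mass_for_relic(y_today)
            if m_dm < m_phi:                                 # keep phi -> chi nu on shell
                points.append((m_phi, lam_phi, m_dm))
    return points
```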
Our scan result is displayed in Fig.7, where we show points satisfying the relic density constraint in the \(\lambda_{\phi}\) vs. \(M_{\phi}\) plane. The grey shaded region corresponds to the parameter space where scenario-II dominates, which demands a different analysis and will be discussed shortly. The color gradient in the figure represents the DM mass range, varying from 0.1 MeV to 10 MeV, set by the observed relic density constraint. One can see from this figure that a higher value of \(\lambda_{\phi}\) prefers a higher value of \(M_{\rm DM}\). As explained in the context of Fig.6, with the increase of \(\lambda_{\phi}\), \(Y_{\chi}\) decreases and hence a higher value of \(M_{\rm DM}\) is required in order to satisfy the correct relic density, as depicted in Fig.7. For simplicity we restrict our scan to the specified range of \(\lambda_{\phi}\) mentioned above. However, one can extend the scan even
Figure 7: _DM relic density satisfied points in the \(M_{\phi}-\lambda_{\phi}\) plane for scenario-I with \(\mu_{\phi}/M_{\phi}=0.1\), \(\lambda_{\phi H}=10^{-4}\) and \(y_{1}=10^{-12}\). The color gradient indicates the range of \(M_{\rm DM}\) satisfying the correct relic density. The shaded region corresponds to the parameter space where scenario-II dominates over scenario-I. White regions are computational artifacts associated with the scan._
to higher values of \(\lambda_{\phi}\gtrsim 1.0\) within the perturbativity limit. For those values of \(\lambda_{\phi}\), even heavier DM masses (\(10~{\rm MeV}\lesssim M_{\rm DM}<M_{\phi}\)) would be allowed by the relic density constraint.
**Scenario-II**: We now move to the second scenario, where the density of \(\phi\) is mainly driven by the \(2\phi\to 2\) SM number changing process. Following our earlier discussion, the density of \(\phi\) converts into the density of DM \(\left(Y_{\phi}\big{(}x_{F}^{2\phi\to 2{\rm SM}}\big{)}\simeq Y_{\chi}^{\rm today}\right)\) via the late-time decay of \(\phi\). Therefore the relic density of DM given in eq.(19) becomes \(\Omega_{\chi}h^{2}\propto M_{\rm DM}\times Y_{\phi}\big{(}x_{F}^{2\phi\to 2{\rm SM}}\big{)}\). As in the previous scenario, \(Y_{\phi}(x_{F}^{2\phi\to 2{\rm SM}})\) must decrease with an increase in \(M_{\rm DM}\) in order to obtain a fixed relic density, and vice versa. One can also analytically express the yield of \(\phi\) at freeze-out as \(Y_{\phi}\propto 1/\langle\sigma v\rangle_{2\phi\to 2{\rm SM}}\)[23]. For heavier \(M_{\phi}\), more annihilation channels of \(\phi\) into SM pairs open up kinematically and enhance the cross-section. Thus \(\langle\sigma v\rangle_{2\phi\to 2{\rm SM}}\) can be expressed as \(\sum_{X=SM}\langle\sigma v\rangle_{\phi\phi\to X\overline{X}}~\Theta\big{(}M_{\phi}-M_{X}\big{)}\), where \(\Theta\) is the Heaviside step function. For a fixed \(M_{\phi}\), \(Y_{\phi}\) as well as \(Y_{\chi}\) decreases as one increases \(\lambda_{\phi H}\), since \(\langle\sigma v\rangle_{2\phi\to 2{\rm SM}}\propto\lambda_{\phi H}^{2}\). However, here the dependence of \(Y_{\chi}\) on \(M_{\phi}\) is opposite to that of scenario-I: with an increase in \(M_{\phi}\), the annihilation cross-section \(\langle\sigma v\rangle_{2\phi\to 2{\rm SM}}\) also increases for the aforementioned reasons, thus resulting in a decrease in \(Y_{\phi}\) as well as \(Y_{\chi}\)[57].
Now in order to find a consistent parameter space satisfying observed relic density measured by PLANCK[6], we perform a numerical scan of the relevant parameters for scenario-II in the following range:
\[M_{\phi}~{}:~{}\{10-100~{}{\rm GeV}\},\quad\lambda_{\phi H}~{}:~{}\{10^{-3}-1 0^{-1}\}~{}; \tag{21}\]
whereas the other parameters are kept fixed at \(\mu_{\phi}/M_{\phi}=0.1,~\lambda_{\phi}=0.1\) and \(y_{1}=10^{-12}\). The choice of dark sector parameters in eq.(21) ensures that \(\Gamma_{2\phi\to 2{\rm SM}}\gg\Gamma_{3\phi\to 2\phi}\), as required for scenario-II. We consider \(M_{\phi}\) up to \(100\) GeV; beyond that, \(Y_{\phi}\) is further suppressed, resulting in a negligible contribution to \(\Delta{\rm N_{eff}}\), as will be discussed later.
In Fig.8 we plot the points satisfying the correct relic density in the \(M_{\phi}\) vs. \(\lambda_{\phi H}\) plane. The color gradient represents the variation of \(M_{\rm DM}\) considered here. The correct relic density constraint sets the DM mass in the range \(M_{\rm DM}\): \(\sim\{10~{\rm MeV}-10~{\rm GeV}\}\) for our chosen parameters. The gray shaded region in the lower left corner of the figure represents the region where scenario-II does not work, paving the way for scenario-I. With an increase in \(\lambda_{\phi}(>0.1)\), scenario-I starts to dominate over scenario-II even at higher values of \(M_{\phi}\), and the shaded region moves towards the right accordingly. For \(M_{\phi}<m_{h}/2\), the \(h\to\phi\phi^{*}\)
decay opens up and contributes to the SM Higgs invisible decay width (\(\Gamma_{h}^{\rm inv.}\)), which is very precisely measured by CMS [93]. The bound from \(\Gamma_{h}^{\rm inv.}\) (discussed in Appendix B) excludes a significant part of the parameter space, as shown by the light cyan region in Fig.8. As understood from the figure, scenario-II works in the higher range of \(M_{\phi}\) and at moderate values of \(\lambda_{\phi H}\), leading to a lower \(Y_{\chi}\), as discussed earlier in this subsection. The \(2\phi\to 2\)SM annihilation cross-section near the Higgs pole, \(M_{\phi}\sim m_{h}/2\), causes further suppression in \(Y_{\chi}\). For \(M_{\phi}>M_{W}\), more final states open up, resulting in an even larger \(\langle\sigma v\rangle_{2\phi\to 2\rm SM}\). Thus, to satisfy the observed DM density, one has to reduce \(\lambda_{\phi H}\) in that region, as shown in the top right corner (white area) of the figure. Therefore scenario-II allows a higher DM mass, up to a few GeV, to satisfy the correct relic density.
Figure 8: _DM relic density satisfied points for scenario-II are shown in the \(\lambda_{\phi H}\) vs. \(M_{\phi}\) plane with \(\lambda_{\phi}=0.1\), \(\mu_{\phi}=0.1M_{\phi}\), \(y_{1}=10^{-12}\) and the color gradient represents the variation in \(M_{\rm DM}\) satisfying the correct relic density. The gray shaded region corresponds to the parameter space where scenario-I is dominating._
### Contribution to \(\Delta{\rm N}_{\rm eff}\) at CMB
In earlier sections, we have established that the main thrust of this whole exercise is to calculate the contribution to \(\Delta{\rm N}_{\rm eff}\) from extra active neutrinos produced in association with FIMP-like DM from the late-time decay of a self-interacting dark scalar \(\phi\). Simultaneously, we have also emphasized the possibility of correlating the dark matter mass with the measured value of \(\Delta{\rm N}_{\rm eff}\). Thus, any precise determination of \(\Delta{\rm N}_{\rm eff}\) would provide an indirect probe of the dynamics of a dark sector involving a strongly self-interacting particle \(\phi\) as well as FIMP-like DM. Based on our discussion in sec-IV, we now investigate the dependence of \(\Delta{\rm N}_{\rm eff}\) on the dark sector model parameters, which is completely determined by the ratio \(\rho^{\prime}_{\nu}/\rho^{\rm SM}_{\nu}\).
In Fig.9 we show the evolution of \(\Delta{\rm N}_{\rm eff}\) with temperature T for different sets of model parameters, as indicated in the figure caption. We first numerically evaluate \(\rho^{\prime}_{\nu}/\rho^{\rm SM}_{\nu}\) by solving eq.(17) along with eq.(11) and then plug it into eq.(16) to estimate \(\Delta{\rm N}_{\rm eff}\). From both Fig.9(a) and 9(b) we notice that at high \(T\), \(\Delta{\rm N}_{\rm eff}\) is almost negligible because the entropy injection into the neutrino bath is very small during the earlier epoch of the \(\phi\to\chi\nu\) decay. With the decrease in temperature, \(\phi\) freezes out from the thermal bath and decays into \(\chi+\nu\) after BBN, generating a new source of active neutrinos that injects extra energy density into the neutrino bath. This added neutrino density causes a continuous growth of \(\Delta{\rm N}_{\rm eff}\) with lowering of the temperature. With a further decrease in the temperature, at some point the \(\phi\) decay is completed and no auxiliary neutrino production
or supplementary energy transfer to the neutrino bath takes place, and the ratio \(\rho^{\prime}_{\nu}/\rho^{\rm SM}_{\nu}\) attains its maximum possible value at that temperature. After that, both \(\rho^{\prime}_{\nu}\) and \(\rho^{\rm SM}_{\nu}\) dilute in the same fashion with a further decrease in temperature, resulting in a fixed ratio \(\rho^{\prime}_{\nu}/\rho^{\rm SM}_{\nu}\), which corresponds to a constant value of \(\Delta N_{\rm eff}\). Since \(\rho^{\prime}_{\nu}\propto Y_{\phi}\) (following eq.(17)), a higher value of \(Y_{\phi}\) leads to a higher energy transfer to the neutrino bath, resulting in a larger \(\Delta\)N\({}_{\rm eff}\), and vice-versa. We plot the evolution of \(\Delta N_{\rm eff}\) in Fig.9(a) for \(M_{\phi}=1\) GeV (red line) and \(M_{\phi}=0.1\) GeV (blue line), keeping \(\lambda_{\phi}=1.0\) fixed. In Fig.9(b) we show a similar plot, but this time for a fixed \(M_{\phi}=0.1\) GeV and taking two values of \(\lambda_{\phi}=1.0\) (red line) and \(0.1\) (blue line). While generating these two plots, we fix \(M_{\rm DM}=400\) MeV and \(y_{1}=10^{-12}\). The behavior of \(\Delta\)N\({}_{\rm eff}\) with the model parameters (\(M_{\phi},~\lambda_{\phi}\) and \(\mu_{\phi}/M_{\phi}\)) is the same as that of \(Y_{\phi}\), as discussed earlier for scenario-I (Fig.6(a) and 6(b)), and the same dependence is depicted in Fig.9(a) and 9(b).
The effect of \(\lambda_{\phi H}\) on \(\Delta\)N\({}_{\rm eff}\) in scenario-II is similar to scenario-I. Following our previous argument, for any increase in the value of the Higgs portal coupling \(\lambda_{\phi H}\), the \(\phi\) number density \(Y_{\phi}\) decreases, and that leads to a diminished contribution of active neutrinos to \(\Delta\)N\({}_{\rm eff}\). However, the \(M_{\phi}\) dependence of \(\Delta\)N\({}_{\rm eff}\) shows the opposite behaviour in scenario-II compared to scenario-I: here, \(\Delta\)N\({}_{\rm eff}\) decreases with an increase in \(M_{\phi}\). The reason for this contrary nature follows the same argument as in the relic density calculation. For heavier \(M_{\phi}\), the enhanced phase space gives a larger \(\langle\sigma v\rangle_{2\phi\to 2{\rm SM}}\), which leads to a lower \(Y_{\phi}\) and finally a lower \(\Delta\)N\({}_{\rm eff}\). Hence the energy transferred to the neutrino sector is too small to contribute significantly to \(\Delta\)N\({}_{\rm eff}\), and the \(\Delta\)N\({}_{\rm eff}\) for scenario-II will be far below the sensitivity of the current and future generation experiments. In this paper we do not display the explicit parameter dependence of \(\Delta\)N\({}_{\rm eff}\) in scenario-II; a similar study can be found in [57].
Finally, we calculate \(\Delta\)N\({}_{\rm eff}\) for different values of the model parameters in scenario-I and display our findings in Fig.10. We present \(\Delta\)N\({}_{\rm eff}\) as a function of \(M_{\phi}\), and the color gradient represents the range of \(M_{\rm DM}\) allowed by the observed DM relic density [6]. In the figure, we show the different existing exclusion bounds as well as future sensitivities on \(\Delta\)N\({}_{\rm eff}\), depicted by the different coloured patches. We notice that a decrease in \(M_{\rm DM}\) yields an increase in \(\Delta\)N\({}_{\rm eff}\). This is easily understood, as a lower value of \(M_{\rm DM}\) requires a higher value of \(Y_{\chi}\) to satisfy the observed relic density. As analyzed earlier, \(Y_{\chi}\) is governed by \(Y_{\phi}\), and a higher value of \(Y_{\chi}\) corresponds to a higher value of \(Y_{\phi}\). Thus, for a higher value of \(Y_{\phi}\), more energy gets transferred to the neutrino sector, leading to a higher value of \(\Delta\)N\({}_{\rm eff}\). We also notice that
higher values of \(M_{\phi}\) correspond to points yielding higher values of \(\Delta\text{N}_{\text{eff}}\). This is also understandable, as a higher value of \(M_{\phi}\) leads to a higher value of \(Y_{\phi}\), resulting in a higher value of \(\Delta\text{N}_{\text{eff}}\) for the same reason discussed above. In the same plot, we present the current upper limits (\(1\sigma\) and \(2\sigma\)) on \(\Delta N_{\text{eff}}\) from PLANCK 2018 and the future sensitivities of two upcoming CMB experiments. The present \(2\sigma\) and \(1\sigma\) limits on \(\Delta\text{N}_{\text{eff}}\) from Planck 2018 exclude DM masses below a few hundred keV. Future generation experiments like SPT-3G [74] at the \(1\sigma\) level and CMB-S4 at the \(2\sigma\) level [75] may probe heavier DM masses up to 1 MeV. The parameter space allowed by the \(\Delta\text{N}_{\text{eff}}\) constraints is also consistent with the bound on the free streaming length of DM coming from the Lyman-\(\alpha\) forest [94; 53].
## VI Conclusion
In this work we have proposed a minimal extension of the Type-I seesaw model with a complex scalar singlet(\(\phi\)) and a singlet Dirac fermion (\(\chi\)). To ensure the stability of the
lightest dark sector particle, an additional \(\mathcal{Z}_{3}\) symmetry has been imposed under which \(\phi\) and \(\chi\) transform non-trivially while the rest of SM particles and three RHNs transform trivially.
The mass spectrum of the dark sector particles is such that the Dirac fermion \(\chi\) is the lightest particle and plays the role of DM, while the singlet scalar \(\phi\) is the next-to-lightest particle. The DM, with its tiny coupling to the SM bath, can only be produced from the late-time decay of \(\phi\), from which it obtains its abundance. On the other hand, \(\phi\) remains in the thermal bath due to its strong self-coupling, and after its freeze-out it decays to DM and active neutrinos. Depending on the thermal history of \(\phi\), we have divided the analysis into two scenarios. In the first scenario (I), \(\phi\) gains its number density through the freeze-out mechanism via the number changing strong self-interactions within the dark sector, whereas in the second scenario (II) \(\phi\) freezes out via the Higgs portal coupling to SM particles. The RHNs (\(N_{1,2,3}\)), which are responsible for generating the light neutrino masses and mixing angles through the type-I seesaw mechanism, are sufficiently heavy (\(M_{N_{1,2,3}}\gg T_{\rm RH}\)) such that their number densities do not contribute to the DM relic. However, the presence of the RHNs in the particle content allows an effective interaction between \(\phi\), \(\chi\) and the active neutrinos (\(\nu\)), which leads to extra neutrino production from the late-time decay of \(\phi\). To track the abundances of \(\phi\) and \(\chi\) we have solved two coupled Boltzmann equations. We have first checked the effects of the different model parameters on the relic density of DM by solving those Boltzmann equations and identified the parameter space giving the correct relic density in both scenarios (I & II). Apart from producing the right amount of DM relic, the late-time decay of \(\phi\) makes a significant impact on the total radiation energy density at the time of CMB formation, which is parameterized as \(\Delta\)N\({}_{\rm eff}\). To compute \(\Delta\)N\({}_{\rm eff}\) we have evaluated the extra radiation energy density injected into the light neutrino bath from \(\phi\) by solving the required Boltzmann equation. In scenario-I, DM masses up to a few hundred keV are excluded by the present \(1\sigma\) limit on \(\Delta\)N\({}_{\rm eff}\) from Planck 2018 data. Future generation experiments like SPT-3G and CMB-S4 will be sensitive enough to test DM masses up to a few MeV. However, in scenario-II, where the abundance of the mother particle (\(\phi\)) is suppressed due to its sizable interactions with the SM bath, we have found that the entropy injection is too small to be probed by the bounds on \(\Delta\)N\({}_{\rm eff}\) from present and future-generation experiments. Thus in this paper we have explicitly shown an alternative way of probing FIMP dark matter through the precise measurement of \(\Delta\)N\({}_{\rm eff}\), even when the mother particle does not have sizable interactions with the SM bath, which is otherwise absent in the
literature. Consequently, we expect some very exciting results from next generation CMB experiments like SPT-3G and CMB-S4, which can shed light on various dark sector models like the one discussed in this paper.
## Acknowledgement
SJ and PG thank D. Nanda for the helpful discussions during this project. The authors would like to thank Abhijit Kumar Saha, Sougata Ganguly and Deep Ghosh for useful discussions and comments. SJ is funded by CSIR, Government of India, under the NET JRF fellowship scheme with Award file No. 09/080(1172)/2020-EMR-I.
## Appendix A Theoretical constraints
### Stability
The scalar potential is bounded from below when the quartic couplings of the scalar potential satisfy these co-positivity conditions[95]:
\[\lambda_{H}\geq 0,\ \ \lambda_{\phi}\geq 0,\ \ \lambda_{\phi H}+2\sqrt{\lambda_{ \phi}\lambda_{H}}\geq 0. \tag{10}\]
The estimation of the lifetime of the desired stable vacuum essentially puts an upper bound on the trilinear dark coupling as [96]
\[\mu_{\phi}/M_{\phi}<2\sqrt{\lambda_{\phi}}. \tag{11}\]
### Perturbative unitarity
Tree-level unitarity of the theory, derived from all possible \(2\to 2\) scattering amplitudes that form the \(S\) matrix, constrains the quartic couplings of the scalar potential [97]. The eigenvalues of the \(S\) matrix are bounded from above as [98]:
\[|\lambda_{H}|\leq 4\pi,\ \ |\lambda_{\phi H}|\leq 8\pi,\ \ | \lambda_{\phi}|\leq 4\pi,\] \[|2\lambda_{\phi}+3\lambda_{H}\pm\sqrt{2\lambda_{\phi H}^{2}+(2 \lambda_{\phi}-3\lambda_{H})^{2}}|\leq 8\pi. \tag{12}\]
The quartic and Yukawa couplings of the interaction Lagrangian should also obey the following inequalities to maintain perturbativity [99]:
\[|\lambda_{H}|\lesssim\frac{2\pi}{3},|\lambda_{\phi}|\lesssim\pi,\ | \lambda_{\phi H}|\lesssim 4\pi,\] \[\text{and}\quad|y_{\phi N}|<\sqrt{4\pi}. \tag{10}\]
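The conditions above are simple algebraic checks; a compact helper collecting them for a single parameter point might look like the following sketch. The example values passed in, including \(\lambda_{H}\simeq 0.13\) for the SM Higgs quartic, are our own illustrative assumptions.

```python
# Compact helper (illustrative) collecting the theoretical constraints of this appendix.
import math

def passes_constraints(lam_h, lam_phi, lam_phih, mu_over_mphi, y_phin):
    # boundedness from below (co-positivity)
    bounded = (lam_h >= 0 and lam_phi >= 0
               and lam_phih + 2.0 * math.sqrt(lam_phi * lam_h) >= 0)
    # vacuum stability bound on the trilinear coupling
    stable = lam_phi > 0 and mu_over_mphi < 2.0 * math.sqrt(lam_phi)
    # tree-level perturbative unitarity
    root = math.sqrt(2.0 * lam_phih**2 + (2.0 * lam_phi - 3.0 * lam_h)**2)
    unitary = (abs(lam_h) <= 4 * math.pi and abs(lam_phih) <= 8 * math.pi
               and abs(lam_phi) <= 4 * math.pi
               and abs(2 * lam_phi + 3 * lam_h + root) <= 8 * math.pi
               and abs(2 * lam_phi + 3 * lam_h - root) <= 8 * math.pi)
    # perturbativity of quartic and Yukawa couplings
    pert = (abs(lam_h) <= 2 * math.pi / 3 and abs(lam_phi) <= math.pi
            and abs(lam_phih) <= 4 * math.pi and abs(y_phin) < math.sqrt(4 * math.pi))
    return bounded and stable and unitary and pert

print(passes_constraints(0.13, 0.1, 1e-4, 0.1, 1e-12))   # benchmark-like point -> True
```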
## Appendix B Constraint from Higgs invisible decay
The dark complex scalar \(\phi\) is very weakly coupled to the SM Higgs via the Higgs portal interaction. The late-time decay of \(\phi\) decides both the relic abundance of DM and the contribution to \(\Delta\text{N}_{\text{eff}}\), which requires a light scalar mass of the order of MeV to a few GeV, well below \(M_{h}/2\) (as discussed in the next section). In that case, the Higgs can decay to the dark scalar \(\phi\) and contribute to the Higgs invisible decay width. The Higgs invisible decay width is given by
\[\Gamma_{h\rightarrow\phi\phi*}=\frac{(\lambda_{\phi H}v)^{2}}{16\pi M_{h}} \sqrt{1-\frac{4M_{\phi}^{2}}{M_{h}^{2}}}\, \tag{11}\]
where \(M_{h}=125.06\) GeV and \(v=246\) GeV. The current analysis of the CMS collaboration [93] at LHC puts a strong constraint on the Higgs invisible decay in the following form
\[\text{BR}^{\text{inv}}=\frac{\Gamma_{h}^{\text{inv}}}{\Gamma_{h}^{\text{inv} }+\Gamma_{h}^{\text{SM}}}<11\%\ \, \tag{12}\]
where \(\Gamma_{h}^{SM}=4\) MeV. If \(M_{h}<2M_{\phi}\), this decay is absent. In this work the Higgs invisible decay constraint is only applicable to scenario-II, where we require a relatively large \(\lambda_{\phi H}\).
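The expressions above translate directly into a numerical check; the sketch below uses the quoted values \(M_{h}=125.06\) GeV, \(v=246\) GeV and \(\Gamma_{h}^{\rm SM}=4\) MeV, while the example parameter points are illustrative only.

```python
# Direct numerical check of the invisible-width constraint above.
import math

MH, VEV, GAMMA_H_SM = 125.06, 246.0, 4e-3   # GeV

def gamma_h_inv(lam_phih, m_phi):
    if 2.0 * m_phi >= MH:
        return 0.0                            # h -> phi phi* kinematically closed
    return (lam_phih * VEV)**2 / (16.0 * math.pi * MH) * math.sqrt(1.0 - 4.0 * m_phi**2 / MH**2)

def allowed_by_invisible_decay(lam_phih, m_phi, limit=0.11):
    g = gamma_h_inv(lam_phih, m_phi)
    return g / (g + GAMMA_H_SM) < limit

print(allowed_by_invisible_decay(1e-3, 10.0))   # True
print(allowed_by_invisible_decay(1e-2, 10.0))   # False: BR_inv ~ 0.19 > 0.11
```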
## Appendix C \(3\phi\to 2\phi\)
In our setup, \(3\phi\to 2\phi\) number changing processes in the dark sector occur through \(\phi\phi\phi\to\phi\phi^{*}\), \(\phi\phi^{*}\phi^{*}\to\phi\phi\) and their conjugate processes, i.e. \(\phi^{*}\phi^{*}\phi^{*}\to\phi^{*}\phi\) and \(\phi^{*}\phi\phi\to\phi^{*}\phi^{*}\), respectively. Some of these processes are mediated by \(\phi\) only and the rest are mediated by both \(\phi\) and \(h\). However, for light \(M_{\phi}(\lesssim\mathcal{O}(\text{GeV}))\), the h-mediated diagrams are heavily suppressed due to the heavy propagator and the small Higgs portal coupling \(\lambda_{\phi H}\). Therefore, for simplicity, one can ignore the Higgs-mediated diagrams. All the \(\phi\)-mediated
Feynman diagrams for the \(\phi\phi\phi\to\phi\phi^{*}\) and \(\phi\phi^{*}\phi^{*}\to\phi\phi\) processes are shown in Fig.11 and Fig.12 respectively.
The amplitude for \(\phi\)\(\phi\)\(\phi\rightarrow\phi\)\(\phi^{*}\) number changing scattering processes is given by
\[\mathcal{M}_{\phi\phi\phi\rightarrow\phi\phi^{*}} = \mathcal{M}_{1}+\mathcal{M}_{2}^{t}+\mathcal{M}_{2}^{u}+ \mathcal{M}_{3}^{t}+\mathcal{M}_{3}^{u}\] \[= \Big{[}\frac{4\mu_{\phi}\lambda_{\phi}}{\left(s-M_{\phi}^{2} \right)}+\frac{4\mu_{\phi}\lambda_{\phi}}{\left(t-M_{\phi}^{2}\right)}+\frac{4 \mu_{\phi}\lambda_{\phi}}{\left(u-M_{\phi}^{2}\right)}+\frac{\mu_{\phi}^{3}}{ \left(s-M_{\phi}^{2}\right)\left(t-M_{\phi}^{2}\right)}+\frac{\mu_{\phi}^{3}}{ \left(s-M_{\phi}^{2}\right)\left(u-M_{\phi}^{2}\right)}\Big{]}.\]
Figure 11: _Feynman diagrams for the \(\phi\phi\phi\rightarrow\phi\phi^{*}\) number changing process. Note that for each t-channel diagram there is a corresponding u-channel diagram._
Figure 12: _Feynman diagrams for the \(\phi\phi^{*}\phi^{*}\rightarrow\phi\phi\) number changing process. Note that for each t-channel diagram there is a corresponding u-channel diagram._
And the amplitude for \(\phi\)\(\phi^{*}\)\(\phi^{*}\rightarrow\phi\)\(\phi\) number changing scattering processes is given by
\[\mathcal{M}_{\phi^{*}\phi^{*}\phi\rightarrow\phi\phi} = \mathcal{M}_{1}+\mathcal{M}_{2}^{t}+\mathcal{M}_{2}^{u}+\mathcal{M}_{3}^{t}+\mathcal{M}_{3}^{u}+\mathcal{M}_{4}+\mathcal{M}_{5}^{t}+\mathcal{M}_{5}^{u}+\mathcal{M}_{6}^{t}+\mathcal{M}_{6}^{u}\] \[= \Big{[}\frac{4\mu_{\phi}\lambda_{\phi}}{\big{(}s-M_{\phi}^{2}\big{)}}+\frac{\mu_{\phi}^{3}}{\big{(}t-M_{\phi}^{2}\big{)}^{2}}+\frac{\mu_{\phi}^{3}}{\big{(}u-M_{\phi}^{2}\big{)}^{2}}+\frac{4\mu_{\phi}\lambda_{\phi}}{\big{(}t-M_{\phi}^{2}\big{)}}+\frac{4\mu_{\phi}\lambda_{\phi}}{\big{(}u-M_{\phi}^{2}\big{)}}\] \[\qquad\qquad+\frac{4\mu_{\phi}\lambda_{\phi}}{\big{(}t-M_{\phi}^{2}\big{)}}+\frac{4\mu_{\phi}\lambda_{\phi}}{\big{(}u-M_{\phi}^{2}\big{)}}+\frac{\mu_{\phi}^{3}}{\big{(}t-M_{\phi}^{2}\big{)}\big{(}s-M_{\phi}^{2}\big{)}}+\frac{\mu_{\phi}^{3}}{\big{(}u-M_{\phi}^{2}\big{)}\big{(}s-M_{\phi}^{2}\big{)}}\Big{]}.\]
The total thermal averaged cross section for \(3\phi\to 2\phi\) number changing processes can be expressed using non-relativistic approximation as[100]:
\[\langle\sigma v^{2}\rangle_{3\phi\to 2\phi} = \langle\sigma v^{2}\rangle_{\phi\phi\phi\rightarrow\phi\phi^{*}} +\langle\sigma v^{2}\rangle_{\phi\phi^{*}\phi^{*}\rightarrow\phi\phi} \tag{13}\] \[\approx \frac{\sqrt{5}}{192\pi M_{\phi}^{3}}\Big{(}|\mathcal{M}_{\phi \phi\phi\rightarrow\phi\phi^{*}}|^{2}+|\mathcal{M}_{\phi^{*}\phi^{*} \phi^{*}\rightarrow\phi\phi^{*}}|^{2}\Big{)}\] \[\qquad+\frac{\sqrt{5}}{192\pi M_{\phi}^{3}}\Big{(}|\mathcal{M}_{ \phi\phi^{*}\phi^{*}\rightarrow\phi\phi}|^{2}+|\mathcal{M}_{\phi^{*}\phi \phi\rightarrow\phi^{*}\phi^{*}}|^{2}\Big{)}\] \[= \frac{\sqrt{5}}{192\pi M_{\phi}^{3}}\Big{(}2|\mathcal{M}_{\phi \phi\phi\rightarrow\phi\phi^{*}}|^{2}+2|\mathcal{M}_{\phi\phi^{*}\phi^{*} \rightarrow\phi\phi}|^{2}\Big{)},\]
where \(|\mathcal{M}_{\phi\phi\phi\rightarrow\phi\phi^{*}}|^{2}=|\mathcal{M}_{\phi^{*} \phi^{*}\phi^{*}\rightarrow\phi\phi^{*}}|^{2}\) and \(|\mathcal{M}_{\phi\phi^{*}\phi^{*}\rightarrow\phi\phi}|^{2}=|\mathcal{M}_{\phi ^{*}\phi\phi\rightarrow\phi^{*}\phi^{*}}|^{2}\).
## Appendix D \(2\phi\to 2\) SM annihilation and \(\phi\) SM \(\rightarrow\phi\) SM scattering
There is another type of number-changing process connecting the dark sector, \(\phi\), and the visible sector (SM), where two dark scalars annihilate into two SM particles via an \(h\)-mediated diagram. Note that our analysis mostly focuses on a light dark scalar with mass up to a few GeV. Therefore \(\phi\) can only annihilate into light fermion pairs. The Feynman diagrams of the corresponding number-changing processes are shown in Fig.13.
The thermal averaged cross-section for \(2\phi\to 2\)SM number changing process is given by:
\[\langle\sigma v\rangle_{2\phi\to 2\text{SM}} = \sum_{f}\langle\sigma v\rangle_{\phi\phi^{*}\to f \overline{f}}\] \[= \sum_{f}\frac{x}{16TM_{\phi}^{4}K_{2}(x)^{2}}\int_{4M_{\phi}^{2}}^ {\infty}\big{(}\sigma v\big{)}_{\phi\phi^{*}\to f\overline{f}}K_{1} \big{(}\frac{\sqrt{s}}{T}\big{)}s\sqrt{s-4M_{\phi}^{2}}\ ds\]
where \(x=\frac{M_{\phi}}{T}\) and \(\big{(}\sigma v\big{)}_{\phi\phi^{*}\to f\overline{f}}\) can be written as:
\[\big{(}\sigma v\big{)}_{\phi\phi^{*}\to f\overline{f}} = \Big{(}\frac{1}{4\pi s\sqrt{s}}\frac{N_{c}\lambda_{\phi H}^{2}m_{f}^{2}}{(s-m_{h}^{2})^{2}+m_{h}^{2}\Gamma_{h}^{2}}(s-4m_{f}^{2})^{\frac{3}{2}}\Big{)}\Theta(M_{\phi}-m_{f}). \tag{100}\]
In the above expression \(N_{c}=1\) for leptons and \(N_{c}=3\) for quarks.
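A direct numerical evaluation of the thermal average above could look like the following sketch, restricted here to a single illustrative channel (\(\phi\phi^{*}\to b\bar{b}\)) with placeholder values for \(M_{\phi}\) and \(\lambda_{\phi H}\); a full analysis would sum over all kinematically open fermion pairs.

```python
# Numerical sketch of the thermally averaged 2phi -> 2SM cross-section for one channel.
import numpy as np
from scipy.integrate import quad
from scipy.special import kn

MH, GAMMA_H = 125.06, 4e-3      # GeV, Higgs mass and width in the propagator
M_PHI, LAM_PHIH = 10.0, 1e-2    # GeV, placeholder dark scalar mass and portal coupling
M_F, N_C = 4.18, 3              # GeV, b-quark mass and colour factor

def sigma_v(s):
    if s <= 4.0 * M_F**2:
        return 0.0
    prop = (s - MH**2)**2 + MH**2 * GAMMA_H**2
    return (N_C * LAM_PHIH**2 * M_F**2 / (4.0 * np.pi * s * np.sqrt(s) * prop)
            * (s - 4.0 * M_F**2)**1.5)

def sigma_v_thermal(x):
    T = M_PHI / x
    s_min = 4.0 * M_PHI**2
    s_max = (2.0 * M_PHI + 30.0 * T)**2          # cut off the Boltzmann-suppressed tail
    integrand = lambda s: sigma_v(s) * kn(1, np.sqrt(s) / T) * s * np.sqrt(s - s_min)
    val, _ = quad(integrand, s_min, s_max, limit=200)
    return x / (16.0 * T * M_PHI**4 * kn(2, x)**2) * val

print(sigma_v_thermal(20.0))     # GeV^-2, illustrative value
```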
The elastic scattering between the dark scalar and the SM, \(\phi\) SM \(\rightarrow\phi\) SM, is also important for our discussion, as it is required for analysing the kinetic equilibrium of the dark sector in the early universe. The Feynman diagrams for the scattering between \(\phi\) and the SM fermions are shown in Fig.14.
The thermally averaged scattering cross-section between \(\phi\) and the SM is given by:
\[\langle\sigma v\rangle_{\phi\text{ SM}\rightarrow\phi\text{ SM}} = \sum_{f}\Big{(}\langle\sigma v\rangle_{\phi f\rightarrow\phi f}+\langle\sigma v\rangle_{\phi^{*}f\rightarrow\phi^{*}f}\Big{)}=2\sum_{f}\langle\sigma v\rangle_{\phi f\rightarrow\phi f} \tag{101}\] \[= \sum_{f}\frac{x}{16TM_{\phi}^{2}m_{f}^{2}K_{2}(M_{\phi}/T)K_{2}(m_{f}/T)}\] \[\qquad\times\int_{\big{(}M_{\phi}+m_{f}\big{)}^{2}}^{\infty}\big{(}\sigma v\big{)}_{\phi f\rightarrow\phi f}K_{1}\big{(}\frac{\sqrt{s}}{T}\big{)}s\sqrt{s-\big{(}M_{\phi}+m_{f}\big{)}^{2}}\ ds\,\]
where \(x=\frac{M_{\phi}}{T}\) and the scattering cross-section, \(\big{(}\sigma v\big{)}_{\phi f\to\phi f}\) is given by,
\[\big{(}\sigma v\big{)}_{\phi f\to\phi f} = \frac{1}{4\pi s\sqrt{s}}\frac{1}{2\sqrt{s}}\sqrt{\big{[}s-\big{(} M_{\phi}+m_{f}\big{)}^{2}\big{]}\big{[}s-\big{(}M_{\phi}-m_{f}\big{)}^{2}\big{]}} \tag{45}\] \[\qquad\qquad\times\Big{[}-2(t-4M_{\phi}^{2})\Big{(}\frac{\lambda _{\phi H}v}{t-m_{h}^{2}}\Big{)}^{2}\Big{]}\.\]
|
2309.08230 | A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in
Machine Unlearning Services | The right to be forgotten requires the removal or "unlearning" of a user's
data from machine learning models. However, in the context of Machine Learning
as a Service (MLaaS), retraining a model from scratch to fulfill the unlearning
request is impractical due to the lack of training data on the service
provider's side (the server). Furthermore, approximate unlearning further
embraces a complex trade-off between utility (model performance) and privacy
(unlearning performance). In this paper, we try to explore the potential
threats posed by unlearning services in MLaaS, specifically over-unlearning,
where more information is unlearned than expected. We propose two strategies
that leverage over-unlearning to measure the impact on the trade-off balancing,
under black-box access settings, in which the existing machine unlearning
attacks are not applicable. The effectiveness of these strategies is evaluated
through extensive experiments on benchmark datasets, across various model
architectures and representative unlearning approaches. Results indicate
significant potential for both strategies to undermine model efficacy in
unlearning scenarios. This study uncovers an underexplored gap between
unlearning and contemporary MLaaS, highlighting the need for careful
considerations in balancing data unlearning, model utility, and security. | Hongsheng Hu, Shuo Wang, Jiamin Chang, Haonan Zhong, Ruoxi Sun, Shuang Hao, Haojin Zhu, Minhui Xue | 2023-09-15T08:00:45Z | http://arxiv.org/abs/2309.08230v2 | # A Duty to Forget, a Right to be Assured? Exposing Vulnerabilities in Machine Unlearning Services
###### Abstract
The right to be forgotten requires the removal or "unlearning" of a user's data from machine learning models. However, in the context of Machine Learning as a Service (MLaaS), retraining a model from scratch to fulfill the unlearning request is impractical due to the lack of training data on the service provider's side (the server). Furthermore, approximate unlearning further embraces a complex trade-off between utility (model performance) and privacy (unlearning performance). In this paper, we try to explore the potential threats posed by unlearning services in MLaaS, specifically over-unlearning, where more information is unlearned than expected. We propose two strategies that leverage over-unlearning to measure the impact on the trade-off balancing, under black-box access settings, in which the existing machine unlearning attacks are not applicable. The effectiveness of these strategies is evaluated through extensive experiments on benchmark datasets, across various model architectures and representative unlearning approaches. Results indicate significant potential for both strategies to undermine model efficacy in unlearning scenarios. This study uncovers an underexplored gap between unlearning and contemporary MLaaS, highlighting the need for careful considerations in balancing data unlearning, model utility, and security.
## I Introduction
Deep Neural Network (DNN) models are often trained on large amounts of data, including personal data [18]. Nevertheless, the General Data Protection Regulation (GDPR) [43] and the California Consumer Privacy Act (CCPA) [42], which codify the _Right to be Forgotten_, require service providers to remove users' training data when requested by individuals in order to comply with privacy regulations. To practically grant the right to be forgotten, machine unlearning techniques [6] have been developed, aiming to protect users' privacy by removing the contribution of one or several data samples from a trained ML model on request or after a particular timescale. However, machine unlearning on deep models is still in its infancy. Deleting data from a database [9] can be relatively straightforward and intuitive, while it is much more complicated in the case of deep models because of the model complexity and the randomness in the training algorithms [51]. A commonly adopted unlearning approach is retraining, _i.e.,_ retraining the model from scratch with the data to be unlearned removed. However, retraining almost always incurs a heavy computational cost, especially for deep models with millions or even billions of parameters.
The trend of Machine Learning as a Service (MLaaS) has gained significant momentum in recent years [28]. Implementing deep models on a cloud platform as API services enjoys several advantages, such as privacy enhancement (splitting the storage of training data from the service provider), accessibility, cost-effectiveness, and scalability [47]. One successful example is the case of Lufthansa Technik using the MLaaS of Google Cloud AutoML [1]. Neglecting to eliminate data from the deployed models in a timely manner may lead to punitive measures, reinforcing the importance of effective machine unlearning strategies in cloud-based model management. For example, the UK's data regulator, the Information Commissioner's Office (ICO), has issued guidance on the data protection implications of using AI and generative models [34]. The U.S. Federal Trade Commission (FTC) made the cloud storage application Ever delete both user data and any deployed models trained on users' data in 2021 [16]. However, to achieve data removal on deep models in MLaaS, the feasibility of prevailing efficient retraining methods is often compromised because the server lacks direct access to the original training dataset. Conducting retraining locally and subsequently redeploying the model also incurs supplementary costs and induces delays. Ideally, the unlearning procedure could be hosted by the cloud as well. A promising industrial paradigm of data removal from the cloud database is exemplified by Google Analytics 4, which uses the User Deletion API [3]. Fortunately, some approximate unlearning approaches directly modify the model parameters and do not require the original training dataset, and can thus be integrated as an Unlearning API, which can greatly facilitate the server in providing machine unlearning services. With such an Unlearning API, a user who once contributed data to train the model can freely ask the service provider to delete her or his data from the model upon opting out, as shown in Figure 1.
**Research Gap.** Current machine unlearning methodologies have been developed and assessed within a local development context, wherein the developer has comprehensive access to both the model and its training data. However, the advent of Machine Learning as a Service (MLaaS) imposes limitations on the availability of training data and resources, rendering
most existing unlearning approaches [6, 20, 30, 50, 4] futile. Furthermore, MLaaS transitions the model from a local environment to a public deployment setting, introducing the potential for untrustworthy or even malicious users. These users may seek to carry out compromising activities through unlearning requests, reducing the practicability of implementing the machine unlearning process in the MLaaS context. _Although some approximate unlearning methods can be applied in MLaaS, none of the existing works investigate what types of risks malicious users can bring to the server._
**Research Question.** From the standpoint of the MLaaS server, a balance must be struck between sustaining normal business services (primary tasks) and accommodating machine unlearning services (secondary tasks). Catering to unlearning requests from users inevitably diminishes the model's utility, as the contributions made by the user's data are excised. As such, the deployment of the unlearning process within the MLaaS platform hinges on the assurance of equilibrium between model fidelity and unlearning effectiveness. In this context, the research question is: _Is it feasible for a user to compromise the normal business of the server by requesting machine unlearning services in MLaaS? And how easily can the user achieve the compromise?_ Unfortunately, existing research either tends to concentrate on fooling the model, or neglects to evaluate the trade-off entirely, hence falling short in addressing the research question due to their inability to satisfy the constraints of MLaaS settings (detailed in Section II).
**Motivation.** To answer this question, we conduct the first investigation to identify the potential threats that can be associated with machine unlearning services under MLaaS scenarios. We identify the major threat that a user can pose to the server: _over-unlearning_. Specifically, as depicted in Figure 1 and Figure 2, over-unlearning represents that the information the server's model unlearned exceeds what it ought to unlearn by manipulating the unlearned samples to contain more knowledge than expected.
To mimic the malicious users' behaviors and measure the feasibility of potential risks, we present two strategies that can be leveraged to achieve over-unlearning, while only granting the user black-box access to the model of the server. Specifically, over-unlearning is instigated by deliberately _blending_ additional samples from a disparate task into the original unlearned samples through meticulous sample-level manipulation. When the server unlearns the blended data, the model will remove the additional information about that particular task. Furthermore, we propose an advanced over-unlearning approach by _pushing_ the samples close to the decision boundary via pixel-wise manipulation. We consider that the user can intentionally augment the unlearning effect of the unlearned data on the server's model by moving the unlearned data toward the decision boundary of the model. Compared to the original unlearned data, the modified unlearned data become more informative to the model. Thus, by uploading the modified data instead of the original data for the server to unlearn, the model unlearns more information than in the case of unlearning the original data, which achieves the goal of over-unlearning. By answering this question, we try to provide an evaluation pipeline toward the deployability of machine unlearning in real-world MLaaS deployment scenarios.
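To make the second strategy concrete, the sketch below shows one possible pixel-wise manipulation that pushes a sample toward the decision boundary by shrinking the gap between the two largest logits. It is our own white-box illustration of the idea for brevity (the paper's setting is black-box, and this is not the authors' exact algorithm); the perturbation budget and step sizes are arbitrary.

```python
# Illustrative-only sketch of pushing a sample toward the decision boundary with small
# pixel-wise perturbations (gradients are used here for brevity; the threat model in the
# paper assumes only black-box access).
import torch

def push_to_boundary(model, x, eps=8 / 255, alpha=1 / 255, steps=20):
    """Perturb x within an L_inf ball of radius eps so the top-two classes become ambiguous."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        top2 = logits.topk(2, dim=1).values
        margin = (top2[:, 0] - top2[:, 1]).sum()      # proxy for distance to the boundary
        grad = torch.autograd.grad(margin, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()       # descend the margin
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```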
**Contribution.** Our contributions are summarized as follows:
* This paper is the _first_ to investigate the real-world machine unlearning service pipeline and to identify potential threats related to machine unlearning services in the MLaaS environment. It illuminates the risk of over-unlearning, which could significantly compromise the model utility of the server when the malicious unlearning request is submitted.
* We identify two novel strategies that allow a user to achieve over-unlearning, with only black-box access to the server's model, making them easily applicable in real-world MLaaS environments.
* Extensive experiments are conducted using benchmark datasets, across various model architectures and representative unlearning approaches. The findings validate the effectiveness of the proposed strategies in inducing over-unlearning.
* An extensive ablation study is performed to evaluate the effectiveness of the proposed strategies in different settings. The results demonstrate the effectiveness of the strategies across various settings, emphasizing their potential impact and the importance of mitigating such risks in MLaaS environments.
## II Related Work and Threat Model
In this section, we first introduce recent studies in machine unlearning domain and then describe the threat model of machine unlearning service particularly in MLaaS scenarios.
### _Related Work_
Machine unlearning is motivated by a variety of reasons: _i)_ to comply with privacy regulations; _ii)_ the trained model itself needs to be updated or repaired. For example, a model trained on a poisoned dataset exhibits unexpected or undesirable behavior in response to benign inputs or deliberately crafted inputs. Thus, the model needs to be repaired to ensure its safety [9]. To achieve machine unlearning, a naive yet natural approach is retraining, _i.e._, retraining the model on the training dataset with the unlearned data removed. However, retraining the model can result in expensive computation costs when the model or the training dataset is large-scale. For example, it shows that 34 days are needed to train OpenAI's GPT-3 [8] even on 1,024 NVIDIA A100 GPUs [38]. To overcome the
Fig. 1: An overview of the over-unlearning threat in machine unlearning as a service.
limitation of naively retraining while fulfilling the demand of machine unlearning, many unlearning approaches have been proposed recently, and they can be mainly divided into two categories: exact unlearning and approximate unlearning.
**Exact Unlearning.** Exact machine unlearning requires the model to be retrained from scratch, _i.e.,_ the model is retrained on the training dataset with the unlearned data removed, using certain techniques to achieve efficiency. The advantage of exact unlearning is that it guarantees that the impacts of the unlearned data are completely removed, because the unlearned model is never trained on the unlearned data. To achieve efficiency, either the training dataset or the model architecture is carefully crafted before the training process. Specifically, the first line of work splits the training dataset into different non-overlapping shards, where a constituent model is trained on each of the shards. Thus, when unlearning a data sample, only the constituent model trained on the shard containing the unlearned data is required to be retrained. Based on the intuition of dataset partition, Bourtoule et al. [6] propose the unlearning approach of SISA, which is generic to a variety of models with different architectures and complexities. Another line of work focuses on carefully designing the architecture of the model so that a data sample only contributes to part of the model. When unlearning a data sample, only the part of the model that is influenced by the unlearned data is required to be retrained. Based on a carefully designed tree structure, Brophy and Lowd [7] propose data removal-enabled forests that support efficient unlearning of a data sample for random forests. Although exact unlearning has a perfect unlearning guarantee, the disadvantage is that it usually requires storing the training dataset, which may limit its applicability in many cases like MLaaS, where users' data are not allowed to be stored or the training dataset is deleted to comply with data regulations.
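A schematic sketch of the shard-based idea (in the spirit of SISA, not the original implementation) is shown below: the training data are partitioned into disjoint shards, one constituent model is trained per shard, and an unlearning request only triggers retraining of the affected shard.

```python
# Schematic sketch of shard-based exact unlearning (illustration only).
from typing import Callable, List

class ShardedEnsemble:
    def __init__(self, shards: List[list], train_fn: Callable[[list], object]):
        self.shards = shards                              # disjoint partitions of the training data
        self.train_fn = train_fn
        self.models = [train_fn(s) for s in shards]

    def unlearn(self, sample) -> None:
        for i, shard in enumerate(self.shards):
            if sample in shard:
                shard.remove(sample)
                self.models[i] = self.train_fn(shard)     # retrain only the affected shard
                return

    def predict(self, x, predict_fn: Callable):
        # aggregate constituent predictions, e.g. by majority vote over this list
        return [predict_fn(m, x) for m in self.models]
```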
**Approximate Unlearning.** Approximate machine unlearning directly modifies the parameters of the trained model to obtain an unlearned model that approximates the model retrained from scratch. Approximate unlearning is usually achieved by updating the parameters of the trained model for a small number of iterations using information calculated from the unlearned data. There are mainly two kinds of approximate unlearning methods, which are based on the influence function [17] and on the gradient of the loss function, respectively. Specifically, the first kind [24, 29] leverages the influence function to calculate the influence of the unlearned data on the model parameters, so that the trained model can apply a Newton step to remove this influence and obtain the unlearned model. The advantage of the first kind of approximate unlearning method is that a one-step Newton update is enough for removing the contribution of the unlearned data. However, the drawbacks are also obvious: _i)_ it requires computing the inverse Hessian matrix of the loss function, which can be difficult for large-scale deep models; _ii)_ the training dataset might not be available in cases where it is not stored or has been deleted.
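The following sketch illustrates the influence-function style Newton update described above in its simplest form, assuming a flat parameter vector and a model small enough that the full Hessian on the retained data can be formed explicitly; it is an illustration of the general recipe rather than the procedure of any specific paper.

```python
# Hedged sketch of a one-step Newton removal: add back H^{-1} * grad of the unlearned data,
# with H the Hessian of the loss on the retained data (feasible only for small models).
import torch

def newton_unlearn(theta, loss_retain_fn, loss_unlearn_fn, damping=1e-3):
    """theta: flat 1-D parameter tensor; loss_*_fn: callables mapping theta -> scalar loss."""
    theta = theta.detach().requires_grad_(True)
    g_u = torch.autograd.grad(loss_unlearn_fn(theta), theta)[0]       # gradient of the unlearned data
    H = torch.autograd.functional.hessian(loss_retain_fn, theta)      # Hessian on the retained data
    H = H + damping * torch.eye(H.shape[0], dtype=H.dtype)
    delta = torch.linalg.solve(H, g_u)                                # one Newton step
    return (theta + delta).detach()
```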
The second kind of unlearning method [51, 55, 39] calculates the gradients that the unlearned data contributed to the trained model during the training process. Then, to unlearn the data, the trained model is updated by adding back these gradients to approximate the model retrained from scratch. A state-of-the-art gradient-based unlearning method is proposed in [54]. The high-level intuition of this unlearning method [54] is to overwrite the unlearned data in the trained model. An advantage of this method [54] is that it only requires access to the unlearned data, which makes it practical in many cases, especially in MLaaS where only the unlearned data is available.
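A hedged sketch of the general gradient-based recipe (ascending the loss on the unlearned samples to "add back" their gradients) is given below; the hyperparameters are arbitrary, and this is not the exact procedure of [54] or any other specific work.

```python
# Hedged sketch of gradient-ascent style unlearning (illustration of the general recipe).
import torch

def gradient_ascent_unlearn(model, unlearn_loader, loss_fn, lr=1e-3, epochs=1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in unlearn_loader:
            opt.zero_grad()
            (-loss_fn(model(x), y)).backward()   # negate the loss => ascend on the unlearned data
            opt.step()
    return model
```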
In addition to the two kinds of unlearning methods, fine-tuning has been widely used as an empirical unlearning baseline in existing works [20, 21, 23, 56]. Fine-tuning first randomly selects incorrect labels and uses them to relabel the unlearned samples. Then, the trained model is fine-tuned on these relabeled unlearned data for several iterations for unlearning. The intuition is to confuse the model's understanding of the unlearned sample so that it cannot output the correct prediction of the unlearned data. However, information of the unlearned data can still remain in the parameters of the unlearned model produced by fine-tuning.
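The fine-tuning baseline described above can be sketched as follows (illustrative PyTorch with arbitrary hyperparameters): each unlearned sample is relabeled with a randomly chosen incorrect class and the trained model is fine-tuned on the relabeled set for a few epochs.

```python
# Sketch of the fine-tuning unlearning baseline (illustration only).
import torch

def finetune_unlearn(model, unlearn_loader, num_classes, loss_fn, lr=1e-4, epochs=3):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in unlearn_loader:
            rand = torch.randint(0, num_classes, y.shape)
            wrong = torch.where(rand == y, (rand + 1) % num_classes, rand)  # ensure label != y
            opt.zero_grad()
            loss_fn(model(x), wrong).backward()
            opt.step()
    return model
```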
In this study, we mainly focus on evaluating the risks that may exist in gradient-based approximate unlearning approaches, as they are feasible in MLaaS while the retraining method is not. In particular, we use fine-tuning as the empirical unlearning baseline and the approach of [54] as the state-of-the-art approximation-based unlearning baseline in our evaluation.
### _Our Threat Model_
In this subsection, we describe the potential threat to machine unlearning that may exist particularly in MLaaS scenarios. To formalize, we let \(\mathcal{D}_{\text{train}}=\mathcal{D}_{u}\cup\mathcal{D}_{r}\), where \(\mathcal{D}_{u}\) is the dataset that contains the unlearned data and \(\mathcal{D}_{r}\) is the dataset that contains the remaining data. Let \(\mathbf{\theta}^{*}\) be the model trained on \(\mathcal{D}_{\text{train}}\). Machine unlearning aims to obtain an unlearned model \(\mathbf{\theta}^{*}_{u}\) by removing the contribution that \(\mathcal{D}_{u}\) made to \(\mathbf{\theta}^{*}\) during the training process.
**MLaaS Scenario.** We assume a developer who, upon training a model with dataset \(\mathcal{D}_{\text{train}}\), retains proprietary ownership of the resulting model. Subsequent to the training phase, the developer deploys the model on a server to offer MLaaS for commercialization. The server has a test dataset \(\mathcal{D}_{\text{test}}\) and acts as an agent responsible for model deployment and maintenance, including monitoring the performance of the models and updating them. We call a user who has contributed training data \(\mathcal{D}_{u}\subset\mathcal{D}_{\text{train}}\) to train the model an _authorized unlearning user_: they can revoke the contribution of their data under data protection regulations. Generally, different unlearning procedures may involve retraining and have different impacts on the quality of the model. Thus, to maintain the model in compliance with regulations, the developer has agreed with the server on a pre-selected unlearning method that balances model fidelity and unlearning efficacy. When an authorized unlearning user decides to revoke their data, the server fulfills the unlearning request using the pre-selected unlearning method.
Under the MLaaS scenarios, there are three entities involved in machine unlearning services: the model provider, the MLaaS server, and the users who utilize the model. The model provider transfers or delegates the high-utility model to the MLaaS server for professional business services as well as machine unlearning services, while the model users can utilize the model provided by the MLaaS server via APIs.
For authorized unlearning users, they can also raise unlearning requests by uploading \(\mathcal{D}_{u}\) to the server for deleting their data when they decide to opt-out. The knowledge of the developer, users, and server is summarized in Table I.
We detail the capabilities and knowledge of the model provider, MLaaS server, and user as follows:
**Model Provider.** They are model developers or model owners, who have full control of the model, including the training data and white-box information of the model. After they delegate the model to MLaaS for commercialization, the control is also transferred to the server.
**MLaaS Server.** The server is the agency that provides MLaaS to users for normal business and also provides machine unlearning services for compliance with privacy regulations. To simulate a practical scenario, we assume the server only has a test dataset but does not have the original training dataset for two reasons. First, users' training data may contain sensitive or personally identifiable information. Thus, to ensure their privacy and confidentiality, users may not be willing to have their data stored by the server. Second, many regions and countries have strict data protection regulations that govern the collection, storage, and usage of personal data. For example, the tech giant Meta in May 2023 was fined a record 1.2 billion euros (_i.e.,_ $1.3 billion) and ordered to stop transferring data collected from Facebook users in Europe to the United States [2]. Thus, the server itself may choose not to store users' training data to simplify compliance with these regulations.
**Authorized Unlearning Users.** Authorized unlearning users are users who are authorized to raise machine unlearning requests by submitting the unlearned data to the server when they wish to opt out. They can be the data providers for the model training. These users can also access the model provided by the server to utilize its predictive ability as normal API users. The unlearning procedure is conducted at the server using the pre-selected unlearning strategy.
**Machine Unlearning Threats.** We anticipate a scenario where some authorized users, with permission to initiate unlearning, might have malicious intent or could be compromised. These users, despite only contributing a small proportion of the training data, aim to induce severe performance degradation in the server's model by exploiting unlearning requests with a few samples. Such situations underscore the necessity for robust security measures and stringent user regulations within machine learning systems.
_Property of Malicious Unlearning._ The properties of the malicious unlearning behaviors are:
* _Performance degradation._ The server's normal business performance incurs an unexpected degradation when fulfilling the user's unlearning requests: the utility of the model is heavily compromised.
* _Stealthiness of the unlearned sample._ The manipulated unlearned sample used for over-unlearning is not easily distinguishable from normal samples.
* _Stealthy prediction of the unlearned sample._ Note that the authorized user could also submit a label for the unlearned sample. The submitted label of the manipulated unlearned sample may need to be consistent with the prediction derived from the deployed model.
The latter two properties of malicious unlearning are stated from the MLaaS server's perspective: the server may be aware of malicious unlearning requests and implement protections to safeguard its model. Specifically, the server can verify both the quality of the sample's features and the prediction label of the unlearned sample. Thus, malicious unlearning must satisfy these two properties to ensure that it can bypass basic protections implemented by the server.
_Capabilities of Malicious Users._ We assume such a malicious user only has black-box access to the server's model, which means the user can submit a data sample \(\mathbf{x}\) to query the model and obtain a vector of probabilities \(\mathcal{Y}\), but cannot know the parameters and architecture of the model. Note that the black-box access assumption is very practical and also strict for the malicious user: their adversarial knowledge is similar to that of a normal user. Based on \(\mathcal{D}_{u}\) and black-box access, the malicious user aims to construct a perturbed unlearned dataset \(\mathcal{D}^{\prime}_{u}\), which will be sent to the server for unlearning as well as for compromising its model. The practicality of this threat is also manifested in the fact that a malicious user could theoretically superimpose an unlimited amount of information onto the dataset to be unlearned, causing an unanticipated decline in model performance. Besides, the degradation of the model's performance is multifaceted. For instance, taking the example of unlearning 100 samples from class A, malicious unlearning can easily be achieved by injecting additional information into these 100 samples. This would not only result in the over-forgetting of information from class A but may also cause the model's performance to decline across all classes. Even more, it could potentially target a specific class B by injecting information about B into these 100 samples of A, thereby accomplishing a targeted malicious unlearning.
### _Difference from Existing Threats to Machine Unlearning_
Currently, there are two works [36, 19] that investigate potential threats to machine unlearning.
TABLE II: An overview of threats to machine unlearning (the three middle columns list the adversary's knowledge requirements).

| Threats to Machine Unlearning | Training Procedure | Training Dataset | White-box Model |
| --- | --- | --- | --- |
| Slow-down Unlearning [36] | Required | Required | Required |
| Camouflaged Poisoning [19] | Required | Required | Required |
| Over-unlearning (Ours) | Not required | Not required | Not required |
TABLE I: Knowledge of the datasets available to the different entities in MLaaS.

| Entity | \(\mathcal{D}_{\text{train}}\) | \(\mathcal{D}_{u}\) | \(\mathcal{D}_{\text{test}}\) |
| --- | --- | --- | --- |
| Model Provider | Available | Available | Unknown |
| MLaaS Server | Unavailable | Available upon unlearning request | Available |
| Authorized Unlearning User | Unavailable | Available | Unavailable |
The first work [36] proposes slow-down attacks that aim to increase the computational cost of the unlearning process by adding perturbations to the original unlearned samples. The second work [19] proposes targeted attacks that aim to cause the model to misclassify particular target test samples. To achieve the targeted attacks in machine unlearning, the adversary first creates poison samples that contain features of the target test samples and adds them to the training dataset. Then, by submitting an unlearning request to unlearn the poison samples, the model will be triggered to make wrong predictions on the target test samples. Table II summarizes the two existing threats and our proposed over-unlearning threat to machine unlearning.
Our over-unlearning threat is significantly different from the two existing threats in the adversary's knowledge requirements, threat scenarios, and threat goals. First, we are the first to investigate the threat under the MLaaS scenario, where the existing two threats cannot be materialized because the resources available in MLaaS are not sufficient for the adversary to pose the threat. Specifically, as depicted in Table II, the existing two threats require the adversary to know the training procedure, to have access to the training dataset, and to have white-box access to the model, while our threat is more practical and does not need such adversarial knowledge. Second, the goal of over-unlearning is different from the two existing threats: over-unlearning aims to compromise the model's utility, while the existing two threats focus on increasing the computational cost of unlearning or achieving targeted attacks on specific samples (_i.e.,_ fooling the model).
## III Methodology of Malicious Unlearning
In the context of machine learning, there is a limited understanding of how individual data samples impact a machine learning model. Although the influence function [17, 32] measures the effect of a single training sample on model parameters, it only works well on convex models and cannot precisely measure the effect on complex non-convex models such as DNNs [6]. The difficulty of measuring the amount of knowledge or information contained in a single data sample provides avenues for malicious users to exploit in machine unlearning services: the user can intentionally manipulate their data so that the server unlearns revised data that has a larger effect on the model than unlearning the original data.
In this section, we formulate the over-unlearning and introduce two possible strategies that an authorized unlearning user can leverage for malicious unlearning in MLaaS scenarios, to answer the research question "Is it feasible for a user to compromise the normal business of the server by requesting machine unlearning services in MLaaS? And how easily can the user achieve the compromise?" Based on the different granularity of the input space, we introduce two types of methods that the user can leverage to materialize the threats of over-unlearning. The first type works on _sample-wise_ modification of the data, while the second one works on _pixel-wise_ modification of the data.
### _Problem Statement_
In this paper, we mainly focus on ML classification, as it is one of the most common ML applications. Let \((\mathbf{x},y)\) be a data sample with multidimensional features. An ML classifier is a function \(f(\cdot)\) that takes as input \(\mathbf{x}\) and outputs a vector of probabilities \(\mathcal{Y}\). The length of \(\mathcal{Y}\) equals the number of classes in the classification task. Each entry \(y_{i}\) in \(\mathcal{Y}\) represents the posterior probability of the model assigning \(\mathbf{x}\) to a class (_i.e.,_ the label) \(c_{i}\in\mathcal{C}\), where \(\mathcal{C}\) is the set of all classes.
Let \(\mathcal{M}(\cdot,\cdot)\) be a fixed unlearning method, and \(\mathbf{\theta}^{*}\) be the trained model owned by the server. Let \(\mathcal{D}_{u}\subset\mathcal{D}_{\text{train}}\) be an unlearned dataset owned by a user. Let \(\mathcal{D}^{\prime}_{u}\) be a perturbed version of \(\mathcal{D}_{u}\) generated by the user for the purpose of posing the over-unlearning threat to the server. To facilitate the understanding of over-unlearning, we first introduce the scenarios of normal unlearning and malicious unlearning.
**Normal Unlearning.** Normal unlearning is defined as the case that the user faithfully submits \(\mathcal{D}_{u}\) to the server for unlearning. Based on \(\mathcal{M}(\cdot,\cdot)\) and \(\mathcal{D}_{u}\), the server produces an unlearned model \(\mathbf{\theta}^{*}_{u}\).
**Malicious Unlearning.** Malicious unlearning is defined as the case that the user maliciously submits \(\mathcal{D}^{\prime}_{u}\) to the server for unlearning. Based on \(\mathcal{M}(\cdot,\cdot)\) and \(\mathcal{D}^{\prime}_{u}\), the server produces an unlearned model \(\mathbf{\theta}^{*}_{t}\).
Note that the server may have limited capacity to check the quality of the unlearned model, and the most feasible indicator is the test accuracy of the model on a test dataset. The investigation of accuracy can be conducted class by class to check the quality of each class. Let \(\mathcal{D}_{\text{test}}\) be a test dataset. For simplicity, we consider the data \((\mathbf{x},y)\in\mathcal{D}_{u}\) hosted by the malicious unlearning user as coming from one specific class \(A\). Let \(\mathcal{D}_{A}\subset\mathcal{D}_{\text{test}}\) be a sub-test dataset containing the test samples of class \(A\) in \(\mathcal{D}_{\text{test}}\). We define \(\alpha_{1}\) as the test accuracy of \(\mathbf{\theta}^{*}_{u}\) on \(\mathcal{D}_{A}\) and \(\alpha_{2}\) as the test accuracy of \(\mathbf{\theta}^{*}_{t}\) on \(\mathcal{D}_{A}\). We can define over-unlearning as follows:
**Definition 1** (Over-unlearning).: _We call the case Over-unlearning if the utility of \(\mathbf{\theta}^{*}_{t}\) is no greater than that of \(\mathbf{\theta}^{*}_{u}\) on \(\mathcal{D}_{A}\), e.g., the accuracy \(\alpha_{2}\leq\alpha_{1}\)._
Over-unlearning represents that the information the server unlearned exceeds what the server ought to unlearn. For example, a classifier trained to determine "dog" and "cat" has a test accuracy of 90% on the class "dog". Unlearning 10% of the training data of "dog" produces an unlearned model with a test accuracy of 88% on "dog", _i.e.,_ the accuracy degradation is 2%. We consider over-unlearning to happen if unlearning the same training data with some modifications using the same unlearning method produced an unlearned model with a test accuracy of less than 88% on "dog", e.g., 80%.
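The following is a minimal PyTorch sketch of how this check can be computed, assuming two unlearned models (one produced from \(\mathcal{D}_{u}\), one from \(\mathcal{D}^{\prime}_{u}\)) and a test loader; the helper names are ours and only illustrate the accuracy comparison in Definition 1.

```python
import torch

@torch.no_grad()
def class_accuracy(model, loader, target_class, device="cpu"):
    """Test accuracy of the model restricted to samples of one class (e.g., class A)."""
    model.to(device).eval()
    correct, total = 0, 0
    for x, y in loader:
        mask = (y == target_class)
        if mask.sum() == 0:
            continue
        x, y = x[mask].to(device), y[mask].to(device)
        pred = model(x).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / max(total, 1)

def is_over_unlearning(model_normal, model_malicious, test_loader, class_a):
    """Compare alpha_1 (unlearned D_u) against alpha_2 (unlearned D'_u) on class A."""
    alpha_1 = class_accuracy(model_normal, test_loader, class_a)
    alpha_2 = class_accuracy(model_malicious, test_loader, class_a)
    return alpha_2 <= alpha_1, alpha_1, alpha_2
```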
Over-unlearning may result in two types of degradation, as shown in Figure 2. Over-unlearning-I highlights the degradation observed in class A when manipulating samples of class A, while Over-unlearning-II uncovers the degradation experienced by classes other than A. Which of the two types occurs hinges on whether the supplementary information introduced to instigate over-unlearning concerns classes other than A or not.
### _Blending as Naive Over-unlearning_
The simplest way to achieve over-unlearning is to incorporate additional sample information into the unlearned sample, e.g., blending the original unlearned sample from class A with a sample from another class B, without introducing any computational overhead. By unlearning \(\mathcal{D}_{u}^{\prime}\), which contains the features of the blended sample from class B, the model suffers an unexpected performance degradation on class B as a result of the additional information being over-unlearned. Therefore, we first use this lightweight sample-wise strategy to illustrate the feasibility of over-unlearning. Here, since the additional information introduced to cause over-unlearning comes from a class other than the original class A, the measured impact belongs to Over-unlearning-II.
To meet the stealthiness requirement of the unlearned sample, we consider that the malicious user can embed the features of samples of the target class into the data samples in \(\mathcal{D}_{u}\). Specifically, the malicious user can use the blending technique in [14] to embed the features of samples of another class into the data samples in \(\mathcal{D}_{u}\). The user leverages an injection function \(\Pi(\cdot,\cdot)\) to blend \(\mathbf{x}\) with \(\mathbf{x_{b}}\), which is defined as follows:
\[\Pi(\mathbf{x},\mathbf{x}_{b})=\lambda\cdot\mathbf{x}+(1-\lambda)\cdot\mathbf{x}_{b}, \tag{1}\]
where \(\lambda\in[0,1]\) is the hyper-parameter representing the blending ratio. Here, both \(\mathbf{x}\) and \(\mathbf{x}_{b}\) are in their vector representations. The hyper-parameter \(\lambda\) can be considered as controlling the transparency level of the blending. We visualize an example of blending an "airplane" sample with a "cat" sample in Figure 3.
To further satisfy the stealthiness of the manipulated unlearned sample, we modify the unlearned data \(\mathbf{x}\) by blending it with the sample \(\mathbf{x}_{b}\) from class B, while changing \(y\) to the label predicted by the server's model on the modified data. The submitted label is thus consistent with the prediction of the deployed model. From the server's perspective, it is difficult to notice the malicious behavior of the user based on the submitted label.
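A minimal sketch of this blending step is given below, assuming image tensors in \([0,1]\) and black-box access to the deployed model; the function names and the value of \(\lambda\) are illustrative assumptions.

```python
import torch

def blend(x, x_b, lam=0.3):
    """Injection function from Eq. (1): Pi(x, x_b) = lam * x + (1 - lam) * x_b,
    embedding features of a sample x_b (class B) into the unlearned sample x (class A)."""
    return lam * x + (1.0 - lam) * x_b

@torch.no_grad()
def craft_blended_request(model, x, x_b, lam=0.3):
    """Build a stealthy unlearning request: blend the sample and submit the label
    that the deployed model predicts on the blended input (black-box query)."""
    x_prime = blend(x, x_b, lam)
    y_prime = model(x_prime.unsqueeze(0)).argmax(dim=1).item()
    return x_prime, y_prime
```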
### _Pushing as Advanced Over-unlearning_
Even though the blending-based strategy is cost-efficient and model-agnostic, the over-unlearning it causes is conducted in a blind way, and the performance degradation is not effective on all datasets (detailed in subsection V-A). Therefore, we introduce an advanced over-unlearning method.
**Motivation.** The behavior of ML models is reflected in their decision boundary, which represents the region where the model assigns different class labels or makes decisions based on the input features. We have a key observation that ML models can be more confused when predicting samples that are near the decision boundary, given that even a slight change in the input can lead to different predictions by the model. Thus, we consider that samples near the decision boundary of a model are more informative than those that are farther away from it because samples near the decision boundary carry more ambiguity in their class assignments. This intuition is similar to the entropy [46] in information theory which measures the average amount of information contained in a random variable. In information theory, a random variable is considered more informative (_i.e.,_ with higher entropy) when its possible outcomes are more random or unpredictable.
Based on the above motivation, we consider that the malicious user can enlarge the unlearning effect of the samples in \(\mathcal{D}_{u}\) by intentionally moving them toward the decision boundary of the model. When a data sample is moved near the decision boundary but still within its original decision region, the additional information causing over-unlearning is still about the original class A. Therefore, the over-unlearning type is Over-unlearning-I, where the performance degradation derived from over-unlearning mainly affects class A. However, after the data sample is moved across the decision boundary, the additional information causing over-unlearning involves classes other than class A. Correspondingly, the over-unlearning type is both Over-unlearning-I (primary) and II (secondary), where the performance degradation derived from over-unlearning also affects the other influenced classes. To investigate these two scenarios, we explore the following two pushing-based over-unlearning methods.
Fig. 3: An example of blending an "airplane" sample with a "cat" sample (with \(\lambda=0.1\)).
Fig. 2: An illustration of two types of implications of over-unlearning. The white circle represents the information that the \(\mathbf{\theta}^{*}\) should unlearn.
**Over-unlearning by Pushing-I.** This method moves the data sample toward the decision boundary but not across it. From the perspective of the model, it can still correctly predict the label of the sample because the sample did not cross the decision boundary. Pushing-I helps to evaluate how unlearning samples near the decision boundary can affect the performance of the model.
**Over-unlearning by Pushing-II.** This method moves the data sample just across the decision boundary. After the manipulation, such data samples are beyond the prediction ability of the model because they are moved to a decision region other than that of their correct labels. Pushing-II also helps to evaluate how unlearning samples that are hard to predict can affect the performance of the model.
Note that samples in \(\mathcal{D}_{u}\) may themselves already be close to the decision boundary of the model. However, this does not undermine our methods: First, the majority of the training samples are not located near the decision boundary, which is often true when the model is trained on well-distributed and balanced datasets [5]. Second, even if the samples in \(\mathcal{D}_{u}\) are close to the decision boundary, we can move them even closer to it. Last, we focus on investigating whether our method of moving samples to the decision boundary can enlarge their unlearning effect. As long as one malicious user can leverage the method to materialize the over-unlearning threat, we can consider the threat to exist in machine unlearning services.
Let \(\mathbf{x}\in\mathcal{D}_{u}\) be an unlearned sample. Based on our method, the user aims to construct a perturbed version of \(\mathbf{x}\):
\[\mathbf{x}^{\prime}=\mathbf{x}+\mathbf{\delta}, \tag{2}\]
where \(\mathbf{x}^{\prime}\) is the perturbed version of \(\mathbf{x}\), and \(\mathbf{\delta}\) is the perturbation. \(\mathbf{x}^{\prime}\) is required to satisfy:
\[\text{Dis}(\mathbf{x}^{\prime},\mathbf{\theta}^{*})\leq\epsilon, \tag{3}\]
where \(\text{Dis}(\cdot,\cdot)\) is the distance of \(\mathbf{x}^{\prime}\) to the decision boundary of the model \(\mathbf{\theta}^{*}\), and \(\epsilon\) is a small distance threshold. However, calculating the exact distance of a data sample to the decision boundary of ML models, especially deep models, can be challenging because of the complexity of the model and the lack of analytical solutions [45]. Thus, we propose to leverage adversarial perturbation [22], which is a practical and commonly used technique to move samples closer to the decision boundary of a model by adding small noises to them [41], without requiring the exact calculation of \(\text{Dis}(\mathbf{x}^{\prime},\mathbf{\theta}^{*})\).
Given that the user only has black-box access to the model of the server, we consider the user can leverage the adversarial technique from the black-box Carlini and Wagner (CW) adversarial attack [12, 13]. The Carlini and Wagner (CW) attack is an optimization-based attack that aims to find the minimum amount of perturbation that when added to the input will lead to misclassification. The loss function for the CW attack is defined as follows:
\[\mathcal{L}(\mathbf{x},\mathbf{x}^{\prime},y_{\text{true}})=||\mathbf{x}-\mathbf{x}^{\prime} ||_{2}^{2}+c*f(\mathbf{x}^{\prime}), \tag{4}\]
where \(\mathbf{x}\) is the original input, \(\mathbf{x}^{\prime}\) is the perturbed input, \(y_{\text{true}}\) is the true class label of \(\mathbf{x}\), \(||\cdot||_{2}^{2}\) is the L2 norm, and \(c\) is a constant. \(f(\mathbf{x}^{\prime})\) is a function defined as:
\[f(\mathbf{x}^{\prime})=\max\big(\max\{Z(\mathbf{x}^{\prime})_{i}:i\neq y_{\text{true}}\}-Z(\mathbf{x}^{\prime})_{y_{\text{true}}},\,-k\big), \tag{5}\]
where \(Z(\mathbf{x}^{\prime})\) are the logits, \(i\) is the class not equal to \(y_{\text{true}}\), and \(k\) is a margin.
In this work, we consider the black-box setting, where the user has no knowledge about the architecture or parameters of the model. Here, the only information received from the model is the predicted class probabilities \(\text{Pr}(y|\mathbf{x})\) for all classes \(y\), obtained by querying the deployed model with input \(\mathbf{x}\). The CW attack can be performed using techniques such as zeroth order optimization [13]. The gradients of the model's output with respect to the input can be approximated using finite differences, _i.e.,_ by querying the black-box model with slightly perturbed inputs and observing changes in the model's output. The zeroth order optimization method can be formalized as the following optimization problem:
\[\begin{aligned}\min_{\mathbf{\delta}\in\mathbb{R}^{d}}\quad&c\,||\mathbf{\delta}||_{2}^{2}+\max\{Z(\mathbf{x}+\mathbf{\delta})_{y_{\text{true}}}-\max_{i\neq y_{\text{true}}}[Z(\mathbf{x}+\mathbf{\delta})]_{i},\,-k\}\\ \text{s.t.}\quad&\mathbf{x}+\mathbf{\delta}\in[0,1]^{d}\end{aligned} \tag{6}\]
Here, the term \(Z(\mathbf{x}+\mathbf{\delta})\) represents the logits of the model at input \((\mathbf{x}+\mathbf{\delta})\).

Fig. 4: An illustration of over-unlearning using adversarial perturbation. Moving the unlearned data to the decision boundary for unlearning can significantly change the decision boundary of the model.

To perform the optimization, zeroth order optimization uses the zeroth order stochastic coordinate descent method, which can be written in the following form:
\[\mathbf{\delta}_{i}^{(t+1)}=\mathbf{\delta}_{i}^{(t)}-\eta\mathbf{g}_{i}^{(t)}, \tag{7}\]
where \(\mathbf{g}_{i}^{(t)}\) is an estimate of the gradient of the loss function at iteration \(t\), calculated using finite differences as follows:
\[\mathbf{g}_{i}^{(t)}=\frac{\mathcal{L}(\mathbf{x}+\mathbf{\delta}^{(t)}+h\mathbf{e}_{i})- \mathcal{L}(\mathbf{x}+\mathbf{\delta}^{(t)})}{h}. \tag{8}\]
Here, \(h\) is a small constant used for the finite difference approximation, and \(\mathbf{e}_{i}\) is the \(i\)-th standard basis vector. Note that there are other black-box adversarial techniques, such as substitute models [35]. We select zeroth order optimization to avoid the need to train a substitute model.
For a sample \((\mathbf{x},y)\in\mathcal{D}_{u}\), we use the adversarial technique described above to move it towards the decision boundary and obtain \((\mathbf{x}^{(1)},y^{(1)}),\cdots,(\mathbf{x}^{(t-1)},y^{(t-1)}),(\mathbf{x}^{(t)},y^{(t)})\). Assume \(y^{(t-1)}=y\) and \(y^{(t)}\neq y\); then the Pushing-I method selects \(\mathbf{x}^{(t-1)}\) as the modified sample and the Pushing-II method selects \(\mathbf{x}^{(t)}\) as the modified sample.
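The sketch below illustrates this pushing procedure under stated simplifications: it descends a margin loss built from the returned class probabilities (used as a proxy for the logits \(Z\)), omits the \(c\,||\mathbf{\delta}||_{2}^{2}\) term of Eq. (6), and enforces a fixed \(\ell_{2}\) budget; `query_probs`, the step sizes, and the coordinate-sampling scheme are assumptions for illustration rather than the exact optimization of Eqs. (4)-(8). The last iterate still predicted as \(y\) is kept for Pushing-I, and the first iterate that crosses the boundary for Pushing-II.

```python
import numpy as np

def push_to_boundary(query_probs, x, y, steps=200, eta=0.01, h=1e-3,
                     coords_per_step=64, l2_budget=20.0, seed=0):
    """Black-box pushing sketch: query_probs(z) returns the deployed model's class
    probabilities for input z. Returns (x_push1, x_push2): the last iterate still
    predicted as y (Pushing-I) and the first iterate across the boundary (Pushing-II)."""
    x0 = np.asarray(x, dtype=np.float64).copy()

    def margin(z):
        p = np.log(np.clip(query_probs(z), 1e-12, 1.0))  # log-probabilities as a logit proxy
        others = np.delete(p, y)
        return p[y] - others.max()                        # positive while still predicted as y

    rng = np.random.default_rng(seed)
    x_cur, x_push1, x_push2 = x0.copy(), x0.copy(), None
    for _ in range(steps):
        flat = x_cur.reshape(-1)
        idx = rng.choice(flat.size, size=min(coords_per_step, flat.size), replace=False)
        grad = np.zeros_like(flat)
        base = margin(x_cur)
        for i in idx:                                     # finite-difference gradient estimate
            pert = flat.copy()
            pert[i] += h
            grad[i] = (margin(pert.reshape(x0.shape)) - base) / h
        flat = flat - eta * grad                          # descend the margin toward the boundary
        delta = flat - x0.reshape(-1)
        norm = np.linalg.norm(delta)
        if norm > l2_budget:                              # keep the perturbation within the budget
            delta *= l2_budget / norm
        x_cur = np.clip((x0.reshape(-1) + delta).reshape(x0.shape), 0.0, 1.0)
        if margin(x_cur) > 0:
            x_push1 = x_cur.copy()                        # Pushing-I candidate
        else:
            x_push2 = x_cur.copy()                        # Pushing-II: crossed the boundary
            break
    return x_push1, x_push2
```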
## IV Experimental Settings
In this section, we first introduce the datasets, models, and evaluation metrics used for the experiments. Then, we introduce the unlearning settings and the unlearning benchmarks used for evaluating the over-unlearning threat.
### _Datasets and Models_
**Dataset.** In our experiments, we use three datasets to evaluate the proposed two methods for over-unlearning. The three datasets are benchmark datasets for image classification tasks, which cover a wide range of object categories with different learning complexities.
* **CIFAR-10 [33].** This dataset contains 60,000 color images with 50,000 images in the training dataset and 10,000 images in the test dataset. It consists of 10 different classes, with each image labeled with one of the following classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, or truck. Each image in the dataset has dimensions of \(32\times 32\) pixels.
* **CIFAR-100 [33].** Just like CIFAR-10, this dataset contains 60,000 color images with 50,000 images in the training dataset and 10,000 images in the test dataset. CIFAR-100 includes 100 fine-grained classes, such as different types of animals, plants, household objects, and vehicles. The image size in the CIFAR-100 is the same as CIFAR-10 with dimensions of \(32\times 32\) pixels.
* **STL-10 [15].** This dataset consists of 13,000 color images with 5,000 training images and 8,000 test images. STL-10 has 10 classes of airplanes, birds, cars, cats, deer, dogs, horses, monkeys, ships, and trucks with each image having a higher resolution of \(96\times 96\) pixels. Compared to the above two datasets, STL-10 can be considered as a more challenging dataset with higher learning complexity.
**Models.** We use the VGG model [49] and the ResNet model [25] to evaluate our proposed methods for over-unlearning. These two model architectures are benchmark classification models for image classification tasks. The detailed description of the models used in this paper is in Appendix VIII-B.
**Metric.** As the MLaaS provider, the server prioritizes the utility of the model because it aims to offer models that are accurate when providing predictions to its users. The server may have limited capacity to check the quality of the unlearned model, and the most feasible indicator is the test accuracy of the model on a test dataset. We report the accuracy of the model on the test dataset to assess the utility of the model in the experiments.
### _Unlearning Settings and Benchmarks_
**Number of Unlearned Samples.** To mimic a practical scenario where the malicious user only contributes a small proportion of the training data, we assume the user has no more than 50% of the training data of a class that can be modified for over-unlearning. Specifically, we assume the user has no more than 200, 200, and 2,000 samples in CIFAR-100, STL-10, and CIFAR-10, respectively.
**Perturbation Magnitude for Pushing.** To ensure the stealthiness of the modified samples in the pushing methods, the perturbation added to the original sample should be imperceptible. In our experiments, we bound the perturbations by an \(\ell_{2}\)-norm of 20 to ensure the stealthiness of the modified samples. We provide visualizations of modified samples with the maximum perturbation and an analysis of the perceptual similarity between the modified samples and the original samples in Appendix VIII-D and Table XIII. The modified samples have high perceptual similarity with the corresponding original samples (SSIM [53] mean value around \(0.97\); the closer to 1, the more similar. LPIPS [58] mean value around \(0.03\); the smaller, the more similar).
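As an illustration of how such a budget and similarity check could be enforced, the sketch below assumes HWC images scaled to \([0,1]\) and scikit-image \(\geq 0.19\); it is a hedged example, not the evaluation code used for Table XIII.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def clip_l2(x_orig, x_mod, budget=20.0):
    """Project the modified sample back into an l2 ball of radius `budget` around the original."""
    delta = (x_mod - x_orig).reshape(-1)
    norm = np.linalg.norm(delta)
    if norm > budget:
        delta *= budget / norm
    return x_orig + delta.reshape(x_orig.shape)

def ssim_similarity(x_orig, x_mod):
    """Perceptual similarity between original and modified samples (closer to 1 = more similar)."""
    return ssim(x_orig, x_mod, channel_axis=-1, data_range=1.0)
```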
**Unlearning Benchmarks.** We evaluate the effectiveness of our proposed methods for over-unlearning on two benchmark unlearning methods: one is a state-of-the-art gradient-based unlearning method [54], and the other is the empirical unlearning method of fine-tuning [56] that has been widely used as an unlearning baseline. Both unlearning methods are feasible for machine unlearning services, where the server must perform unlearning using only the unlearned data sent from the users.
* **Fine-tuning based Unlearning Method.** Fine-tuning has been widely used as an empirical unlearning baseline in existing works [20, 21, 23, 56]. This method aims to make the model output wrong predictions on the unlearned samples by fine-tuning the model on the unlearned samples with wrong labels. As an empirical unlearning method, the intuition of fine-tuning is that the private information of the unlearned sample cannot be inferred because the model outputs wrong predictions. However, information about the unlearned samples might still be inferred from the parameters of the model. In our experiments, we follow the same strategy: the server randomly relabels the images sent from the user and fine-tunes the model on the relabeled images for unlearning.
* **Gradient-based Unlearning Method.** We evaluate on a state-of-the-art gradient-based unlearning method developed by Warnecke et al. [54], which is described in detail by Equation 12 and Equation 13 in Appendix VIII-A. The intuition is to leverage an irrelevant sample to overwrite the unlearned sample. In our experiments, we follow the same hyper-parameter settings as in [54] and use images consisting of random noise as irrelevant samples to overwrite the unlearned images.
Compared to the fine-tuning based unlearning method, the state-of-the-art gradient-based unlearning method guarantees that the unlearned model approximates the model retrained on the training dataset with the unlearned data removed, _i.e.,_ the removal achieved by the state-of-the-art gradient-based unlearning method is certified. Thus, in the experiments, we mainly report the results on the gradient-based unlearning method, while we provide experimental results showing the effectiveness of our methods on fine-tuning in Appendix VIII-C.
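For intuition, the following is a hedged sketch of a single first-order update in the spirit of this gradient-based, overwrite-style unlearning; since the exact Equations 12 and 13 are given in the appendix, the update rule shown here, the use of random-noise replacements with the original labels, and the step size \(\tau\) should all be read as illustrative assumptions rather than the exact method of [54].

```python
import torch
import torch.nn.functional as F

def gradient_unlearn_step(model, x_unlearn, y_unlearn, tau=0.01):
    """One first-order, overwrite-style unlearning step (illustrative assumption):
    move the parameters along the gradient difference between a random-noise
    replacement batch and the unlearned batch."""
    model.train()
    x_replace = torch.rand_like(x_unlearn)  # random-noise images as irrelevant replacements
    loss_diff = (F.cross_entropy(model(x_replace), y_unlearn)
                 - F.cross_entropy(model(x_unlearn), y_unlearn))
    params = [p for p in model.parameters() if p.requires_grad]
    grads = torch.autograd.grad(loss_diff, params)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= tau * g
    return model
```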
Note that our proposed methods may also allow a malicious user to achieve over-unlearning in MLaaS via machine unlearning requests when the server uses other unlearning methods. In our experiments, we aim to show that over-unlearning can be materialized by our proposed methods when the server uses the state-of-the-art gradient-based unlearning method or the widely used empirical unlearning method. The investigation of achieving over-unlearning by new techniques or the effectiveness of our methods on new unlearning methods is orthogonal to our work.
## V Evaluation
In this section, we first demonstrate the performance of the naive over-unlearning method of blending to demonstrate the feasibility of over-unlearning. Then, we present the effectiveness of the advanced over-unlearning methods of Pushing-I and Pushing-II for achieving over-unlearning.
### _Effectiveness of Blending Method_
To demonstrate the effectiveness of the blending method for over-unlearning, we conduct experiments on CIFAR-10, CIFAR-100, and STL-10 using the VGG model. We consider the malicious user has no more than 50% of the training samples of "airplane", "apple", and "airplane" in CIFAR-10, CIFAR-100, and STL-10, respectively. We select the class of "cat" for CIFAR-10, "orange" for CIFAR-100, and "cat" for STL-10 to embed the information of another class into the unlearned samples of the user. Here, both the class of the unlearned data and the additional class are randomly selected. Because the additional information is from the additional class, we mainly report the test accuracy of the model on the additional class, while reporting the overall test accuracy on all classes in Table XI in the Appendix.
As depicted in Table III, we can see that the blending method is effective on CIFAR-10, which demonstrates the feasibility of over-unlearning. When unlearning 400 samples of "airplane" containing information of "cat" on CIFAR-10, the blending method can degrade the accuracy of the unlearned model on "cat" by around 1.4% compared to normal unlearning. A larger performance degradation of 8.1% can be achieved when the number of unlearned samples is increased to 2,000. However, we did not observe any effectiveness of the blending method on CIFAR-100 and STL-10. This indicates that the naive over-unlearning method of simply blending features of samples from another class cannot achieve over-unlearning on complex datasets, which highlights the necessity of advanced methods for over-unlearning.
**Takeaway 1** The naive method of blending for over-unlearning is only effective in the simple classification task, while it cannot be applied to complex tasks with many class categories or complex patterns.
### _Effectiveness of Pushing-I and Pushing-II_
To demonstrate the effectiveness of Pushing-I and Pushing-II for over-unlearning, we conduct experiments on CIFAR-10, CIFAR-100, and STL-10 using the VGG model. We consider the malicious user has no more than 50% of the training samples of "airplane", "apple", and "airplane" in CIFAR-10, CIFAR-100, and STL-10, respectively. Note that the class of the unlearned data is randomly selected and we will demonstrate the choice of the class does not affect the effectiveness of our methods in the ablation study. For Pushing-I, we move the data sample near the decision boundary. For Pushing-II, we move the data sample across the decision boundary, _i.e.,_ move the unlearned data to a decision region other than its original label.
As we can see in Table IV, both Pushing-I and Pushing-II can degrade the overall accuracy of the model compared to normal unlearning, which indicates the effectiveness of the two methods for Over-unlearning-I and Over-unlearning-II. To better demonstrate how severe over-unlearning can be achieved by Pushing-I and Pushing-II, we report the test accuracy of the model on the class that the unlearned data comes from, _i.e.,_ reporting the performance for Over-unlearning-I in the following parts.
As depicted in Figure 5, we can see that both Pushing-I and Pushing-II achieve the goal of Over-unlearning-I on all datasets: the utility of the model is smaller than that of the model under normal unlearning.
TABLE III: Effectiveness of the blending method for Over-unlearning-II when unlearning 10% and 50% of the training data of a class on CIFAR-10, CIFAR-100, and STL-10 (\(\bigcirc\) denotes that no degradation was observed).

| Dataset | # of Samples | Blending Ratio | Performance Degradation |
| --- | --- | --- | --- |
| CIFAR-10 | 400 | 0.3 | 0.7% \(\downarrow\) |
| CIFAR-10 | 400 | 0.5 | 1.4% \(\downarrow\) |
| CIFAR-10 | 2,000 | 0.3 | 2.8% \(\downarrow\) |
| CIFAR-10 | 2,000 | 0.5 | 8.1% \(\downarrow\) |
| CIFAR-100 | 40 | 0.3 | \(\bigcirc\) |
| CIFAR-100 | 40 | 0.5 | \(\bigcirc\) |
| CIFAR-100 | 200 | 0.3 | \(\bigcirc\) |
| CIFAR-100 | 200 | 0.5 | \(\bigcirc\) |
| STL-10 | 40 | 0.3 | \(\bigcirc\) |
| STL-10 | 40 | 0.5 | \(\bigcirc\) |
| STL-10 | 200 | 0.3 | \(\bigcirc\) |
| STL-10 | 200 | 0.5 | \(\bigcirc\) |
For example, as depicted in Figure 5(a), when unlearning 10% of the training data of a class in CIFAR-10, Pushing-I and Pushing-II reduce the utility of the model from 87.8% to around 29.5%, while normal unlearning yields a model accuracy of 81.7%. Over-unlearning becomes more severe when unlearning 50% of the training data of the class. As depicted in Figure 5(b), Pushing-I and Pushing-II reduce the utility of the model on that class to around 2%, indicating that the model becomes useless for predicting the class of the unlearned data. The results in Figure 5 demonstrate the effectiveness of Pushing-I and Pushing-II in achieving the property of performance degradation in malicious unlearning. To demonstrate the property of the stealthiness of the unlearned samples in Pushing-I and Pushing-II, we visualize two "airplane" samples from STL-10 in Figure 9. As we can see, human inspection cannot easily notice the existence of perturbations in the unlearned data.
To study how the unlearned model produced by malicious unlearning differs from the unlearned model produced by normal unlearning, we visualize the prediction distribution of the unlearned model on CIFAR-10 in Figure 6. As we can see, the prediction distribution of the baseline model produced by normal unlearning is very different from that of the unlearned model produced by Pushing-I or Pushing-II. After normally unlearning 400 training samples of "airplane", there are 817 test samples of "airplane" that are correctly predicted as "airplane" by the unlearned model. However, when malicious unlearning with Pushing-I or Pushing-II happens, the over-unlearned model can only predict correctly on around 290 test samples, while largely assigning the rest of the test samples to the labels "2" ("bird") and "8" ("ship"). This phenomenon is exacerbated when unlearning 2,000 training samples of that class, as depicted in Figure 7.
Motivated by the observation in Figure 6 and Figure 7 that most of the wrongly predicted labels are assigned to "bird" or "ship", we further ask whether it is possible to control the wrongly predicted label of the unlearned model on test samples of "airplane" in Over-unlearning-I. If so, Over-unlearning-I can be more dangerous than just reducing the utility of the model. To investigate this possibility, we select a particular decision region and move all the unlearned data to that decision region.
We conduct an experiment on CIFAR-10 to investigate whether it is possible to control the wrongly predicted label of the unlearned model. We leverage the Pushing-I and Pushing-II methods to move the unlearned samples of "airplane" near or across the decision region of "cat".
TABLE IV: The overall accuracy of the model before unlearning, under normal unlearning, under Pushing-I, and under Pushing-II on CIFAR-10, CIFAR-100, and STL-10. The last six columns give the test accuracy when unlearning 10% and 50% of the data of a class, respectively.

| Dataset | Training accuracy | Test accuracy | Normal (10%) | Pushing-I (10%) | Pushing-II (10%) | Normal (50%) | Pushing-I (50%) | Pushing-II (50%) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIFAR-10 | 81.5% | 79.8% | 79.3% | 73.5% (5.8%\(\downarrow\)) | 73.75% (5.6%\(\downarrow\)) | 78.7% | 66.4% (12.3%\(\downarrow\)) | 66.9% (11.8%\(\downarrow\)) |
| CIFAR-100 | 76.3% | 51.1% | 51.1% | 50.7% (0.4%\(\downarrow\)) | 50.7% (0.4%\(\downarrow\)) | 49.7% | 49.1% (0.6%\(\downarrow\)) | 49.2% (0.5%\(\downarrow\)) |
| STL-10 | 96.3% | 56.6% | 56.6% | 56.2% (0.4%\(\downarrow\)) | 49.7% (6.9%\(\downarrow\)) | 50.2% | 49.5% (0.7%\(\downarrow\)) | 32.8% (17.4%\(\downarrow\)) |
Fig. 5: Effectiveness of Pushing-I and Pushing-II for over-unlearning-I when unlearning 10% and 50% training data of a class on CIFAR-10, CIFAR-100, and STL-10.
Fig. 6: Prediction distribution comparison between the model produced by normal unlearning and the models produced by Pushing-I and Pushing-II when unlearning 400 samples of “airplane” on CIFAR-10. The x-axis represents the category that exists in the datasets and “0” represents the class “airplane” of the unlearned data. The y-axis represents the number of predicted labels under that category.
Fig. 7: Prediction distribution comparison between the model produced by normal unlearning and the models produced by Pushing-I and Pushing-II when unlearning 2,000 samples of “airplane” on CIFAR-10.
Table V shows the predicted labels of the model on CIFAR-10. As we can see, before unlearning, there are 878 test samples of "airplane" that are correctly predicted by the trained model. When unlearning 400 samples under normal unlearning, the model predicts most of the test samples as "airplane" and predicts only two test samples of "airplane" as "cat". However, under malicious unlearning, the model increases the wrong predictions on "cat" to 25 (Pushing-I) and 20 (Pushing-II). The wrong predictions on "cat" increase greatly when the user can modify 2,000 samples: there are more than 245 (Pushing-II) and 375 (Pushing-I) wrong predictions of the unlearned model on "cat". This demonstrates that the wrongly predicted label of the unlearned model can indeed be controlled by the malicious user by moving the unlearned data to a fixed decision region of a class. The results in Table V demonstrate that the pushing method can be more dangerous than just reducing the utility of the model. We find the same conclusion on CIFAR-100, and the results are demonstrated in Table VI.
**Takeaway 2**
* The proposed Pushing-I and Pushing-II methods are effective and reliable in achieving over-unlearning.
* The wrongly predicted label of the unlearned model can be controlled in over-unlearning, which is a more severe threat than just reducing the utility of the model.
## VI Ablation Study
### _Ablation Study for the Blending Method_
Although the blending method cannot achieve over-unlearning in complex classification tasks, it works to a certain extent on simple tasks and has the advantage of introducing no computational overhead. Thus, for simple classification tasks, malicious users may still consider the blending method as an option for over-unlearning. We study how the number of unlearned samples and the blending ratio affect the performance of the blending method. We also study whether the blending method is generic when the embedded information is from classes other than the class we have evaluated.
**Number of Unlearned Samples.** We first study how the number of unlearned samples can affect the performance of the blending method. Intuitively, the more data the malicious user has, the more information of the additional class the user can embed, which should cause a larger effect on the unlearned model on that class. We set the blending ratio to 0.3. We use the model of VGG and vary the number of unlearned samples from 400, 800, 1,200, 1,600, to 2,000. We embed "cat" into the unlearned data of "airplane" in CIFAR-10. We report the difference (_i.e.,_ performance degradation) between the accuracy on the "cat" class of the model under normal unlearning and the model produced by the blending method.
As we can see in Figure 8(a), a larger number of unlearned samples can enable the blending method to be more effective, which aligns with our intuition. When the malicious user has 400 samples, the blending method can achieve an accuracy degradation of around 0.7% on the model compared to normal unlearning. When having 2,000 unlearned samples, the blending method can achieve around 2.8% accuracy degradation. Although the performance degradation caused by the blending method is not large, this naive approach demonstrates the feasibility of over-unlearning.
**Blending Ratio.** We study how the blending ratio affects the effectiveness of the blending method. Intuitively, the larger the blending ratio is, the more information of the additional class can be embedded into the unlearned data. We use the VGG model and set the number of unlearned samples to 400. We vary the blending ratio from 0.1, 0.2, 0.3, 0.4, to 0.5 and embed "cat" into the unlearned data of "airplane" in CIFAR-10.
Figure 8(b) shows the accuracy of the unlearned model produced by the blending method under different blending ratios. As we can see, the blending method is more effective with a higher blending ratio. However, with a higher blending ratio, the pattern of the sample from the other class is more obvious, which can make the injected information of the other class less stealthy.
**Class Options.** We study whether the blending method works on other classes than the class we have evaluated. We use the VGG model and set the number of unlearned samples to 2,000 and the blending ratio to 0.3. We embed the information of the classes of "bird", "horse", and "ship" respectively, into the samples of "airplane" in CIFAR-10.
Table VII shows the accuracy of the unlearned model under normal unlearning and under the blending method on "bird", "horse", and "ship". As we can see, the blending method only reduces the accuracy of the model slightly compared to normal unlearning. This suggests that the naive blending method for over-unlearning may not be generic and reliable, which highlights the importance of using the advanced pushing method to achieve over-unlearning.
**Takeaway 3** With a greater number of unlearned samples and higher blending ratios, the naive blending method can achieve over-unlearning more effectively.
### _Ablation Study for Pushing-I and Pushing-II_
**Number of Unlearned Samples.** We first study how the number of unlearned samples affects the performance of Pushing-I and Pushing-II. Intuitively, the more data the malicious user has, the larger the effect the user can cause on the model. We use the VGG model and vary the number of unlearned data samples from 400, 800, 1,200, 1,600, to 2,000 in the class "airplane" in CIFAR-10. We report the difference (_i.e.,_ performance degradation) between the accuracy of the model produced by normal unlearning and the model produced by Pushing-I and Pushing-II.

Fig. 8: Accuracy degradation of the model w.r.t. the number of unlearned samples and the blending ratio in the blending method. In general, more unlearned samples and a higher blending ratio result in higher accuracy degradation of the model.
As we can see in Figure 10, a larger number of unlearned samples makes both Pushing-I and Pushing-II more effective. When the malicious user has 400 samples, Pushing-I and Pushing-II can degrade the accuracy of the model by around 6% compared to normal unlearning. With 2,000 unlearned samples, Pushing-I and Pushing-II can achieve around 12% accuracy degradation, which is twice that of the case with 400 samples. Comparing Pushing-I and Pushing-II, each appears to have its own advantages depending on the number of samples the malicious user has. However, note that Pushing-I has the advantage of sample stealthiness in both the feature and the label. Thus, Pushing-I may be preferred by the malicious user for achieving over-unlearning.
**Model Architecture.** We study how different model architectures may affect the effectiveness of Pushing-I and Pushing-II. We use CIFAR-10 and set the number of unlearned samples in the class "airplane" to 400. We use two different model architectures, VGG and ResNet.
Table VIII demonstrates the effectiveness of Pushing-I and Pushing-II across VGG and ResNet.
TABLE V: The predicted labels of the model on CIFAR-10 before unlearning, under normal unlearning, and under malicious unlearning. "airplane" is the label of the unlearned data and "cat" is the target label.

| # unlearned samples | Model status | Airplane | Automobile | Bird | Cat | Deer | Dog | Frog | Horse | Ship | Truck |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | Before unlearning | 878 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 400 | Normal unlearning | 817 | 4 | 11 | 2 | 5 | 1 | 1 | 0 | 19 | 18 |
| 400 | Pushing-I | 509 | 18 | 128 | 25 | 16 | 1 | 11 | 5 | 127 | 38 |
| 400 | Pushing-II | 538 | 16 | 117 | 20 | 13 | 1 | 11 | 5 | 117 | 40 |
| 2,000 | Normal unlearning | 708 | 9 | 53 | 5 | 7 | 1 | 1 | 4 | 58 | 32 |
| 2,000 | Pushing-I | 26 | 34 | 100 | 378 | 20 | 1 | 2 | 15 | 214 | 88 |
| 2,000 | Pushing-II | 31 | 35 | 164 | 247 | 19 | 1 | 3 | 14 | 281 | 83 |
TABLE VII: Unlearned model accuracy w.r.t. different classes under the blending method.

| | Bird | Horse | Ship |
| --- | --- | --- | --- |
| Normal unlearning | 74.6% | 80.5% | 94.0% |
| Blending | 74.4% (0.2%\(\downarrow\)) | 80.4% (0.1%\(\downarrow\)) | 93.8% (0.2%\(\downarrow\)) |
Fig. 10: Accuracy degradation of the model w.r.t. number of unlearned samples. In general, more unlearned samples can result in higher accuracy degradation of the model.
TABLE VI: The predicted labels (top-5) of the model on CIFAR-100 before unlearning, under normal unlearning, and under malicious unlearning. "apple" is the label of the unlearned data and "sweet pepper" is the target label.

| # unlearned samples | Model status | Apple | Pear | Sweet pepper | Orange | Trout |
| --- | --- | --- | --- | --- | --- | --- |
| 0 | Before unlearning | 71 | 0 | 0 | 0 | 0 |
| 40 | Normal unlearning | 70 | 0 | 1 | 0 | 0 |
| 40 | Pushing-I | 57 | 2 | 10 | 2 | 0 |
| 40 | Pushing-II | 59 | 3 | 8 | 1 | 0 |
| 200 | Normal unlearning | 54 | 4 | 13 | 0 | 0 |
| 200 | Pushing-I | 2 | 7 | 41 | 19 | 2 |
| 200 | Pushing-II | 5 | 7 | 45 | 14 | 0 |
Fig. 9: Stealthiness of the modified examples in the Pushing method: STL-10 samples of “airplane” used for Pushing-I and Pushing-II. Human inspection cannot easily notice the existence of perturbations in the unlearned data.
As we can see, Pushing-I degrades the model accuracy by around 5.7% and 2.5% on VGG and ResNet, respectively, compared to normal unlearning, and Pushing-II degrades the model accuracy by around 5.6% and 1.1%. The effectiveness of Pushing-I and Pushing-II on both model architectures implies that our proposed strategy of moving data to the decision boundary of the model for achieving over-unlearning is model-agnostic. Comparing the accuracy degradation across VGG and ResNet, we find that VGG suffers more accuracy degradation than ResNet, while VGG has higher accuracy than ResNet when the data is normally unlearned. This suggests that models with higher utility might be more vulnerable to the threat of over-unlearning, which further underlines the importance of investigating malicious unlearning.
**Model Depth.** We study how the depth of the model may affect the effectiveness of Pushing-I and Pushing-II. We set the number of unlearned samples to 400 in the class "airplane" in CIFAR-10. We use the VGG model with different numbers of blocks to simulate models of different depths.
Table IX shows the accuracy of the unlearned model with different depths under normal unlearning, Pushing-I, and Pushing-II, respectively. As we can see, both Pushing-I and Pushing-II are effective in achieving the goal of over-unlearning on models with different numbers of VGG blocks. When comparing the models with different depths under the same unlearning setting, we find that the utility of the model with more blocks is usually smaller than that of the model with fewer blocks. For example, the unlearned model with 3 blocks has an accuracy of 73.6% under Pushing-I, while with 5 blocks the model has an accuracy of 65.6%. Under normal unlearning, the unlearned model with 3 blocks has an accuracy of 79.3%, while with 5 blocks the unlearned model has an accuracy of 73.3%. This suggests that deeper models might be more easily affected by unlearning, whether under normal or malicious unlearning.
**Class Option.** We study whether Pushing-I and Pushing-II work when the user has training samples of classes other than the class we have evaluated. We use the VGG model and set the number of unlearned samples to 400. We vary the class of the unlearned samples among "bird", "horse", and "ship" in CIFAR-10.
Table X shows the accuracy of the unlearned model under Pushing-I and Pushing-II when the user has the unlearned samples in the class of "bird", "horse", and "ship". As we can see, for all the classes, Pushing-I and Pushing-II can degrade the accuracy of the model by around 2% to 5% compared to normal unlearning. The effectiveness of both Pushing-I and Pushing-II in achieving the goal of over-unlearning-I across different classes demonstrates the generalization ability of the advanced pushing methods for over-unlearning.
**Takeaway 4**
* Pushing-I and Pushing-II are effective across different model architectures and depths, and they are generic to achieve over-unlearning across different classes.
* More unlearned samples can result in more effective Pushing-I and Pushing-II for over-unlearning.
## VII Discussion
**Naive vs. Advanced Over-unlearning.** Even though the blending-based strategy is cost-efficient and model-agnostic, the over-unlearning it induces is conducted in a blind way: it is only effective in simple classification tasks and is hard to generalize to complex tasks with many class categories or complex patterns. Nevertheless, it motivates the advanced over-unlearning strategies, which craft perturbations that push the sample close to the decision boundary.
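The blending idea can be sketched in a few lines; the convex combination with a single target-class image and the mixing ratio `alpha` below are illustrative assumptions rather than the exact construction evaluated in our experiments.

```python
import numpy as np

def blend_for_unlearning(x_unlearn: np.ndarray, x_target: np.ndarray, alpha: float = 0.2) -> np.ndarray:
    """Blending-style modification of a sample submitted for unlearning:
    mix a small fraction of a target-class image into the unlearned sample.
    `alpha` is an assumed mixing ratio; pixel values are kept in [0, 1]."""
    return np.clip((1.0 - alpha) * x_unlearn + alpha * x_target, 0.0, 1.0)
```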
**Difference from Poisoning and Adversarial Attacks.** The blending-based over-unlearning resembles data poisoning, particularly when the subtlety of the label is disregarded. A key distinction, however, is that our blending scenario does not require a training or re-training procedure, which is typically essential for executing a standard poisoning attack. The pushing pipeline is akin to crafting an adversarial example, but there is a fundamental difference in objectives: unlike adversarial attacks, which aim to deceive a classifier, our goal is to push the sample closer to the decision boundary. On the other hand, existing advanced techniques for data poisoning attacks and adversarial attacks can also be incorporated into our design.
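A minimal PyTorch sketch of this boundary-pushing objective is shown below; it is not the exact Pushing-I or Pushing-II implementation, and the step size, iteration count, and L-infinity budget `eps` are assumed values. The loop descends the margin between the true-class logit and the strongest competing logit so that the perturbed sample ends up near the decision boundary.

```python
import torch

def push_to_boundary(model, x, y, eps=8 / 255, step=1 / 255, iters=20):
    """Perturb x (within an L-infinity ball of radius eps) so that the margin
    between the true-class logit and the runner-up logit shrinks, moving the
    sample toward the model's decision boundary."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        logits = model(x_adv)
        true_logit = logits.gather(1, y.unsqueeze(1)).squeeze(1)
        runner_up = logits.scatter(1, y.unsqueeze(1), float("-inf")).max(dim=1).values
        margin = (true_logit - runner_up).sum()        # positive while still classified as y
        grad = torch.autograd.grad(margin, x_adv)[0]
        x_adv = (x_adv - step * grad.sign()).detach()  # descend the margin
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0.0, 1.0)
    return x_adv
```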
**Possible Defence.** In our study, we show that over-unlearning can break the balance between model performance and unlearning service guarantees. To protect against over-unlearning, there are several possible protection strategies the server might leverage.
_Hashing as a Possible Defense._ The server can consider hashing techniques [52] to verify the authenticity of unlearning requests raised by users. Specifically, the model owner can hash the training samples and send the hash values to the server for storage. The server then rejects a malicious unlearning request if the hash value of the unlearned sample does not match any stored hash value. However, applying
\begin{table}
\begin{tabular}{l c c c} \hline \hline Base class & Bird & Horse & Ship \\ \hline Normal unlearning & 76.2\% & 78.4\% & 77.5\% \\ Pushing-I & 74.1\% (2.1\%\(\downarrow\)) & 75.6\% (2.8\%\(\downarrow\)) & 73.5\% (4.0\%\(\downarrow\)) \\ Pushing-II & 73.7\% (2.5\%\(\downarrow\)) & 75.8\% (2.6\%\(\downarrow\)) & 72.9\% (4.6\%\(\downarrow\)) \\ \hline \hline \end{tabular}
\end{table} TABLE X: Unlearned model accuracy w.r.t. different classes.
\begin{table}
\begin{tabular}{l c c c} \hline \hline VGG model & 3 blocks & 4 blocks & 5 blocks \\ \hline Normal unlearning & 79.3\% & 76.5\% & 73.3\% \\ Pushing-I & 73.6\% (5.7\%\(\downarrow\)) & 72.7\% (3.8\%\(\downarrow\)) & 65.6\% (7.7\%\(\downarrow\)) \\ Pushing-II & 73.7\% (5.6\%\(\downarrow\)) & 69.5\% (7.1\%\(\downarrow\)) & 65.7\% (7.6\%\(\downarrow\)) \\ \hline \hline \end{tabular}
\end{table} TABLE IX: Unlearned model accuracy w.r.t. different model depths.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Normal unlearning & Pushing-I & Pushing-II \\ \hline VGG & 79.3\% & 73.6\% (5.7\%\(\downarrow\)) & 73.7\% (5.6\%\(\downarrow\)) \\ ResNet & 69.4\% & 66.9\% (2.5\%\(\downarrow\)) & 68.3\% (1.1\%\(\downarrow\)) \\ \hline \hline \end{tabular}
\end{table} TABLE VIII: Unlearned model accuracy w.r.t. different model architectures.
hashing techniques in the MLaaS context is problematic in both theory and practice because of several concerns. The first concern is privacy breach. One of the most significant advantages of isolating the dataset between the model developer and the service provider is to protect the privacy of data contributors. Put differently, the service provider should have no knowledge of the model's training sets. Providing hashing results or dataset identification access to service providers may expose the nature of the dataset used locally, leading to potential privacy breaches, such as revealing the membership of an individual [48, 26] or enabling a privacy linkage attack [37] (e.g., re-identifying individuals in anonymized datasets). The second concern is false rejection of legitimate unlearning requests. Hashing algorithms designed to ensure unlearning authenticity only function when the data samples uploaded by the user and received by the server are identical to the original samples the user provided for training the model. If a data sample changes due to compression (e.g., PNG to JPEG), network transmission issues, or transcoding, hashing can falsely reject users' legitimate unlearning requests, which may have severe consequences for the service provider, including potential legal fines under GDPR [43].
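A minimal sketch of such a hash-based check, assuming SHA-256 over the raw sample bytes and a hypothetical `training_file_paths` list held by the model owner; as noted above, any re-encoding of the sample breaks the match.

```python
import hashlib

def sample_digest(raw_bytes: bytes) -> str:
    """Digest of the exact byte stream of a training sample."""
    return hashlib.sha256(raw_bytes).hexdigest()

# Model owner: register digests of the training samples with the server.
training_file_paths = ["data/airplane_0001.png"]  # hypothetical paths
registered = {sample_digest(open(p, "rb").read()) for p in training_file_paths}

def accept_unlearning_request(submitted_bytes: bytes) -> bool:
    """Server side: accept only byte-identical submissions; a PNG -> JPEG
    re-encoding of the same image would be (falsely) rejected."""
    return sample_digest(submitted_bytes) in registered
```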
_Membership Inference._ The server can also consider membership inference techniques [48, 11, 44] for authenticity verification, because membership inference can identify whether a data sample was part of the training set. However, membership inference techniques suffer from the heavy computational cost of training an inference model and from low inference accuracy. They are also not very effective on well-generalized models [27].
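As an illustration only, a loss-threshold heuristic is about the simplest membership signal that could stand in for a trained inference model; the threshold below is an assumed value that would need calibration, and, as noted above, the signal weakens on well-generalized models.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def is_probable_member(model, x, y, loss_threshold: float = 0.5) -> bool:
    """Flag a submitted (x, y) pair as a likely training member if its
    cross-entropy loss falls below an assumed, pre-calibrated threshold."""
    loss = F.cross_entropy(model(x), y)
    return loss.item() < loss_threshold
```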
_Other Defences._ The server can also use anomaly detection methodologies [40] to further scrutinize the submitted unlearning samples. However, since the proposed over-unlearning is contingent on the user's request, generic anomaly detection is ineffective due to the lack of training data similar to the user's submission. One heuristic mitigation is for the service provider to carefully monitor the run-time model performance during deployment.
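That monitoring heuristic might look like the following sketch, assuming the server keeps a held-out validation set and an assumed tolerance `max_drop` on the accuracy loss allowed per unlearning update.

```python
import torch

@torch.no_grad()
def accuracy(model, loader) -> float:
    correct = total = 0
    for x, y in loader:
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def flag_over_unlearning(model, val_loader, acc_before: float, max_drop: float = 0.03) -> bool:
    """Flag an unlearning update whose held-out accuracy drop exceeds the
    assumed tolerance, so that the request can be audited before deployment."""
    return (acc_before - accuracy(model, val_loader)) > max_drop
```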
As we discussed above, the existing defence techniques have limitations in defending against over-unlearning threats. More advanced adversarial attacks and more robust countermeasures could be incorporated into the over-unlearning design recursively, which provides an avenue for future research.
## VIII Conclusion
This paper has provided a pioneering exploration into the threats associated with machine unlearning services in the real Machine Learning as a Service (MLaaS) environment. Through a comprehensive investigation, we identified over-unlearning as a significant risk that can compromise the model's utility when malicious unlearning data is submitted by users. We proposed effective strategies for exploiting these risks, requiring only black-box access to the models, thus shedding light on the vulnerabilities of current unlearning methods in MLaaS contexts. Our extensive experiments and comprehensive ablation studies have shown that these strategies can effectively induce over-unlearning across different settings and with various model architectures. These findings underline the critical need to address the highlighted risks, specifically for servers providing MLaaS to maintain their service integrity and ensure compliance with privacy regulations.
As machine unlearning services continue to become increasingly relevant in light of privacy concerns, our research serves as a stepping stone in understanding and mitigating the potential threats posed by these services. Future work should continue this line of inquiry, developing more secure unlearning methods and policies to assure the delicate balance between data privacy, model utility, and service security in MLaaS environments.
## Acknowledgments
The work has been supported by Cybersecurity and Quantum Systems group at CSIRO's Data61. Minhui Xue and Shuo Wang are the corresponding authors of this paper.
|
2309.07103 | Comparing Llama-2 and GPT-3 LLMs for HPC kernels generation | We evaluate the use of the open-source Llama-2 model for generating
well-known, high-performance computing kernels (e.g., AXPY, GEMV, GEMM) on
different parallel programming models and languages (e.g., C++: OpenMP, OpenMP
Offload, OpenACC, CUDA, HIP; Fortran: OpenMP, OpenMP Offload, OpenACC; Python:
numpy, Numba, pyCUDA, cuPy; and Julia: Threads, CUDA.jl, AMDGPU.jl). We built
upon our previous work that is based on the OpenAI Codex, which is a descendant
of GPT-3, to generate similar kernels with simple prompts via GitHub Copilot.
Our goal is to compare the accuracy of Llama-2 and our original GPT-3 baseline
by using a similar metric. Llama-2 has a simplified model that shows
competitive or even superior accuracy. We also report on the differences
between these foundational large language models as generative AI continues to
redefine human-computer interactions. Overall, Copilot generates codes that are
more reliable but less optimized, whereas codes generated by Llama-2 are less
reliable but more optimized when correct. | Pedro Valero-Lara, Alexis Huante, Mustafa Al Lail, William F. Godoy, Keita Teranishi, Prasanna Balaprakash, Jeffrey S. Vetter | 2023-09-12T01:19:54Z | http://arxiv.org/abs/2309.07103v1 | # Comparing Llama-2 and GPT-3 LLMs for HPC kernels generation
###### Abstract
We evaluate the use of the open-source Llama-2 model for generating well-known, high-performance computing kernels (e.g., AXPY, GEMV, GEMM) on different parallel programming models and languages (e.g., C++: OpenMP, OpenMP Offload, OpenACC, CUDA, HIP; Fortran: OpenMP, OpenMP Offload, OpenACC; Python: numpy, Numba, pyCUDA, cuPy; and Julia: Threads, CUDA.jl, AMDGPU.jl). We built upon our previous work that is based on the OpenAI Codex, which is a descendant of GPT-3, to generate similar kernels with simple prompts via GitHub Copilot. Our goal is to compare the accuracy of Llama-2 and our original GPT-3 baseline by using a similar metric. Llama-2 has a simplified model that shows competitive or even superior accuracy. We also report on the differences between these foundational large language models as generative AI continues to redefine human-computer interactions. Overall, Copilot generates codes that are more reliable but less optimized, whereas codes generated by Llama-2 are less reliable but more optimized when correct.
Keywords: LLM, HPC, Llama-2, GPT.
## 1 Introduction
Generative-AI large language models (LLMs) are transforming the software industry by automating manual tasks, such as developing, testing, and deploying applications. The use of LLMs could lead to faster and more cost-effective software development. LLMs are also revolutionizing entertainment, education, and healthcare industries by creating realistic images, text, music, and code. However, there are social and ethical concerns surrounding LLMs, including the risk of deep fakes being created and distributed as misinformation or to harm individuals. Therefore, the risks and benefits of LLMs must be carefully considered before widespread adoption.
The emergence of exascale computing presents a challenge in developing software for high-performance computing (HPC) systems owing to the varying hardware and programming models in these complex architectures. To address this challenge, AI-assisted code generation could be used. LLMs can generate code in commonly used programming languages, including C++, Fortran, Python, and Julia. This innovation could make software development for HPC more efficient and manageable. However, AI-assisted code generation has limitations, given that it may not always produce code that is as efficient or reliable as human-written code. The current state of practice, the limitations, and the potential of LLMs must be fully understood to realize their benefits.
The effort described in this paper builds on our previous work [7], in which we investigated the effectiveness of OpenAI Codex for generating HPC code for various numerical kernels in different programming languages and models, including C++, Fortran, Python, and Julia. The study found that the output of OpenAI Codex for C++ is closely linked to the popularity and sophistication of programming models. For example, OpenMP [17] and CUDA [15] received high scores because they are widely used and well-established programming models. However, HIP [1] received lower scores because it is a newer programming model that is not as widely used. The study also found that prompts in Fortran or Python can benefit from incorporating code keywords. However, Julia's prompts perform adequately without the need for code keywords for its mature HPC programming models.
This paper also describes our evaluation of Meta AI's LLM (Llama-2) for generating HPC kernels. The version of Llama-2 we used has 70 billion parameters and was provided by Hugging Chat, an open-source chat bot platform that relies on LLMs to power its conversations. This platform is built on top of the Hugging Face ecosystem. Our evaluation involves generating code for three fundamental numerical kernels: AXPY, GEMV, and GEMM. We then test the resulting 144 kernel codes in four programming languages, C++, Fortran, Python, and Julia, by using various programming models and compilers. These included OpenMP, OpenACC [16], CUDA, HIP, numpy [21], Numba [12], cuPy [14], pyCUDA [10], Julia's Base Threads [11], CUDA.jl [3], and AMDGPU.jl [18].
The paper is organized as follows: Section 2 provides an overview of related efforts that have brought attention to these topics in computer science. Section 3 outlines our methodology for generating and evaluating the code with Llama-2. In Section 4, we present the results of our evaluation and our findings for each language, kernel, and programming model along with additional keyword inputs on the generated outputs. Finally, Section 5 presents our conclusions.
## 2 Related Work
The Generative Pre-trained Transformer 3 (GPT-3) [5] is a game changer in the evolution of human-computer interactions. Developed by OpenAI,1 GPT-3 is the third generation of the prediction-based foundational LLM used for
several AI-generated, human-like text applications. GPT-3 is used in several natural language processing tasks [9], including ChatGPT, due in part to the large investment ($12 million USD) and the size of its training model (175 billion parameters at 800 GB). Hence, GPT-3 and its successor GPT-4 are defining several societal questions for the near future. Today, we are at the beginning of a race to develop the best LLM. In addition to GPT, recently released foundational LLMs include Llama-2 [20] and PaLM 2.
Footnote 4: [https://openai.com/product/gpt-4](https://openai.com/product/gpt-4)
Footnote 5: [https://ai.google/discover/palm2/](https://ai.google/discover/palm2/)
As we enter the exascale computing era, which is dominated by the extreme heterogeneity of hardware and programming models [23], AI-assisted code generation could play a key role in how we develop, deploy, and test software that targets HPC systems. Traditional human-readable code in languages such as C++[19], Fortran [2], Python [22], and more recently Julia [4], are a straightforward application for LLM's capabilities--capabilities that could help redefine software development. In fact, this rapidly evolving field was recently surveyed in our previous work [7], in which we evaluated the performance of the GPT-3 descendant OpenAI Codex for HPC kernel generation by using GitHub Copilot for several parallel programming models. The quality of the responses depends largely on the number of repositories and programming model maturity. Nichols et al. [13] fine-tuned the use of LLMs to improve the generation of OpenMP pragmas in parallel algorithm implementations, including MPI cases. Chen et al. [6] presented LM4HPC, a framework to conduct HPC-specific tasks in the context of LLMs, and highlighted the lack of training and evaluation datasets in HPC. Hence, we expect to see more work in the convergence of HPC and generative AI via LLMs because of the field's rapid evolution. To the best of our knowledge, this is the first evaluation of Llama-2 for the generation and correctness of HPC kernels and comparison to our baseline from previous work.
## 3 Methodology
First, we use prompts similar to those in our previous research [7], which are simple prompts based on the programming language, kernel, and programming model. The quality of the prompt is important because it determines how the LLM will generate the requested code based on the information provided. So, several adjustments were made to the prompt until Llama-2 was outputting the code requested. Importantly, the output from the LLM also depends on the data used to train the model. For example, the LLM may not be trained well enough for a particular language or model and may therefore produce inaccurate code no matter the prompt given.
The methodology used in this study involves two main characteristics that will be discussed in the next subsections: (1) how we prompted Llama-2 for code generation based on the kernel, parallel programming model, and programming language and (2) a code correctness metric that will be evaluated by expert observation.
### Experimental Setup
For our experiments, we used the Hugging Chat website, which, as of August 2023, uses the largest model of Llama-2 called Llama-2-70B. We created an account on Hugging Face to access the necessary features. As shown in Figure 1, the website features a chat box for the user to input their query for the LLM.
An example of the prompt and the generated code on Llama-2 is illustrated in Figure 1. The structure of the prompt is as follows:
* Create 3 code suggestions using the following parameters: \(\langle\)Programming Language\(\rangle\)\(\langle\)Programming Model\(\rangle\)\(\langle\)Kernel\(\rangle\)\(\langle\)Keyword\(\rangle\).
* Create 3 code suggestions using the following parameters: \(\langle\)Programming Language\(\rangle\)\(\langle\)Programming Model\(\rangle\)\(\langle\)Kernel\(\rangle\).
Unlike our previous study based on the GitHub Copilot model [7], which can provide one or more codes, we must specify the number of code suggestions we want when using Llama-2. Importantly, the first prompt is used for C++, Fortran, and Python, whereas the second prompt is used only for Julia. This is because, according to previous research, they determined there was slight sensitivity in Julia prompts when adding a keyword [7]. For the \(\langle\)Kernel\(\rangle\) section, instead of prompting "GEMV" or "GEMM," we used the full form of the abbreviations, which are "general matrix-vector multiply" and "general matrix-matrix multiply," respectively. This is because Llama-2 does not interpret what the abbreviations mean. Additionally, Llama-2 has a character limit, so when prompting for three code suggestions, sometimes it could not finish all three codes. Whenever this was the case, we prompted the LLM to continue with the
Figure 1: Hugging Chat website interface.
code generation by saying, "please continue with the code," "you stopped, please continue," or similar.
Next, Table 1 lists all the programming languages, programming models, and keywords used in this study. We used the AXPY, GEMV, and GEMM kernels for every programming model. These kernels correspond to one specific operation from each of the three levels of the Basic Linear Algebra Subprograms (BLAS) library.6 The AXPY level-1 BLAS routine computes a scalar-vector multiplication added to a vector, the GEMV level-2 BLAS routine computes a matrix-vector multiplication, and the GEMM level-3 BLAS routine computes a matrix-matrix multiplication. The BLAS operations increase in complexity with each level, and the higher the level of the routine, the more possibilities there are for optimization.
Footnote 6: [https://www.netlib.org/blas/](https://www.netlib.org/blas/)
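For reference, a minimal numpy sketch of the three operations that the generated kernels are expected to implement is shown below; these are not codes produced by either LLM, and some of our prompts ask for simpler variants without the scaling factors.

```python
import numpy as np

def axpy(alpha, x, y):
    """Level-1 BLAS: y <- alpha * x + y."""
    return alpha * x + y

def gemv(alpha, A, x, beta, y):
    """Level-2 BLAS: y <- alpha * A @ x + beta * y."""
    return alpha * (A @ x) + beta * y

def gemm(alpha, A, B, beta, C):
    """Level-3 BLAS: C <- alpha * A @ B + beta * C."""
    return alpha * (A @ B) + beta * C
```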
We used a total of 48 prompts, which resulted in 144 codes generated by Llama-2. These codes will be evaluated by the correctness metric described in the next subsection, and we will compare the results to those of the LLM Copilot model from earlier work [7].
### Correctness metric
To evaluate the correctness of the generated codes, we use the simple metric approach from our previous work [7]. We consider five levels of correctness and proficiency labels between \([0]\), or _non-knowledge_, and \([1]\), or _expert_, when observing the suggested answers provided by Llama-2 for each combination in Table 1.
* _non-knowledge_: No code at all or not a single correct code.
\begin{table}
\begin{tabular}{|l|l|l|} \hline \multicolumn{3}{c|}{Kernels: AXPY, GEMV, GEMM} \\ \hline Programming Language & Programming Model & Keyword \\ \hline C++ & OpenMP & function \\ & OpenMP(offload) & function \\ & OpenACC & function \\ & CUDA & function \\ & HIP & function \\ \hline Fortran & OpenMP & subroutine \\ & OpenMP(offload) & subroutine \\ & OpenACC & subroutine \\ \hline Python & numpy & def \\ & Numba & def \\ & pyCUDA & def \\ & cuPy & def \\ \hline Julia & Threads & \\ & CUDA & \\ & AMDGPU & \\ \hline \end{tabular}
\end{table}
Table 1: Parameters used for code generation
* _novice_: One correct code, but it includes several other correct or incorrect programming models (e.g., OpenACC suggestions in an OpenMP prompt).
* _learner_: One correct code, and there are other incorrect codes, but all of them use the requested programming model.
* _proficient_: All codes are correct and use the programming model requested.
* _expert_: Only one piece of code is provided, and it is totally correct.
As mentioned, to make the analysis similar to our previous study on the GitHub Copilot LLM, and to obtain more than one code from Llama-2, we must specify the number of codes that we want. We therefore use the highest label (expert) for cases in which Llama-2 generates all three requested codes correctly.
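When plotting these results, the labels can be mapped to numeric scores as in the sketch below; only the endpoints are fixed by the metric definition, so the evenly spaced intermediate values are an assumption.

```python
# Hypothetical label-to-score mapping: 0 (non-knowledge) and 1 (expert) are
# fixed by the metric definition; the intermediate values are assumed to be
# evenly spaced.
CORRECTNESS_SCORE = {
    "non-knowledge": 0.00,
    "novice": 0.25,
    "learner": 0.50,
    "proficient": 0.75,
    "expert": 1.00,
}

def score(label: str) -> float:
    return CORRECTNESS_SCORE[label]
```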
## 4 Results
The following subsections describe our evaluation of the HPC kernels generated by the Llama-2 LLM for four different programming languages: C++, Fortran, Julia, and Python. The code generated by Llama-2 has also been collected and uploaded to a GitHub repository.7
Footnote 7: [https://github.com/mustafalail/Llama-2-70b-HPC-Kernels](https://github.com/mustafalail/Llama-2-70b-HPC-Kernels)
### C++
C++ has become the primary programming language used for heterogeneous HPC architectures due to the support that the open-source and vendor communities provide in terms of programming models and compilers. Examples include OpenMP, OpenACC, and CUDA, among others such as HIP, Kokkos, and SYCL. In this study, we focused on the most popular, mature, and widely used programming models in the HPC community: OpenMP, OpenACC, CUDA, and HIP.
#### 4.1.1 OpenMP
OpenMP is considered the de facto standard for parallel programming. The OpenMP codes generated by Llama-2 have the highest quality among the C++ codes. Notably, Llama-2 can leverage relatively advanced OpenMP techniques, including tasking (#pragma omp task), atomic operations (#pragma omp atomic update), and single instruction multiple data (SIMD) primitives (#pragma omp simd), among others (#pragma omp critical). However, not all codes are correct. Also, in some cases, the OpenMP code provided used a defined number of threads. This is very dependent on the architecture to be used. In general, the number of threads should be equal to the number of cores (#pragma omp parallel num_threads(4)). In some particular cases, in the codes corresponding to the AXPY kernel, Llama-2 provided codes that, although similar to the operation conducted by this BLAS routine, were not exactly the same. For instance, the codes did not use a scalar, or they computed other operations,
such as dot product. This is not the same for the other operations evaluated (i.e., matrix-vector and matrix-matrix multiplication) in which the codes provided were correct and functional.
We also see significant errors for the OpenMP target offloading case. In most cases, the code generated was a mix of CUDA and OpenMP codes. Also, the OpenMP primitives used did not correspond to OpenMP target offloading. Unlike the previous case, all generated codes were incorrect.
#### 4.1.2 OpenACC
A similar scenario is observed in the OpenACC case for the AXPY operation: all codes provided were incorrect and were a mix of CUDA codes with OpenACC primitives. However, much higher quality was found for the other two kernels, in which the OpenACC primitives were used effectively. Indeed, we see some advanced techniques, such as the use of "collapse" to merge two nested, independent for loops into a single parallel loop (#pragma acc loop independent collapse(2)). We also see an effective movement of data between CPU and GPU in some codes and an effective use of tiling/blocking to decompose the matrices. In this case, the codes provided for the matrix-vector and matrix-matrix multiplication kernels were correct.
#### 4.1.3 HIP
For HIP codes, we found the same error in most of the codes that correspond to the computation of the thread index (int ind = hipBlockDim_x * hipBlockIdx_x + hipThreadIdx_x;). In some cases, this index was not even computed or it was only partially computed. This relatively simple error breaks the entire code, even if the rest of the code is correct. Other common errors found include using the same names for both CPU and GPU memory pointers, using bi-dimensional blocks of threads to launch the kernels when the kernel implementation only uses uni-dimensional blocks of threads (or vice-versa), and the wrong use of GPU shared memory. Also, as in the previous OpenMP and OpenACC analyses, we saw a mix of HIP and CUDA codes. In this case, we found errors in all three test cases (AXPY, matrix-vector, and matrix-matrix multiplications).
#### 4.1.4 CUDA
Although we found better-quality codes for CUDA than for HIP, the Llama-2-generated CUDA codes still contained some important errors. For instance, using __device__ function decorators for the kernel implementations when the correct decorator is __global__, using the wrong name for CUDA library functions (hipCublasSdot), and initializing GPU memory arrays from the CPU are just a few examples of the errors found. However, all of these errors were found in the AXPY kernel. The code generated for the other two kernels was correct and free of errors. In fact, we observed the effective use of important optimization techniques, such as shared memory (__shared__ float smem[32][32];) and registers (register float rA[32];), which are used to implement relatively complex algorithms based on tiling/blocking for matrix computation.
### Fortran
Fortran was one of the first widely used programming languages for HPC back in the 1970s. In fact, with reasonably good support for current HPC standards, Fortran is still an important programming language for HPC. In the Fortran community, there are two predominant parallel programming models: (1) OpenMP, which is more focused on providing parallel codes for CPUs, and (2) OpenACC, which is more focused on GPUs.
#### 4.2.1 OpenMP
Unlike the C++ codes generated by Llama-2 for OpenMP, we see much better results from Llama-2 when generating Fortran code for OpenMP, especially for the AXPY routine: all generated codes were correct and made use of a scalar. The code generated for the other two kernels also used the OpenMP decorators efficiently. Notably, although no advanced OpenMP primitives (e.g., SIMD, collapse) were used, Llama-2 produced relatively well-optimized algorithms based on tiling/blocking for the matrix-matrix multiplication kernel. Unfortunately, this was not the case for OpenMP target offloading, for which none of the codes provided made correct use of the OpenMP primitives.
#### 4.2.2 OpenACC
For OpenACC, Llama-2 provided the wrong OpenACC codes for AXPY kernels and used OpenMP decorators instead of OpenACC ones. Better codes were generated for the other two kernels, and at least one functional code was provided. The OpenACC primitives were not used correctly in many cases, and some of the primitives used do not even exist in the OpenACC standard.
### Julia
Julia provides a dynamic, compiled front end to LLVM to target scientific computing and data science. Julia's use in HPC is still an area of active exploration [8] and community engagement. In this section, we evaluate the correctness of three different Julia packages: Base.Threads.jl, CUDA.jl, and AMDGPU.jl, which are used for parallel programming on CPUs, NVIDIA GPUs, and AMD GPUs, respectively.
#### 4.3.1 Base.Threads.jl
For the parallel CPU codes that use the Base.Threads.jl Julia package, we found that at least one code provided correct matrix-vector and matrix-matrix multiplication. Unfortunately, this is not the case for the AXPY kernel, and all the codes provided for AXPY were invalid. This could be because of Julia's novelty as a programming language in HPC. Notably, in some cases, it can be challenging to generate different codes that implement exactly the same requested operation, such as AXPY using Base.Threads.jl. Common errors found here include missing keywords (@threads) or the use of other packages (Distributed.jl).
#### 4.3.2 CUDA.jl and AMDGPU.jl
The codes generated using the CUDA package (CUDA.jl) were incorrect. Notably, the generated codes attempted to decorate the nested loops that correspond to the kernels in a way similar to how they are decorated when using the Base.Threads.jl package. However, using CUDA.jl is not much different from classic CUDA (i.e., the kernels must be implemented outside the main code and must be called/launched by using a very specific syntax [CUDA.@sync @cuda threads = threads blocks = blocks kernel(x...)]). We found exactly the same issues in the Llama-2-generated AMDGPU.jl codes.
### Python
Python is a high-level, interpreted, general-purpose programming language. The Python community is one of the biggest software communities today together with C and C++.8 In this study, we used the most popular parallel solutions in the Python ecosystem: numpy, cuPy, pyCUDA, and Numba. Like with C++, the codes generated by Llama-2 for the AXPY kernels were incorrect, and they did not compute the AXPY operation. And again, unlike the AXPY case, Llama-2 provided much better codes for matrix-vector and matrix-matrix multiplication kernels when using numpy and Numba in particular.
Footnote 8: [https://www.tiobe.com/tiobe-index/](https://www.tiobe.com/tiobe-index/)
Notably, the quality of these successful cases lies in the use of optimization techniques, such as the decomposition of the matrices into chunks or the use of strided memory accesses. However, we found an error that is common to all of the codes generated for cuPy: using the __shared__ decorator for the GPU functions instead of the __device__ decorator, which is the one that must be used. Unfortunately, although the rest of the code is correct, this relatively small error breaks the entire code.
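For illustration, a correct Numba-parallel GEMV of the kind credited here might look like the following sketch; this is not Llama-2's actual output.

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def gemv_numba(A, x, y):
    """y <- A @ x, computed row by row in parallel with Numba's prange."""
    m, n = A.shape
    for i in prange(m):
        acc = 0.0
        for j in range(n):
            acc += A[i, j] * x[j]
        y[i] = acc
    return y

A = np.random.rand(256, 256)
x = np.random.rand(256)
y = gemv_numba(A, x, np.empty(256))
assert np.allclose(y, A @ x)
```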
### Llama-2 versus Copilot
This section compares the results of the GitHub Copilot model against the results presented above. For the Copilot model, we use the results presented by W. Godoy et al. [7]. The codes generated by the Copilot model are hosted in a GitHub repository.9
Footnote 9: [https://github.com/keitaTN/Copilot-hpc-kernels](https://github.com/keitaTN/Copilot-hpc-kernels)
First, we focus on C++. Figure 2 illustrates the results (correctness) of the C++ codes generated for OpenMP, OpenMP offload, OpenACC, CUDA, and HIP. As shown, Copilot can provide at least one correct code for most of the kernels and programming models, whereas Llama-2 provided correct codes for OpenMP, OpenACC, and CUDA. Although Llama-2 was unable to provide correct codes for OpenMP offload and HIP, the codes that it did correctly generate were higher quality (i.e., optimized) than the ones generated by Copilot.
For Fortran (Figure 3), we have a similar conclusion to that of the C++ study, with the exception of the AXPY kernel. Here, we actually see that Llama-2 achieved much better performance for the AXPY kernel. Once again, however,
Llama-2 provided very poor performance for OpenMP offload. Notably, Llama-2 generated high-quality OpenMP codes for all kernels. Copilot still generated at least one correct code for all kernels and programming models and provided the same quality except for the AXPY-OpenMP test case.
For Julia, Llama-2 did not generate correct codes for any of the test cases with the exception of the matrix-vector and matrix-matrix multiplications using the Base.Threads.jl Julia package. This case contained at least one correct code (Figure 4). Unlike Llama-2, GitHub Copilot provided correct codes for all tests except for AMDGPU.jl, for which neither LLM was able to generate correct codes.
Finally, Figure 5 illustrates the results for the Python codes. Copilot was able to generate at least one correct code for most of the test cases with the exception of the level-2 and level-3 BLAS kernels using Numba. Llama-2 provided the best results for these cases. Llama-2 also generated correct results for some numpy codes.
Overall, the main difference between the Copilot and Llama-2 LLMs is that, although Copilot can provide at least one correct code for most of the programming languages and models (albeit the generated codes are not optimized), Llama-2 is more aggressive in terms of optimizations, thereby providing well-optimized codes at the cost of generating incorrect codes in multiple cases. So, in general, Copilot generates codes that are more reliable but less optimized, and codes generated by Llama-2 are less reliable but more optimized.

Figure 2: Results for C++ kernels (left) and programming models (right).

Figure 3: Results for Fortran kernels (left) and programming models (right).
## 5 Conclusions
We evaluated the Llama-2 model as an HPC code generator for different programming languages (e.g., C++, Fortran, Julia, and Python) and models used for multicore CPUs (e.g., OpenMP, Base.Threads.jl), NVIDIA GPUs (e.g., CUDA, CUDA.jl, OpenACC, numpy, cuPy, pyCUDA, and Numba), and AMD GPUs (e.g., HIP and AMDGPU.jl).
Llama-2 can provide good-quality HPC codes for some of the previously mentioned solutions. When compared with GitHub Copilot, we realized that the Llama-2 model attempts to provide more optimized codes at the cost of not being as reliable as Copilot. In this study, Llama-2 was able to generate at least one correct code for 40% (C++), 66% (Fortran), 22% (Julia), and 33% (Python) of the test cases. GitHub Copilot provided at least one correct code in 80% (C++), 100% (Fortran), 66% (Julia), and 83% (Python) of the same test cases.
Figure 4: Results for Julia kernels (left) and programming models (right).
Figure 5: Results for Python kernels (left) and programming models (right).
## Acknowledgment
This work is funded by Bluestone, an X-Stack project in the DOE Advanced Scientific Computing Office with program manager Hal Finkel.
|
2309.11761 | CASM Monte Carlo: Calculations of the thermodynamic and kinetic
properties of complex multicomponent crystals | Monte Carlo techniques play a central role in statistical mechanics
approaches for connecting macroscopic thermodynamic and kinetic properties to
the electronic structure of a material. This paper describes the implementation
of Monte Carlo techniques for the study multicomponent crystalline materials
within the Clusters Approach to Statistical Mechanics (CASM) software suite,
and demonstrates their use in model systems to calculate free energies and
kinetic coefficients, study phase transitions, and construct first-principles
based phase diagrams. Many crystal structures are complex, with multiple
sublattices occupied by differing sets of chemical species, along with the
presence of vacancies or interstitial species. This imposes constraints on
concentration variables, the form of thermodynamic potentials, and the values
of kinetic transport coefficients. The framework used by CASM to formulate
thermodynamic potentials and kinetic transport coefficients accounting for
arbitrarily complex crystal structures is presented and demonstrated with
examples applying it to crystal systems of increasing complexity. Additionally,
a new software package is introduced, casm-flow, which helps automate the
setup, submission, management, and analysis of Monte Carlo simulations
performed using CASM. | Brian Puchala, John C. Thomas, Anton Van der Ven | 2023-09-21T03:48:15Z | http://arxiv.org/abs/2309.11761v1 | CASM Monte Carlo: Calculations of the thermodynamic and kinetic properties of complex multicomponent crystals
###### Abstract
Monte Carlo techniques play a central role in statistical mechanics approaches for connecting macroscopic thermodynamic and kinetic properties to the electronic structure of a material. This paper describes the implementation of Monte Carlo techniques for the study of multicomponent crystalline materials within the Clusters Approach to Statistical Mechanics (CASM) software suite, and demonstrates their use in model systems to calculate free energies and kinetic coefficients, study phase transitions, and construct first-principles based phase diagrams. Many crystal structures are complex, with multiple sublattices occupied by differing sets of chemical species, along with the presence of vacancies or interstitial species. This imposes constraints on concentration variables, the form of thermodynamic potentials, and the values of kinetic transport coefficients. The framework used by CASM to formulate thermodynamic potentials and kinetic transport coefficients accounting for arbitrarily complex crystal structures is presented and demonstrated with examples applying it to crystal systems of increasing complexity. Additionally, a new software package is introduced, casm-flow, which helps automate the setup, submission, management, and analysis of Monte Carlo simulations performed using CASM.
## 1 Introduction
The thermodynamic and kinetic properties of a material determine how it behaves during heat treatments and when used as part of a device or a load-bearing structure. Thermodynamic and kinetic properties are also essential ingredients to meso-scale and continuum models that describe the temporal evolution of a material when taken out of thermal, chemical and/or mechanical equilibrium.[1; 2; 3; 4; 5; 6; 7; 8] While intrinsic thermodynamic and kinetic properties are generally measured experimentally, they can also be calculated from first principles. A reliance on statistical mechanics is then essential due to the importance of entropy for most environmental conditions.[9; 10; 11; 7; 12]
Monte Carlo techniques [13; 14] play a central role in statistical mechanics schemes that seek to connect macroscopic thermodynamic and kinetic properties to the electronic structure of a material.[15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 21; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 43; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80; 81; 82; 83; 84; 85] They enable the numerical calculation of thermodynamic properties through the sampling of equilibrium atomic and/or electronic excitations.[63; 28; 44; 86] Monte Carlo techniques are also invaluable in the calculation of kinetic properties. In crystalline materials where atomic hops are rare events, kinetic Monte Carlo simulations [87] can be used to calculate atomic transport coefficients in systems where the complexities due to correlated diffusion require simulated times that far exceed those typically possible using molecular dynamics.[88; 89; 90; 91; 92; 93; 94; 95; 96; 97; 98; 99; 100; 101; 102; 43; 103; 104; 105; 60; 106; 107; 61; 62; 108; 109]
This paper describes implementations of Monte Carlo techniques within the Clusters Approach to
Statistical Mechanics (CASM) software suite.[110] The focus is on statistical mechanics schemes to calculate the thermodynamic and kinetic properties of multi-component crystals. The crystal structures of most materials are complex, often hosting multiple sublattices that each only accommodate a subset of chemical species. These crystallographic complexities impose constraints on concentration variables and the form of thermodynamic potentials.[111] They also impose constraints on diffusion mechanisms and the mathematical form of diffusional flux expressions.[112; 104] A generalized framework with which to formulate thermodynamic potentials and kinetic transport coefficients for arbitrarily complex crystal structures is developed. These quantities can be calculated from first principles using Monte Carlo sampling techniques. The approach relies on generalized cluster expansion Hamiltonians [113; 9; 114; 7; 110] to interpolate expensive first-principles electronic structure calculations within Monte Carlo simulations of crystalline materials.
The paper is structured as follows. First, a general approach of tracking concentration within crystal structures that can host different sets of species on different sublattices is introduced. This is necessary to formulate semi-grand canonical free energies for arbitrarily complex crystal structures. The semi-grand canonical ensemble is especially convenient for Monte Carlo methods that calculate the thermodynamic properties of multi-component solids.[63; 47; 61] General statistical mechanics expressions are derived within the semi-grand canonical ensemble. The effect of crystallographic constraints on diffusional flux expressions within an arbitrarily complex crystal structure is derived next. This is followed by a brief overview of statistical mechanics principles that describe stochastic atomic hop events that are responsible for long-range diffusion in the crystalline state. A brief overview of cluster expansion techniques is provided to set the stage for illustrations of the types of results that can be calculated with Monte Carlo methods for a model system and a description of free energy integration techniques. The paper ends with a description of different Monte Carlo algorithms implemented within CASM and a summary of utilities to automate high throughput Monte Carlo simulations.
## 2 Thermodynamics of alloyed crystals
Thermodynamic descriptions of crystals require a careful consideration of the crystallographic constraints that limit the allowed variations in the concentration of the different chemical constituents of the crystal.[111; 112] This section introduces concentration variables that account for the crystallographic constraints that emerge when holding the number of crystal sites constant. Characteristic thermodynamic free energies are introduced next and a general definition of the semi-grand canonical free energy for an arbitrarily complex crystal is then formulated. Equations of state and response functions are derived from common characteristic potentials and the connection to the atomic and electronic scale is made using statistical mechanics.
### Concentration variables of crystals
The structure of a crystal can be generated by periodically repeating a unit cell and a basis of atoms. In three dimensions, the unit cell is defined by three vectors \(\vec{l}_{1}\), \(\vec{l}_{2}\) and \(\vec{l}_{3}\). Integer linear combinations of the unit cell vectors generate the lattice of the crystal. The coordinates of the \(n\) basis sites of the unit cell, denoted \(\vec{r}_{b}\), \(b=1,\ldots,n\), can then be translated to each lattice site to form the full crystal. Each basis site within the unit cell defines a sublattice. Figure 1 illustrates the unit cell of a 2-dimensional crystal, spanned by \(\vec{l}_{1}\) and \(\vec{l}_{2}\), and possessing basis atoms at positions \(\vec{r}_{1}\) and \(\vec{r}_{2}\). The resulting crystal is a two-dimensional honeycomb network.
A crystal may host \(s\) chemical species. The number of each chemical species, \(i\), is tracked with a variable \(N_{i}\). This quantity can be normalized by the number of unit cells in the crystal, \(N_{u}\), yielding
Figure 1: The unit cell of a 2-dimensional crystal, spanned by \(\vec{l}_{1}\) and \(\vec{l}_{2}\), and possessing basis atoms at positions \(\vec{r}_{1}\) and \(\vec{r}_{2}\).
concentration variables \(n_{i}=N_{i}/N_{u}\). The concentrations of all chemical species in the crystal can be collected in the vector \(\vec{n}^{\sf T}=[n_{A},n_{B},\dots]\). In many crystals, different chemical species may segregate to only a subset of sublattices. For example, an oxynitride may consist of sublattices that host different transition metal cations and a separate set of sublattices that host oxygen and nitrogen anions. It is also common that a subset of sublattices host vacancies in appreciable numbers.
In many applications, it is convenient to define thermodynamic and kinetic quantities for a crystal with a fixed number of unit cells, \(N_{u}\). This is a relevant thermodynamic boundary condition for many experimental situations where a solid maintains its crystal structure while undergoing an internal redistribution of chemical species through diffusional processes.[104] Often thermodynamic quantities are normalized by the number of unit cells (or equivalently the number of sites) in a crystal. A constant number of unit cells is also a common boundary condition in Monte Carlo simulations. The constraints that emerge when holding \(N_{u}\) constant play an important role in determining the form of characteristic thermodynamic potentials and of diffusional flux expressions, as will be described in subsequent sections.
The concentration variables \(n_{i}\), \(i=1,\dots,s\) cannot be varied independently of each other in a crystal with a fixed number of unit cells, \(N_{u}\). This is illustrated for a ternary A-B-C alloy having a crystal with a one-atom basis (e.g., fcc or bcc). While the concentration variables \(\vec{n}^{\sf T}=[n_{A},n_{B},n_{C}]\) reside in an \(s=3\) dimensional space, their allowed variations in a crystal with a fixed number of unit cells are restricted to a two-dimensional space spanned by the vectors \(\vec{q}_{1}\) and \(\vec{q}_{2}\) that have their origin at \(\vec{n}_{0}\). This is illustrated in Figure 2 for a ternary alloy having a one atom unit cell. In Figure 2, the chosen origin is the crystal in which all sites are occupied by A atoms. The spanning vectors \(\vec{q}_{1}\) and \(\vec{q}_{2}\) represent crystal preserving exchanges of A atoms with B atoms and A atoms with C atoms, respectively
In general, the subspace of allowed compositions for a crystal with a constant number of unit cells can be described mathematically as
\[\vec{n}=\vec{n}_{0}+\sum_{i=1}^{k}x_{i}\vec{q}_{i} \tag{1}\]
where \(\vec{n}_{0}\) points to a chosen origin in the subspace. The \(k\) variables \(x_{i}\) are referred to as parametric concentrations and can be varied independently of each other. By collecting the spanning vectors \(\vec{q}_{i}\), \(i=1,\dots,k\) as a \(s\times k\) matrix \(\mathbf{Q}=[\vec{q}_{1},\dots,\vec{q}_{k}]\), Eq. 1 can be expressed more compactly as
\[\vec{n}=\vec{n}_{0}+\mathbf{Q}\vec{x}, \tag{2}\]
where the parametric concentrations are collected as a \(k\)-dimensional vector \(\vec{x}^{\sf T}=[x_{1},\dots,x_{k}]\), which can be called the parametric composition.
In general, vectors that span the subspace of allowed compositions when holding \(N_{u}\) constant do not form an orthonormal set. It will be useful to introduce the dual spanning basis, \(\vec{r}_{i}\), which satisfy \(\vec{r}_{i}^{\sf T}\vec{q}_{j}=\delta_{i,j}\), where \(\delta_{i,j}\) is the Kronecker delta. By collecting these vectors in the \(s\times k\) matrix \(\mathbf{R}=[\vec{r}_{1},\dots,\vec{r}_{k}]\), the following identities hold \(\mathbf{R}^{\sf T}\mathbf{Q}\mathbf{=}\mathbf{Q}^{\sf T}\mathbf{R}=\mathbf{I}\), where \(\mathbf{I}\) is a \(k\times k\) identity matrix. These vectors relate the parametric composition \(\vec{x}\) to the concentration variables per unit cell according to
\[\vec{x}=\mathbf{R}^{\sf T}(\vec{n}-\vec{n}_{0}) \tag{3}\]
The composition vector \(\vec{n}\) in a crystal with fixed \(N_{u}\) is restricted to a \(k\)-dimensional subspace of the full \(s\) dimensional space of concentration variables \([n_{1},\dots,n_{s}]\). The subspace that is orthogonal to the \(k\) dimensional subspace of allowed compositions also plays an important role in the thermodynamic and kinetic formalism to be developed in the following sections. This subspace can be spanned with a
Figure 2: The space of allowed compositions in a ternary A-B-C alloy having a one-atom basis, spanned by vectors \(\vec{q}_{1}\) and \(\vec{q}_{2}\) that have their origin at \(\vec{n}_{0}\). The vector \(\vec{t}_{3}\) spans the space orthogonal to the allowed composition space.
set of \(s-k\) orthonormal vectors \(\vec{t}_{k+1},\ldots,\vec{t}_{s}\). For the ternary A-B-C alloy of Figure 2, this is a one dimensional space spanned by \(\vec{t}_{3}^{\sf T}=(1/\sqrt{3})[1,1,1]\). To ensure that \(\vec{n}\) resides within the fixed crystal subspace, it is necessary for \(\vec{t}_{j}^{\sf T}(\vec{n}-\vec{n}_{0})=0\) for \(j=k+1,\ldots,s\). By collecting the \(\vec{t}_{j}\) in the \(s\times(s-k)\) dimensional matrix \(\mathbf{T}=[\vec{t}_{k+1},\ldots,\vec{t}_{s}]\), these orthogonality criteria can be expressed compactly as
\[\mathbf{T}^{\sf T}(\vec{n}-\vec{n}_{0})=\vec{0} \tag{4}\]
where \(\vec{0}\) is an \(s-k\) vector of zeros. The appendix illustrates these concepts for a variety of multi-component crystals that are more complicated than the ternary alloy having a crystal with one basis site.
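As a small numerical illustration of Eqs. 1-4 for the ternary alloy of Figure 2 (a sketch in numpy; the dual basis built below from the pseudoinverse is one convenient choice among the many that satisfy \(\vec{r}_{i}^{\sf T}\vec{q}_{j}=\delta_{i,j}\)):

```python
import numpy as np

# Ternary A-B-C alloy with one site per unit cell (Figure 2): the origin n0 is
# the pure-A crystal; q1 exchanges A for B, q2 exchanges A for C.
n0 = np.array([1.0, 0.0, 0.0])
Q = np.array([[-1.0, -1.0],
              [ 1.0,  0.0],
              [ 0.0,  1.0]])

# One convenient dual basis satisfying R.T @ Q = I (an assumed choice).
R = Q @ np.linalg.inv(Q.T @ Q)

# Orthonormal spanning vector of the orthogonal subspace (Eq. 4).
T = np.array([[1.0], [1.0], [1.0]]) / np.sqrt(3.0)

x = np.array([0.2, 0.3])     # parametric composition
n = n0 + Q @ x               # Eq. 2: n = [0.5, 0.2, 0.3]

assert np.allclose(R.T @ (n - n0), x)       # Eq. 3 recovers the parametric composition
assert np.allclose(T.T @ (n - n0), 0.0)     # Eq. 4: the number of unit cells is preserved
```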
### Characteristic thermodynamic potentials
The characteristic thermodynamic potential, often referred to as the free energy, embeds all the thermodynamic information about a system in the form of first and second derivatives. Furthermore, its minimum with respect to internal degrees of freedom determines the equilibrium state of the system, thereby providing information about the direction of spontaneous processes of unstable and metastable states.
The specific form of the free energy is determined by the imposed thermodynamic boundary conditions. Thermodynamic boundary conditions specify, for each pair of conjugate state variables, whether the extensive variable or the conjugate intensive variable is held constant. For example, if the volume of a system is held constant, then the system in equilibrium will adopt a particular pressure that can be measured. The volume is then the control, or _natural variable_. If however, the pressure is controlled and the volume is measured, then the pressure is the natural variable.
The characteristic free energy can be obtained by applying a Legendre transform to the internal energy, \(U\), of the system for each intensive natural variable that is controlled as a thermodynamic boundary condition.[115] When the only intensive natural variable is the temperature, \(T\), and consequently all extensive variables except entropy, \(S\), are also controlled, the characteristic potential is the Helmholtz free energy
\[F=U-TS. \tag{5}\]
When the pressure, \(P\), is a natural variable in addition to temperature, the characteristic potential is the Gibbs free energy
\[G=U-TS+PV. \tag{6}\]
Another common thermodynamic boundary condition occurs when the chemical potential of one of the \(s\) chemical species is held constant, in addition to temperature and pressure. For example, the chemical potential of Li within an intercalation compound such as Li\({}_{x}\)CoO\({}_{2}\) can be controlled in an electrochemical cell by fixing the open cell voltage.[12] The characteristic potential for this boundary condition is a grand canonical free energy
\[\Lambda=U-TS+PV-\mu_{Li}N_{Li} \tag{7}\]
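For completeness, the link between the imposed open-cell voltage and \(\mu_{Li}\) in this example is the standard electrochemical relation, written here assuming a Li-metal counter electrode; it is quoted only for context and is not part of the derivation in this section:

\[V_{oc}=-\frac{\mu_{Li}^{cathode}-\mu_{Li}^{Li\,metal}}{e}\]

where \(e\) is the elementary charge.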
The general rule to identify the characteristic free energy for a given set of boundary conditions is that a Legendre transform is applied to the internal energy for each intensive variable that is controlled.
An important free energy for a crystal with a fixed number of unit cells is referred to as the semi-grand canonical potential [63; 47]. For arbitrarily complex, multi-component crystals, we generalize the definition of the semi-grand canonical potential as the characteristic potential whose intensive natural variables are conjugate to the parametric concentrations, \(\vec{x}^{\sf T}=[x_{1},\ldots,x_{k}]\). The parametric compositions are the independent composition variables that satisfy the constraints imposed upon fixing the number of unit cells of the crystal. The mathematical form of the semi-grand canonical potential can be identified by starting with Euler's theorem, which upon rearranging terms relates the Gibbs free energy, under conditions of constant temperature and pressure, to the sum of chemical potentials times the number of atoms of each species \(i\) according to
\[G=\sum_{i=1}^{s}\mu_{i}N_{i}=N_{u}\vec{\mu}^{T}\vec{n} \tag{8}\]
where \(N_{i}=N_{u}n_{i}\) was used to obtain the second equality and where \(\vec{\mu}^{\sf T}=[\mu_{1},\ldots,\mu_{s}]\). Inserting Eq. 2 into Eq. 8, yields
\[G=N_{u}\vec{\mu}^{T}(\vec{n}_{0}+\mathbf{Q}\vec{x})=N_{u}\left(\vec{\mu}^{T}\vec{n}_{0}+\vec{\tilde{\mu}}^{T}\vec{x}\right) \tag{9}\]
where
\[\vec{\tilde{\mu}}=\mathbf{Q}^{T}\vec{\mu} \tag{10}\]
are referred to as exchange chemical potentials. Equation 9 shows that the intensive variables that
are conjugate to \(N_{u}\vec{x}\) are the elements of the exchange chemical potential vector \(\vec{\tilde{\mu}}\). The semi-grand canonical free energy is therefore defined as
\[\Phi=G-N_{u}\vec{\tilde{\mu}}^{T}\vec{x}=N_{u}\vec{\mu}^{T}\vec{n}_{0} \tag{11}\]
where the second equality is derived using Eq. 9. The semi-grand canonical potential can be normalized by the number of primitive unit cells in the crystal to yield
\[\phi=g-\vec{\tilde{\mu}}^{T}\vec{x}=\vec{\mu}^{T}\vec{n}_{0} \tag{12}\]
where \(\phi=\Phi/N_{u}\) and \(g=G/N_{u}\). The Appendix derives semi-grand canonical potentials for a variety of multi-component crystals having different crystal structures and sublattice constraints.
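Continuing the ternary example from above, a short numerical check of Eqs. 8-12 (a sketch; the chemical potential values are arbitrary numbers chosen only to verify the identities):

```python
import numpy as np

n0 = np.array([1.0, 0.0, 0.0])        # pure-A origin
Q = np.array([[-1.0, -1.0],
              [ 1.0,  0.0],
              [ 0.0,  1.0]])
mu = np.array([-1.3, -0.7, -0.2])     # arbitrary species chemical potentials
x = np.array([0.2, 0.3])              # parametric composition

n = n0 + Q @ x
g = mu @ n                            # Eq. 8 per unit cell
mu_tilde = Q.T @ mu                   # Eq. 10: exchange chemical potentials
phi = g - mu_tilde @ x                # Eq. 12, first equality

assert np.isclose(phi, mu @ n0)       # Eq. 12, second equality: phi = mu . n0
```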
### Equations of state
Equations of state relate thermodynamic variables to characteristic potentials in the form of derivatives of the characteristic potential with respect to its natural variables.[115] The equations of state can be extracted from the differential of the characteristic potential. For example, the differential of the Gibbs free energy takes the form
\[dG=-SdT+VdP+\sum_{i=1}^{s}\mu_{i}dN_{i} \tag{13}\]
where the natural variables, \(T\), \(P\) and the \(N_{i}\) (\(i=1,\ldots,s\)), which are the thermodynamic boundary conditions, appear as differentials. The equations of state then become
\[S=-\left(\frac{\partial G}{\partial T}\right)_{P,N_{i}} \tag{14}\]
\[V=\left(\frac{\partial G}{\partial P}\right)_{T,N_{i}} \tag{15}\]
\[\mu_{i}=\left(\frac{\partial G}{\partial N_{i}}\right)_{T,P,N_{j\neq i}} \tag{16}\]
The equations of state relate each thermodynamic variable that is not controlled to the slope of the characteristic free energy with respect to its conjugate thermodynamic variable that is controlled as a boundary condition.
When considering crystals, it is also useful to formulate equations of state in terms of the parametric composition \(\vec{x}\) and the number of unit cells. By relating \(N_{i}=N_{u}n_{i}\) and using Eq. 2, the differential of the Gibbs free energy can be expressed as
\[dG=-SdT+VdP+N_{u}\vec{\tilde{\mu}}^{\sf T}d\vec{x}+\vec{\mu}^{\sf T}\vec{n}\,dN_{u} \tag{17}\]
In this form, variations in the number of atoms can be performed either by holding the number of unit cells fixed while varying the relative amounts of the constituents, or by holding the composition fixed while varying the number of unit cells. The equations of state at constant \(T\) and \(P\) then become
\[\tilde{\mu}_{i} = \left(\frac{\partial g}{\partial x_{i}}\right)_{T,P,x_{j\neq i}, N_{u}} \tag{18}\] \[g = \left(\frac{\partial G}{\partial N_{u}}\right)_{T,P,x_{i}} \tag{19}\]
where \(g=G/N_{u}=\vec{\mu}^{\sf T}\vec{n}\) is the Gibbs free energy per number of unit cells (Eq. 8).
The differential form of the semi-grand canonical potential takes the form
\[d\Phi=-SdT+VdP-N_{u}\vec{x}^{\sf T}d\vec{\tilde{\mu}}+\vec{\mu}^{\sf T}\vec{n}_{0}\,dN_{u} \tag{20}\]
where the only extensive natural variable is the number of unit cells \(N_{u}\), which is conjugate to \(\phi=\vec{\mu}^{\sf T}\vec{n}_{0}\), the semi-grand canonical potential normalized by the number of unit cells (Eq. 12). As can be inferred from the differential form Eq. 20, the equations of state derived from the semi-grand canonical potential are
\[S=-\left(\frac{\partial\Phi}{\partial T}\right)_{P,\tilde{\mu}_{i},N_{u}} \tag{21}\]
\[V=\left(\frac{\partial\Phi}{\partial P}\right)_{T,\tilde{\mu}_{i},N_{u}} \tag{22}\]
\[x_{i}=-\frac{1}{N_{u}}\left(\frac{\partial\Phi}{\partial\tilde{\mu}_{i}}\right)_{T,P,\tilde{\mu}_{j\neq i},N_{u}} \tag{23}\]
\[\phi=\left(\frac{\partial\Phi}{\partial N_{u}}\right)_{T,P,\tilde{\mu}_{i}}. \tag{24}\]
These equations of state are especially useful to obtain free energies from semi-grand canonical Monte Carlo simulations.
### Response functions
The second derivatives of a characteristic potential with respect to its natural variables are related to response functions. A response function measures how a state variable that is not controlled as a boundary condition is affected by a variation of a natural variable. For \(\Phi\), the following relations can be derived for the normalized heat capacity, compressibility, and chemical susceptibilities
\[c_{P,\tilde{\mu}_{i}}=-\frac{T}{N_{u}}\bigg{(}\frac{\partial^{2}\Phi}{ \partial T^{2}}\bigg{)}_{P,\tilde{\mu}_{i},N_{u}} \tag{25}\]
\[\kappa_{T,\tilde{\mu}_{i}}=-\frac{1}{V}\bigg{(}\frac{\partial^{2}\Phi}{\partial P^ {2}}\bigg{)}_{T,\tilde{\mu}_{i},N_{u}} \tag{26}\]
\[\chi_{ij}=-\frac{1}{N_{u}}\bigg{(}\frac{\partial^{2}\Phi}{\partial\tilde{\mu}_{ i}\partial\tilde{\mu}_{j}}\bigg{)}_{T,\tilde{\mu}_{k\neq i,j},N_{u}} \tag{27}\]
Similar to the heat capacity and the compressibility, the chemical susceptibilities, \(\chi_{ij}\), are response functions in that they measure the change in the \(i\)-th parametric composition component upon an incremental change of the \(j\)-th exchange chemical potential component. The normalization by \(N_{u}\) or \(V\) ensures that the response functions are independent of the size of the crystal. The response functions listed above are a subset of the \(m(m+1)/2\) possible response functions, one for each unique Hessian element of the characteristic potential with respect to its natural variables, where \(m\) is the number of natural variables.
### Elements of statistical mechanics
The atoms and electrons of a crystal possess a variety of degrees of freedom that can be excited at finite temperature.[9; 10; 7] Each collective excitation of the solid corresponds to a microstate. For example, a binary alloy consisting of A and B atoms can adopt one of many possible arrangements over the sites of a parent crystal structure. The arrangement of A and B atoms varies spatially and evolves over time due to thermally activated atomic hops. Each arrangement of the A and B atoms is referred to as a configurational microstate. The atoms of a crystal also vibrate around their equilibrium sites and thereby produce vibrational microstates.[11; 116] Other common degrees of freedom are electronic in nature and include the order/disorder phenomena involving local magnetic moments of magnetic atoms or different oxidation states of redox active atoms.
The microstate of a solid can be tracked with a collection of variables that specify the state of each atom and localized electronic state within the crystal. The occupant at site \(i\) of a binary A-B alloy, for example, can be specified by an occupation variable \(\sigma_{i}\), which is +1 (-1) when occupied by B (A). The vibrational state of an atom at site \(i\) can be tracked with a displacement vector \(\vec{u}_{i}\), while the orientation of a magnetic moment at a site \(i\) containing a magnetic atom is specified by a unit vector \(\vec{m}_{i}\). The microstate of a crystal containing N sites is then uniquely specified by the collection of all site variables \(\mathbb{C}=(\sigma_{1},...,\sigma_{i},...,\sigma_{N},\vec{u}_{1},...\vec{u}_{ N},\vec{m}_{1},...\vec{m}_{N})\) when treating the degrees of freedom classically.
Statistical mechanics serves as a link between the electronic structure and the thermodynamic properties of a solid. The partition function plays a central role.[117; 118] When treating discrete degrees of freedom (e.g. chemical configurational degrees of freedom) or when treating each microstate as a quantum mechanical eigenstate, the partition function can be expressed as,[117; 118]
\[Z=\sum_{\mathbb{C}}e^{-\beta\Omega_{\mathbb{C}}}, \tag{28}\]
with the sum in Eq. (28) extending over all microstates, \(\mathbb{C}\), consistent with the imposed boundary conditions (\(\beta=1/k_{B}T\) with \(k_{B}\) being Boltzmann's constant). The form of the generalized enthalpy \(\Omega_{\mathbb{C}}\) depends on the externally imposed boundary conditions.[117] For instance, when holding \(T\), \(V\), and the \(x_{i}\) constant, \(\Omega_{\mathbb{C}}\) is simply equal to the energy of the solid, \(E_{\mathbb{C}}\). For constant \(T\), \(P\), and \(x_{i}\), \(\Omega_{\mathbb{C}}=E_{\mathbb{C}}+PV_{\mathbb{C}}\), where \(V_{\mathbb{C}}\) is the volume of the solid in microstate \(\mathbb{C}\). When holding \(T\), \(P\) and the exchange chemical potentials, \(\tilde{\mu}_{i}\), for \(i=1,...,k\) constant, \(\Omega_{\mathbb{C}}=E_{\mathbb{C}}+PV_{\mathbb{C}}-N_{u}\sum_{i}^{k}\tilde{ \mu}_{i}x_{i,\mathbb{C}}\), where the \(x_{i,\mathbb{C}}\) are the parametric concentrations in microstate \(\mathbb{C}\).
It can be shown that the probability that a solid is in a microstate \(\mathbb{C}\) at constant temperature takes the form [117; 118]
\[P_{\mathbb{C}}=\frac{e^{-\beta\Omega_{\mathbb{C}}}}{Z} \tag{29}\]
A key postulate of statistical mechanics states that measured thermodynamic properties are averages over their corresponding microscopic counterparts.[117] The thermodynamic generalized enthalpy at constant temperature \(T\), for example, is equal to
\[\langle\Omega\rangle=\sum_{\mathbb{C}}P_{\mathbb{C}}\Omega_{\mathbb{C}} \tag{30}\]
The postulate can be expressed more generally as
\[\langle X\rangle=\frac{1}{Z}\sum_{\mathbb{C}}X_{\mathbb{C}}e^{-\beta\Omega_{ \mathbb{C}}}, \tag{31}\]
where \(X_{\mathbb{C}}\) is some property of microstate \(\mathbb{C}\). Ensemble averages can be calculated using Monte Carlo sampling techniques.
The characteristic thermodynamic potential, \(\Phi\), is related to the partition function according to,
\[\beta\Phi=-\ln Z. \tag{32}\]
Similar to the equations of state, Eq. (21), (22) and (23), the following relations can be obtained for \(\Phi\) by taking derivatives of Eq. (32) and using (28)
\[\left(\frac{\partial\beta\Phi}{\partial\beta}\right)_{P,\tilde{ \mu}_{i},N_{u}}= -\frac{1}{Z}\frac{\partial Z}{\partial\beta}=\langle\Omega\rangle \tag{33}\] \[\left(\frac{\partial\beta\Phi}{\partial P}\right)_{\beta,\tilde{ \mu}_{i},N_{u}}= -\frac{1}{Z}\frac{\partial Z}{\partial P}=\beta\langle V\rangle\] (34) \[\left(\frac{\partial\beta\Phi}{\partial\tilde{\mu}_{i}}\right)_{ \beta,P,\tilde{\mu}_{j\neq i},N_{u}}= -\frac{1}{Z}\frac{\partial Z}{\partial\tilde{\mu}_{i}}=-\beta N_{u} \langle\tilde{x}_{i}\rangle, \tag{35}\]
which relate partial derivatives of the characteristic potential with respect to its natural variables to ensemble averages of extensive properties.
A similar approach can be followed to relate the thermodynamic response functions, which are determined by second derivatives of \(\Phi\), to fluctuations in \(\Omega_{\mathbb{C}}\), \(V_{\mathbb{C}}\) and \(x_{i,\mathbb{C}}\) according to
\[c_{P,\tilde{\mu}_{i},N_{u}}=\frac{\langle\Omega^{2}\rangle-\langle\Omega \rangle^{2}}{N_{u}kT^{2}} \tag{36}\]
\[\chi_{ij}=\frac{\langle x_{i}x_{j}\rangle-\langle x_{i}\rangle\langle x_{j} \rangle}{kT}N_{u}. \tag{37}\]
\[\chi_{i,\Omega}=\frac{\langle x_{i}\Omega\rangle-\langle x_{i}\rangle\langle \Omega\rangle}{kT}. \tag{38}\]
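To illustrate how Eqs. 36 and 37 translate into practice, the following sketch estimates the normalized heat capacity and the chemical susceptibility matrix from samples of \(\Omega\) and \(\vec{x}\) collected during a semi-grand canonical Monte Carlo run. It is a minimal illustration, not CASM code; the function name, the data layout, and the eV-based value of \(k_{B}\) are assumptions.

```python
import numpy as np

def response_functions(omega_samples, x_samples, T, n_unit_cells, k_B=8.617333e-5):
    """Estimate response functions from semi-grand canonical Monte Carlo samples.

    omega_samples : (n_samples,) values of the generalized enthalpy Omega
    x_samples     : (n_samples, k) parametric compositions of the sampled microstates
    Returns the normalized heat capacity (Eq. 36) and the k x k chemical
    susceptibility matrix (Eq. 37).
    """
    omega_samples = np.asarray(omega_samples, dtype=float)
    x_samples = np.asarray(x_samples, dtype=float)
    if x_samples.ndim == 1:
        x_samples = x_samples[:, None]

    # Eq. 36: c = (<Omega^2> - <Omega>^2) / (N_u k_B T^2)
    c_mu = omega_samples.var() / (n_unit_cells * k_B * T**2)

    # Eq. 37: chi_ij = N_u (<x_i x_j> - <x_i><x_j>) / (k_B T)
    cov_x = np.atleast_2d(np.cov(x_samples, rowvar=False, bias=True))
    chi = n_unit_cells * cov_x / (k_B * T)
    return c_mu, chi
```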
## 3 Diffusion in multicomponent crystals
The conservation of the number of crystal sites imposes constraints on diffusional fluxes and affects the thermodynamic driving forces for diffusion.[112; 104] In this section, generalized diffusional flux expressions in crystals are derived that account for crystallographic constraints. This is followed by a description of atomic hops that relies on transition state theory. A connection is then made between transport coefficients and fluctuations at the atomic scale using well-established Kubo-Green expressions.[119]
### Phenomenological description of diffusion
Diffusional fluxes are driven by gradients in chemical potentials. For multicomponent, isotropic diffusion, the phenomenological equations that, to first order, relate the diffusional fluxes, \(J_{i}\), to gradients in chemical potentials, \(\nabla\mu_{i}\), can be expressed in vector form as:
\[\vec{J}=-\mathbf{L}\nabla\vec{\mu}, \tag{39}\]
where \(\mathbf{L}\) is the matrix of Onsager kinetic coefficients and \(\vec{J}^{\mathsf{T}}=[J_{1},\ldots,J_{s}]\). The element \(L_{ij}\) gives the contribution of the gradient in chemical potential of species \(j\), \(\nabla\mu_{j}\), to the flux of species \(i\), \(J_{i}\). The elements of \(\mathbf{L}\) must be symmetric (\(L_{ij}=L_{ji}\)).[120]
Crystallographic constraints emerge when diffusion occurs within a single crystal away from extended defects such as dislocations, grain boundaries and interfaces.[121; 104] A redistribution of chemical species then occurs by preserving the underlying crystal structure. The conservation of crystal sites while atoms diffuse imposes constraints on the fluxes. For example, a conservation of crystal sites in an A-B alloy in which diffusion is mediated by a dilute concentration of vacancies requires that \(J_{A}+J_{B}+J_{Va}=0\).[104; 121] More generally, as shown in B, the constraints on the fluxes take the form
\[\mathbf{T}^{\mathsf{T}}\vec{J}=\vec{0}. \tag{40}\]
where \(\mathbf{T}\), the \(s\times(s-k)\) dimensional matrix introduced in Section 2.1, collects the vectors in composition space that are orthogonal to the composition space of a crystal with a fixed number of unit cells. The vector \(\vec{0}\) is an \((s-k)\)-dimensional vector of zeros. Substituting the flux expressions, Eq. 39, into Eq. 40 and using the fact that \(\mathbf{L}\) is a symmetric matrix leads to linear relationships between different Onsager transport coefficients that can be expressed compactly as
\[\mathbf{L}\mathbf{T}=\mathbf{0} \tag{41}\]
where \(\mathbf{0}\) is a \(s\times(s-k)\) matrix of zeros.
Equations 40 and 41 enable a reduction in the number of independent flux expressions and independent thermodynamic driving forces for diffusion of the form
\[\vec{\tilde{J}}=-\tilde{\mathbf{L}}\nabla\vec{\tilde{\mu}} \tag{42}\]
where the \(k\) independent exchange fluxes are defined as
\[\vec{J}=\mathbf{R}^{\mathsf{T}}\vec{J} \tag{43}\]
and where
\[\tilde{\mathbf{L}}=\mathbf{R}^{\mathsf{T}}\mathbf{L}\mathbf{R} \tag{44}\]
is a \(k\times k\) Onsager transport coefficient matrix for diffusion in a crystal with a constant number of unit cells. The \(\vec{\tilde{\mu}}=\mathbf{Q}^{\mathsf{T}}\vec{\mu}\) appearing in Eq. 42 are the exchange chemical potentials that appear as natural variables of the semi-grand canonical potential. The matrix \(\mathbf{R}^{\mathsf{T}}\), introduced in Section 2.1, is the left pseudoinverse of \(\mathbf{Q}\). A full derivation of Equations 42, 43 and 44 is provided in B.
Chemical potentials are not as easily measured as composition gradients. It is therefore common to formulate the flux expressions in terms of gradients in concentration by applying the chain rule of differentiation to the gradients of chemical potentials in Eq. 42 to yield
\[\vec{\tilde{J}}=-\mathbf{D}\nabla\vec{c}, \tag{45}\]
where \(\vec{c}\) is a \(k\)-dimensional vector of volumetric concentrations, \(c_{i}=x_{i}/v_{u}\), with \(v_{u}\) the volume per unit cell. The chemical diffusion coefficient matrix, \(\mathbf{D}\), relates the \(k\) exchange fluxes (Eq. 43) to \(k\) independent concentration variables and describes diffusion in a crystal in which the number of unit cells, \(N_{u}\), is conserved.
The matrix of chemical diffusion coefficients, \(\mathbf{D}\), can be factored into a product of a kinetic factor and a thermodynamic factor according to
\[\mathbf{D}=\tilde{\mathbf{K}}\mathbf{\Theta} \tag{46}\]
where \(\tilde{\mathbf{K}}\) is a \(k\times k\) matrix of kinetic coefficients with
\[\tilde{\mathbf{K}}=v_{u}k_{B}T\tilde{\mathbf{L}} \tag{47}\]
and where \(\mathbf{\Theta}\) is a \(k\times k\) matrix of thermodynamic factors, with elements [104]
\[\Theta_{ij}=\frac{1}{k_{B}T}\frac{\partial\tilde{\mu}_{i}}{\partial x_{j}}= \frac{1}{k_{B}T}\frac{\partial^{2}g}{\partial x_{j}\partial x_{i}}. \tag{48}\]
While there is some arbitrariness as to how the matrix of diffusion coefficients can be factored, this particular factorization ensures that the elements of the kinetic factor matrix, \(\tilde{\mathbf{K}}\), have units of a diffusion coefficient (i.e. cm\({}^{2}\)/s), while the elements of the thermodynamic factor matrix, \(\mathbf{\Theta}\), are unitless.
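As a small illustration of how Eqs. 44, 46 and 47 fit together, the sketch below assembles the reduced Onsager matrix, the kinetic factor, and the chemical diffusion matrix from a full \(s\times s\) Onsager matrix \(\mathbf{L}\), the matrix \(\mathbf{R}\) of Section 2.1, and a thermodynamic factor matrix \(\mathbf{\Theta}\). The function is a hypothetical helper written for this discussion, not part of CASM.

```python
import numpy as np

def chemical_diffusion_matrix(L, R, Theta, v_u, T, k_B=8.617333e-5):
    """Assemble D = K_tilde * Theta (Eq. 46) for a crystal with conserved unit cells.

    L     : (s, s) Onsager transport matrix for all s species (Eq. 39)
    R     : (s, k) matrix whose columns span the k independent composition directions
    Theta : (k, k) matrix of thermodynamic factors (Eq. 48)
    v_u   : volume per unit cell
    """
    L_tilde = R.T @ L @ R              # Eq. 44: reduced (k x k) Onsager matrix
    K_tilde = v_u * k_B * T * L_tilde  # Eq. 47: kinetic factor, units of a diffusivity
    return K_tilde @ Theta             # Eq. 46: chemical diffusion matrix
```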
The extension to anisotropic diffusion is straightforward, and can be written
\[\tilde{J}_{i,\alpha}=-\sum_{j,\beta}D_{ij,\alpha\beta}\nabla_{ \beta}c_{j}, \tag{49}\] \[\tilde{J}_{i,\alpha}=-\sum_{j,\beta}\tilde{L}_{ij,\alpha,\beta} \nabla_{\beta}\tilde{\mu}_{j}, \tag{50}\]
where \(D_{ij,\alpha\beta}\) and \(\tilde{L}_{ij,\alpha\beta}\) are rank-four tensors, with \(i\) and \(j\) indicating species, \(\alpha,\beta\) indicating spatial directions, and \(\nabla_{\beta}c_{j}\) and \(\nabla_{\beta}\tilde{\mu}_{j}\) denoting the gradients of \(c_{j}\) and \(\tilde{\mu}_{j}\), respectively, along spatial direction \(\beta\).
### Atomistic description of diffusion
Diffusion in crystals can often be modeled as an infrequent event system. The crystal evolves through relatively rare discrete events in which one or more chemical occupants change crystal sites, whether substitutional or interstitial, or reorient between discrete orientations. Between events, an infrequent event system is characterized by relative inactivity, in which the assignment of chemical occupants to crystal sites remains unchanged. The time evolution of the system can then be modeled using the master equation
\[\frac{\partial P_{\mathbb{C}^{\prime}}(t)}{\partial t}=\sum_{\mathbb{C}\neq \mathbb{C}^{\prime}}\left(\Gamma_{\mathbb{C}\mathbb{C}^{\prime}}P_{\mathbb{C}} (t)-\Gamma_{\mathbb{C}^{\prime}\mathbb{C}}P_{\mathbb{C}^{\prime}}(t)\right), \tag{51}\]
where \(\Gamma_{\mathbb{C}\mathbb{C}^{\prime}}\) is the rate at which the system transitions from state \(\mathbb{C}\) to state \(\mathbb{C}^{\prime}\), and \(P_{\mathbb{C}}(t)\) is the probability of being in state \(\mathbb{C}\) at time \(t\).
According to transition state theory, the rate at which a rare event transition from state \(\mathbb{C}\) to state \(\mathbb{C}^{\prime}\) occurs is [122]
\[\Gamma_{\mathbb{C}\mathbb{C}^{\prime}}=\nu^{*}_{\mathbb{C}\mathbb{C}^{\prime} }e^{-\beta\Delta E^{m}_{\mathbb{C}\mathbb{C}^{\prime}}}, \tag{52}\]
where \(\nu^{*}_{\mathbb{C}\mathbb{C}^{\prime}}\) is the vibrational prefactor for the event. The vibrational prefactor typically has values on the order of \(10^{12}-10^{13}\) s\({}^{-1}\) in solids. The migration barrier \(\Delta E^{m}_{\mathbb{C}\mathbb{C}^{\prime}}\) is defined as the change in potential energy from the initial equilibrium state, \(\mathbb{C}\), to the transition state between \(\mathbb{C}\) and \(\mathbb{C}^{\prime}\)
\[\Delta E^{m}_{\mathbb{C}\mathbb{C}^{\prime}}=E^{a}_{\mathbb{C}\mathbb{C}^{ \prime}}-E_{\mathbb{C}} \tag{53}\]
where \(E_{\mathbb{C}}\) and \(E^{a}_{\mathbb{C}\mathbb{C}^{\prime}}\) refer to the energies in the initial state and the activated state of the hop, respectively (Figure 3).
In a multi-component crystal, the initial and final microstates of atomic hop events will be a function of the local composition and the local degree of ordering.[95; 98; 43; 60; 61; 7; 62] This often makes the values of \(\nu^{*}_{\mathbb{C}\mathbb{C}^{\prime}}\) and \(\Delta E^{m}_{\mathbb{C}\mathbb{C}^{\prime}}\) a function of the direction of the transition, such that \(\nu^{*}_{\mathbb{C}\mathbb{C}^{\prime}}\neq\nu^{*}_{\mathbb{C}^{\prime} \mathbb{C}}\) and \(\Delta E^{m}_{\mathbb{C}\mathbb{C}^{\prime}}\neq\Delta E^{m}_{\mathbb{C}^{ \prime}\mathbb{C}}\). While the forward and reverse hops usually have a different migration barrier and prefactor, they nevertheless share the same transition state and therefore have the same energy in the activated state, \(E^{a}_{\mathbb{C}\mathbb{C}^{\prime}}=E^{a}_{\mathbb{C}^{\prime}\mathbb{C}}\). This fact
can be used to calculate the migration barrier via the kinetically resolved activation (KRA) barrier, \(\Delta E^{KRA}_{\mathbb{CC}^{\prime}}\), which is defined as the average barrier of the forward and reverse hops and is therefore direction independent.[95; 7] The energy of the activated state in terms of \(\Delta E^{KRA}_{\mathbb{CC}^{\prime}}\) takes the form
\[E^{a}_{\mathbb{CC}^{\prime}}=\Delta E^{KRA}_{\mathbb{CC}^{\prime}}+\frac{E_{ \mathbb{C}^{\prime}}+E_{\mathbb{C}}}{2}, \tag{54}\]
which can then be inserted into Eq. 53 to calculate a migration barrier according to
\[\Delta E^{m}_{\mathbb{CC}^{\prime}}=\Delta E^{KRA}_{\mathbb{CC}^{\prime}}+\frac{E_{\mathbb{C}^{\prime}}-E_{\mathbb{C}}}{2}. \tag{55}\]
The relationships between the different quantities appearing in Equations 54 and 55 are shown in Figure 3.
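The following sketch shows how Eqs. 52-55 can be combined to obtain forward and reverse hop rates from the end-state energies and a kinetically resolved activation barrier. The numerical defaults for the prefactor and for \(k_{B}\) (in eV/K) are illustrative assumptions; because both directions share the same activated-state energy, the pair of rates satisfies detailed balance.

```python
import numpy as np

def hop_rates(E_initial, E_final, dE_kra, T, nu=1.0e12, k_B=8.617333e-5):
    """Forward and reverse hop rates from end-state energies and a KRA barrier."""
    dE_m_forward = dE_kra + 0.5 * (E_final - E_initial)   # Eq. 55
    dE_m_reverse = dE_kra + 0.5 * (E_initial - E_final)   # same activated state
    beta = 1.0 / (k_B * T)
    gamma_forward = nu * np.exp(-beta * dE_m_forward)      # Eq. 52
    gamma_reverse = nu * np.exp(-beta * dE_m_reverse)
    return gamma_forward, gamma_reverse
```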
A benefit of working with a kinetically resolved activation barrier, \(\Delta E^{KRA}_{\mathbb{CC}^{\prime}}\), is that its dependence on the local degree of ordering can be parameterized with a local cluster expansion [95; 7; 61]. The migration barrier for a hop in a disordered multi-component crystal can then be calculated by utilizing surrogate models such as cluster expansions that parameterize the configuration dependence of \(\Delta E^{KRA}_{\mathbb{CC}^{\prime}}\), a local property, and the end state energies \(E_{\mathbb{C}}\) and \(E_{\mathbb{C}^{\prime}}\)[95; 7; 61].
The attempt frequency, which may also have a configuration dependence, is a function of the difference between the vibrational entropy in the activated state, \(S^{a}_{\mathbb{CC}^{\prime}}\), and the vibrational entropy in the initial state, \(S_{\mathbb{C}}\), according to [123; 122]
\[\nu^{*}_{\mathbb{CC}^{\prime}}=\frac{k_{B}T}{h}e^{\Delta S^{m}_{\mathbb{CC}^{ \prime}}/k_{B}} \tag{56}\]
where \(\Delta S^{m}_{\mathbb{CC}^{\prime}}=S^{a}_{\mathbb{CC}^{\prime}}-S_{\mathbb{C}}\), and \(h\) is Planck's constant. An approach similar to that used to parameterize energy barriers can also be used to parameterize \(\Delta S^{m}_{\mathbb{CC}^{\prime}}\) in terms of the end state entropies \(S_{\mathbb{C}}\) and \(S_{\mathbb{C}^{\prime}}\) and a kinetically resolved activation entropy, \(\Delta S^{KRA}_{\mathbb{CC}^{\prime}}\)
\[\Delta S^{m}_{\mathbb{CC}^{\prime}}=\Delta S^{KRA}_{\mathbb{CC}^{\prime}}+\frac{S_{\mathbb{C}^{\prime}}-S_{\mathbb{C}}}{2}. \tag{57}\]
With surrogate models for the end state entropies and kinetically resolved activation entropy, the dependence of \(\nu^{*}_{\mathbb{CC}^{\prime}}\) on local configurational order can be included. The use of kinetically resolved activation quantities as described above ensures consistency between purely thermodynamic properties and kinetic properties by preserving detailed balance.
### Kinetic transport coefficients
The atoms of a solid constantly perform diffusive hops, even when the solid is in thermodynamic equilibrium. Atomic hops can be sampled stochastically with kinetic Monte Carlo simulations given a catalogue of possible events and efficient methods for calculating \(E_{\mathbb{C}}\), \(\Delta E^{KRA}_{\mathbb{CC}^{\prime}}\), and \(\nu^{*}_{\mathbb{CC}^{\prime}}\). Each atom \(\zeta\) of chemical type \(i\) wanders through the crystal and, after a time \(t\), is displaced by \(\Delta\vec{R}^{\zeta}_{i}\) relative to its starting point at \(t=0\).
An expression for \(\mathbf{L}\) that connects to atomic hop events can be derived using Kubo-Green linear response methods [119; 120], an approach that links fluctuations that occur at equilibrium to the macro-scale transport coefficients of linear kinetic rate equations. The calculated \(\mathbf{L}\) are therefore suited to describe the evolution of a system that is out of equilibrium, but is nevertheless everywhere in local equilibrium.[124] The Onsager transport coefficients of an isotropic solid, \(\mathbf{L}\), can be calculated as an ensemble average over kinetic Monte Carlo trajectories according to
\[L_{ij}=\frac{\left\langle\left(\sum_{\zeta}\Delta\vec{R}^{\zeta}_{i}\right) \left(\sum_{\zeta}\Delta\vec{R}^{\zeta}_{j}\right)\right\rangle}{2dtVk_{B}T}, \tag{58}\]
where \(i\) and \(j\) indicate species, \(d\) is the dimension of space in which diffusion occurs, \(t\) is the observation time, and \(V\) is the volume of the Monte Carlo supercell. The kinetic coefficients defined according to Equation 47 then take the form

\[\tilde{\mathbf{K}}=\mathbf{R}^{\sf T}\mathbf{K}\mathbf{R}, \tag{59}\]

where the matrix \(\mathbf{K}\) has elements

\[K_{ij}=\frac{\left\langle\left(\sum_{\zeta}\Delta\vec{R}_{i}^{\zeta}\right)\left(\sum_{\zeta}\Delta\vec{R}_{j}^{\zeta}\right)\right\rangle}{2dtN_{u}}, \tag{60}\]

and where, as before, \(N_{u}\) is equal to the number of unit cells in the crystal.

Figure 3: Schematic parameterization of the activation energy for diffusion \(E^{a}_{\mathbb{CC}^{\prime}}\) in terms of the end state energies \(E_{\mathbb{C}}\) and \(E_{\mathbb{C}^{\prime}}\) and a kinetically resolved activation energy \(\Delta E^{KRA}_{\mathbb{CC}^{\prime}}\), for purposes of calculating the migration barrier \(\Delta E^{m}_{\mathbb{CC}^{\prime}}\) and ensuring that detailed balance is maintained in kinetic Monte Carlo simulations.
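A sketch of how Eqs. 59 and 60 can be evaluated from displacement vectors accumulated in a kinetic Monte Carlo run is given below. The data layout (one displacement array per species) and the single-trajectory estimate of the ensemble average are simplifying assumptions; in practice the average is taken over many independent trajectories or trajectory segments.

```python
import numpy as np

def kinetic_coefficient_matrix(displacements, t_obs, n_unit_cells, dim=3):
    """Kubo-Green kinetic coefficients K_ij (Eq. 60) from one KMC trajectory.

    displacements : list of (n_atoms_i, dim) arrays; entry i holds the vectors
                    connecting start and end positions of every atom of species i
                    after an observation time t_obs.
    """
    totals = [np.asarray(dR).sum(axis=0) for dR in displacements]  # sum over atoms
    s = len(totals)
    K = np.empty((s, s))
    for i in range(s):
        for j in range(s):
            K[i, j] = np.dot(totals[i], totals[j]) / (2 * dim * t_obs * n_unit_cells)
    return K

def reduced_kinetic_coefficients(K, R):
    """Project onto the k independent composition directions (Eq. 59)."""
    return R.T @ K @ R
```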
Other quantities of interest that can be calculated using trajectories sampled with kinetic Monte Carlo simulations are collective and tracer diffusion coefficients. A _collective_ diffusion coefficient, \(D_{i}\), can be defined for each diffusing species \(i\) according to
\[D_{i}=\frac{K_{ii}}{n_{i}}=\frac{\left\langle\left(\sum_{\zeta}\Delta\vec{R}_{i}^{\zeta}\right)^{2}\right\rangle}{2dtN_{i}} \tag{61}\]
where \(n_{i}\) is the number of atoms of type \(i\) per unit cell and \(N_{i}=n_{i}N_{u}\) is the number of atoms of type \(i\) in the crystal in which the trajectories are sampled. The collective diffusion coefficient averages the square of the displacement of the geometric center of mass of all the diffusing atoms of type \(i\). It can be related to the tracer diffusion coefficient, \(D_{i}^{*}\), upon expanding the square in Eq. 61 according to
\[D_{i}=\frac{\left\langle\sum_{\zeta}\left(\Delta\vec{R}_{i}^{\zeta}\right)^{2 }\right\rangle}{2dtN_{i}}+\frac{\left\langle\sum_{\zeta}\sum_{\zeta^{\prime} \neq\zeta}\Delta\vec{R}_{i}^{\zeta}\Delta\vec{R}_{i}^{\zeta^{\prime}}\right\rangle }{2dtN_{i}} \tag{62}\]
where the first term corresponds to the standard definition of the tracer diffusion coefficient [125]
\[D_{i}^{*}=\frac{\left\langle\sum_{\zeta}\left(\Delta\vec{R}_{i}^{\zeta}\right) ^{2}\right\rangle}{2dtN_{i}}. \tag{63}\]
The tracer diffusion coefficient, \(D_{i}^{*}\), is a measure of the mobility of individual atoms. The second term in Eq. 62 captures correlations between the trajectories of different atoms of the same chemical type \(i\). The Haven ratio [126; 125], \((H_{R})_{i}=D_{i}^{*}/D_{i}\), measures the degree with which trajectories of different atoms are correlated with each other, being equal to one when there are no correlations between diffusing atoms. The correlation factor, defined as
\[f_{i}=\frac{\left\langle\left(\Delta\vec{R}_{i}^{\zeta}\right)^{2}\right\rangle }{h_{i}\left(\Delta\vec{r}\right)^{2}}, \tag{64}\]
in contrast, measures the degree with which successive hops of the same atom are correlated.[125] Here \(\Delta\vec{R}_{i}^{\zeta}\) represents a vector that connects the end points of a trajectory of an atom of species \(i\), \(h_{i}\) is the average number of hops performed by atoms of species \(i\), and \((\Delta\vec{r})^{2}\) is the square of a single hop vector.
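For a single species, the collective and tracer diffusion coefficients and the Haven ratio follow directly from the same end-to-end displacement vectors, as the following sketch illustrates (Eqs. 61-63); the function name and data layout are assumptions.

```python
import numpy as np

def diffusion_coefficients(dR, t_obs, dim=3):
    """Collective and tracer diffusion coefficients and Haven ratio for one species.

    dR : (n_atoms, dim) array of vectors connecting each atom's start and end
         positions over an observation time t_obs.
    """
    dR = np.asarray(dR, dtype=float)
    n_atoms = dR.shape[0]
    # Eq. 61: collective coefficient from the summed displacement of all atoms
    D_collective = np.sum(dR.sum(axis=0) ** 2) / (2 * dim * t_obs * n_atoms)
    # Eq. 63: tracer coefficient from the mean-squared displacement per atom
    D_tracer = np.sum(dR ** 2) / (2 * dim * t_obs * n_atoms)
    return D_collective, D_tracer, D_tracer / D_collective
```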
While treated as scalars above, the various atomic transport coefficients are in general second-rank tensors. In a crystal with cubic symmetry, the transport coefficient tensors have non-zero elements only on the diagonal, which by symmetry are all equal to each other. Anisotropic kinetic coefficients, \(L_{ij,\alpha\beta}\), can be calculated according to:
\[L_{ij,\alpha\beta}=\frac{\left\langle\left(\sum_{\zeta}\Delta\vec{R}_{i, \alpha}^{\zeta}\right)\left(\sum_{\zeta}\Delta\vec{R}_{j,\beta}^{\zeta} \right)\right\rangle}{2tVk_{B}T}, \tag{65}\]
where \(\Delta\vec{R}_{i,\alpha}^{\zeta}\) is the \(\alpha\)-component of the vector connecting the beginning and ending points of the \(\zeta\)-th atom of species \(i\). Similar expressions hold for the other transport coefficients described above.
### Calculating thermodynamic factors
Similar to the Onsager transport coefficients, the thermodynamic factor matrix, with elements \(\Theta_{i,j}\) defined according to Eq. 48, is also related to fluctuations. For the thermodynamic factor, the fluctuations are of the parametric compositions, \(\vec{x}\), within the semi-grand canonical ensemble where the exchange chemical potentials, \(\vec{\tilde{\mu}}\), are held constant. This follows from the following relationship [104]
\[\sum_{k}\left(\frac{\partial\tilde{\mu}_{i}}{\partial x_{k}}\right)_{x_{l\neq k }}\left(\frac{\partial x_{k}}{\partial\tilde{\mu}_{j}}\right)_{\tilde{\mu}_{ l\neq j}}=\delta_{i,j} \tag{66}\]
Within the semi-grand canonical ensemble, the derivatives of the parametric concentrations \(x_{i}\) with respect to the exchange chemical potentials \(\tilde{\mu}_{j}\) are equal to fluctuations in the \(x_{i}\)
\[\chi_{ij}=\left(\frac{\partial x_{i}}{\partial\tilde{\mu}_{j}}\right)_{\tilde{ \mu}_{l\neq j}} \tag{67}\]
which can be calculated using Eq. 37. Because of Eq. 66, the matrix of thermodynamic factor elements can be calculated from the inverse of the \(k\times k\) matrix of \(\chi_{ij}\) according to
\[\Theta=\frac{1}{k_{B}T}\chi^{-1}. \tag{68}\]
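In practice, Eq. 68 amounts to inverting the susceptibility matrix obtained from composition fluctuations sampled in the semi-grand canonical ensemble, as in the following minimal sketch (the function and its arguments are illustrative assumptions):

```python
import numpy as np

def thermodynamic_factor_matrix(x_samples, T, n_unit_cells, k_B=8.617333e-5):
    """Thermodynamic factors Theta (Eq. 68) from sampled parametric compositions."""
    x = np.asarray(x_samples, dtype=float)
    if x.ndim == 1:
        x = x[:, None]
    # Eq. 37: chi_ij = N_u (<x_i x_j> - <x_i><x_j>) / (k_B T)
    chi = n_unit_cells * np.atleast_2d(np.cov(x, rowvar=False, bias=True)) / (k_B * T)
    # Eq. 68: Theta = chi^{-1} / (k_B T)
    return np.linalg.inv(chi) / (k_B * T)
```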
## 4 Surrogate models: cluster expansions
An essential ingredient in the calculation of the thermodynamic properties of a solid with a statistical mechanics approach is the energy \(E(\mathbb{C})\) of microstates \(\mathbb{C}\). Formally these energies correspond to solutions to the Schrödinger equation of the solid. There are now a multitude of state-of-the-art numerical approaches that can solve for the quantum mechanical energy spectrum of a solid by relying on approximations and extensions to first-principles density functional theory (DFT).[127] The number of microstates that needs to be sampled to perform statistical mechanical averages, however, is too large for each to be calculated directly from first principles. Instead, surrogate models are required to interpolate first-principles energies of a small number of microstates to predict the energies of microstates sampled in Monte Carlo simulations.
The generalized cluster expansion approach provides guidance as to how to rigorously formulate a tunable expansion to represent properties of interacting atoms. First introduced by Sanchez et al for alloy degrees of freedom over the sites of a crystal[113; 9; 128], it has been extended to describe the energy of crystals with non-collinear magnetic degrees of freedom[114] and molecular orientational degrees of freedom [129] and was recently generalized further to formulate rigorous descriptors of local atomic environments for machine-learned interatomic potentials [130]. The CASM Monte Carlo code base is designed to work with generalized cluster expansions for local degrees of freedom assigned to sites of a crystal.
### Examples of cluster expansions
As an illustration, consider a simple binary system of atoms and vacancies that share interstitial sites of a host material. Common examples include Li-vacancy disorder over the Li sites of intercalation compounds used as electrodes in Li-ion batteries, such as Li\({}_{x}\)CoO\({}_{2}\), Li\({}_{x}\)FePO\({}_{4}\) and Li\({}_{x}\)Mn\({}_{2}\)O\({}_{4}\),[12] and oxygen-vacancy disorder over the interstitial sites of refractory metals such as Ti, Zr and Nb, which are able to dissolve unusually high concentrations of oxygen, nitrogen and carbon.[44; 51; 52] Each interstitial site \(i\) of the crystal can be assigned an occupation variable, \(\sigma_{i}\), which is +1 if the site is occupied by an interstitial species and -1 if it is vacant. The configurational state of the crystal is then completely specified by the vector of all occupation variables \(\vec{\sigma}=(\sigma_{1},...,\sigma_{i},...,\sigma_{N})\).
The energy of the crystal will depend on how the guest atoms are arranged over the interstitial sites of the crystal and is therefore an explicit function of \(\vec{\sigma}\). The dependence of the energy of the crystal on \(\vec{\sigma}\) can be parameterized with an alloy cluster expansion of the form[113; 9]
\[E(\vec{\sigma})=V_{0}+\sum_{i}V_{i}\sigma_{i}+\sum_{i,j}^{\prime}V_{i,j}\sigma _{i}\sigma_{j}+\sum_{i,j,k}^{\prime}V_{i,j,k}\sigma_{i}\sigma_{j}\sigma_{k}+... \tag{69}\]
where the \(V_{0}\), \(V_{i}\), \(V_{i,j}\), \(V_{i,j,k}\), etc. are expansion coefficients that can be trained to a data set of first-principles energies. The sums extend over interstitial sites of the crystal, with the prime on the sums indicating that only distinct pairs, triplets etc. are summed over to avoid counting the same interactions multiple times. The cluster expansion can be expressed more compactly as[113; 9]
\[E(\vec{\sigma})=V_{0}+\sum_{\alpha}V_{\alpha}\Phi_{\alpha}(\vec{\sigma}) \tag{70}\]
where
\[\Phi_{\alpha}=\prod_{i\in\alpha}\sigma_{i} \tag{71}\]
are polynomial basis functions in terms of occupation variables of sites belonging to clusters of sites labeled with the index \(\alpha\). The sum in Eq. 70 extends over all distinct clusters of sites, including point clusters, pair clusters, triplet clusters etc. The \(V_{\alpha}\) are adjustable expansion coefficients that are to be fit to a training set of first-principles data.[131; 132; 133; 134; 135; 136] The symmetry of the underlying crystal structure imposes constraints on the expansion coefficients, dramatically reducing the number of independent expansion coefficients that need to be trained. The value of a cluster expansion is that, once parameterized, it can be evaluated rapidly within Monte Carlo simulations where microstates are sampled within a large supercell of the primitive cell of the crystal. A cluster expansion can also be used to describe local properties, such as the kinetically resolved activation barrier used to represent the dependence of migration barriers for diffusion on the local degree of order or disorder.[95; 7; 61]
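To make the structure of Eqs. 70 and 71 concrete, the following sketch evaluates a cluster expansion for a given occupation vector. The data structures are illustrative assumptions and do not reflect CASM's internal representation, which evaluates basis functions on a per-unit-cell basis using precompiled code (see the clexulator discussion below).

```python
import numpy as np

def cluster_expansion_energy(sigma, orbits, coefficients, V0=0.0):
    """Evaluate E(sigma) = V0 + sum_alpha V_alpha Phi_alpha(sigma)  (Eqs. 70-71).

    sigma        : (N,) array of occupation variables (+1 / -1)
    orbits       : list where orbits[a] is a list of site clusters (tuples of site
                   indices) that are symmetrically equivalent and share coefficient a
    coefficients : expansion coefficients V_alpha, one per orbit
    """
    energy = V0
    for V_a, clusters in zip(coefficients, orbits):
        # Phi_alpha is a product of occupation variables over the sites of a cluster
        correlation = sum(np.prod(sigma[list(cluster)]) for cluster in clusters)
        energy += V_a * correlation
    return energy
```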
The cluster expansion formalism has been extended to represent the energy of a crystal as a function of continuous degrees of freedom. For example, the energy of a magnetic solid with non-collinear localized magnetic moments can be expressed according to [114; 74]
\[E(\vec{\mathbf{m}})=V_{0}+\sum_{\alpha,\vec{n}}V_{\alpha,\vec{n}}\Phi_{\alpha, \vec{n}}(\vec{\mathbf{m}}) \tag{72}\]
where \(\Phi_{\alpha,\vec{n}}(\vec{\mathbf{m}})\) are composed of linear combinations of products of spherical harmonics, each a function of a local magnetic moment unit vector attached to a site belonging to cluster \(\alpha\). Cluster expansions can be formulated that couple multiple site degrees of freedom, as in a magnetic alloy, where magnetic degrees of freedom are coupled to chemical degrees of freedom.[73] A similar cluster expansion has been formulated to describe the energy of a molecular crystal as a function of the relative orientations of the molecules of the crystal.[129] The CASM software package algorithmically constructs crystal-based cluster expansions for configurational, displacement and magnetic degrees of freedom and is able to formulate cluster expansions that couple multiple site degrees of freedom with homogeneous strain of the crystal.[110; 137]
### The CASM calculator
A large fraction of the computational expense of a Monte Carlo simulation is devoted to calculations of the energy using a cluster expansion. This requires the frequent evaluation of polynomial basis functions of the site degrees of freedom, which are then multiplied with expansion coefficients and summed. The CASM clexulator (cluster expansion calculator) is a unique feature of the CASM software package designed to optimize the speed of Monte Carlo simulations. CASM clexulators are C++ classes with member functions that have been algorithmically written by a CASM preprocessor to contain explicit expressions for evaluating the cluster expansion basis functions and changes in the basis functions due to changes in degrees of freedom. The expressions involving site degrees of freedom are written on a per unit cell basis, and CASM generates neighbor lists for each unit cell in a Monte Carlo simulation supercell to allow evaluation over an entire configuration. The functions are compiled with optimization and linked to the CASM Monte Carlo code at runtime. This approach ensures that evaluations of energies and energy differences occur very rapidly.
## 5 Monte Carlo simulations
Monte Carlo methods for the calculation of the thermodynamic and kinetic properties of solids sample microstates explicitly. In a Monte Carlo simulation, the occupation variables of sites within a large supercell of the crystal are held in computer memory. Periodic boundary conditions are usually imposed. When modeling configurational degrees of freedom for a binary alloy, for example, each site \(i\) has an occupation variable \(\sigma_{i}\) that tracks the occupant of that site in the current microstate. A chain of microstates is sampled successively by applying small perturbations to the \(n^{th}\) microstate to generate the \((n+1)^{th}\) microstate. The sampled microstates are used to collect quantities whose statistical mechanical averages yield macroscopic thermodynamic quantities.
In this section, we illustrate how Monte Carlo methods can be used to calculate thermodynamic properties and transport coefficients of multicomponent crystals. A distinction is made between the thermodynamic state variables that are imposed as boundary conditions and the conjugate thermodynamic state variables that are calculated with a Monte Carlo method. The most common boundary conditions for multicomponent crystals are those of the semi-grand canonical ensemble, where the number of unit cells \(N_{u}\), the temperature, \(T\), and the exchange chemical potentials, \(\tilde{\mu}_{i}\) (conjugate to the parametric concentrations \(x_{i}\)), are held constant. Calculated properties are then the semi-grand canonical generalized enthalpy \(\langle\Omega\rangle\) and the parametric concentrations of the different species in the solid, \(\langle x_{i}\rangle\), along with the response functions \(c_{P,\tilde{\mu}_{i}}\), \(\chi_{ij}\) and \(\chi_{i,\Omega}\). Semi-grand canonical Monte Carlo simulations are generally run over a dense grid of temperatures and chemical potentials to determine the functional dependence of \(\langle\Omega\rangle\), the various \(\langle x_{i}\rangle\), and the response functions on the natural variables of the semi-grand canonical ensemble \(T,\tilde{\mu}_{1},...\tilde{\mu}_{k}\). In some contexts, it may be more convenient to impose the boundary conditions of the canonical ensemble, which fixes the temperature and the composition of the crystal, \(\vec{x}\).
### Example 1: First-order phase transitions
As a first model system, we consider an atom-vacancy mixture on a two-dimensional triangular lattice. For the energy of the system, a cluster expansion of the form
\[E(\vec{\sigma}) =V_{0}+\sum_{(i,j)=NN}V_{NN}\sigma_{i}\sigma_{j}\] \[+\sum_{(i,j)=NNN}V_{NNN}\sigma_{i}\sigma_{j} \tag{73}\]
is used. The occupation variables, \(\sigma_{i}\), assigned to each lattice site \(i\) are equal to 1 when the site is occupied and -1 when the site is vacant. Examples of clusters on the two-dimensional triangular lattice are illustrated in Figure 4. The stable ordered phases at low temperature and the elevated temperature thermodynamic properties are sensitive to the values of the interaction coefficients. Since the model is a simple atom-vacancy system, the composition is fully specified with only one parametric concentration, \(x\), and one independent exchange chemical potential, \(\tilde{\mu}\). We choose interaction coefficients \(V_{NNN}/V_{NN}=1/10\), and the constant interaction coefficient, \(V_{0}=-3(V_{NN}+V_{NNN})\), is chosen to ensure that the \(x=0\) (all vacancies) and the \(x=1\) (all sites of the triangular lattice occupied) states have zero energy.
The energies (normalized by \(|V_{NN}|\)) of a large number of different atom-vacancy configurations as calculated with the model cluster expansion are shown as a function of composition in Figure 5(a). The orderings with formation energies on the convex hull are the stable ground state phases at zero Kelvin. Figure 5(a) shows that there are five ground state orderings with compositions \(x=1/4\), \(1/3\), \(1/2\), \(2/3\) and \(3/4\) for this model cluster expansion. The orderings and their superlattices are illustrated in Figure 5(b) along with the fully vacant (\(x=0\)) and fully occupied (\(x=1\)) configurations.
Figure 5(c) shows the calculated temperature versus composition phase diagram for the model cluster expansion. The ground state ordered phases are stable at low temperatures and within narrow composition ranges. Each ordered phase undergoes an order-disorder phase transition at elevated temperatures. In this model system, each ordered phase disorders by means of a first-order phase transition. Superposed on the temperature versus composition phase diagram are lines of constant chemical potential. These lines exhibit discontinuities upon passing through two-phase regions since the chemical potentials of coexisting phases at a constant temperature are equal to each other. At low temperatures, all the constant chemical potential lines converge to one of the stoichiometric ground state compositions.
Phase stability can also be displayed in a temperature versus chemical potential phase diagram as illustrated in Figure 5(d), where the horizontal axis corresponds to the parametric chemical potential \(\tilde{\mu}\). Two-phase regions in the temperature versus composition phase diagram become lines in a temperature versus chemical potential phase diagram. Each ground state ordering at low temperature is stable in a wide chemical potential window.
It is instructive to inspect other thermodynamic quantities at constant chemical potential as a function of temperature. Figure 6 shows the average semi-grand canonical generalized enthalpy \(\langle\Omega\rangle\) as a function of temperature along the constant chemical potential line labeled A in Figures 5(c) and (d). The semi-grand canonical generalized enthalpy \(\langle\Omega\rangle\) exhibits a discontinuous jump upon crossing the two-phase region. The discontinuity is the latent heat associated with the first-order phase transition. First-order phase transitions exhibit hysteresis. The ordered phase can remain metastable above the transition temperature upon passing through a first-order phase transition from low temperatures, while the disordered phase can be supercooled below the transition temperature upon crossing the first-order transition from high temperatures. This form of hysteresis can be replicated within Monte Carlo simulations when the last microstate sampled at \(T\) and \(\tilde{\mu}\) is used as the first microstate at the next temperature \(T+\Delta T\) and \(\tilde{\mu}+\Delta\mu\). Figure 6 shows the hysteresis in the semi-grand canonical generalized enthalpy \(\langle\Omega\rangle\) from a heating run and a cooling run.

Figure 4: The point cluster (P), nearest neighbor pair cluster (NN), second nearest neighbor pair cluster (NNN), and nearest neighbor triplet cluster (NNT) on the two-dimensional triangular lattice with unit cell shown in light blue.

Figure 5: Thermodynamic properties of a two-dimensional triangular lattice model cluster expansion with \(V_{NNN}/V_{NN}=1/10\) and \(V_{NNT}=0\). (a) The formation energies and convex hull as predicted with the model cluster expansion and (b) ground state orderings having formation energies on the convex hull. Phase diagrams as determined by minimizing free energies obtained with Monte Carlo simulations plotted versus (c) the parametric composition and (d) the exchange chemical potential. The vertical lines correspond to paths of constant chemical potential.
It is also often useful to inspect the variation of \(\tilde{\mu}\) as a function of \(x\) at constant temperature. Figure 7(a) shows the variation of the chemical potential as a function of the composition \(x\) along the constant temperature line labeled B in Figures 5(c) and (d). The chemical potential varies continuously in single phase regions, sloping gently in disordered solid solutions, but varying strongly over a small composition interval in ordered phases. The plateaus in the chemical potential versus composition plots correspond to two-phase regions. This is because the coexisting phases within the two-phase regions have the same chemical potential. Each plateau in a constant temperature plot of \(\tilde{\mu}\) versus \(x\) corresponds to a first-order phase transition due to a discontinuous variation of the composition at the chemical potential of the plateau. These first-order phase transitions are also accompanied by hysteresis in Monte Carlo simulations. Figure 7(b) shows hysteresis as it emerges in Monte Carlo simulations at constant temperature. The curve connected by the solid black line in Figure 7(b) corresponds to the true equilibrium chemical potential versus composition curve. However, Monte Carlo simulations can exhibit a path dependence when passing through a thermodynamic first-order phase transition at constant temperature, as is clear from the metastable extensions of the chemical potential versus composition curves in Figure 7(b). The true equilibrium curve can be determined by minimizing the free energy as described in Section 6.

Figure 6: A first order phase transition exhibits hysteresis in \(\langle\Omega\rangle\) between heating (green) and cooling (gray) runs. The value of \(\langle\Omega\rangle\) in the phase with minimum semi-grand canonical potential \(\phi\) is shown with a black line. Results correspond to line A in Figure 5.

Figure 7: The parametric composition \(x\) for the phase with the minimum semi-grand canonical potential \(\phi\) at each \(\tilde{\mu}\) value is shown in (a) across the entire \(\tilde{\mu}\) range. Hysteresis in the calculated parametric composition between increasing / decreasing \(\tilde{\mu}\) runs is shown in (b) for the limited \(\tilde{\mu}\) range indicated by the box in (a). Results correspond to line B in Figure 5.
The Monte Carlo simulations used to generate Figures 5, 6 and 7 were performed at varying temperatures, \(T\), and exchange chemical potential values, \(\tilde{\mu}\), along paths where one or the other boundary condition was fixed. For the calculated phase diagrams in Figure 5(c) and (d), Monte Carlo simulation cells were chosen so that they are commensurate with the ground state ordering at the value of \(\tilde{\mu}\) where the path began. For illustrating hysteresis in Figures 6 and 7, a supercell was chosen to be fully commensurate with all symmetrically equivalent variants of the ground state orderings shown in Figure 5(b). All supercells contain at least \(N=10^{5}\) sites. At least \(10^{3}N\) Monte Carlo steps were performed, and a cutoff was enforced to stop the calculations if a maximum of \(10^{7}N\) Monte Carlo steps was reached. Calculations were run with a requested precision of \(\pm 10^{-3}|V_{NN}|\) for the average semi-grand canonical generalized enthalpy \(\langle\Omega\rangle\) and \(\pm 10^{-3}\) for the average composition \(\langle x\rangle\), using the method described in Section 7.5 to estimate when a run was equilibrated and calculate the precision in the sample means.
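The calculations above were performed with CASM; as a conceptual illustration of the sampling itself, the sketch below implements a bare-bones semi-grand canonical Metropolis loop for an atom-vacancy lattice model. It recomputes the total energy after every proposed flip, which is far less efficient than the local updates used in production codes, and all names and defaults are assumptions.

```python
import numpy as np

def semi_grand_canonical_mc(sigma, energy_fn, T, mu, n_sweeps, k_B=8.617333e-5, seed=0):
    """Metropolis sampling at fixed T and exchange chemical potential mu.

    sigma     : (N,) occupation variables, +1 (occupied) / -1 (vacant); modified in place
    energy_fn : callable returning the total energy E(sigma), e.g. a cluster expansion
    Returns samples of the generalized enthalpy Omega and of the composition x.
    """
    rng = np.random.default_rng(seed)
    N = sigma.size
    E = energy_fn(sigma)
    omega_samples, x_samples = [], []
    for _ in range(n_sweeps):
        for _ in range(N):
            site = rng.integers(N)
            sigma[site] *= -1                      # propose flipping one site
            dE = energy_fn(sigma) - E
            dn = float(sigma[site])                # +1 if the site became occupied
            d_omega = dE - mu * dn                 # Omega = E - mu * (number of occupants)
            if d_omega <= 0.0 or rng.random() < np.exp(-d_omega / (k_B * T)):
                E += dE                            # accept the flip
            else:
                sigma[site] *= -1                  # reject: restore the configuration
        x = 0.5 * (1.0 + sigma.mean())             # fraction of occupied sites
        omega_samples.append(E - mu * x * N)
        x_samples.append(x)
    return np.array(omega_samples), np.array(x_samples)
```

Samples generated this way can be fed directly into the fluctuation expressions of Eqs. 36 and 37 to estimate heat capacities and susceptibilities along a scan in \(T\) or \(\tilde{\mu}\).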
Figure 8: Monte Carlo simulations of the second-order phase transition in the Ising model with \(V_{NN}>0\) on a 2-dimensional square lattice with varying number of sites, \(N\), shows (a) continuous variation in \(\langle\Omega\rangle\) with temperature, (b) a strong peak in \(c_{\tilde{\mu}}\) with increasing system size, and (c) an order parameter, \(\xi\) as defined in Eq. 76, which smoothly and continuously decreases to zero. The exact value of the critical temperature in the infinite system, \(T_{c}\), is shown with a dashed line.
### Example 2: Second-order phase transitions
A first-order phase transition is characterized by discontinuities in extensive thermodynamic variables. As illustrated in the previous section, a first-order phase transition within the semi-grand canonical ensemble exhibits a discontinuity in the generalized enthalpy, \(\langle\Omega\rangle\), and in the composition \(\langle x\rangle\). These thermodynamic quantities are related to first derivatives of the characteristic free energy of the system and change discontinuously when the system transitions between phases residing on separate free energy surfaces. Phase transitions can also be of second order. In contrast to a first-order phase transition, the extensive thermodynamic variables related to first derivatives of the characteristic free energy vary continuously upon passing through a second order transition. While this can make it difficult to detect a second order transition, a stronger signature is manifested by the response functions, some of which exhibit a divergence at a second order transition. Response functions, such as the heat capacity and the generalized susceptibilities, are related to second derivatives of the characteristic free energy.
To illustrate the characteristics of a second order phase transition, we consider a simple nearest-neighbor pair cluster expansion model for a binary system of A and B atoms on a two-dimensional square lattice
\[E(\vec{\sigma})=\sum_{(i,j)=NN}V_{NN}\sigma_{i}\sigma_{j}. \tag{74}\]
The occupation variables, \(\sigma_{i}\), assigned to each lattice site \(i\) are equal to 1 or -1 depending on whether the occupying species is A or B, respectively. This model, with a positive nearest-neighbor interaction coefficient \(V_{NN}\), favors a checkerboard ordering pattern at low temperature and undergoes a disordering reaction at elevated temperature that is of second order.
Monte Carlo simulations were performed in supercells commensurate with the checkerboard ordering with varying number of sites, \(N\), at varying temperatures, and at a fixed exchange chemical potential value, \(\tilde{\mu}=0.0\). At least \(10^{4}N\) Monte Carlo steps were performed, and a cutoff was enforced to stop the calculations if a maximum of \(10^{5}N\) Monte Carlo steps was reached. Calculations were run with a requested precision of \(\pm 10^{-3}|V_{NN}|\) for the average semi-grand canonical generalized enthalpy \(\langle\Omega\rangle\) and \(\pm 10^{-4}\) for the parametric composition \(\langle x\rangle\), using the method described in Section 7.5 to estimate when a run was equilibrated and calculate the precision in the sample means.
Figure 8(a) shows the variation of the average semi-grand canonical generalized enthalpy \(\langle\Omega\rangle\) as a function of temperature. In contrast to a first order phase transition, it does not exhibit a discontinuity at the transition temperature, but rather continuously passes through an inflection point. Similarly, other thermodynamic variables that are related to extensive quantities, such as the composition, vary continuously through a second order phase transition.
The occurrence of a second order transition is more evident upon inspection of the heat capacity. Figure 8(b) shows the calculated heat capacity as a function of temperature. There is a clear tendency of the heat capacity to diverge at the second order transition temperature. The heat capacity is proportional to the variance in the fluctuating semi-grand canonical generalized enthalpy \(\Omega\) according to Eq. 36. Since the correlation length of spatial fluctuations in energy and composition diverges at a second order phase transition, the full spectrum of fluctuations that contribute to the divergence of a response function cannot be captured in a finite sized Monte Carlo cell. Both the peak of the divergence and the temperature at which the divergence occurs will therefore vary with the size of the Monte Carlo cell. This is evident in Figure 8(b), which shows the heat capacity as calculated using different sizes of the Monte Carlo simulation cell. Finite size scaling approaches have been developed to estimate the true second order transition temperatures.[139]
Order parameters are another set of variables that aid the identification of a second-order transition. A useful order parameter to detect the checkerboard ordering on a square lattice is defined as a difference of the two sublattice concentrations, \(n_{B}^{1}\) and \(n_{B}^{2}\), according to
\[\eta=\frac{1}{\sqrt{2}}(n_{B}^{1}-n_{B}^{2}) \tag{75}\]
where \(n_{B}^{1}\) and \(n_{B}^{2}\) track the concentration of species B on the sublattices of the checkerboard supercell as illustrated in Figure 8(c). This order parameter is zero in the fully disordered state when \(n_{B}^{1}=n_{B}^{2}\) and has a non-zero value \(\pm 1/\sqrt{2}\) for the perfect checkerboard ordering. It is also able to distinguish the two translational variants of the checkerboard ordering, with positive values of \(\eta\) signifying one translational variant and negative values the other translational variant. In the thermodynamic limit, order parameters such as \(\eta\) decrease continuously to zero upon approaching a second order transition from low temperatures when maintaining one particular variant of the ordered phase. In finite sized Monte Carlo cells, however, the cell may fluctuate between different translational and/or orientational variants of the same ordered phase, resulting in a mean value of \(0\) for \(\langle\eta\rangle\) when averaging over long computational times, even below the transition temperature. To measure the degree of ordering irrespective of the translational variant, the norm

\[\xi=\sqrt{\eta^{2}} \tag{76}\]

can be used. Figure 8(d) shows the calculated order parameter \(\xi\) as a function of temperature. While not going to zero at the true transition temperature, the average value of \(\xi\) approaches values that are close to zero as long as a large enough supercell size is used.

Figure 9: Sections of the final state of the \(N=2*10^{6}\) Monte Carlo supercells (red squares in Fig. 8) and calculated order parameter, \(\xi\), as a fraction of the order parameter for the perfect checkerboard ordering, \(\xi_{0}=1/\sqrt{2}\), at varying temperatures relative to the exact critical temperature, \(T_{c}\), for the second order transition in the infinitely large system. Images were generated using OVITO [138].
Portions of the final state of Monte Carlo simulations in supercells with \(N=2\times 10^{6}\) sites are shown in Figure 9 at varying temperatures. For this model, the critical temperature, \(T_{c}\), of the second order transition in the infinite system is known exactly, \(T_{c}k_{B}/|V_{NN}|=2/\ln(1+\sqrt{2})\)[140]. As the temperature is increased to \(0.9\,T_{c}\), increasing numbers of anti-site defects in the checkerboard ordering can be observed without the formation of distinct domains, and \(\xi\) remains close to the value for perfect ordering. At \(1.0\,T_{c}\), the two translational variants of the checkerboard ordering can be observed as distinct domains and the order parameter \(\xi\) is sharply reduced. Even at \(1.5\,T_{c}\), local regions with the checkerboard ordering are still clearly noticeable, but since both variants coexist in large supercells the order parameter \(\xi\) rapidly drops to zero. As the temperature increases further, short-range order diminishes and the site occupation approaches that of a random alloy.
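A sketch of how the sublattice concentrations and the order parameters of Eqs. 75 and 76 can be evaluated for a sampled square-lattice configuration is shown below; the array layout is an assumption.

```python
import numpy as np

def checkerboard_order_parameter(occ_B):
    """Order parameters eta (Eq. 75) and xi (Eq. 76) for a square-lattice snapshot.

    occ_B : (Lx, Ly) array equal to 1 where species B occupies a site and 0 otherwise.
    """
    ix, iy = np.indices(occ_B.shape)
    on_sublattice_1 = (ix + iy) % 2 == 0
    n_B_1 = occ_B[on_sublattice_1].mean()    # B concentration on sublattice 1
    n_B_2 = occ_B[~on_sublattice_1].mean()   # B concentration on sublattice 2
    eta = (n_B_1 - n_B_2) / np.sqrt(2.0)
    return eta, abs(eta)                     # xi = sqrt(eta^2)
```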
### Example 3: Tracking long-range order
As was illustrated for the checkerboard ordering in the previous section, order parameters are invaluable in tracking the degree of long-range order within Monte Carlo simulations. The order parameter, \(\eta\), Eq. 75, used to track the presence of the checkerboard ordering is especially simple. It is equal to zero in the absence of long-range order and has either a positive or negative value when the atoms adopt one of the two translational variants of the checkerboard pattern. Long-range order parameters can be identified for any ordered phase such that they distinguish all the symmetrically equivalent variants of the ordering and become equal to zero in the disordered state. Algorithmic approaches to formulate order parameters for any ordered phase are described in Natarajan et al [86] and Walsh et al [141] and have been implemented in CASM. In general, order parameters that track an ordered phase on a particular parent crystal structure are not as simple as the checkerboard order parameter, Eq. 75. To enable a distinction between the disordered phase and all the symmetrically equivalent variants of a particular ordering, long-range order parameters are defined as symmetry-adapted linear combinations of sublattice concentrations within the supercell that defines the periodicity of the ordered phase.[86] The ability to distinguish multiple symmetrically equivalent variants of a particular ordered phase often requires two or more order parameter variables. Furthermore, a parent crystal may host multiple distinct ordered phases, which each require their own set of order parameters.
To illustrate how long-range ordering can be tracked in a system that hosts several ordered phases, we revisit the triangular lattice cluster expansion of Section 5.1. This model system favors five ordered phases at low temperatures (Figure 5(b)) that exhibit three distinct superlattice periodicities. The orderings at \(x=1/4\) and \(x=3/4\) form in a \(2a\times 2a\) supercell of the underlying parent triangular lattice unit cell and each have four symmetrically equivalent translational variants. The orderings at \(x=1/3\) and \(x=2/3\) have a \(\sqrt{3}a\times\sqrt{3}a\) supercell and have three translational variants each. The ordering at \(x=1/2\), consisting of rows of atoms alternating with rows of vacancies, has an \(a\times\sqrt{3}a\) supercell, and due to its lower symmetry has both symmetrically equivalent translational variants as well as orientational variants. The row ordering can adopt three orientations on the parent triangular lattice and each orientation has two symmetrically equivalent translational variants.

Figure 10: Sublattices for order parameter basis definition in the \(2\sqrt{3}a\times 2\sqrt{3}a\) supercell.
A suitable set of order parameters should be able to distinguish the disordered state, the different orderings and the different translational and orientational variants of each particular ordered phase. The approach to identifying order parameters starts with a determination of the smallest supercell on the parent crystal that is commensurate with all orderings and all their translational and orientational variants.[86] For the orderings of Figure 5(b), this is the 12 sublattice \(2\sqrt{3}a\times 2\sqrt{3}a\) supercell shown in Figure 10. Similar to the formulation of the checkerboard order parameter, sublattice concentrations are introduced for each of the 12 sublattices of the mutually commensurate supercell of Figure 10. Group theoretical techniques as described by Natarajan et al [86] and Thomas et al [142] then enable the identification of symmetry adapted linear combinations of the sublattice concentrations. For these particular orderings, 6 useful order parameters emerge that can be divided into three subspaces. The first, defined as the average of all sublattice concentrations of species B, \(n_{B}^{i}\), \(i\) being the sublattice index, within the \(2\sqrt{3}a\times 2\sqrt{3}a\) supercell of Figure 10 according to
\[\eta_{1}=q_{1}\sum_{i=1}^{12}n_{B}^{i}, \tag{77}\]
with \(q_{1}=\frac{1}{\sqrt{12}}\), is invariant to the symmetry of the parent lattice and simply tracks the overall concentration of the solid. The next two order parameters together are able to distinguish the orderings at \(x=1/3\) and \(x=2/3\) and are defined in terms of the sublattice concentrations according to
\[\begin{bmatrix}\eta_{2}&\eta_{3}\end{bmatrix}=\vec{n}_{B}^{\sf T}\begin{bmatrix} 2q_{2}&0\\ 2q_{2}&0\\ -q_{2}&q_{3}\\ -q_{2}&q_{3}\\ -q_{2}&-q_{3}\\ -q_{2}&-q_{3}\\ -q_{2}&-q_{3}\\ 2q_{2}&0\\ -q_{2}&q_{3}\\ -q_{2}&-q_{3}\\ -q_{2}&-q_{3}\end{bmatrix}, \tag{78}\]
where \(q_{2}=1/(2\sqrt{6})\), \(q_{3}=1/(2\sqrt{2})\), and \(\vec{n}_{B}^{\sf T}=\begin{bmatrix}n_{B}^{1},n_{B}^{2},\ldots,n_{B}^{12}\end{bmatrix}\). Figure 11 shows the coordinates of the translational variants of the \(x=1/3\) and \(x=2/3\) orderings with the \(\sqrt{3}a\times\sqrt{3}a\) supercell in the two-dimensional \(\eta_{2}\) and \(\eta_{3}\) order parameter subspace. The three translational variants of a particular \(\sqrt{3}a\times\sqrt{3}a\) ordering reside on a circle at \(120^{\circ}\) intervals. The fully disordered state resides at the origin in the subspace spanned by \(\eta_{2}\) and \(\eta_{3}\).
Figure 11: The coordinates of the translational variants of the \(\sqrt{3}a\times\sqrt{3}a\) orderings at \(x=1/3\) and \(x=2/3\) in the order parameter space spanned by \(\eta_{2}\) and \(\eta_{3}\).

The third category of order parameters spans a three-dimensional subspace and tracks the \(x=1/4\), \(x=3/4\) and \(x=1/2\) ordered phases. These are defined in terms of the sublattice concentrations of species B within the \(2\sqrt{3}a\times 2\sqrt{3}a\) supercell of Figure 10 according to
\[\begin{bmatrix}\eta_{4}&\eta_{5}&\eta_{6}\end{bmatrix}=\vec{n}_{B}^{\sf T}\begin{bmatrix}-q_{1}&-q_{1}&-q_{1}\\ -q_{1}&q_{1}&q_{1}\\ q_{1}&-q_{1}&q_{1}\\ q_{1}&q_{1}&-q_{1}\\ -q_{1}&-q_{1}&-q_{1}\\ -q_{1}&q_{1}&q_{1}\\ q_{1}&-q_{1}&q_{1}\\ q_{1}&q_{1}&-q_{1}\\ -q_{1}&-q_{1}&-q_{1}\\ -q_{1}&q_{1}&q_{1}\\ q_{1}&-q_{1}&q_{1}\\ q_{1}&q_{1}&-q_{1}\end{bmatrix}. \tag{79}\]
Figure 12 illustrates the coordinates of the symmetrically equivalent variants of the orderings at \(x=1/4\), \(x=1/2\) and \(x=3/4\). The four translational variants of the \(x=1/4\) and \(x=3/4\) orderings form tetrahedra in the order parameter space spanned by \(\eta_{4}\), \(\eta_{5}\) and \(\eta_{6}\). The row ordering at \(x=1/2\), which has both orientational and translational variants, has its six variants residing at the corners of an octahedron in the same three-dimensional subspace.[86]
Figure 12: The coordinates of the translational and orientational variants of the \(x=1/4\), \(x=1/2\) and \(x=3/4\) orderings in the order parameter space spanned by \(\eta_{4}\), \(\eta_{5}\) and \(\eta_{6}\).

While the full set of 6 order parameters is necessary to distinguish all the symmetrically equivalent variants of each ordered phase in Monte Carlo simulations, it is often only necessary to detect which ordering is present, irrespective of the particular variant. The number of order parameters that needs to be tracked can then be reduced. For example, in order to determine if a \(\sqrt{3}a\times\sqrt{3}a\) ordering is present within a Monte Carlo cell, it is sufficient to only track the length of the order-parameter vector in \(\eta_{2}\)-\(\eta_{3}\) space, \(\xi_{1}=\sqrt{\eta_{2}^{2}+\eta_{3}^{2}}\). This length measure in the \(\eta_{2}\)-\(\eta_{3}\) space, together with the overall concentration as measured by \(\eta_{1}\), is sufficient to detect which particular \(\sqrt{3}a\times\sqrt{3}a\) ordering is present within a Monte Carlo simulation, independent of the particular translational variant. A similar length metric can be defined in the space spanned by \(\eta_{4}\), \(\eta_{5}\) and \(\eta_{6}\) according to \(\xi_{2}=\sqrt{\eta_{4}^{2}+\eta_{5}^{2}+\eta_{6}^{2}}\). Here again, the combination of \(\eta_{1}\), which is a measure of the overall composition, and \(\xi_{2}\) is sufficient to detect whether the \(x=1/4\), \(x=1/2\) or \(x=3/4\) ordering is present within a Monte Carlo simulation. Figure 13 plots \(\eta_{1}\), \(\xi_{1}\) and \(\xi_{2}\) as a function of the composition and exchange chemical potential along line B in the phase diagrams of Figure 5(c) and (d). The order parameters were calculated using Monte Carlo simulations in a supercell commensurate with the \(2\sqrt{3}a\times 2\sqrt{3}a\) supercell in which the order parameters are defined, and otherwise identical calculation parameters as in Section 5.1.
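The bookkeeping needed to evaluate these quantities from a Monte Carlo snapshot is minimal. The following Python sketch is illustrative only: the function and its arguments are hypothetical, and the \(12\times 6\) matrix M is assumed to hold the symmetry-adapted linear combinations of Eqs. 77-79 as columns, with a row order that matches the chosen sublattice numbering of Figure 10.

```python
import numpy as np

def order_parameters(n_B, M):
    """Project sublattice concentrations onto symmetry-adapted order parameters.

    n_B : (12,) array of sublattice concentrations of species B in the
          mutually commensurate supercell of Figure 10
    M   : (12, 6) array whose columns hold the symmetry-adapted linear
          combinations of Eqs. 77-79 (row order follows the sublattice numbering)
    Returns eta_1 ... eta_6 and the subspace lengths xi_1 and xi_2.
    """
    eta = M.T @ n_B                      # eta_1 ... eta_6
    xi_1 = np.hypot(eta[1], eta[2])      # length in the (eta_2, eta_3) subspace
    xi_2 = np.linalg.norm(eta[3:6])      # length in the (eta_4, eta_5, eta_6) subspace
    return eta, xi_1, xi_2
```

Because every column of the actual symmetry-adapted basis beyond the first sums to zero, a fully disordered snapshot with uniform sublattice concentrations yields \(\xi_{1}=\xi_{2}=0\), while each ordered variant maps onto one of the discrete points shown in Figures 11 and 12.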
### Example 4: Diffusion on a triangular lattice
Atomic transport properties within a crystal can be estimated with kinetic Monte Carlo simulations. To illustrate this, we consider a binary atom-vacancy system on the two-dimensional triangular lattice and use the same model cluster expansion as was used to calculate the phase diagrams of Figure 5(c) and (d). This model predicts a variety of stable ordered phases at low temperature and a solid solution at high temperature and is thereby able to illustrate the combined effects of concentration and different degrees of long-range order on atomic diffusion.
A common mechanism of diffusion in a crystal is through atom-vacancy exchanges in distinct atomic hop events. Each atomic hop event takes the system from one microstate to another. In the kinetic Monte Carlo simulation supercell, when an atom at site \(i\) hops to an adjacent vacant site \(j\), for example, the system evolves from the current configurational microstate \(\mathbb{C}=(\sigma_{1},\ldots,\sigma_{i}=1,\ldots,\sigma_{j}=-1,\ldots,\sigma_ {N})\) to a new microstate \(\mathbb{C}^{\prime}=(\sigma_{1},\ldots,\sigma_{i}=-1,\ldots,\sigma_{j}=1, \ldots,\sigma_{N})\). The frequency with which such an event occurs, \(\Gamma_{\mathbb{C}\mathbb{C}^{\prime}}\), and its dependence on local configurational order can be parameterized with local cluster expansion surrogate models as described in Section 3.2. However, for our model system, we choose a constant value of \(\Delta E^{KRA}_{\mathbb{C}\mathbb{C}^{\prime}}=3|V_{NN}|\) and a constant value for \(\nu^{*}_{\mathbb{C}\mathbb{C}^{\prime}}=10^{12}\) s\({}^{-1}\). The fraction of vacancies, \(x_{Va}=1-x\), decreases as the triangular lattice fills from the dilute limit at \(x\approx 0\) to the fully saturated limit at \(x=1\). Hence the number of hop paths available to diffusing atoms depends strongly on the overall concentration of the diffusing atoms. Combined with the configuration dependence of the end-state energies, and therefore of the migration energies, diffusion depends strongly on both concentration and ordering.
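As a concrete illustration of how these two constants enter a hop frequency, the sketch below assumes a transition-state-theory rate with the commonly used kinetically resolved activation (KRA) barrier convention, in which the migration barrier is the KRA value plus half of the end-state energy difference; the exact rate expression used by CASM is given by Eq. 52, and all numerical values below are placeholders.

```python
import numpy as np

K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def hop_frequency(e_initial, e_final, de_kra, nu_star, T):
    """Frequency of a single atom-vacancy exchange (illustrative sketch).

    Assumes the KRA convention: the migration barrier is de_kra plus half
    of the end-state energy difference; nu_star is the attempt frequency.
    """
    barrier = de_kra + 0.5 * (e_final - e_initial)
    return nu_star * np.exp(-barrier / (K_B * T))

# Placeholder numbers (energies in eV): a slightly uphill hop at room temperature
gamma = hop_frequency(e_initial=0.0, e_final=0.02,
                      de_kra=3 * 0.01, nu_star=1e12, T=300.0)
```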
The model system has much in common with layered lithium intercalation compounds such as Li\({}_{x}\)CoO\({}_{2}\) and Li\({}_{x}\)TiS\({}_{2}\), where Li ions can diffuse over octahedrally coordinated interstitial sites that form a triangular lattice between two-dimensional sheets of CoO\({}_{2}\) and TiS\({}_{2}\).[95; 43] It also shares similarities with adatom diffusion (e.g. adsorbed oxygen atoms) on a surface of a close-packed metal (e.g. Pt). In both examples the intercalation compound host crystal structure or the surface substrate imposes a constraint on the total number of triangular lattice sites. As derived in B.2, the flux expression for this binary A-B system (where A are vacancies and B are the diffusing atoms) takes the form
\[\tilde{J}=-\tilde{L}\nabla\tilde{\mu} \tag{80}\]
where \(\tilde{\mu}=\mu_{B}-\mu_{A}\) is the parametric chemical potential, \(\tilde{J}=J_{B}\) and where \(\tilde{L}=L_{BB}\) is the Onsager transport coefficient. The above flux expression can be converted to Fick's first law using the chain rule of differentiation
\[\tilde{J}=-v_{u}\tilde{L}\frac{\partial\tilde{\mu}}{\partial x}\nabla c=-D \nabla c \tag{81}\]
where \(x=N_{B}/N_{u}=n_{B}\) is the parametric composition, \(v_{u}\) is the volume per unit cell and \(c=x/v_{u}\), the number of diffusing atoms per unit volume. The chemical diffusion coefficient \(D\) is the product of a kinetic factor (\(v_{u}\tilde{L}\)) and a thermodynamic factor (\(\partial\tilde{\mu}/\partial x\)).

Figure 13: The magnitude of vectors formed from the symmetry adapted order parameters in each irreducible subspace for the fully commensurate supercell of the ground state orderings. Results correspond to line B in Figure 5.
For a simple atom-vacancy system, where there are no off-diagonal transport coefficients, the chemical diffusion coefficient \(D\) is often factored into a product of the collective diffusion coefficient \(D_{B}\) and a thermodynamic factor \(\tilde{\Theta}\) (i.e. \(D=D_{B}\tilde{\Theta}\)) according to [143, 12]
\[D_{B}=\frac{K_{BB}}{x} \tag{82}\]
and
\[\tilde{\Theta}=\frac{x}{k_{B}T}\frac{\partial\tilde{\mu}}{\partial x}. \tag{83}\]
The resulting diffusion coefficient, \(D_{B}\), can then be directly compared to the tracer diffusion coefficient.
Figure 14(a) shows calculated values for the collective and tracer diffusion coefficients, \(D_{B}\) and \(D_{B}^{*}\), respectively, along with the kinetic coefficient, \(K_{BB}\), at a reduced temperature of \(k_{B}T/V_{NN}=0.5\) (line B in Figure 5(c) and (d)). The various diffusion coefficients were calculated by sampling atomic trajectories with kinetic Monte Carlo simulations and inserting them in Eqs. 62 and 63. The kinetic Monte Carlo simulations were performed in a \(N=1200\) site supercell of the \(2\sqrt{3}a\times 2\sqrt{3}a\) supercell shown in Figure 10. An initial set of kinetic Monte Carlo calculations in which sampling was performed at a range of simulated time steps showed convergence in the kinetic coefficients when observations were taken every \(10^{-2}\) simulated seconds. Calculations were run with a requested absolute precision of \(\pm 10^{-2}|V_{NN}|\) for the average semi-grand canonical energy \(\langle\Omega\rangle\), and a requested relative precision of \(\pm 10^{-1}\) for \(D_{B}^{*}\) and \(K_{BB}\). A cutoff was enforced to stop the calculations if a maximum of 1 hr of computational time was reached. The estimated error bars, which are generally on the same order of magnitude as the markers, are included in Figure 14.
In the dilute limit, \(x\to 0\), \(D_{B}\) and \(D_{B}^{*}\) converge to a common value as the interactions between different atoms become negligible. The diffusion coefficients in Figure 14 are normalized by the diffusion coefficient at \(x=0\). The diffusion coefficients exhibit strong dips at \(x=1/4\), \(1/3\), \(1/2\), \(2/3\) and \(3/4\), the compositions at which the diffusing atoms adopt a state of long-range order. This is because atoms are locked into their sublattice positions and will have a strong thermodynamic bias to hop back to their preferred sublattice positions.
A useful metric to understand diffusion is the correlation factor, which measures the degree with which successive hops of a diffusing atom are correlated. An uncorrelated random walker has a correlation factor \(f=1\). This is only reached in the dilute limit where interstitial diffusers do not interact with each other. Anything less than 1 indicates correlated diffusion. The correlation factor as calculated for this model system is shown in Figure 14(b). Starting at 1 in the dilute limit, it decreases with increasing concentration \(x\), to values between 0.3 and 0.6 except for strong dips to near 0.0 at the stoichiometric compositions of the ordered phases. This is consistent with reverse hops being highly likely after an atom hops out of perfect ordering.
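The correlation factor itself can be estimated directly from the sampled trajectories. The sketch below is illustrative only and uses the textbook estimator \(f=\langle|\Delta\vec{R}|^{2}\rangle/(n\,a^{2})\) for diffusers that perform \(n\) hops of equal length \(a\); the function name and arguments are hypothetical.

```python
import numpy as np

def correlation_factor(displacements, hop_counts, hop_length):
    """Estimate the correlation factor f from kinetic Monte Carlo trajectories.

    displacements : (n_atoms, dim) net displacement vector of each diffusing atom
    hop_counts    : (n_atoms,) number of hops performed by each atom
    hop_length    : distance of a single nearest-neighbor hop
    """
    sq_disp = (displacements ** 2).sum(axis=1)
    return sq_disp.sum() / (hop_counts.sum() * hop_length ** 2)
```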
Figure 15(a) and (b) show the calculated thermodynamic factor, \(\Theta\), as a function of the concentration \(x\) and as a function of the exchange chemical potential \(\tilde{\mu}\), respectively, as defined according to Equation 48. Figure 15(c) shows the thermodynamic factor \(\tilde{\Theta}\) as defined according to Equation 83 versus the exchange chemical potential \(\tilde{\mu}\). The thermodynamic factor as defined according to Equation 83 measures the deviation from thermodynamic ideality, which is only realized in the dilute limit as \(x\to 0\) where \(\tilde{\Theta}\) is equal to 1. The system deviates strongly from thermodynamic ideality when the atoms order and by either definition the thermodynamic factor tends to diverge at the stoichiometric compositions of the equilibrium ordered phases.
Figure 14: The (a) kinetic coefficients, and (b) correlation factor, calculated using kinetic Monte Carlo are shown as a function of composition at constant temperature. Standard error of the mean is estimated using Eq. 17. In (b) exact results for the correlation factor on a triangle lattice, \(f_{B}(x=0)=1.0\) and \(f_{B}(x=1.0)=\frac{\pi+6\sqrt{3}}{11\pi-6\sqrt{3}}\)[144], are shown as \((*)\). Results correspond to line B in Figure 5, and colors indicate the equilibrium phase.

## 6 Free energy integration

While Monte Carlo methods do not provide direct access to free energies, it is possible to calculate them indirectly using the results of Monte Carlo simulations. For example, the relationship between the parametric composition \(x_{i}\) of a multicomponent crystal and its conjugate exchange chemical potential \(\tilde{\mu}_{i}\) as generated with semi-grand canonical Monte Carlo simulations can be used to calculate the Gibbs free energy, \(g=G/N_{u}\), by integrating the differential form

\[dg=-sdT+vdP+\sum_{i=1}^{k}\tilde{\mu}_{i}dx_{i} \tag{84}\]

along a constant temperature and constant pressure path that starts at \((\vec{x}_{0},\vec{\tilde{\mu}}_{0})\) and ends at \((\vec{x},\vec{\tilde{\mu}})\) to yield
\[g(T,\vec{x})=g(T,\vec{x}_{0})+\int_{\vec{x}_{0}}^{\vec{x}}\sum_{i=1}^{k}\tilde{ \mu}_{i}dx_{i} \tag{85}\]
A difficulty with free energy integration techniques is that an evaluation of \(g(T,\vec{x})\) requires knowledge of \(g(T,\vec{x}_{0})\) in a particular reference state. Usually a reference state is chosen in which the free energy can be calculated easily. For example, it is common to choose a composition \(\vec{x}_{0}\) in which all sites of the crystal are occupied by exclusively one chemical species such that there is no configurational entropy. In a binary A-B alloy, this could for example be a crystal of pure A atoms or a crystal of pure B atoms. In these reference states, there is only one configurational microstate and \(g(T,\vec{x}_{0})\) is then simply equal to \(e(\vec{x}_{0})\), the energy of the solid per number of unit cells at \(\vec{x}_{0}\). To evaluate the integral of Eq. 85, Monte Carlo simulations must be performed along a continuous path from the reference state to the composition \(\vec{x}\) to collect the relationship between \(\vec{x}\) and \(\vec{\tilde{\mu}}\). This is most conveniently generated with semi-grand canonical Monte Carlo simulations at constant temperature, where a discrete, but finely spaced, grid of \(\vec{\tilde{\mu}}\) values is imposed as a boundary condition and the corresponding set of values for \(\langle\vec{x}\rangle\) is then calculated.
A similar scheme can be derived for the semi-grand canonical free energy \(\phi=\Phi/N_{u}\) at constant temperature and pressure by integrating the differential form
\[d\phi=-sdT+vdP-\sum_{i}^{k}x_{i}d\tilde{\mu}_{i} \tag{86}\]
yielding the expression
\[\phi(T,\vec{\tilde{\mu}})=\phi(T,\vec{\tilde{\mu}}_{0})-\int_{\vec{\tilde{\mu}}_{0}}^{\vec{\tilde{\mu}}}\sum_{i=1}^{k}x_{i}d\tilde{\mu}_{i} \tag{87}\]

where again the free energy of a reference state at \((T,\vec{\tilde{\mu}}_{0})\) is required.
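For a single exchange chemical potential, this integration reduces to a one-dimensional quadrature over semi-grand canonical Monte Carlo output. The sketch below is a minimal illustration (with hypothetical names) that applies the trapezoidal rule to a table of \((\tilde{\mu},\langle x\rangle)\) values to evaluate Eq. 87 and then recovers the Gibbs free energy from \(g=\phi+\tilde{\mu}x\).

```python
import numpy as np

def integrate_phi(mu, x, phi_ref):
    """Semi-grand canonical free energy along a constant-T path (Eq. 87).

    mu      : monotonically ordered exchange chemical potentials imposed in
              the semi-grand canonical Monte Carlo simulations
    x       : corresponding mean parametric compositions <x>
    phi_ref : free energy of the reference state at mu[0]
    """
    mu = np.asarray(mu, dtype=float)
    x = np.asarray(x, dtype=float)
    # cumulative trapezoidal integral of x with respect to mu
    increments = 0.5 * (x[1:] + x[:-1]) * np.diff(mu)
    phi = phi_ref - np.concatenate(([0.0], np.cumsum(increments)))
    g = phi + mu * x          # Legendre transform back to g(x)
    return phi, g
```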
Figure 15: The thermodynamic factor, \(\Theta\), as defined according to Equation 48 as a function of (a) composition \(x\) and (b) the exchange chemical potential \(\tilde{\mu}\), and (c) the thermodynamic factor \(\tilde{\Theta}\) as defined according to Equation 83 as a function of the exchange chemical potential \(\tilde{\mu}\). \(\Theta\) is calculated according to Eq. 68 using the inverse of the chemical susceptibility, \(\chi\), measured according to Eq. 37. Results correspond to line B in Figure 5, and colors indicate the equilibrium phase.

Expressions can also be derived to relate free energies at different temperatures. The derivatives of \(g\) and \(\phi\) with respect to temperature are related to the entropy, which is not directly calculated with Monte Carlo simulations. It is, therefore, more convenient to start with the total differentials (at constant \(P\), usually set to zero) of \(\beta g\) and \(\beta\phi\),[63] which take the form
\[d(\beta g)=hd\beta+\beta\sum_{i}^{k}\tilde{\mu}_{i}dx_{i} \tag{88}\]
\[d(\beta\phi)=\omega d\beta-\beta\sum_{i=1}^{k}x_{i}d\tilde{\mu}_{i} \tag{89}\]
where \(\beta=1/k_{B}T\), \(h\) is the enthalpy per unit cell of the crystal (\(h=e+Pv\)) and \(\omega\) is the semi-grand canonical energy per unit cell (\(\omega=e+Pv-\vec{\tilde{\mu}}\vec{x}\)). Both \(h\) and \(\omega\) can be calculated as \(\langle\Omega\rangle/N_{u}\), using canonical and semi-grand canonical Monte Carlo simulations, respectively. Integrating these expressions along a path at constant \(\vec{x}\) (as in canonical Monte Carlo simulations) or at constant \(\vec{\tilde{\mu}}\) (as in semi-grand canonical Monte Carlo simulations) yields expressions that relate free energies at two different temperatures
\[\beta g(T,\vec{x})=\beta_{0}g(T_{0},\vec{x})+\int_{\beta_{0}}^{\beta}hd\beta \tag{90}\]
\[\beta\phi(T,\vec{\tilde{\mu}})=\beta_{0}\phi(T_{0},\vec{\tilde{\mu}})+\int_{ \beta_{0}}^{\beta}\omega d\beta \tag{91}\]
where \(\beta_{0}=1/k_{B}T_{0}\). A reference state where the free energy can easily be calculated is again needed to evaluate both expressions. Common reference states for paths that traverse different temperatures are low temperature ordered ground states or a high temperature disordered solid solution. The free energy of stable ordered phases can be approximated at low temperatures using a low temperature expansion of the partition function.[18, 63] The free energy of the solid solution at a particular composition or set of chemical potentials can be calculated with Eq. 85 or Eq. 87 using a reference state at one of the corners of composition space at a temperature that is sufficiently high to be above all order-disorder transition temperatures.
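A temperature sweep at fixed \(\tilde{\mu}\) can be integrated in the same way. The sketch below (again with hypothetical names) implements Eq. 91 by accumulating the trapezoidal integral of the sampled semi-grand canonical energy \(\omega\) over \(\beta=1/k_{B}T\).

```python
import numpy as np

def integrate_phi_vs_T(beta, omega, phi_ref):
    """Semi-grand canonical free energy along a constant-mu path (Eq. 91).

    beta    : ordered inverse temperatures 1/(k_B T), starting at the reference point beta[0]
    omega   : mean semi-grand canonical energies <omega> per unit cell along the path
    phi_ref : free energy of the reference state at beta[0], e.g. from a
              low temperature expansion
    """
    beta = np.asarray(beta, dtype=float)
    omega = np.asarray(omega, dtype=float)
    increments = 0.5 * (omega[1:] + omega[:-1]) * np.diff(beta)
    beta_phi = beta[0] * phi_ref + np.concatenate(([0.0], np.cumsum(increments)))
    return beta_phi / beta    # phi(T) along the path
```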
Common pathways to calculate free energies as a function of composition (or exchange chemical potentials) and temperature are schematically illustrated in Figure 16. The path starting at point 1 and ending at 2 has a constant chemical potential \(\tilde{\mu}\) and can be traversed with semi-grand canonical Monte Carlo simulations. If the temperature of point 1 is sufficiently low, then the free energy at point 1 can be calculated with a low temperature expansion.[18, 63] The free energy at point 2 can then be calculated with Eq. 91 by integrating over the calculated values of \(\omega\) with respect to
Figure 16: Schematic diagram of integration pathways for the calculation of free energies, \(g\) or \(\phi\), from semi-grand canonical Monte Carlo simulation of the mean composition, \(\vec{x}\), and semi-grand canonical energy, \(\omega\), as a function of chemical potential, \(\tilde{\mu}\), and temperature, \(T\). The first pathway (\(1\to 2\to 3\)) uses a low-temperature expansion calculation (1) for a free energy reference and then integrates along a constant \(\tilde{\mu}\) and increasing \(T\) pathway (\(1\to 2\)) followed by an increasing \(\tilde{\mu}\) and constant \(T\) pathway (\(2\to 3\)). The second pathway (\(4\to 5\to 6\)) uses the energy at zero configurational entropy (4) for an initial free energy reference and then integrates along an increasing \(\tilde{\mu}\) and constant \(T\) pathway (\(4\to 5\)) followed by a constant \(\tilde{\mu}\) and decreasing \(T\) pathway (\(5\to 6\)).
temperature (or equivalently the inverse temperature, \(\beta\)). Free energy integration can also be performed to link the free energy calculated at point 2 to that at point 3 using either Eq. 85 or Eq. 91. The data along the path from 2 to 3 would be collected with semi-grand canonical Monte Carlo simulations. Figure 16 also shows a path to calculate free energies in high temperature solid solutions. A convenient reference state for free energy integration is point 4, for pure A, where the absence of any configurational entropy makes it possible to equate the free energy at this point to the energy.
Free energies calculated with free energy integration techniques as described above are only reliable as long as the traversed path does not cross any phase boundaries. This is because of the phenomenon of hysteresis. Typically, Metropolis or n-fold-way Monte Carlo algorithms use the last sampled microstate at one point along a path as the initial microstate for the next point of the path. This introduces a memory effect where microstates continue to be sampled corresponding to a metastable free energy. This is schematically illustrated in Figure 17 for a semi-grand canonical Monte Carlo simulation. The orange curve represents the relationship between \(\tilde{\mu}\) and \(x\) as calculated with semi-grand canonical Monte Carlo simulations for increasing values of \(\tilde{\mu}\). The solid black horizontal line in the \(x\)-\(\tilde{\mu}\) plot corresponds to the equilibrium chemical potential of a first-order phase transition at which the concentration increases discontinuously. Above the black line, the equilibrium \(\tilde{\mu}\) versus \(x\) curve should follow the purple curve. When the final sampled microstate of a Monte Carlo simulation at fixed \(T\) and \(\tilde{\mu}\) is inherited as the initial microstate at \(\tilde{\mu}+\Delta\tilde{\mu}\), the calculated \(\tilde{\mu}\) versus \(x\) curve will overshoot the equilibrium transition chemical potential \(\tilde{\mu}^{*}\). The same occurs when performing semi-grand canonical Monte Carlo simulations in the opposite direction (i.e. decreasing chemical potential). Also shown in Figure 17 are the corresponding Gibbs free energies \(g\) and semi-grand canonical free energies \(\phi\). The phenomenon of hysteresis within Monte Carlo simulations makes it possible to estimate metastable free energies that extend some distance into multiphase coexistence regions, depending on the energetics of the system and the convergence details of the calculations.
In practice, the energy differences between phases may be small relative to the overall ranges of \(g\) and \(\phi\). This can be seen for the model system from Section 5.1 by comparing the energy scales in Figures 18 and 19. Figure 18 shows how the phase boundaries along line B in Figure 5 between the ordered phase with composition \(x=1/2\) (green) and the disordered state (gray) are obtained. The compositions of the phases at the phase boundaries are shown in Figure 18(a) at the value of the chemical potential, \(\tilde{\mu}\), for which the two phases have equal semi-grand canonical potential, \(\phi\), in Figure 18(b). At these compositions, the Gibbs free energy, \(g\), vs \(x\) curves have a common tangent, as shown in Figure 18(c). Figure 19 shows the result of repeating the process to identify phase boundaries across the range of positive \(\tilde{\mu}\) where calculations were performed.
Figure 17: Schematic of the hysteresis behavior of the calculated mean composition as a function of chemical potential in semi-grand canonical ensemble Monte Carlo simulations. The composition at which the two phases are at equilibrium can be equivalently determined by finding the chemical potential at which the semi-grand canonical potentials are equal, or by using the common tangent construction to identify the compositions at which the Gibbs free energies are equal.

Figure 18: The relationships calculated from Monte Carlo simulations along line B in Figure 5 near the phase boundary between the ordered phase with composition \(x=1/2\) (green) and the disordered state (gray) of (a) the exchange chemical potential, \(\tilde{\mu}\), and composition \(x\), (b) the semi-grand canonical potential, \(\phi\) and \(\tilde{\mu}\), and (c) the Gibbs free energy, \(g\), and \(x\). Solid black lines in (b) and (c) indicate phase equilibria, and dashed lines are guides for the eye at the corresponding values of \(x\) and \(\tilde{\mu}\). In this figure, \(g\) and \(\phi\), as calculated using Eqs. 90 and 91, are re-referenced, indicated by \(g^{\prime}\) and \(\phi^{\prime}\), respectively, to more clearly show energy differences which are small relative to the overall changes in \(g\) and \(\phi\) over the relevant ranges of \(x\) and \(\tilde{\mu}\).

Figure 19: Phase boundary construction along line B in Figure 5 over the range \(x>0.5\) (\(\tilde{\mu}>0.0\)), showing the relationships calculated from Monte Carlo simulations of (a) the exchange chemical potential, \(\tilde{\mu}\), and composition \(x\), (b) the semi-grand canonical potential, \(\phi\) and \(\tilde{\mu}\), and (c) the Gibbs free energy, \(g\), and \(x\). Solid black lines in (b) and (c) indicate phase equilibria, and dashed lines are guides for the eye at the corresponding values of \(x\) and \(\tilde{\mu}\). Colors indicate the equilibrium phase as in Figure 5.

The free energy integration techniques described in this section enable the calculation of free energies in regions where a phase is stable or metastable. They are unable to generate free energies where a phase is unstable with respect to small perturbations in composition. To access these free energies using equilibrium methods, umbrella sampling techniques can be used. A general approach to calculate free energies in domains where the solid is unstable was introduced by Sadigh and Erhart [145] for miscibility gaps and was extended by Natarajan et al [86] for order-disorder reactions. Recent work has shown how an integrable neural network [8] can be used to represent the free energy in high dimensional composition and order-parameter spaces and can be efficiently parameterized using adaptive learning techniques [146]. The variance-constrained umbrella sampling techniques of Sadigh and Erhart [145] and Natarajan et al [86] are implemented in the CASM Monte Carlo code suite.
## 7 Monte Carlo algorithms
A variety of Monte Carlo algorithms have been implemented within CASM. The Metropolis algorithm is the simplest and at intermediate to high temperatures the most practical method to calculate basic thermodynamic properties. At low temperatures the Metropolis algorithm becomes less efficient and the n-fold way algorithm becomes more favorable. Kinetic Monte Carlo algorithms enable the sampling of diffusion trajectories necessary to calculate Onsager transport coefficients. Finally, when treating continuous degrees of freedom, such as a crystal of interacting non-collinear magnetic moments, Hamiltonian Monte Carlo methods become necessary to ensure efficient sampling of microstates.
Each algorithm is described in more detail in the following sections. The Monte Carlo methods described are examples of Markov processes, stochastic processes in which the probability of the next state depends only on the current state. The series of generated states is called a Markov chain. There is a large body of literature describing the statistics of stochastic processes generally, of Markov chains and Markov chain Monte Carlo simulations, and their application to problems in statistical physics [147; 148; 149; 139; 150; 151].
### The Metropolis algorithm
The Metropolis algorithm [152] is the most commonly used Monte Carlo method for the calculation of thermodynamic properties. The algorithm produces a series of configurations, \([\mathbb{C}_{1},\mathbb{C}_{2},\ldots,\mathbb{C}_{n}]\). In the limit of large \(n\), a configuration \(\mathbb{C}\) will appear in the series with a frequency that is proportional to the probability predicted by the Boltzmann distribution \(P_{\mathbb{C}}=e^{-\beta\Omega_{\mathbb{C}}}/Z\). The series of microstates is constructed by proposing an event, \(\Delta_{\mathbb{C}\mathbb{C}^{\prime}}\), which changes the current configuration, \(\mathbb{C}=\mathbb{C}_{i}\), to a different configuration, \(\mathbb{C}^{\prime}\), calculating the associated change in potential energy, \(\Delta\Omega_{\mathbb{C}\mathbb{C}^{\prime}}\), and accepting the event with probability
\[p(\Delta_{\mathbb{C}\mathbb{C}^{\prime}})=\min(1,e^{-\beta\Delta\Omega_{ \mathbb{C}\mathbb{C}^{\prime}}}).\]
If the event is accepted, then the next configuration in the series includes the change, \(\mathbb{C}_{i+1}=\mathbb{C}^{\prime}\). If the event is rejected, then the next configuration in the series does not include the change, \(\mathbb{C}_{i+1}=\mathbb{C}_{i}\).
Given that the Metropolis algorithm generates configurations in proportion to the probability predicted by the Boltzmann distribution, the ensemble average value of a property can be estimated from Monte Carlo simulations as
\[\langle X\rangle\approx\bar{X}=\frac{\sum_{l}^{N}X_{l}}{N}, \tag{92}\]
where \(X_{l}\) is the \(l\)-th of \(N\) observations of the property (a function of the configuration, \(\mathbb{C}_{l}\), and thermodynamic conditions), and \(\bar{X}\) is the estimate of \(\langle X\rangle\).
The Metropolis algorithm is effective in many situations, but if the proportion of rejected states becomes large then the method may be less efficient than other methods. This situation often occurs at low temperatures or when the proposed events result in a large change in the configuration. Alternative Monte Carlo methods become useful in these situations.
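A minimal, self-contained illustration of the algorithm for a semi-grand canonical triangular lattice model is sketched below. It is not the CASM implementation: the energy is a bare nearest-neighbor pair interaction \(V_{NN}\) rather than a full cluster expansion, and the supercell size, temperature, and chemical potential are placeholder values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative parameters (not from the paper), all in the same energy units
L, V_NN, kT, mu = 24, 1.0, 0.5, 0.0
NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]  # triangular lattice

occ = rng.integers(0, 2, size=(L, L))  # 1 = B atom, 0 = vacancy

def occupied_neighbors(occ, i, j):
    """Number of occupied nearest neighbors of site (i, j), periodic boundaries."""
    return sum(occ[(i + di) % L, (j + dj) % L] for di, dj in NBRS)

def metropolis_pass(occ):
    """One pass of single-site semi-grand canonical Metropolis updates."""
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        dn = 1 - 2 * occ[i, j]                         # +1 inserts B, -1 removes B
        d_energy = dn * V_NN * occupied_neighbors(occ, i, j)
        d_omega = d_energy - mu * dn                   # change in E - mu * N_B
        if d_omega <= 0 or rng.random() < np.exp(-d_omega / kT):
            occ[i, j] += dn

for _ in range(200):        # equilibration plus sampling passes
    metropolis_pass(occ)
x_estimate = occ.mean()     # rough estimate of the mean composition <x>
```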
### The n-fold way algorithm
At low temperatures the probabilities of the lowest energy microstates become very high, resulting in exceedingly low acceptance rates of excited microstates when using the Metropolis algorithm. Very long computational times are then required to calculate well-converged thermodynamic averages. The n-fold way method [87] picks a new microstate at each step by considering a list of candidate events, \(\Delta_{\mathbb{C}\mathbb{C}^{\prime}}\), and choosing the next event according to
\[p(\Delta_{\mathbb{C}\mathbb{C}^{\prime}}) =q_{\mathbb{C}\mathbb{C}^{\prime}}/Q_{\mathbb{C}}\] \[q_{\mathbb{C}\mathbb{C}^{\prime}} =\min(1,e^{-\beta\Delta\Omega_{\mathbb{C}\mathbb{C}^{\prime}}})\] \[Q_{\mathbb{C}} =\sum_{\mathbb{C}^{\prime}}q_{\mathbb{C}\mathbb{C}^{\prime}}.\]
The weight of the \(i\)-th configuration in the ensemble average, \(w_{i}\), is calculated as
\[w_{i}=-\frac{1}{Q_{\mathbb{C}_{i}}}\ln(R), \tag{93}\]
where \(R\in[0,1)\) is a random number. This weighting factor provides a statistically equivalent reflection of the number of rejected events that would have occurred before an acceptance if instead the Metropolis algorithm were performed with the same candidate events. Thus, for n-fold way Monte Carlo simulations the ensemble average estimate is the weighted average
\[\langle X\rangle\approx\bar{X}=\frac{\sum_{l}^{N}w_{l}X_{l}}{\sum_{l}^{N}w_{l}}. \tag{94}\]
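The selection and weighting steps can be written compactly, as in the following sketch (illustrative only), which assumes the potential-energy changes of all candidate events have already been collected in an array.

```python
import numpy as np

rng = np.random.default_rng(11)

def nfold_step(d_omega, kT):
    """Choose the next event and the weight of the current configuration.

    d_omega : array of potential-energy changes for all candidate events
    Returns (index of the chosen event, weight of the current configuration).
    """
    q = np.exp(np.minimum(0.0, -np.asarray(d_omega) / kT))  # min(1, exp(-beta dE))
    Q = q.sum()
    event = rng.choice(len(q), p=q / Q)        # pick an event with probability q/Q
    weight = -np.log(1.0 - rng.random()) / Q   # Eq. 93, with a strictly positive draw
    return event, weight
```

Accumulating weight * X and weight over the run then yields the weighted average of Eq. 94.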
While a new microstate is chosen at every step in the n-fold way, the probabilities of many candidate perturbations to the current state need to be calculated, making the computational cost of each step significantly larger than that required during a Metropolis algorithm step. The n-fold way method only becomes computationally more efficient than the Metropolis method when the rejection rate of the Metropolis algorithm becomes exceedingly high, as occurs at low temperatures.
### Kinetic Monte Carlo algorithm
The kinetic Monte Carlo (KMC) algorithm [87] is equivalent to the n-fold way algorithm, but the candidate events are chosen based on the physically possible atomic mechanisms for diffusion. This allows each trajectory generated by a kinetic Monte Carlo simulation to be interpreted as a likely trajectory that would have been sampled by a diffusing atom in the crystal over time. The event probabilities for KMC are calculated from the event rates, Eq. 52, according to
\[p(\Delta_{\mathbb{CC}^{\prime}}) =q_{\mathbb{CC}^{\prime}}/Q_{\mathbb{C}}\] \[q_{\mathbb{CC}^{\prime}} =\Gamma_{\mathbb{CC}^{\prime}}\] \[Q_{\mathbb{C}} =\sum_{\mathbb{C}^{\prime}}q_{\mathbb{CC}^{\prime}}.\]
For KMC calculations, the time spent in the \(i\)-th configuration is calculated as
\[t_{i}=-\frac{1}{Q_{\mathbb{C}_{i}}}\ln(R), \tag{95}\]
where \(R\in[0,1)\) is a random number, exactly as the weight is calculated in the n-fold way algorithm.
The value of kinetic coefficients can be estimated from kinetic Monte Carlo simulations as:
\[X\approx\bar{X}=\frac{\sum_{l}^{N}X_{l}}{N}, \tag{96}\]
where \(X_{l}\) is the observed value over the \(l\)-th of \(N\) time intervals. For example, an observation of \(K_{ij}\) can be calculated using Eq. 60 as
\[\left(K_{ij}\right)_{l}=\frac{\Big{(}\sum_{\zeta}\Delta\vec{R}_{i}^{\zeta} \Big{)}\cdot\Big{(}\sum_{\zeta}\Delta\vec{R}_{j}^{\zeta}\Big{)}}{2\,d\,t_{l}\,N_{u}}, \tag{97}\]
where \(t_{l}\) is the length of the \(l\)-th time interval. For measurements of steady-state quantities such as \(K_{ij}\), the time intervals need to be long enough to ensure sampling of events that contribute to long-range diffusion. If the time intervals are too short, the estimated kinetic values may be incorrectly inflated by local rearrangements. Generally, a convergence study should be used to find an appropriate sampling time interval before performing calculations for a new system or at very different thermodynamic conditions.
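A single per-interval observation of the Onsager coefficient follows directly from the displacement vectors accumulated over that interval. The sketch below is illustrative only, with hypothetical names, and evaluates the expression of Eq. 97 for the single diffusing species of this example.

```python
import numpy as np

def k_bb_observation(displacements, dt, n_unitcells, dim=2):
    """One observation of K_BB over a sampling interval (cf. Eq. 97).

    displacements : (n_atoms, dim) array of displacement vectors accumulated
                    by each diffusing atom over the interval
    dt            : length of the sampling interval
    n_unitcells   : number of unit cells N_u in the Monte Carlo supercell
    """
    total = displacements.sum(axis=0)          # sum over atoms of Delta R
    return float(total @ total) / (2.0 * dim * dt * n_unitcells)
```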
To take time-interval based samples, CASM checks if a chosen event will occur after the next observation should be taken, and if so, takes a sample before applying the change in configuration due to the event. In some cases, when the event time is long relative to the sampling interval, this may mean that multiple samples are taken in the same configuration. As an alternative approach with higher data storage requirements, CASM supports saving the state of the entire Monte Carlo supercell at each sampling time, allowing for a post-run convergence analysis using a range of sampling intervals.
The n-fold way / KMC algorithm is effective in many situations, but becomes inefficient if the probability of transitioning between two configurations, or a small set of configurations, is much more likely than transitioning to other configurations. In KMC calculations, this can be understood as an energy basin in which the system makes many transitions until escaping. There exist methods for grouping such states and calculating and sampling the distribution of exit times and configurations in order to accelerate calculations [153; 154; 155].
### Hamiltonian Monte Carlo
For continuous degrees of freedom (DoF), it is often challenging to propose events that are not rejected too often by the Metropolis algorithm. This is relevant for finite temperature studies of non-collinear magnetic spin cluster expansions, where the local magnetic moments at each site can adopt a continuum of different orientations [156; 74]. The
Hamiltonian Monte Carlo method [157; 158] addresses this issue by using the Hamiltonian to combine aspects of molecular dynamics with Metropolis Monte Carlo simulations. At each step a random momentum is generated and applied to the continuous degrees of freedom and the configuration is updated by integration (for example, a leapfrog integration) for several integration steps. The final state is then used as the proposed next configuration in the Metropolis algorithm.
The integration steps result in a greater likelihood of acceptance for a larger change in DoF values than a randomly generated change would have. Because the final state is subjected to a Metropolis accept/reject step, the momentum can be drawn randomly and the integration can be performed only approximately. Together this enables the Hamiltonian Monte Carlo method to be more efficient in calculating equilibrium properties than a direct molecular dynamics calculation.
To implement the Hamiltonian Monte Carlo method, forces must be calculated. CASM uses the automatic differentiation library FADBAD++ [159] to support the calculation of forces from effective Hamiltonians.
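The essential update is a momentum draw, a short leapfrog trajectory, and a Metropolis accept/reject step on the total energy change. The sketch below is a generic illustration for a vector of continuous degrees of freedom with a user-supplied energy and force; it is not the CASM implementation, and all names and parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)

def hmc_step(x, energy, force, kT, dt=0.1, n_steps=10):
    """One Hamiltonian Monte Carlo update targeting exp(-energy(x)/kT)."""
    grad = lambda y: force(y) / kT            # force on the reduced potential U/kT
    p = rng.standard_normal(x.shape)          # random momentum draw, unit mass
    x_new = x.copy()
    p_new = p + 0.5 * dt * grad(x_new)        # leapfrog integration
    for step in range(n_steps):
        x_new = x_new + dt * p_new
        if step < n_steps - 1:
            p_new = p_new + dt * grad(x_new)
    p_new = p_new + 0.5 * dt * grad(x_new)
    d_h = (energy(x_new) - energy(x)) / kT + 0.5 * (p_new @ p_new - p @ p)
    if d_h <= 0 or rng.random() < np.exp(-d_h):   # Metropolis accept/reject
        return x_new
    return x

# Toy example: harmonic potential, for which sampled x components have variance kT
energy = lambda x: 0.5 * (x ** 2).sum()
force = lambda x: -x
x = np.zeros(8)
for _ in range(500):
    x = hmc_step(x, energy, force, kT=0.5)
```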
### Averaging methods, convergence criteria, and simulation output
When calculating averages \(\bar{X}\) using Monte Carlo methods, it is important to know how well converged \(\bar{X}\) is, since \(\bar{X}\) is only equal to \(\langle X\rangle\) as the number of samples tends to infinity (\(N\to\infty\)). The accuracy of \(\bar{X}\) is affected by how long it takes the Markov chain to reach its equilibrium probability distribution given its initial configuration. It is typical to implement some procedure to discard a certain number of initial observations which may be present well outside of their equilibrium proportion. The accuracy of \(\bar{X}\) is also affected by the correlations between observations, the number of samples, and the underlying probability distribution which is a function of the effective Hamiltonian and the supercell size.
Excluding the supercell size effect, the central limit theorem states that the error in the Monte Carlo estimate of the ensemble average, \(\bar{X}-\langle X\rangle\), converges to zero as the number of samples increases, with a normal distribution of mean zero [150]
\[\bar{X}-\langle X\rangle\approx\mathcal{N}(0,\sigma^{2}/N). \tag{98}\]
For stationary distributions, such as those generated by the Monte Carlo algorithms considered here after they have been suitably equilibrated, the variance \(\sigma^{2}\) is [150]
\[\sigma^{2} =\gamma_{0}+2\sum_{k=1}^{\infty}\gamma_{k} \tag{99}\] \[\gamma_{k} =\operatorname{Cov}\left(X_{j},X_{j+k}\right) \tag{100}\]
Here \(\gamma_{k}\) is the lag \(k\) autocovariance, which quantifies correlations between observations \(X_{j}\) and \(X_{j+k}\). An estimate, \(\hat{\gamma}_{k}\), of the autocovariance can be calculated directly from the Monte Carlo simulation observations using
\[\hat{\gamma}_{k}=\frac{1}{N}\sum_{i}^{N-k}\left(X_{i}-\bar{X}\right)\left(X_{i+k}-\bar{X }\right). \tag{101}\]
A common assumption in Monte Carlo simulations is that the autocovariance decays like

\[\gamma_{k}=\gamma_{0}\rho^{|k|}, \tag{102}\]
where \(\rho\) is a system and algorithm dependent autoregression parameter. In this case, the sum in Eq. 99 can be evaluated to give
\[\sigma^{2}=\gamma_{0}\left(\frac{1+\rho}{1-\rho}\right). \tag{103}\]
CASM implements the automatic equilibration and convergence checks introduced by Van de Walle and Asta [63] to determine the number of initial samples that should be excluded from the sample average as the system equilibrates, and to calculate from the remaining observations an estimate \(\hat{\rho}\) of the autoregression parameter \(\rho\). The error \(\bar{X}-\langle X\rangle\) can then be estimated to a user-provided confidence level and calculations extended until the requested precision level is reached.
For example, assuming equilibrium has been reached and the covariance structure is well represented by Eq. 102, there is a 95% confidence level in error \(\bar{X}\pm p\) where
\[p=1.96\sqrt{\frac{\hat{\gamma}_{0}}{N}\left(\frac{1+\hat{\rho}}{1-\hat{\rho}} \right)}. \tag{104}\]
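When post-processing raw observations, the same estimate can be formed in a few lines. The sketch below is illustrative only (it is not the CASM implementation of the Van de Walle and Asta checks) and estimates \(\hat{\rho}\) from the lag-1 autocovariance before evaluating the half-width of Eq. 104.

```python
import numpy as np

def error_of_mean(X):
    """95% confidence half-width for the Monte Carlo estimate of <X> (Eq. 104).

    Assumes the observations are already equilibrated and that the
    autocovariance decays as in Eq. 102, with rho estimated from lag 1.
    """
    X = np.asarray(X, dtype=float)
    N = len(X)
    dX = X - X.mean()
    gamma0 = (dX @ dX) / N                        # lag-0 autocovariance (Eq. 101)
    gamma1 = (dX[:-1] @ dX[1:]) / N               # lag-1 autocovariance
    rho = max(0.0, min(gamma1 / gamma0, 0.999))   # clamp for numerical safety
    return 1.96 * np.sqrt(gamma0 / N * (1.0 + rho) / (1.0 - rho))
```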
For n-fold way Monte Carlo simulations, the same approach can be used to estimate the error \(\bar{X}-\langle X\rangle\) after performing a re-sampling procedure to generate \(N^{\prime}\) equally weighted observations \(X^{\prime}_{l}\) from the n-fold way observations \(X_{l}\) with weights \(w_{l}\). The procedure is to use the n-fold way observations
and associated weights to construct a step-function time series \(\tilde{X}(t)=X_{l}\) for \(\sum_{i=1}^{l-1}w_{i}<t\leq\sum_{i=1}^{l}w_{i}\), and then sample the time series at \(N^{\prime}\) regular intervals of size \(\delta t=\sum_{l}^{N}w_{l}/N^{\prime}\), such that \(X^{\prime}_{l}=\tilde{X}(l\,\delta t)\).
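This re-sampling onto an equally spaced time grid can be implemented with a single searchsorted call, as in the following sketch (illustrative only, with hypothetical names).

```python
import numpy as np

def resample_nfold(X, w, n_prime):
    """Convert weighted n-fold way observations into equally weighted ones.

    X       : observations X_l
    w       : associated weights w_l (Eq. 93)
    n_prime : number of equally spaced samples to draw from the time series
    """
    X = np.asarray(X, dtype=float)
    t_end = np.cumsum(w)                         # end time of each observation
    dt = t_end[-1] / n_prime
    t_sample = dt * np.arange(1, n_prime + 1)
    idx = np.searchsorted(t_end, t_sample)       # observation active at each sample time
    idx = np.minimum(idx, len(X) - 1)            # guard against round-off at the end
    return X[idx]
```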
CASM Monte Carlo methods allow setting a target precision level, as either an absolute value or fraction relative to the calculated mean, for any component of a non-scalar sampled quantity. When multiple quantities are requested to be converged to a target precision level, they must all be equilibrated before samples are used for averaging, and all quantities must be converged to their requested target precision for an automatically converging Monte Carlo simulation to be considered complete. When applicable to the particular Monte Carlo method being used, CASM allows setting a minimum and maximum number of Monte Carlo steps, Monte Carlo passes (where 1 pass equals 1 step per supercell site with variable degrees of freedom), samples, simulated time, or elapsed computational time. In order to ensure proper equilibration, it is generally a good practice to set minimums for the number of steps, passes, samples, or simulated time, or to delay sampling until a certain number of steps, passes, or simulated time has passed. Setting maximums is useful for stopping calculations that are very slow to converge so that a larger set of Monte Carlo simulations may complete. As long as equilibration is deemed to have been reached for all requested quantities, the estimated means and errors in the mean are reported.
CASM Monte Carlo methods also give users significant control over when sampling takes place, and how much output is generated. The default is to only output estimated means and standard errors, but users may also request to output all individual observations, or snapshots of the Monte Carlo supercell at each sampling time and in the initial and final states. In either C++ or Python, it is straightforward to implement sampling functions for new properties and new analysis functions to evaluate quantities such as \(c_{P,\tilde{\mu}_{i}}\) and \(\chi\) which depend on all observations.
## 8 The casm-flow software package
The CASM Monte Carlo methods may be run directly, but to support high-throughput simulations covering multiple Hamiltonian models or systems, we have also created a Python package, casm-flow, that helps manage running and analyzing Monte Carlo simulations. It is designed to run matching simulations on multiple sets of model system parameters, taking into account variations across model systems, such as differences in the number of ordered phases, the chemical potential ranges where they are stable, and the transition temperatures. This makes it possible to compare and analyze results that were generated with multiple models fit to different data or sampled from a probability distribution.
To manage large numbers of Monte Carlo simulations, casm-flow makes integrated use of the software package signac [160; 161]. Signac allows users to setup customized workflows and manage job submission on a cluster when dealing with large parameter spaces.
The casm-flow Python library includes a number of methods which create one or more paths of calculations in thermodynamic space, and then provides a standard set of features that allow a user to setup input files, run simulations directly or submit them on a cluster, and then help analyze and plot results. Example methods include:
* PathFlowMethod, which generates a single Monte Carlo path, with linearly or logarithmically spaced thermodynamic conditions.
* SeriesFlowMethod, which generates a series of Monte Carlo paths covering a two-dimensional thermodynamic space.
* GridFlowMethod, which generates an \(n\)-dimensional grid of Monte Carlo paths. This may be useful when running heating or cooling paths on a grid in an \(n\)-dimensional parametric chemical potential or composition space. The grid may be explicitly set by the user, or automatically determined from the predicted convex hull of a model.
* TreeFlowMethod, which generates a tree of Monte Carlo paths in \(n\) dimensions. A tree starts with a Monte Carlo run along a single path in thermodynamic space and then branches out to a second dimension, then a third, etc. This is useful for integrating free energies across temperature and chemical potential space. It is also useful for varying other parameters such as the variance-constrained Monte Carlo bias.
* SparseFlowMethod, which generates a single Monte Carlo path at the approximate center of stability for each predicted ground state of a model. This is useful for efficiently determining approximate transition temperatures before running more detailed calculations.
* SparseTreesFlowMethod, which generates a set of Monte Carlo path trees, with one starting from the approximate center of stability for each predicted ground state of a model. The extent of the tree branches can be explicitly set by the user, or automatically determined from the predicted convex hull of a model.
* ParamGridMethod, which generates a set of Monte Carlo paths with identical thermodynamic conditions for a grid of simulation parameters, such as supercell size or target precision level.
Each of the above methods can be run in parallel for multiple models, and Monte Carlo simulation parameters can be customized on a per-model basis.
The casm-flow package also includes a number of features for analyzing model systems, generating Monte Carlo simulation parameters, and visualizing Monte Carlo results. It can be used to calculate the predicted convex hull for a cluster expansion model and use the hull to determine the ground state configuration at a particular value of the chemical potential. The package includes options to automatically determine Monte Carlo simulation supercells that match size or shape criteria, such as being commensurate with a particular ordered phase or supercell in which order parameters have been defined, having a minimum volume, or being restricted to one- or two-dimensional superlattices of a particular unit cell. The package also includes functions to perform numerical free energy integration along paths in the canonical, semi-grand canonical, and variance-constrained ensembles. It includes integration with the Python software package Bokeh [162] to generate interactive plots of the convex hull, Monte Carlo simulation results, and integrated free energies, in the style used in this paper.
As an example workflow, casm-flow can construct convex hulls, identify the ground state ordered phases, and generate appropriate order parameters for a given set of cluster expansion models. Next, SparseFlowMethod can be used to generate constant chemical potential and increasing temperature Monte Carlo paths at the center of the stability region for each ground state phase of each cluster expansion model. The results of those runs can be used to identify approximate transition temperatures based on the values of the order parameters. Then, SparseTreesFlowMethod can be used to construct Monte Carlo path trees for each ground state, with maximum temperatures set by the previous result, and free energies can be calculated for each phase by integration of the results. Additionally, GridFlowMethod can be used to generate cooling Monte Carlo paths which can be integrated to calculate the free energy of the high temperature disordered phase. Finally, casm-flow can plot the minimum free energy phase along a series of paths to construct phase diagrams such as Figure 5(a) and (b).
Besides the Python library which forms the core of casm-flow, a command line program is included which allows convenient use of many of its features, including the job control features available through integration with signac. An initial public release of casm-flow is planned for fall 2023.
## 9 Conclusions
In this paper, we described the implementation of Monte Carlo techniques for the study of multicomponent crystalline materials within the Clusters Approach to Statistical Mechanics (CASM) software suite. The framework used by CASM to formulate thermodynamic potentials and kinetic transport coefficients accounting for arbitrarily complex crystal structures was presented and demonstrated with examples applying it to crystal systems of increasing complexity. The cluster expansion method used by CASM to parameterize formation energies and the local cluster expansion method used to parameterize kinetic barriers were introduced. Application of the methods implemented in CASM was demonstrated with the use of semi-grand canonical Metropolis Monte Carlo simulations to characterize first and second order phase transitions, calculate free energies, and construct phase diagrams. Additionally, the use of kinetic Monte Carlo (KMC) simulations to calculate kinetic coefficients was demonstrated. Finally, a new software package has been introduced, casm-flow, which helps automate the setup, submission, management, and analysis of Monte Carlo simulations performed using CASM.
## 10 Data availability
Input files and results will be made available on Materials Commons [163]. Detailed usage instructions for CASM, including installation instructions are available online [164].
## 11 Acknowledgments
This work was supported by the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering under Award #DE-SC0008637 as part of the Center for Predictive Integrated Structural Materials Science (PRISMS Center) at University of Michigan.
## Appendix A Parametric compositions and the semi-grand canonical potential
### Parametric compositions in CASM
The overall chemical composition of a multicomponent crystal can be tracked with a variety of different variables. Consider a crystal that can host \(s\) species over different sublattices, where one of the species can be a vacancy. The total number of atoms of each type \(i=1,...,s\) is \(N_{i}\). It is often desirable to normalize \(N_{i}\) by the number of primitive unit cells, \(N_{u}\) or by the total number of atoms in the crystal. The number of each species per primitive cell can be tracked with the variables \(n_{i}=N_{i}/N_{u}\). These variables, however, are not independent for a crystal with a fixed number of primitive unit cells \(N_{u}\) due to the constraint that \(\sum_{i}n_{i}=n\), where \(n\) is the number of crystal sites per primitive cell. Furthermore, additional constraints may exist if certain species are restricted to a subset of sublattices. These constraints reduce the number of independent composition variables and make it convenient to work with parametric composition variables.
The geometric meaning of parametric composition is illustrated in Figure A.20(a) for the simple example of a binary A-B alloy with one sublattice per unit cell. In this example, \(n_{A}\) and \(n_{B}\) track the number of A and B atoms per unit cell. They are not independent, however, since A and B share the same sites of the crystal, leading to the constraint that \(n_{A}+n_{B}=1\). There are two extremes in the subspace of allowed compositions: a crystal containing only A atoms at \(n_{A}=1\) and \(n_{B}=0\) and
Figure A.20: Examples of the allowed composition space for (a) an A-B binary alloy with a single sublattice, (b) an A-B-C ternary alloy with a single sublattice, and (c) an A-B-C ternary alloy with two sublattices in which A and B can occupy the first sublattice and B and C can occupy the second sublattice. Each system is shown with a possible choice of composition axes \(\vec{q}_{1}^{\mathrm{T}}\) and \(\vec{q}_{2}^{\mathrm{T}}\) and origin \(\vec{n}_{0}^{\mathrm{T}}\).
a crystal containing only B atoms with \(n_{A}=0\) and \(n_{B}=1\). These two extremes are illustrated in Figure A.20(a) and are labeled as A and B. Any intermediate concentration of the alloy will reside on the line connecting points A and B. The introduction of independent parametric compositions requires the choice of an origin, for example, pure A. In \(n_{A}\) and \(n_{B}\) space, this origin is represented with the vector \(\vec{n}_{0}\) as illustrated in Figure A.20(a). The space of allowed concentrations can be spanned with a second vector, \(\vec{q}_{1}\), as illustrated in Figure A.20(a). Any allowed composition of A and B in this crystal can then be expressed in vector form as
\[\vec{n}=\vec{n}_{0}+x\vec{q}_{1}\] (A.1)
where \(\vec{n}^{\mathsf{T}}=[n_{A},n_{B}]\) and \(x\) is a parametric composition within the subspace of allowed compositions. If the length of \(\vec{q}_{1}\) is equal to the distance between points A and B (the two extreme compositions) in \(n_{A}\) and \(n_{B}\) space, then the parametric composition will take on any value between 0 and 1.
Another example is shown in Figure A.20(b) for a ternary A-B-C alloy having a primitive cell containing one sublattice (i.e. \(n=1\)). There are now three composition axes with variables \(n_{A}\), \(n_{B}\) and \(n_{C}\). The constraint that \(n_{A}+n_{B}+n_{C}=1\) restricts the space of allowed compositions to a two-dimensional subspace. In Figure A.20(b) the origin is chosen to be pure A. Two vectors are needed to span the two-dimensional subspace of allowed compositions. These are denoted as \(\vec{q}_{1}\) and \(\vec{q}_{2}\) in Figure A.20(b). These vectors connect the chosen origin, \(\vec{n}_{0}^{\mathsf{T}}=[1,0,0]\) to \(\vec{n}^{\mathsf{T}}=[0,1,0]\) and \(\vec{n}^{T}=[0,0,1]\), respectively. Any concentration \(\vec{n}^{\mathsf{T}}=[n_{A},n_{B},n_{C}]\) can then be expressed as
\[\vec{n}=\vec{n}_{0}+x_{1}\vec{q}_{1}+x_{2}\vec{q}_{2}\] (A.2)
where \(x_{1}\) and \(x_{2}\) are parametric compositions.
Many crystals are more complex than the two examples above. For example, a crystal with two sites per primitive cell may host A and B atoms on the first sublattice and B and C on the second sublattice. The number of each species per primitive cell can again be tracked in a three dimensional space spanned by the variables \(n_{A}\), \(n_{B}\) and \(n_{C}\). However, not all values of \(n_{A}\), \(n_{B}\) and \(n_{C}\) are allowed due to the constraint of a fixed number of sites per primitive cell and the additional constraint that \(A\) and \(C\) atoms can only occupy one of the two sublattices, while \(B\) can occupy both sublattices.
The subspace of allowed values of \(n_{A}\), \(n_{B}\) and \(n_{C}\) can be identified by enumerating the extreme compositions. One extremum in the subspace of allowed compositions corresponds to AC, with \(\vec{n}^{\mathsf{T}}=[1,0,1]\). Two other extrema in the subspace of allowed compositions include the chemical formula AB with \(\vec{n}^{\mathsf{T}}=[1,1,0]\) and BC with \(\vec{n}^{\mathsf{T}}=[0,1,1]\). A fourth extremum corresponds to the chemical formula BB in which both sublattices are occupied by B atoms, with \(\vec{n}^{\mathsf{T}}=[0,2,0]\). Figure A.20(c) illustrates the extrema of allowed compositions for this particular crystal and highlights the two-dimensional subspace of allowed values of \(n_{A}\), \(n_{B}\) and \(n_{C}\).
Parametric compositions can again be introduced to navigate within the two-dimensional subspace of allowed compositions. First an origin needs to be chosen. For example, \(\vec{n}_{0}^{\mathsf{T}}=[1,0,1]\) corresponding to the chemistry AC could be a possible origin. Two spanning vectors for the two-dimensional subspace are \(\vec{q}_{1}\) and \(\vec{q}_{2}\) as illustrated in Figure A.20(c). The concentration of the compound in terms of parametric compositions is again of the form
\[\vec{n}=\vec{n}_{0}+x_{1}\vec{q}_{1}+x_{2}\vec{q}_{2}\] (A.3)
where \(x_{1}\) and \(x_{2}\) are parametric compositions.
In general, for a solid containing \(s\) species, the number of each species per primitive cell as collected in the \(s\)-dimensional vector, \(\vec{n}\), can be expressed in terms of parametric compositions \(\vec{x}\) according to
\[\vec{n}=\vec{n}_{0}+\mathbf{Q}\vec{x}\] (A.4)
where as before, \(\vec{n}_{0}\) points to a chosen origin and the matrix \(\mathbf{Q}=[\vec{q}_{1},...,\vec{q}_{k}]\) collects as columns \(k\) independent vectors that span the subspace of allowed values of \(\vec{n}\), consistent with the constraint of a fixed number of sites per primitive cell and any additional sublattice constraints. The matrix \(\mathbf{Q}\), therefore, has dimensions \(s\times k\), leading to \(k<s\) parametric compositions, \(\vec{x}^{\mathsf{T}}=[x_{1},...,x_{k}]\).
The \(k\) vectors \(\vec{q}_{i}\) that span the subspace of allowed values of \(\vec{n}\) do not necessarily form an orthogonal set. It therefore becomes useful to introduce a second set of vectors, \([\vec{r}_{1},\ldots,\vec{r}_{k}]\), that span the same subspace and that satisfy \(\vec{r}_{i}^{\mathsf{T}}\vec{q}_{j}=\delta_{i,j}\) for \(i,j=1,\ldots,k\), with \(\delta_{i,j}\) being the Kronecker delta. The matrix
\[\mathbf{R}=[\vec{r}_{1},\ldots,\vec{r}_{k}],\] (A.5)
whose transpose is the left pseudoinverse of \(\mathbf{Q}\) (i.e. \(\mathbf{R}^{\mathsf{T}}\mathbf{Q}=I\) where \(I\) is a \(k\times k\) identity matrix) makes it possible to determine the parametric compositions
\(\vec{x}\) given the concentration variables per unit cell, \(\vec{n}\), according to
\[\vec{x}=\mathbf{R}^{\mathsf{T}}(\vec{n}-\vec{n}_{0}) \tag{A.6}\]
Since \(\mathbf{Q}\) describes the allowed composition space, it has full column rank and \(\mathbf{R}^{\mathsf{T}}\) can be calculated using:
\[\mathbf{R}^{\mathsf{T}}=(\mathbf{Q}^{\mathsf{T}}\mathbf{Q})^{-1}\mathbf{Q}^{\mathsf{T}}. \tag{A.7}\]
The set of \(\vec{r}_{i}\) vectors become equal to the set of \(\vec{q}_{i}\) vectors when the latter form an orthonormal set.
When imposing the constraints of the crystal on diffusional flux expressions in Appendix B, it will be convenient to utilize the orthogonal projection operator defined as
\[\mathbf{P}=\sum_{i=1}^{k}\vec{r}_{i}\vec{q}_{i}^{\mathsf{T}}=\mathbf{R}\mathbf{Q}^{ \mathsf{T}}, \tag{A.8}\]
which is an \(s\times s\) matrix with rank \(k\). Any vector \(\vec{v}=\vec{n}-\vec{n}_{0}\) in the full \(s\) dimensional composition space will be projected onto the subspace of allowed compositions spanned by the sets \(\vec{q}_{i}\) or \(\vec{r}_{i}\) when multiplied by \(\mathbf{P}\). Any vector \(\vec{v}\) that is already in the space spanned by the column vectors of \(\mathbf{Q}\) (i.e. \(\vec{v}=\mathbf{Q}\vec{x}\)) is unaffected by \(\mathbf{P}\) (i.e. \(\mathbf{P}\vec{v}=\vec{v}\)), which can be verified by substituting Eq. A.7 into Eq. A.8 and using the fact that \((\mathbf{Q}^{\mathsf{T}}\mathbf{Q})^{-1}\) is a symmetric matrix and is equal to its transpose.
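These relations amount to a few lines of linear algebra. The sketch below applies them to the ternary single-sublattice example of Figure A.20(b), whose origin and spanning vectors are given above; the variable names are hypothetical.

```python
import numpy as np

# Ternary A-B-C alloy on a single sublattice (Figure A.20(b)):
n0 = np.array([1.0, 0.0, 0.0])                 # origin: pure A
Q = np.array([[-1.0, -1.0],
              [ 1.0,  0.0],
              [ 0.0,  1.0]])                   # columns q_1 and q_2

R_T = np.linalg.inv(Q.T @ Q) @ Q.T             # Eq. A.7, left pseudoinverse of Q
P = R_T.T @ Q.T                                # projection operator of Eq. A.8

n = np.array([0.5, 0.3, 0.2])                  # an allowed composition per unit cell
x = R_T @ (n - n0)                             # parametric compositions (Eq. A.6)
assert np.allclose(n0 + Q @ x, n)              # consistency with Eq. A.4
assert np.allclose(P @ (n - n0), n - n0)       # P leaves allowed vectors unchanged
```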
In general, it is most convenient to choose the length of each \(\vec{q}_{i}\) such that it corresponds to one unit exchange of atoms per unit cell, i.e. the resulting vector is of the form \([0,...,1,...,-1,...0]\). In the examples treated so far, this has always been the case for the chosen \(\vec{q}\) vectors. For crystals with more than one sublattice per primitive unit cell, this will generally lead to parametric compositions that are not restricted to the interval [0,1]. To illustrate this, consider the example of a crystal with two sublattices in which A and B can occupy the first sublattice and A, B and C can occupy the second sublattice. The composition space for this example is illustrated in Figure A.21.
As the origin of the parametric composition space, one possible choice is pure A, in which both sublattices are completely occupied by A atoms, i.e. \(\vec{n}_{0}^{\mathsf{T}}=[2,0,0]\). The subspace of allowed compositions can be spanned by \(\vec{q}_{1}^{\mathsf{T}}=[-1,1,0]\) and \(\vec{q}_{2}^{\mathsf{T}}=[-1,0,1]\) as illustrated in Figure 12. Notice that \(\vec{q}_{1}\) does not extend all the way from the AA composition to the BB composition, but has a length that is half that distance. This is to ensure that moving along that axis by a unit distance of \(\vec{q}_{1}\) corresponds to an exchange of one A atom with one B atom. By using this particular vector, the parametric composition corresponding to this axis, \(x_{1}\), ranges from 0 to 2, as two A atoms must be replaced by two B atoms to go from the AA composition to the BB composition. This convention ensures that the exchange chemical potentials that are conjugate to the resulting parametric compositions are expressed per exchanged atom, as opposed to per group of multiple atoms.
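The relations above are straightforward to verify numerically for this example. The following minimal sketch is a hedged illustration, assuming NumPy and the species ordering \(\vec{n}=[n_{A},n_{B},n_{C}]\), and is not part of the original derivation; it builds \(\mathbf{Q}\) and \(\mathbf{R}\), checks the pseudoinverse and projector properties of Eqs. A.7 and A.8, and recovers the parametric composition \(x_{1}=2\) of the BB end member.

```python
# Minimal numerical check of Eqs. (A.4)-(A.8) for the two-sublattice example
# (A and B on sublattice 1; A, B and C on sublattice 2).
import numpy as np

n0 = np.array([2.0, 0.0, 0.0])                  # origin: both sites occupied by A
Q = np.array([[-1.0, -1.0],
              [ 1.0,  0.0],
              [ 0.0,  1.0]])                    # columns are q_1 and q_2

# R^T is the left pseudoinverse of Q, Eq. (A.7)
RT = np.linalg.inv(Q.T @ Q) @ Q.T
R = RT.T
assert np.allclose(RT @ Q, np.eye(2))           # r_i^T q_j = delta_ij

P = R @ Q.T                                     # projector of Eq. (A.8)
assert np.allclose(P @ P, P)                    # idempotent
assert np.allclose(P @ Q, Q)                    # leaves span(Q) unchanged

# Parametric composition of the BB end member, n = [0, 2, 0]:
n_BB = np.array([0.0, 2.0, 0.0])
x = RT @ (n_BB - n0)
print(x)                                        # -> [2., 0.], i.e. x_1 runs from 0 to 2
```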
### Examples of semi-grand canonical potentials
It is instructive to inspect the form of semi-grand canonical potentials and the exchange chemical potentials \(\vec{\tilde{\mu}}\) for specific examples. As defined in Section 2.2, the semi-grand canonical potential takes the form
\[\phi=g-\vec{\tilde{\mu}}^{\mathsf{T}}\vec{x}=\vec{\mu}^{\mathsf{T}}\vec{n}_{0} \tag{A.9}\]
where \(\phi=\Phi/N_{u}\) and \(g=G/N_{u}\) and where \(\vec{\tilde{\mu}}=\mathbf{Q}^{\mathsf{T}}\vec{\mu}\). Another important thermodynamic quantity is the semi-grand canonical enthalpy defined as
\[\omega=e-\vec{\tilde{\mu}}^{\mathsf{T}}\vec{x} \tag{A.10}\]
Figure 12: The space of allowed compositions for a crystal with two sublattices in which A and B can occupy the first sublattice and A, B and C can occupy the second sublattice. A choice of composition axes \(\vec{q}_{1}^{\mathsf{T}}=[-1,1,0]\) and \(\vec{q}_{2}^{\mathsf{T}}=[-1,0,1]\) with origin \(\vec{n}_{0}^{\mathsf{T}}=[2,0,0]\) is shown along with the stoichiometric compositions at the extrema of the allowed composition space: AA, AC, BB and BC.
where \(e\) is the average energy per primitive cell. The next sections illustrate these potentials for concrete examples.
#### a.2.1 Binary and ternary alloys on a single sublattice
For the simple binary alloy with the choice of origin as shown in Figure 15(a),
\[\vec{n}_{0}=\left[\begin{array}{c}1\\ 0\end{array}\right]\hskip 28.452756pt\vec{q}_{1}=\left[\begin{array}{c}-1\\ 1\end{array}\right] \tag{A.11}\]
Since the subspace of allowed compositions is one dimensional, \(\mathbf{Q}\) is a \(2\times 1\) matrix
\[\mathbf{Q}=\left[\begin{array}{c}-1\\ 1\end{array}\right], \tag{A.12}\]
According to Equation 10, the exchange chemical potential is equal to
\[\tilde{\mu}_{1}=\mu_{B}-\mu_{A}, \tag{A.13}\]
and is the intensive variable that is conjugate to the parametric composition \(x_{1}\). The semi-grand canonical free energy becomes equal to the chemical potential of A since, using Equation 12
\[\phi=\vec{\mu}^{\mathrm{T}}\vec{n}_{0}=[\mu_{A},\mu_{B}]\left[\begin{array}{c}1\\ 0\end{array}\right]=\mu_{A} \tag{A.14}\]
In semi-grand canonical Monte Carlo simulations, \(\tilde{\mu}_{1}\) together with temperature T are the input variables, while quantities such as the average parametric composition \(\langle x_{1}\rangle\) and the average semi-grand canonical enthalpy, \(\langle\omega\rangle=\langle e-\tilde{\mu}_{1}x_{1}\rangle\) are outputs. Since \(\tilde{\mu}_{1}\) is a difference in the chemical potentials of B and A, it is necessary to know both \(\tilde{\mu}_{1}\) and the semi-grand canonical potential \(\phi=\mu_{A}\) in order to determine the individual chemical potentials \(\mu_{A}\) and \(\mu_{B}\). This requires a free energy integration step described in Section 6.
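To make the role of the inputs \(\tilde{\mu}_{1}\) and \(T\) concrete, the sketch below implements a minimal semi-grand canonical Metropolis loop for a binary alloy on a single sublattice. The one-dimensional ring of sites and the nearest-neighbour pair energies are illustrative assumptions only; they are a toy energy model, not the Hamiltonian of the main text.

```python
# Minimal semi-grand canonical Metropolis sketch for a binary A-B alloy.
import numpy as np

rng = np.random.default_rng(0)

def pair_energy(s, J):
    """Total nearest-neighbour energy of a 1D ring; s[i] = 0 (A) or 1 (B)."""
    return sum(J[s[i], s[(i + 1) % len(s)]] for i in range(len(s)))

def sgc_metropolis(n_sites=64, n_steps=20000, mu_tilde=0.1, kT=0.5):
    # Illustrative pair energies J[a, b] for species a, b in {A=0, B=1}
    J = np.array([[0.0, 0.2],
                  [0.2, 0.0]])
    s = rng.integers(0, 2, n_sites)             # random initial occupation
    E = pair_energy(s, J)
    x1_samples = []
    for _ in range(n_steps):
        i = rng.integers(n_sites)
        s_new = s.copy()
        s_new[i] = 1 - s_new[i]                 # flip the identity A <-> B on site i
        dE = pair_energy(s_new, J) - E
        dNB = 1 if s_new[i] == 1 else -1        # change in the number of B atoms
        dOmega = dE - mu_tilde * dNB            # change in E - mu_tilde * N_B
        if dOmega <= 0 or rng.random() < np.exp(-dOmega / kT):
            s, E = s_new, E + dE
        x1_samples.append(s.mean())             # parametric composition x1 = N_B / N
    return np.mean(x1_samples[n_steps // 2:])   # <x1> after a crude burn-in

print(sgc_metropolis())
```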
Similar results are obtained for the ternary A-B-C alloy with origin and parametric composition axes as chosen in Figure 15(b). Now there are two exchange chemical potentials, with
\[\tilde{\mu}_{1} =\mu_{B}-\mu_{A}, \tag{A.15}\] \[\tilde{\mu}_{2} =\mu_{C}-\mu_{A} \tag{A.16}\]
while the semi-grand canonical potential becomes
\[\phi=g-\tilde{\mu}_{1}x_{1}-\tilde{\mu}_{2}x_{2}=\mu_{A} \tag{A.17}\]
and the semi-grand canonical enthalpy (at zero pressure) per primitive cell becomes
\[\omega=e-\tilde{\mu}_{1}x_{1}-\tilde{\mu}_{2}x_{2} \tag{A.18}\]
#### a.2.2 Two binary sublattices sharing a common species
Consider the example of Figure 15(c) again for a crystal with two sites in the primitive cell, with the first sublattice hosting A and B and the second sublattice hosting B and C. For the choice of origin in Figure 15(c) and the choice of parametric composition axes
\[\vec{n}_{0}=\left[\begin{array}{c}1\\ 0\\ 1\end{array}\right]\hskip 14.226378pt\vec{q}_{1}=\left[\begin{array}{c}0\\ 1\\ -1\end{array}\right]\hskip 14.226378pt\vec{q}_{2}=\left[\begin{array}{c}-1\\ 1\\ 0\end{array}\right] \tag{A.19}\]
The two-dimensional composition subspace leads to a \(3\times 2\)\(\mathbf{Q}\) matrix
\[\mathbf{Q}=\left[\begin{array}{cc}0&-1\\ 1&1\\ -1&0\end{array}\right], \tag{A.20}\]
This matrix yields for the exchange chemical potentials
\[\tilde{\mu}_{1} =\mu_{B}-\mu_{C}, \tag{A.21}\] \[\tilde{\mu}_{2} =\mu_{B}-\mu_{A} \tag{A.22}\]
while the semi-grand canonical potential becomes
\[\phi=g-\tilde{\mu}_{1}x_{1}-\tilde{\mu}_{2}x_{2}=\mu_{A}+\mu_{C} \tag{A.23}\]
#### a.2.3 Binary and ternary sublattices sharing multiple common species
As a final example, consider the more complex crystal with two sublattices per primitive cell with the first sublattice hosting A and B and the second sublattice hosting A, B and C. The composition space for this example is shown in Figure 17. In Figure 17, the origin and parametric composition axes are chosen as
\[\vec{n}_{0}=\left[\begin{array}{c}2\\ 0\\ 0\end{array}\right]\hskip 14.226378pt\vec{q}_{1}=\left[\begin{array}{c}-1\\ 1\\ 0\end{array}\right]\hskip 14.226378pt\vec{q}_{2}=\left[\begin{array}{c}-1\\ 0\\ 1\end{array}\right] \tag{A.24}\]
This leads to
\[\mathbf{Q}=\left[\begin{array}{cc}-1&-1\\ 1&0\\ 0&1\end{array}\right], \tag{A.25}\]
The exchange chemical potentials then become
\[\tilde{\mu}_{1} =\mu_{B}-\mu_{A}, \tag{A.26}\] \[\tilde{\mu}_{2} =\mu_{C}-\mu_{A} \tag{A.27}\]
The semi-grand canonical potential becomes
\[\phi=g-\tilde{\mu}_{1}x_{1}-\tilde{\mu}_{2}x_{2}=2\mu_{A} \tag{A.28}\]
The factor of two in front of the chemical potential of A emerges since the primitive unit cell contains two sites, which are both occupied by A atoms at the origin of the parametric composition space.
## Appendix B Fluxes and driving forces for diffusion in crystals
### Crystallographic constraints on flux expressions
Diffusion within a crystal that occurs far away from extended defects proceeds under crystallographic constraints that affect the form of the flux expressions, thermodynamic driving forces and the Onsager transport coefficients. Indeed, diffusion within single crystal regions of a solid can only lead to a spatial redistribution of different chemical species, but cannot result in the extension of the crystal. It, therefore, occurs at constant \(N_{u}\), the number of unit cells. This imposes constraints on diffusional fluxes and on the form of diffusional potentials (driving forces). To establish these constraints, it is necessary to revisit the composition variables of the previous section. For a crystal having a fixed number of unit cells, \(N_{u}\), the concentrations of \(s\) species per unit cell, \(\vec{n}^{\mathsf{T}}\), reside in a lower dimensional subspace spanned by a set of vectors \([\vec{q}_{1},\ldots,\vec{q}_{k}]\). Parametric compositions \(\vec{x}\) serve as coordinates in this subspace. Diffusion can only result in composition changes of \(\vec{n}\) that stay in the subspace spanned by \([\vec{q}_{1},\ldots,\vec{q}_{k}]\). This means that any changes in \(\vec{n}\) that leave this subspace are forbidden.
The subspace that is orthogonal to \([\vec{q}_{1},\ldots,\vec{q}_{k}]\) has a dimension \(s-k\) and can be spanned by a set of orthogonal vectors \([\vec{t}_{k+1},\ldots,\vec{t}_{s}]\). Such a set of vectors can always be found by finding the null space of \(\mathbf{Q}^{\mathsf{T}}\), i.e. the left null space of \(\mathbf{Q}=[\vec{q}_{1},\ldots,\vec{q}_{k}]\). The spanning vectors of the subspace orthogonal to \(\mathbf{Q}\) can be collected in the \(s\times(s-k)\) matrix
\[\mathbf{T}=[\vec{t}_{k+1},\ldots,\vec{t}_{s}]. \tag{B.1}\]
The constraint of a constant number of unit cells \(N_{u}\) on the composition variable \(\vec{n}\) can be expressed as
\[\mathbf{T}^{\mathsf{T}}(\vec{n}-\vec{n}_{0})=\vec{0} \tag{B.2}\]
which ensures that variations in the concentration \(\vec{n}\) do not stray into the subspace spanned by \([\vec{t}_{k+1},\ldots,\vec{t}_{s}]\). Equation B.2 forms a set of \(s-k\) linear constraints on the concentration variables \(\vec{n}^{\mathsf{T}}=[n_{A},n_{B},\ldots]\).
Constraints similar to Eq. B.2 also apply to the diffusional fluxes of the different species within a perfect crystal having a fixed number of unit cells \(N_{u}\). This is a consequence of the particle conservation equation, which relates each concentration variable \(n_{m}\) for species \(m\) to a corresponding flux \(J_{m}\) according to
\[\frac{\partial(n_{m}/v_{u})}{\partial t}=-\nabla J_{m} \tag{B.3}\]
where \(v_{u}\) represents the volume of the unit cell of the crystal. Due to the linearity of time and space derivatives, the constraints of Eq. B.2 that apply to \(\vec{n}\) transfer to the fluxes \(\vec{J}\) and can be expressed as
\[\mathbf{T}^{\mathsf{T}}\vec{J}=\vec{0} \tag{B.4}\]
where \(\vec{J}\) collects the fluxes of the \(s\) species of the crystal. There are therefore \(s-k\) linear constraints on the fluxes of each element within the crystal when holding the number of unit cells constant.
The constraints on the fluxes also affect the Onsager transport coefficients that appear in the flux expressions
\[\vec{J}=-\mathbf{L}\nabla\vec{\mu}, \tag{B.5}\]
Since the constraints of Eq. B.4 hold independent of the values of the driving forces \(\nabla\vec{\mu}\) appearing in Eq. B.5, the following must hold for the Onsager matrix
\[\mathbf{T}^{\mathsf{T}}\mathbf{L}=\mathbf{0} \tag{B.6}\]
where the \(\mathbf{0}\) represents a \((s-k)\times s\) matrix of zeros. These equations, therefore, constitute a set of \((s-k)\times s\) linear relationships on the Onsager transport coefficients that emerge due to the constraint of a fixed number of unit cells. Due to the Onsager reciprocity relationships (\(\mathbf{L}=\mathbf{L}^{T}\)), the above equation can also be rewritten as
\[\mathbf{L}\mathbf{T}=\mathbf{0} \tag{B.7}\]
where the \(\mathbf{0}\) now represents a \(s\times(s-k)\) matrix of zeros.
The linear relationships between the Onsager transport coefficients can be accounted for by rewriting the flux expressions, Eq. B.5. To this end, it is useful to rely on projection operators for the subspace of allowed compositions when fixing the
number of unit cells, which was introduced in Appendix A, Eq. A.8, as
\[\mathbf{P}_{c}=\boldsymbol{R}\boldsymbol{Q}^{\mathsf{T}} \tag{B.8}\]
and for the subspace that is orthogonal to it, which can be expressed in terms of the matrix \(\boldsymbol{T}\) as
\[\mathbf{P}_{o}=\sum_{i=k+1}^{s}\vec{t}_{i}\vec{t}_{i}^{\mathsf{T}}=\boldsymbol{T}\boldsymbol{T}^{\mathsf{T}} \tag{B.9}\]
The projection operators, \(\mathbf{P}_{c}\) projecting onto the allowed composition space and \(\mathbf{P}_{o}\) projecting onto its kernel, are complementary, and sum to the identity matrix
\[\mathbf{I}=\mathbf{P}_{c}+\mathbf{P}_{o}=\boldsymbol{R}\boldsymbol{Q}^{\mathsf{T}}+\boldsymbol{T}\boldsymbol{T}^{\mathsf{T}}. \tag{B.10}\]
The identity operator, Eq. B.10, can be inserted between the Onsager matrix and the gradients in chemical potentials in the flux expressions, Eq. B.5, to yield
\[\vec{J}=-\boldsymbol{L}\boldsymbol{R}\nabla(\boldsymbol{Q}^{\mathsf{T}}\vec{\mu}) \tag{B.11}\]
where Eq. B.7 was used. By further multiplying both sides by \(\boldsymbol{R}^{\mathsf{T}}\), the flux expressions can be rewritten as
\[\vec{\tilde{J}}=-\tilde{\boldsymbol{L}}\nabla\vec{\tilde{\mu}} \tag{B.12}\]
where the \(k\) independent exchange fluxes are defined as
\[\vec{\tilde{J}}=\boldsymbol{R}^{\mathsf{T}}\vec{J} \tag{B.13}\]
and where
\[\tilde{\boldsymbol{L}}=\boldsymbol{R}^{\mathsf{T}}\boldsymbol{L}\boldsymbol{R} \tag{B.14}\]
is a \(k\times k\) Onsager transport coefficient matrix that is consistent with the constraints of a constant number of unit cells. The \(\vec{\tilde{\mu}}=\boldsymbol{Q}^{\mathsf{T}}\vec{\mu}\) are the exchange chemical potentials defined in Appendix A. These expressions describe the fluxes, transport coefficients and thermodynamic driving forces for diffusion in a crystal with a fixed number of unit cells \(N_{u}\).
The transformed fluxes, \(\vec{\tilde{J}}\), reside in the dual space of the exchange chemical potentials, \(\vec{\tilde{\mu}}\).
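The construction of \(\boldsymbol{T}\), the constraints of Eqs. B.2, B.4 and B.7, and the projection of Eq. B.14 can be checked numerically for any choice of \(\boldsymbol{Q}\). The sketch below assumes NumPy/SciPy; the randomly generated \(\boldsymbol{Q}\) and the way the constrained Onsager matrix is constructed are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(2)

s, k = 4, 2                                     # s species, k independent composition axes
Q = rng.integers(-1, 2, size=(s, k)).astype(float)
while np.linalg.matrix_rank(Q) < k:             # ensure full column rank
    Q = rng.integers(-1, 2, size=(s, k)).astype(float)

R = Q @ np.linalg.inv(Q.T @ Q)                  # R^T = (Q^T Q)^{-1} Q^T, Eq. (A.7)
T = null_space(Q.T)                             # spans the s-k disallowed directions, Eq. (B.1)
assert np.allclose(T.T @ Q, 0.0)                # orthogonality of the two subspaces

# An Onsager matrix obeying the crystal constraint L T = 0 (Eq. B.7) can be built by
# sandwiching an arbitrary symmetric matrix between the projector P_c = R Q^T (Eq. B.8):
Pc = R @ Q.T
A = rng.standard_normal((s, s))
L = Pc @ (A @ A.T) @ Pc.T
assert np.allclose(L @ T, 0.0)                  # Eq. (B.7)
assert np.allclose(T.T @ L, 0.0)                # Eq. (B.6), by symmetry of L

L_tilde = R.T @ L @ R                           # reduced k x k matrix, Eq. (B.14)
print(L_tilde.shape)                            # -> (2, 2)
```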
### Examples of flux expressions
#### b.2.1 Diffusion in a binary alloy with a single sublattice by direct exchange
Consider the simple binary alloy in which diffusion occurs through direct exchanges between A and B atoms on a single sublattice. As detailed in A.2.1, for such a system
\[\boldsymbol{Q}=\left[\begin{array}{c}-1\\ 1\end{array}\right], \tag{B.15}\]
is consistent with the composition formula \(\mathrm{A}_{1-x_{1}}\mathrm{B}_{x_{1}}\). The transpose of the left pseudo inverse of \(\boldsymbol{Q}\) is
\[\boldsymbol{R}=\frac{1}{2}\left[\begin{array}{c}-1\\ 1\end{array}\right]. \tag{B.16}\]
The space that is orthogonal to the space of allowed compositions is the column vector space
\[\boldsymbol{T}=\frac{1}{\sqrt{2}}\left[\begin{array}{c}1\\ 1\end{array}\right]. \tag{B.17}\]
Hence according to Eq. B.4, the crystallographic constraints require that
\[\frac{1}{\sqrt{2}}[1,1]\left[\begin{array}{c}J_{A}\\ J_{B}\end{array}\right]=\frac{1}{\sqrt{2}}(J_{A}+J_{B})=0, \tag{B.18}\]
which imposes the constraint that a flux in A atoms must be compensated by an opposite flux of B atoms (i.e. \(J_{A}=-J_{B}\)) in order to preserve crystal sites.
For a binary system with direct exchange hops, the flux expressions take the form
\[\left[\begin{array}{c}J_{A}\\ J_{B}\end{array}\right]=-\left[\begin{array}{cc}L_{AA}&L_{AB}\\ L_{AB}&L_{BB}\end{array}\right]\left[\begin{array}{c}\nabla\mu_{A}\\ \nabla\mu_{B}\end{array}\right], \tag{B.19}\]
Projecting the flux expressions to the subspace of allowed concentrations consistent with a fixed number of unit cells yields
\[\tilde{J}_{1}=-\tilde{L}\nabla\tilde{\mu}_{1} \tag{B.20}\]
where according to Eq. B.14,
\[\tilde{L}=\left[\begin{array}{cc}-1/2&1/2\end{array}\right]\left[\begin{array}{cc}L_{AA}&L_{AB}\\ L_{AB}&L_{BB}\end{array}\right]\left[\begin{array}{c}-1/2\\ 1/2\end{array}\right] \tag{B.21}\]
which yields
\[\tilde{L}=\frac{1}{4}(L_{AA}+L_{BB}-2L_{AB}) \tag{B.22}\]
This expression can be further simplified by exploiting the constraints on the \(\boldsymbol{L}\) matrix as codified by Eq. B.7, which for this example can be expressed explicitly as
\[\left[\begin{array}{cc}L_{AA}&L_{AB}\\ L_{AB}&L_{BB}\end{array}\right]\left[\begin{array}{c}1/\sqrt{2}\\ 1/\sqrt{2}\end{array}\right]=\left[\begin{array}{c}0\\ 0\end{array}\right] \tag{B.23}\]
These two linear equations make it possible to eliminate two of the three Onsager coefficients such that Eq. B.22 can be rewritten as
\[\tilde{L}=L_{BB}\] (B.24)
The exchange chemical potential is equal to
\[\tilde{\mu}_{1}=\left[\begin{array}{cc}-1&1\end{array}\right]\left[\begin{array} []{c}\mu_{A}\\ \mu_{B}\end{array}\right]=\mu_{B}-\mu_{A},\] (B.25)
while the exchange flux is
\[\tilde{J}_{1}=\left[\begin{array}{cc}-1/2&1/2\end{array}\right]\left[ \begin{array}{c}J_{A}\\ J_{B}\end{array}\right]=\frac{1}{2}(J_{B}-J_{A})=J_{B}\] (B.26)
where the third equality follows by using Eq. B.18. The flux expression projected onto the subspace consistent with a constant number of crystal unit cells can therefore be expressed as
\[J_{B}=-L_{BB}\nabla(\mu_{B}-\mu_{A})\] (B.27)
#### b.2.2 Diffusion in a binary alloy with a single sublattice mediated by a vacancy mechanism
A common scenario is diffusion in a binary A-B alloy that is mediated by a vacancy. The crystal contains three species, A, B and vacancies, denoted Va, and the concentration per unit cell is \(\vec{n}^{\mathrm{T}}=[n_{A},n_{B},n_{Va}]\). For the purpose of setting up flux expressions, it is convenient to choose as the origin for the parametric composition variables the crystal in which all sites are fully vacant, i.e.
\[\vec{n}_{0}=\left[\begin{array}{c}0\\ 0\\ 1\end{array}\right]\quad\vec{q}_{1}=\left[\begin{array}{c}1\\ 0\\ -1\end{array}\right]\quad\vec{q}_{2}=\left[\begin{array}{c}0\\ 1\\ -1\end{array}\right].\] (B.28)
The space of allowed compositions consistent with the constraints of the crystal, as illustrated in Figure B.22, is
\[\mathbf{Q}=\left[\begin{array}{cc}1&0\\ 0&1\\ -1&-1\end{array}\right],\quad\mathbf{R}=\left[\begin{array}{cc}2/3&-1/3\\ -1/3&2/3\\ -1/3&-1/3\end{array}\right]\] (B.29)
The space that is orthogonal to the space of allowed compositions is spanned by \(\vec{t}_{3}^{\mathrm{T}}=1/\sqrt{3}[1,1,1]\), such that
\[\mathbf{T}=\left[\begin{array}{c}1/\sqrt{3}\\ 1/\sqrt{3}\\ 1/\sqrt{3}\end{array}\right].\] (B.30)
According to Eq. B.4, the crystallographic constraints require that
\[\frac{1}{\sqrt{3}}\left[\begin{array}{cc}1&1&1\end{array}\right]\left[ \begin{array}{c}J_{A}\\ J_{B}\\ J_{Va}\end{array}\right]=\frac{1}{\sqrt{3}}(J_{A}+J_{B}+J_{Va})=0,\] (B.31)
The flux expressions take the form
\[\left[\begin{array}{c}J_{A}\\ J_{B}\\ J_{Va}\end{array}\right]=-\left[\begin{array}{ccc}L_{AA}&L_{AB}&L_{AVa}\\ L_{AB}&L_{BB}&L_{BVa}\\ L_{AVa}&L_{BVa}&L_{VaVa}\end{array}\right]\left[\begin{array}{c}\nabla\mu_{ A}\\ \nabla\mu_{B}\\ \nabla\mu_{Va}\end{array}\right],\] (B.32)
The linear constraints on the Onsager coefficients due to Eq. B.7 make it possible to eliminate the Onsager coefficients involving vacancies (i.e. \(L_{AVa}=-(L_{AA}+L_{AB})\), \(L_{BVa}=-(L_{AB}+L_{BB})\) and \(L_{VaVa}=-(L_{AVa}+L_{BVa})\)). Projecting the Onsager coefficient matrix onto the fixed crystal subspace according to Eq. B.14 leads to
\[\tilde{\mathbf{L}}=\left[\begin{array}{cc}L_{AA}&L_{AB}\\ L_{AB}&L_{BB}\end{array}\right],\] (B.33)
where the above linear relationships between the Onsager coefficients were used.
Projecting the fluxes according to Eq. B.13 yields
Figure B.22: The space of allowed compositions for a crystal with a single sublattice that can be occupied by A atoms, B atoms and vacancies (Va). A choice of composition axes \(\vec{q}_{1}^{\mathrm{T}}=[1,0,-1]\) and \(\vec{q}_{2}^{\mathrm{T}}=[0,1,-1]\) with origin \(\vec{n}_{0}^{\mathrm{T}}=[0,0,1]\) is shown along with the stoichiometric compositions at the extrema of the allowed composition space: A, B and Va. This is consistent with the composition formula \(\mathrm{A}_{x_{1}}\mathrm{B}_{x_{2}}\mathrm{Va}_{1-x_{1}-x_{2}}\).
\[\left[\begin{array}{c}\tilde{J}_{1}\\ \tilde{J}_{2}\end{array}\right]=\frac{1}{3}\left[\begin{array}{ccc}2&-1&-1\\ -1&2&-1\end{array}\right]\left[\begin{array}{c}J_{A}\\ J_{B}\\ J_{Va}\end{array}\right]=\left[\begin{array}{c}J_{A}\\ J_{B}\end{array}\right]\] (B.34)
where the constraint on the fluxes, Eq. B.31, was used to eliminate the vacancy flux. The diffusion potentials, which are the exchange chemical potentials, become
\[\left[\begin{array}{c}\tilde{\mu}_{1}\\ \tilde{\mu}_{2}\end{array}\right]=\left[\begin{array}{ccc}1&0&-1\\ 0&1&-1\end{array}\right]\left[\begin{array}{c}\mu_{A}\\ \mu_{B}\\ \mu_{Va}\end{array}\right]=\left[\begin{array}{c}\mu_{A}-\mu_{Va}\\ \mu_{B}-\mu_{Va}\end{array}\right]\] (B.35)
The resulting flux expressions in the fixed crystal frame become
\[\left[\begin{array}{c}J_{A}\\ J_{B}\end{array}\right]=-\left[\begin{array}{cc}L_{AA}&L_{AB}\\ L_{AB}&L_{BB}\end{array}\right]\left[\begin{array}{c}\nabla(\mu_{A}-\mu_{Va} )\\ \nabla(\mu_{B}-\mu_{Va})\end{array}\right],\] (B.36)
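The reduction from the \(3\times 3\) Onsager matrix to the \(2\times 2\) matrix of Eq. B.33 can be verified numerically. The sketch below assumes NumPy and uses illustrative coefficient values; it builds an \(\boldsymbol{L}\) that satisfies the vacancy eliminations stated above and confirms that \(\boldsymbol{R}^{\mathsf{T}}\boldsymbol{L}\boldsymbol{R}\) reproduces Eq. B.33.

```python
import numpy as np

# Free parameters (illustrative values)
L_AA, L_AB, L_BB = 1.0, -0.3, 0.8

# Eliminate the vacancy-related coefficients using L T = 0 with T ~ [1, 1, 1]^T
L_AVa = -(L_AA + L_AB)
L_BVa = -(L_AB + L_BB)
L_VaVa = -(L_AVa + L_BVa)

L = np.array([[L_AA,  L_AB,  L_AVa],
              [L_AB,  L_BB,  L_BVa],
              [L_AVa, L_BVa, L_VaVa]])

Q = np.array([[ 1.0,  0.0],
              [ 0.0,  1.0],
              [-1.0, -1.0]])
R = Q @ np.linalg.inv(Q.T @ Q)                 # the R of Eq. (B.29)

L_tilde = R.T @ L @ R                          # projection of Eq. (B.14)
assert np.allclose(L_tilde, [[L_AA, L_AB],
                             [L_AB, L_BB]])    # reproduces Eq. (B.33)
print(L_tilde)
```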
#### b.2.3 Interstitial diffusion with a fixed host lattice
Another common system is one in which there is a fixed sublattice of A atoms and an interstitial sublattice that allows vacancies and B atoms. This crystal also contains three species, A, B and Va, and the concentration per unit cell can be expressed as the vector \(\vec{n}^{\sf T}=[n_{A},n_{B},n_{Va}]\). Clearly, this system behaves like a binary alloy on the interstitial sublattice, but with a fixed concentration of A atoms. This makes for a useful system to use as an exercise because the results can be easily checked against intuition.
The choice of composition origin and axis vectors consistent with the composition formula \(\text{AB}_{x_{1}}\text{Va}_{1-x_{1}}\) is
\[\vec{n}_{0}=\left[\begin{array}{c}1\\ 0\\ 1\end{array}\right]\qquad\vec{q}_{1}=\left[\begin{array}{c}0\\ 1\\ -1\end{array}\right],\] (B.37)
as illustrated in Figure B.23.
The equations relating parametric composition, \(\vec{x}\), and concentration per unit cell, \(\vec{n}\), (Eqs. 2, A.6, and A.7) are
\[\vec{n} =\vec{n}_{0}+\mathbf{Q}\vec{x},\] \[\vec{x} =\mathbf{R}^{\sf T}(\vec{n}-\vec{n}_{0}),\] \[\mathbf{R}^{\sf T} =(\mathbf{Q}^{\sf T}\mathbf{Q})^{-1}\mathbf{Q}^{\sf T},\]
which for this system and choice of composition axes take the values
\[\mathbf{Q}=\left[\begin{array}{c}0\\ 1\\ -1\end{array}\right]\qquad\mathbf{R}=\frac{1}{2}\left[\begin{array}{c}0\\ 1\\ -1\end{array}\right].\] (B.38)
The null space of \(\mathbf{Q}^{\mathsf{T}}\) spans the space of disallowed compositions,
\[\mathbf{T}=\left[\begin{array}{cc}1&0\\ 0&\frac{1}{\sqrt{2}}\\ 0&\frac{1}{\sqrt{2}}\end{array}\right],\] (B.39)
which can be used to define the constraints on the concentration per unit cell (Eq. B.2), flux (Eq. B.4), and Onsager coefficients (Eq. B.7) according to
\[\mathbf{T}^{\sf T}(\vec{n}-\vec{n}_{0}) =\vec{0},\] \[\mathbf{T}^{\sf T}\vec{J} =\vec{0},\] \[\mathbf{LT} =\mathbf{0}.\]
For this system and choice of composition axes, the concentration constraints are
\[\left[\begin{array}{ccc}1&0&0\\ 0&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\end{array}\right]\left(\left[\begin{array} []{c}n_{A}\\ n_{B}\\ n_{Va}\end{array}\right]-\left[\begin{array}{c}1\\ 0\\ 1\end{array}\right]\right)=\left[\begin{array}{c}0\\ 0\end{array}\right],\] (B.40)
Figure B.23: The space of allowed compositions for a crystal with one fixed host sublattice occupied by A atoms, and an interstitial sublattice occupied by B atoms and vacancies (Va). A choice of composition axis \(\vec{q}_{1}^{\sf T}=[0,1,-1]\) with origin \(\vec{n}_{0}^{\sf T}=[1,0,1]\) is shown along with the stoichiometric compositions at the extrema of the allowed composition space: AVa and AB, along with compositions A, B and Va as guides for the eye. This is consistent with the composition formula \(\text{AB}_{x_{1}}\text{Va}_{1-x_{1}}\).
which gives the expected result
\[n_{A} =1,\] (B.41) \[n_{B}+n_{Va} =1,\]
The flux constraints are
\[\left[\begin{array}{cc}1&0&0\\ 0&\frac{1}{\sqrt{2}}&\frac{1}{\sqrt{2}}\end{array}\right]\left[\begin{array}{ c}J_{A}\\ J_{B}\\ J_{Va}\end{array}\right]=\left[\begin{array}{c}0\\ 0\end{array}\right],\] (B.42)
which gives the expected result
\[J_{A} =0,\] (B.43) \[J_{B} =-J_{Va},\]
The Onsager coefficients constraints are
\[\left[\begin{array}{ccc}L_{AA}&L_{AB}&L_{AVa}\\ L_{AB}&L_{BB}&L_{BVa}\\ L_{AVa}&L_{BVa}&L_{VaVa}\end{array}\right]\left[\begin{array}{cc}1&0\\ 0&\frac{1}{\sqrt{2}}\\ 0&\frac{1}{\sqrt{2}}\end{array}\right]=\left[\begin{array}{cc}0&0\\ 0&0\\ 0&0\end{array}\right],\] (B.44)
which gives the obvious result
\[L_{AA}=L_{AB}=L_{AVa}=0,\] (B.45)
and also the expected result
\[L_{BB}=L_{VaVa}=-L_{BVa}.\] (B.46)
The choice of composition space results in exchange chemical potentials (Eq. 10) defined as
\[\vec{\tilde{\mu}}=\mathbf{Q}^{T}\vec{\mu}\]
which take the values
\[\left[\begin{array}{c}\tilde{\mu}_{1}\end{array}\right]=\left[\begin{array}{ccc}0&1&-1\end{array}\right]\left[\begin{array}{c}\mu_{A}\\ \mu_{B}\\ \mu_{Va}\end{array}\right],\] (B.47)
or equivalently
\[\tilde{\mu}_{1}=\mu_{B}-\mu_{Va}.\] (B.48)
The flux expressions and Onsager coefficients can be projected onto the allowed composition space (Eqs. B.12, B.13 and B.14),
\[\vec{\tilde{J}} =-\tilde{\mathbf{L}}\nabla\vec{\tilde{\mu}},\] \[\vec{\tilde{J}} =\mathbf{R}^{\mathsf{T}}\vec{J},\] \[\tilde{\mathbf{L}} =\mathbf{R}^{\mathsf{T}}\mathbf{L}\mathbf{R}.\]
which for this system and choice of composition axes gives
\[\left[\begin{array}{c}\tilde{J}_{1}\end{array}\right]=\frac{1}{2}\left[ \begin{array}{cc}0&1&-1\end{array}\right]\left[\begin{array}{c}J_{A}\\ J_{B}\\ J_{Va}\end{array}\right],\] (B.49)
or equivalently
\[\tilde{J}_{1}=\frac{1}{2}\left(J_{B}-J_{Va}\right),\] (B.50)
and
\[\left[\begin{array}{cc}\tilde{L}_{11}\end{array}\right]=\frac{1}{4}\left[ \begin{array}{cc}0&1&-1\end{array}\right]\left[\begin{array}{ccc}L_{AA}&L_ {AB}&L_{AVa}\\ L_{AB}&L_{BB}&L_{BVa}\\ L_{AVa}&L_{BVa}&L_{VaVa}\end{array}\right]\left[\begin{array}{c}0\\ 1\\ -1\end{array}\right],\] (B.51)
or equivalently
\[\tilde{L}_{11}=\frac{1}{4}\left(L_{BB}+L_{VaVa}-2L_{BVa}\right).\] (B.52)
The exchange chemical potentials, projected fluxes, and projected Onsager coefficients can be combined with the constraints found previously to yield the simplified one-dimensional results
\[\tilde{J} =-\tilde{L}\nabla\tilde{\mu}\] (B.53) \[\tilde{J} =\tilde{J}_{1}=J_{B},\] \[\tilde{L} =\tilde{L}_{11}=L_{BB}.\] \[\tilde{\mu} =\tilde{\mu}_{1}=\mu_{B}-\mu_{Va}.\]
|
2301.02659 | Bayesian modelling of visual discrimination learning in mice | The brain constantly turns large flows of sensory information into selective
representations of the environment. It, therefore, needs to learn to process
those sensory inputs that are most relevant for behaviour. It is not well
understood how learning changes neural circuits in visual and decision-making
brain areas to adjust and improve its visually guided decision-making. To
address this question, head-fixed mice were trained to move through virtual
reality environments and learn visual discrimination while neural activity was
recorded with two-photon calcium imaging. Previously, descriptive models of
neuronal activity were fitted to the data, which was used to compare the
activity of excitatory and different inhibitory cell types. However, the
previous models did not take the internal representations and learning dynamics
into account. Here, I present a framework to infer a model of internal
representations that are used to generate the behaviour during the task. We
model the learning process from untrained mice to trained mice within the
normative framework of the ideal Bayesian observer and provide a Markov model
for generating the movement and licking. The framework provides a space of
models where a range of hypotheses about the internal representations could be
compared for a given data set. | Pouya Baniasadi | 2022-11-15T16:59:20Z | http://arxiv.org/abs/2301.02659v1 | # Bayesian Modelling
###### Abstract
This project report is written in partial fulfilment of the requirement for the Master of Philosophy in Basic and Translational Neuroscience
_Supervised by_
Dr. Jasper Poort
Prof. Mate Lengyel
Computational and Biological
Learning Laboratory
Department of Psychology
Department of Engineering |
2309.05073 | FreeMan: Towards Benchmarking 3D Human Pose Estimation under Real-World
Conditions | Estimating the 3D structure of the human body from natural scenes is a
fundamental aspect of visual perception. 3D human pose estimation is a vital
step in advancing fields like AIGC and human-robot interaction, serving as a
crucial technique for understanding and interacting with human actions in
real-world settings. However, the current datasets, often collected under
single laboratory conditions using complex motion capture equipment and
unvarying backgrounds, are insufficient. The absence of datasets on variable
conditions is stalling the progress of this crucial task. To facilitate the
development of 3D pose estimation, we present FreeMan, the first large-scale,
multi-view dataset collected under the real-world conditions. FreeMan was
captured by synchronizing 8 smartphones across diverse scenarios. It comprises
11M frames from 8000 sequences, viewed from different perspectives. These
sequences cover 40 subjects across 10 different scenarios, each with varying
lighting conditions. We have also established an semi-automated pipeline
containing error detection to reduce the workload of manual check and ensure
precise annotation. We provide comprehensive evaluation baselines for a range
of tasks, underlining the significant challenges posed by FreeMan. Further
evaluations of standard indoor/outdoor human sensing datasets reveal that
FreeMan offers robust representation transferability in real and complex
scenes. Code and data are available at https://wangjiongw.github.io/freeman. | Jiong Wang, Fengyu Yang, Wenbo Gou, Bingliang Li, Danqi Yan, Ailing Zeng, Yijun Gao, Junle Wang, Yanqing Jing, Ruimao Zhang | 2023-09-10T16:42:11Z | http://arxiv.org/abs/2309.05073v4 | # FreeMan: Towards Benchmarking 3D Human Pose Estimation
###### Abstract
Estimating the 3D structure of the human body from natural scenes is a fundamental aspect of visual perception. 3D human pose estimation is a vital step in advancing fields like AIGC and human-robot interaction, serving as a crucial technique for understanding and interacting with human actions in real-world settings. However, the current datasets, often collected under single laboratory conditions using complex motion capture equipment and unvarying backgrounds, are insufficient. The absence of datasets on variable conditions is stalling the progress of this crucial task. To facilitate the development of 3D pose estimation, we present FreeMan, the first large-scale, multi-view dataset collected under the real-world conditions. FreeMan was captured by synchronizing \(8\) smartphones across diverse scenarios. It comprises \(11M\) frames from \(8000\) sequences, viewed from different perspectives. These sequences cover \(40\) subjects across \(10\) different scenarios, each with varying lighting conditions. We have also established an semi-automated pipeline containing error detection to reduce the workload of manual check and ensure precise annotation. We provide comprehensive evaluation baselines for a range of tasks, underlining the significant challenges posed by FreeMan. Further evaluations of standard indoor/outdoor human sensing datasets reveal that FreeMan offers robust representation transferability in real and complex scenes. Code and data will be available soon.
## 1 Introduction
Estimating 3D human poses from real scene input is a long-standing yet active research topic since its huge potential in real applications, such as animation creation [67, 70], virtual reality [16, 63], the metaverse [32, 41, 69] and human-robot interaction [19]. Specifically, it aims to identify and deter
mine the spatial positions and orientations of the human body's parts in 3D space from input data such as images or videos. Despite the numerous models proposed in recent years [29, 36, 65], practical implementation in real scenes remains challenging due to varying conditions such as viewpoints, occasions, human scales, uneven lighting, and complex backgrounds. Some challenges may stem from the disparity between recent benchmarks and real-world scenarios. As shown in Fig. 1, the widely recognized Human3.6M [20], along with the currently largest dataset HuMMan [7], was collected in laboratory settings using intricate equipment, which maintains constant camera parameters and offers minimal variation in background conditions. The effectiveness of models trained on these datasets often declines significantly in real-world environments.
From a data-oriented perspective, we have identified several constraints that hinder the performance of the existing models. **(1) Insufficient Scene Diversity.** Existing datasets, as shown in Tab. 1, are mainly collected in controlled laboratory conditions, which may not be optimal for robust model training due to static lighting conditions and uniform backgrounds. This limitation becomes especially crucial when the objective is to estimate 3D pose in real-world scenarios, where scene contexts exhibit substantial variability. In certain datasets, even though the data is collected from outdoor scenes, _e.g.,_ MuCo [47] and 3DPW [61] in Tab. 1, the variety of scenarios remains remarkably limited. This constraint significantly hampers the applicability of trained models across a broader range of situations. **(2) Limited Actions and Body Scales.** In existing datasets, the range of human actions tends to be rather limited. Even in the currently largest dataset, HuMMan [7], the variety of actions in the publicly available data is quite restricted. Additionally, these large datasets typically employ fixed cameras to capture data from various perspectives. The distance from the camera to the actor is relatively constant, which results in a relatively fixed human body scale across different videos. **(3) Restricted Scalability.** The annotation of current datasets primarily relies on expensive manual processing, which greatly restricts the scalability of the datasets. Especially when the camera used for collection is movable, how to effectively align data from different cameras and perform efficient annotation remains an open issue.
To address the above issues, this work presents FreeMan, a novel large-scale benchmark for 3D human pose estimation under real-world conditions. FreeMan contains \(11M\) frames in \(8000\) sequences captured simultaneously by \(8\) smartphone cameras from different views, as illustrated in Fig. 2. It covers \(40\) subjects in \(10\) kinds of scenes. To the best of our knowledge, it is currently the largest multi-view 3D pose estimation dataset with variable camera parameters and complex background environments, and it is \(215\times\) the size of the well-known outdoor dataset 3DPW [61]. From a practical perspective, it has several appealing strengths. **Firstly**, the large number of scenes introduces diversity in both backgrounds and lighting, enhancing the generalization ability of models trained on FreeMan in real-world scenarios; this makes it particularly suitable for evaluating algorithmic performance in practical applications. **Secondly**, the distances between the \(8\) cameras and the actors vary (from \(2\) to \(5.5\) meters), resulting in significant scale changes of human bodies across different sequences. **Thirdly**, although we employed mobile RGB cameras to collect data, we propose a semi-automated annotation pipeline with erroneous frame detection, thereby significantly reducing the manual workload and enhancing the scalability and annotation accuracy of the dataset. **Lastly**, FreeMan supports a wide range of pose estimation tasks, including monocular 3D estimation, 2D-to-3D lifting, multi-view 3D estimation, and neural rendering of human subjects. We present thorough evaluation baselines for these tasks on FreeMan, highlighting the inherent challenges of this new benchmark.
In summary, this paper has made three contributions:
* We have constructed a large-scale dataset for 3D human pose estimation under varied real-world conditions. The impressive transferability of models trained on FreeMan to real-world scenarios has been demonstrated.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline Dataset & Environment & Segs & Action & Scene & Segs & Person & **Geomans** & **FES** \\ \hline Human3D[47] & Laboratory & 4 & 6 & 1 & 168 & 88K & 7 & 30 \\ Control Drug[7][1] & Laboratory & 8 & 8 & 1 & 65 & 154M & 31 & 30 \\ 3DPW[61] & Real Scene & 8 & 8 & 1 & 16 & 13.34 & 14 & 30 \\ 3DPW[61] & Real Scene & 7 & 47 & 4 & 60 & 51K & 1 & 30 \\ Morent Human[4] & Laboratory & - & - & - & - & 1.34 & 1 & 30 \\ \hline Human3D[48] & Laboratory & 9 & 15 & 1 & 940 & 3.08M & 6 (Funch) & 30 \\ \hline ART[4] & Laboratory & 30 & 10 & 1 & 1480 & 10.14M & 6 (Funch) & 30 \\ \hline Human3D[48] & Laboratory & 1000 & 50 & 1 & 400K & 600M & 11 (Funch) & 30 \\ \hline
**HuMMan[48]** & Laboratory & 132 & 39 & 1 & 4466 & 2280M & 1 (Funch) & 30 \\ \hline
**FreeMan (Ours)** & Real Scene & 40 & 123 & 10 & 8000 & 11.3M & 8 (Mobile) & 30/60 \\ \hline \end{tabular}
\end{table}
Table 1: Overview of 3D human pose datasets: comparison of our proposed FreeMan dataset with existing 3D human pose datasets. Only HD cameras are counted for CMU Panoptic [17]. \({}^{\dagger}\) Only 1% of the HuMMan dataset (600K frames) is made publicly available. \({}^{\ddagger}\) FreeMan includes 10 types of scenes that correspond to 29 locations. Fixed means cameras are fixed within the whole dataset, while our cameras are movable and camera poses vary among video sequences.
Figure 2: Equipment setting of data collection using 8 cameras. Cameras are attached to tripods.
* We have showcased a simple yet effective toolchain that enables the semi-automatic annotation and efficient manual correction.
* We provide comprehensive benchmarks for human pose estimation and modeling on FreeMan, facilitating downstream applications. These baselines highlight potential directions for future algorithmic enhancements.
## 2 Related Work
**Human Pose Datasets.** Human modeling is a significant task in computer vision. Existing datasets predominantly rely on 2D and 3D keypoint annotations, with 3D keypoint datasets available in two forms: monocular and multi-view. For 2D keypoints, single-frame datasets such as MPII [3] and COCO [37] provide diverse images with 2D keypoint annotations, while video datasets such as J-HMDB [23], Penn Action [73] and PoseTrack [4] provide 2D keypoints with temporal information. In contrast, 3D keypoint datasets are often constructed in indoor scenes, such as the multi-view datasets Human3.6M [20], CMU Panoptic [24], MPI-INF-3DHP [46], AIST++ [35] and HuMMan [7]. There also exist some outdoor datasets, such as 3DPW [61], for monocular cases. Details of these datasets are shown in Tab. 1. However, the majority of outdoor datasets, such as MPI-INF-3DHP, MuCo-3DHP, and 3DPW, exhibit a limited variety of acquisition scenes, while datasets such as AIST++ involve only fixed camera poses.
**3D Human Pose Estimation.** The present study categorizes the task of 3D pose estimation into three distinct types, namely 2D-to-3D pose lifting, monocular 3D pose estimation, and multi-view 3D pose estimation. In the 2D-to-3D pose lifting task, Martinez [45] proposed a simple baseline that regresses the 3D keypoints from 2D keypoints with a neural network. Subsequent works, such as VideoPose3D [54], PoseFormer [76] and MHFormer [36], have improved upon this baseline by integrating temporal information into their models. In the monocular 3D pose estimation task, HMR [25] and SPIN [31] take a single RGB image as input to perform 3D human pose estimation and are often used as baselines for comparison with other algorithms, such as PARE [29], SPEC [30] and HybrIK [33]. Additionally, multi-view methods are proposed to handle potential body-part occlusions in the monocular view. Iqbal et al. [21] and MCSS [50] adopt weak supervision to reduce the dependence on annotated 3D poses, while CanonPose [64] and EpipolarPose [28] turn to self-supervised learning to deal with multi-view data.
**Neural Rendering of Human Subjects.** With the development of NeRF [48] for dynamic scene rendering, attention has also turned to the dynamic rendering of humans. Compared to dynamic scenes, the non-rigid nature of the human body poses additional challenges. Prior knowledge of body movements provides a useful prior for rendering, and many methods use SMPL [42] as a body prior. Most methods reconstruct human bodies from multi-view videos [40, 52, 66], while recent works such as HumanNeRF [68], FlexNeRF [22] and YOTO [26] have also employed single-view videos.
## 3 FreeMan Dataset
FreeMan is a large-scale multi-view dataset under real-world conditions with precise 3D pose annotations. It comprises \(11M\) frames from \(1000\) sessions, featuring \(40\) subjects across \(10\) types of scenes. The dataset includes \(10M\) frames recorded at \(30\)FPS and an additional \(1M\) frames at \(60\)FPS. Next, we highlight the diversity of FreeMan, from various scenario selections, actions, camera settings and subjects.
**Scenarios.** We design \(10\) types of real-world scenes, including \(4\) indoor and \(6\) outdoor scenes, for our data collection. Fig. 3 (c) illustrates the scene diversity of our FreeMan. The blue section represents the outdoor part, while the red section refers to frames captured in indoor scenes. Specifically, there are \(2.76\) million frames captured indoors and \(8.45\) million frames captured outdoors. In the outdoor data, there are different frame numbers collected under varying lighting conditions, with \(1\) million frames captured at night or dusk and \(7.45\) million frames captured during daytime. Moreover, the central block of the circle denotes different scenarios, while the blocks on the outermost circle refer to actions. The areas of the blocks are proportional to frame number. Please refer to supplementary material for more details.
**Action Set.** Following the popular action recognition dataset NTU-RGBD120 [39], we compose our action set with several common actions corresponding to scenes in daily life, \(e.g.\), drinking and talking in a cafe or reading in the library. Meanwhile, subjects interact with real objects to make the data as close to the real world as possible. As shown in the topmost row of Fig. 4, interaction with objects brings complicated occlusions, making our data more challenging. For outdoor scenarios, we set the data collection field as large as possible to help subjects perform actions with little restriction.
Figure 3: (a) Distribution of distance from the camera to the center of the system, indicated by translation along the z-axis in camera parameters. Four vertical red lines represent the distance of 4 cameras in Human3.6M [20]. (b) Distribution of human bounding box areas. The horizontal axis represents the ratio of the bounding box area over the image area. The vertical axis is in log scale. (c) Correspondence of scenes and actions. Areas of blocks represent the scale of the respective frame number. The outmost circle shows actions and the circle in the middle present \(10\) type of scenes in our dataset. Zoom in \(10\times\) for the best view.
**Camera Positions.** Cameras in previous 3D human pose datasets [7, 17, 20] are fixed, so only a few camera poses are included. As shown in Fig. 2, our cameras are attached to tripods and are repositioned from time to time, and the translation \(d\) from the center of the system to a camera, i.e. the physical distance between the camera and the system center, can vary from \(2\)m to \(5.5\)m. Fig. 3 (a) shows the distribution of \(d\) and the corresponding number of cameras. Most cameras are located around \(4\) meters away from the system center. In addition, we show the distribution of the human bounding box area in Fig. 3 (b), expressed as a ratio of the whole image area, to demonstrate the variation in human size. With variations in camera translation and human actions, the area of human bounding boxes varies from \(0.01\) to \(0.7\) of the whole image area.
**Subjects.** There are \(40\) subjects participating in the construction of FreeMan, and recruitment was entirely voluntary. Among them, 22 actors are trained dancers who perform the dance actions. All of them were fully informed and signed an agreement allowing the data to be made public for research purposes only.
## 4 Data Acquisition & Annotation Pipeline
**Overview.** To collect a large-scale dataset from real-world environments, we developed a comprehensive toolchain, as shown in Fig. 5. Unlike previous toolchains used in controlled or idealized conditions, we carefully accounted for potential challenges in outdoor settings, including calibration and synchronization errors. To overcome these issues, we propose a semi-automated pipeline including error detection and manual correction to ensure efficient data collection and annotation.
### Hardware Setup
**Cameras.** We collect FreeMan via \(8\) Mi11 phones [1], indexed from \(1\) to \(8\), as our data collection devices. _We denote the collection of one action as one session, which corresponds to 8 RGB sequences from 8 views._ Each phone is attached to a tripod to keep it stable during data collection. As shown in Fig. 2, all devices are positioned in a circle around a human at a height of approximately \(1.6\) meters above the ground, with the distance from the cameras to the system center varying from \(2\) to \(5.5\) meters, which is similar to real-life usage scenarios. Each smartphone captures RGB sequences using its main camera at \(1920\times 1080\) resolution and \(30/60\) FPS. During the data collection process, actors perform actions facing the cameras with odd-numbered indices. As shown in Fig. 5 (a), the only requirement beyond the devices is a stable network connection to a server for data transmission.
**Device Synchronization.** Previous works [7, 17, 20] have synchronized devices using wired interfaces in a laboratory. However, the complexity of the entire system, coupled with the difficulty of deploying it in real-world environments, prompted us to consider alternative methods. To address issues related to usability and device constraints, we connect all devices wirelessly to a single server and develop an Android app that utilizes the Network Time Protocol (NTP) [49] to calculate the time difference between each device and the server's clock. During the capture process, temporal information is stored locally on each device as a timecode, while the server records the synchronized capture interval for all devices. The starting frame is determined by matching the timecode to the frame closest to the server's clock time. As shown in Fig. 5(b), synchronization errors are smaller than a single frame during our testing, corresponding to \(33\)ms and \(16\)ms for \(30\)FPS and \(60\)FPS, respectively.
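Conceptually, the synchronization reduces to estimating each device's clock offset from an NTP-style exchange and then picking, on each device, the frame whose offset-corrected timecode is closest to the capture start time agreed on the server. The sketch below is a minimal illustration of that logic; the actual app and server protocol details are not specified in the text.

```python
import numpy as np

def ntp_offset(t0, t1, t2, t3):
    """Standard NTP clock-offset estimate from one request/response exchange:
    t0/t3 are client send/receive times, t1/t2 are server receive/send times."""
    return ((t1 - t0) + (t2 - t3)) / 2.0

def align_start_frame(frame_times, clock_offset, start_time):
    """Pick the frame whose (offset-corrected) timecode is closest to the
    server's agreed start time.

    frame_times  : (N,) per-frame timestamps recorded on the device [s]
    clock_offset : device-to-server clock offset estimated via NTP [s]
    start_time   : capture start time on the server's clock [s]
    """
    corrected = np.asarray(frame_times) + clock_offset
    return int(np.argmin(np.abs(corrected - start_time)))
```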
**Chessboard-based Calibration.** At the beginning of each session, we first shoot a chessboard of known size tiled at the center of the system, and then calculate the intrinsic and extrinsic camera parameters following the standard implementation in OpenCV [6, 75]. We attach the cameras to tripods and capture the tiled chessboard from each view. The chessboard frames from all views are sent to the server, which detects the corner points, calculates the extrinsic camera parameters of each device, and sends the parameters back to the cameras. A camera is allowed to proceed with further shooting only after its camera parameters are received.
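The text only states that calibration follows the standard OpenCV implementation; one plausible realisation with the usual OpenCV calls is sketched below. The board geometry, square size and frame handling are placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Hypothetical board geometry: inner-corner grid and square size in metres.
BOARD = (9, 6)
SQUARE = 0.05

# 3D chessboard corner coordinates in the board's own frame (z = 0 plane)
obj_pts = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

def find_corners(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    return found, corners

def calibrate_intrinsics(frames):
    """Intrinsics of one camera from several chessboard frames."""
    objs, imgs = [], []
    for frame in frames:
        found, corners = find_corners(frame)
        if found:
            objs.append(obj_pts)
            imgs.append(corners)
    h, w = frames[0].shape[:2]
    _, K, dist, _, _ = cv2.calibrateCamera(objs, imgs, (w, h), None, None)
    return K, dist

def extrinsics(frame, K, dist):
    """Pose of the shared chessboard in this camera's frame (R, t)."""
    found, corners = find_corners(frame)
    assert found, "chessboard not detected"
    _, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)
    return cv2.Rodrigues(rvec)[0], tvec
```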
**Pixel Alignment.** However, calibration with coarse matching points on the chessboard is not accurate enough. After data collection and synchronization, we extract one frame from the synchronized videos and then use LightGlue [38], a state-of-the-art feature matcher, to calculate dense matching points across views. These dense matching points are used to further refine the camera extrinsic parameters resulting from the chessboard-based calibration.
### Pose Annotation.
Once the videos are collected, we use a state-of-the-art detector, YOLOX [15], to detect human bounding boxes and HRNet-w48 [58] to detect 2D keypoints in the \(8\) views, \(K_{2D}\in\mathbb{R}^{8\times 17\times 2}\), in COCO [37] format. To eliminate the effect of potentially wrong keypoint outputs, keypoint predictions with confidence lower than a threshold \(\phi\) are removed. The remaining 2D keypoints are used for triangulation with the pre-computed camera parameters to obtain the 3D human pose \(K_{3D}\in\mathbb{R}^{17\times 3}\). Here, we set \(\phi\) to \(0.5\). Furthermore, we optimize \(K_{3D}\) with the smoothness and bone-length constraints introduced in HuMMan [7], resulting in the optimized 3D pose \(\tilde{K}_{3D}\in\mathbb{R}^{17\times 3}\). Then we fit a standard SMPL [42] model to the estimated 3D skeleton with SMPLify [5] to produce a rough mesh annotation. After that, we project the 3D keypoints to the 2D image planes of each view using the corresponding camera parameters. With regularization in triangulation and optimization along the temporal axis, the re-projected 2D poses \(\tilde{K}_{2D}\) are more accurate than \(K_{2D}\), especially for occlusion cases. A comparison between the original \(K_{2D}\) and \(\tilde{K}_{2D}\) is shown in the left part of Fig. 7.
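The triangulation step can be realised with a standard linear (DLT) solve over the views that pass the confidence threshold \(\phi\). The sketch below is a minimal NumPy version under the assumption that per-view projection matrices \(P_{v}=K_{v}[R_{v}|t_{v}]\) have been precomputed from the calibration; it is an illustration, not the exact implementation used for the dataset.

```python
import numpy as np

def triangulate_joint(points_2d, confidences, proj_mats, phi=0.5):
    """Linear (DLT) triangulation of one joint from up to 8 views.

    points_2d   : (V, 2) pixel coordinates of the joint in each view
    confidences : (V,)   detector confidences
    proj_mats   : (V, 3, 4) projection matrices K [R | t] for each view
    """
    rows = []
    for (u, v), c, P in zip(points_2d, confidences, proj_mats):
        if c < phi:                       # drop low-confidence detections
            continue
        rows.append(u * P[2] - P[0])      # two DLT equations per remaining view
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                   # homogeneous -> Euclidean 3D point

def triangulate_pose(K2d, conf, proj_mats):
    """K2d: (V, 17, 2), conf: (V, 17) -> (17, 3) 3D pose."""
    return np.stack([triangulate_joint(K2d[:, j], conf[:, j], proj_mats)
                     for j in range(K2d.shape[1])])
```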
**Erroneous Pose Detection & Correction.** Although 2D pose estimators are well developed, poses with heavy occlusions can be inaccurate. Thus, we propose a pipeline to filter erroneous 2D keypoints among the millions of frames and then correct them **by human annotators**. As shown in Fig. 5, the estimated 2D poses are fed into a pre-trained image generator to generate human images. We then use SAM [27] to obtain human masks of the original and generated images, and the intersection-over-union (IoU) between these masks is calculated. Poses whose IoU is lower than a threshold \(\alpha\) are considered erroneous and are then checked by human annotators. Specifically, we choose Stable Diffusion 1.5 with ControlNet [71] as the conditional image generator, and DeepDataSpace [2] is used as the annotation tool. Fig. 6 presents examples of correct and erroneous cases. More detailed processes and results are displayed in the appendix.
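Per frame, the error-detection test reduces to an IoU comparison between two binary masks. A minimal sketch is given below; the boolean masks are assumed to come from SAM, and the threshold value \(\alpha=0.6\) is an illustrative placeholder since the text does not state the value used.

```python
import numpy as np

def mask_iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """IoU of two boolean masks of the same shape."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def flag_erroneous(frames, alpha=0.6):
    """frames: iterable of (frame_id, mask_original, mask_generated).
    Returns the frame ids whose masks disagree and need manual correction."""
    return [fid for fid, m_orig, m_gen in frames
            if mask_iou(m_orig, m_gen) < alpha]
```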
### Keypoint Quality Assessment
To demonstrate the effectiveness of our toolchain, we test it on Human3.6M [20]. We select \(3\) different actions for each subject in the training set, which covers \(10\%\) of the sequences in the whole training set and all kinds of actions. Following [7], keypoint quality is assessed by the Euclidean distance between estimated and ground-truth 2D poses in units of pixels. The resulting error is less than 1% of the image size for \(1000\times 1000\) images, indicating that our toolchain can generate annotations with an accuracy that is acceptable considering the cognitive errors inherent in human labeling.
## 5 Benchmarks
We have constructed four benchmarks utilizing images and annotations derived from our dataset. The data is subdivided by subject, allocating \(18\) subjects for training, \(7\) for validation, and \(15\) for testing. This partitioning results in three subsets of \(5.87M\), \(700K\), and \(3.69M\) frames, respectively. Across benchmarks, the subject lists of each subset are shared, and only the views selected from each session vary per task.
**Monocular 3D Human Pose Estimation (HPE).** This task involves taking a monocular RGB image or sequence as input and predicting 3D coordinates in the camera coordinate system. We randomly select one view from each session for this task. The performance of algorithms is measured by the widely used Mean Per Joint Position Error (MPJPE) [20] and Procrustes analysis MPJPE (PA-MPJPE) [42].
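For reference, the two metrics can be computed as in the following minimal NumPy sketch; the Procrustes alignment uses the standard similarity-transform (Umeyama-style) solution and is an illustration rather than the exact evaluation code of the benchmark.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error; pred/gt of shape (J, 3), same units."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """MPJPE after rigid alignment (scale, rotation, translation) of pred to gt."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    Xp, Xg = pred - mu_p, gt - mu_g
    # Optimal similarity transform via SVD of the cross-covariance
    U, S, Vt = np.linalg.svd(Xp.T @ Xg)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # avoid reflections
        Vt[-1] *= -1
        S[-1] *= -1
        R = Vt.T @ U.T
    scale = S.sum() / (Xp ** 2).sum()
    aligned = scale * Xp @ R.T + mu_g
    return mpjpe(aligned, gt)
```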
Figure 4: The diverse frames in FreeMan. The topmost two rows presents a range of indoor and outdoor scenes, highlighting human-object interactions and the diversity of scene contexts, lighting conditions, and subjects. The third row exhibits frames from different views. The final row illustrates the temporal variation of human poses from a consistent viewpoint, emphasizing the dynamism of motion capture.
**2D-to-3D Lifting.** Given that 2D human poses can be predicted using existing 2D keypoint detectors [8, 10, 13, 58], the primary goal of this task is to lift these 2D poses into 3D space in the camera coordinate system. The evaluation metrics are the same as for HPE.
**Multi-View 3D Human Pose Estimation.** Estimating the 3D human pose from multiple views presents a natural solution to overcome occlusion in motion capture. For this task, models are provided with images or videos from multiple views, along with the corresponding camera parameters. The objective is to predict the 3D coordinates of human joints in the same world coordinate system as the cameras. Following the implementation of [60], the metrics for this task are MPJPE and average precision (AP) at specific thresholds.
**Neural Rendering of Human Subjects.** The free-viewpoint rendering of humans is a significant issue in human modeling. With the rise in popularity of neural radiance fields (NeRF) [48] for the novel view rendering task, several methods, including HumanNeRF [68], have emerged. These methods utilize monocular human motion videos as input to synthesize novel views of dynamic humans through NeRF. The widely used evaluation metrics are PSNR, SSIM [74] and LPIPS [72].
## 6 Experiments
In this section, we experiment with the four benchmarks. In the human 3D pose estimation tasks, we conduct several transfer tests with other standard datasets to evaluate the effectiveness and transferability of our proposed FreeMan dataset. Existing similar datasets, Human3.6M [20] & HuMMan [7], are used for comparison. Since _HuMMan only releases 1% of its data for pose estimation_\({}^{1}\), we only involve it in monocular 3D human pose estimation and 2D-to-3D lifting. As for neural human rendering, we train the model on one of the \(8\) views and test on the other views in selected sessions.
Footnote 1: https://opendatalab.com/OpenXDLab/HuMMan
### Monocular 3D Human Pose Estimation
**Implementation details.** For the Human3.6M [20] and HuMMan [7] datasets, all views in their training sets are utilized. To balance the number of frames, we randomly sample a single view from the sessions in the training split of FreeMan, resulting in frame numbers of \(312K\), \(253K\), and \(233K\) for the three datasets, respectively. To enhance efficiency, the videos of all three datasets are downsampled to \(10\)FPS, following the implementation of MMPose [11]. Following [53], we select HMR [25] and PARE [29] as the models to evaluate and run the experiments with the configurations open-sourced by [53]. Please refer to the Appendix for more details.
Figure 5: The illustration of data collection and annotation toolchain: (a) depicts the transmission of signals between cameras and servers for camera calibration, where chessboard frames are sent to the server, and camera parameters are returned. (b) demonstrates the synchronization process among devices. (c) showcases the pipeline for pose annotation.
Figure 6: Demonstration of erroneous pose detection in Sec. 4. Human3.6M examples are shown for quality assessment. The first row shows the input frame and the 2D keypoints from the pose estimator, and the last two rows show the segmentation masks of the original image and the generated image obtained by SAM. The left two columns are examples of correct poses, while the right two columns show cases with erroneous keypoints, as highlighted by the red boxes. Please zoom in for details.
**Results.** We perform testing on the test set of 3DPW [61]. The performance of the models trained on different datasets, with varying types of supervision, is reported in Tab. 2. Notably, the HMR model trained on FreeMan exhibits significantly better performance on the 3DPW test set than those trained on Human3.6M and HuMMan, which achieve PA-MPJPE of \(133.13\)mm and \(192.75\)mm, respectively, indicating that FreeMan provides superior generalizability compared to the others. The same trend is observed with PARE, further confirming that FreeMan's advantage carries over to more advanced algorithms. This can be attributed to the diversity of scene contexts and human actions present in our dataset, which provides better transferability to real-world scenarios.
### 2D-to-3D Pose Lifting
**Implementation Details.** We employ four methods, either CNN-based or Transformer-based, including SimpleBaseline [45], VideoPose3D [54], PoseFormer [76] and MHFormer [36]; all methods follow their official implementations. The results of SimpleBaseline and MHFormer are presented in Tab. 3, and those of VideoPose3D and PoseFormer can be found in the Appendix. For the training set of this task, we select one view from every session and downsample the videos to \(15\)FPS, resulting in \(350K\) frames, which is similar to the released portion of HuMMan (\(253K\)) and much smaller than Human3.6M (\(1500K\)). Following [7], we unify the test set to be AIST++ [35] in order to verify generalization across datasets. To verify the effect of dataset scale, we also train our models on the whole training set.
**Results.** As shown in Tab. 3, the results of the in-domain test on FreeMan are provided as a baseline for future work. For in-domain testing, the MPJPE of SimpleBaseline on FreeMan (\(79.22\)mm) is larger than that on HuMMan [7] (\(78.5\)mm) and Human3.6M [20] (\(53.4\)mm), demonstrating that FreeMan is a more challenging benchmark. Moreover, all methods trained on FreeMan tend to generalize better than those trained on HuMMan or Human3.6M when tested on AIST++ under the same setting, as both MPJPE and PA-MPJPE are much smaller in the cross-domain test. Although the scale of the FreeMan training set is of a similar magnitude to HuMMan's, which is much smaller than Human3.6M's, models trained on FreeMan outperform models trained on the other two by a large margin. Furthermore, when the training set is expanded to all frames in the training split, FreeMan further boosts the models to achieve better performance, showing that our large-scale data helps to improve model performance.
Table 2: 3DPW test results (MPJPE and PA-MPJPE) of HMR and PARE models trained on Human3.6M, HuMMan, and FreeMan with different types of supervision.
Although the FreeMan training set is of a similar magnitude to HuMMan's, and much smaller than Human3.6M's, models trained on FreeMan outperform models trained on the other two by a large margin. Furthermore, when the training set is expanded to all frames in the training split, FreeMan further boosts the models to better performance, showing that our large-scale data helps to improve model performance.
### Multi-View 3D Human Pose Estimation
**Implementation Details.** We conduct in-domain and cross-domain tests between Human3.6M and FreeMan to evaluate effectiveness and generalization ability. We run the experiments with VoxelPose [59], which first locates the human root and then regresses the 3D joint locations accordingly. COCO-format poses in FreeMan are interpolated to match those in Human3.6M. We trained VoxelPose [59] on the two datasets following the official implementation. For Human3.6M, bounding box annotations are taken from [55], and its validation set is used for testing. For FreeMan, we only use videos of odd-indexed views from the training set.
**Results.** Results of all experiments are reported in Tab. 4. For in-domain testing, the model trained on FreeMan achieves an MPJPE@\(500\)mm of \(26.61\) mm on a test set consisting of _odd-indexed_ views. For cross-domain testing, the model trained on FreeMan achieves a Recall@\(500\)mm of \(96.68\)% and an MPJPE@\(500\)mm of \(61.29\) mm on the Human3.6M validation set. However, the model trained on Human3.6M fails to locate the human on the FreeMan test set, resulting in zero AP at thresholds smaller than 100 mm. To remove the effect of root localization, we feed the ground-truth root locations to the model directly. With this setting, the model trained on Human3.6M obtains an MPJPE@\(500\)mm of \(103.02\) mm on the FreeMan test set, while the model trained on FreeMan obtains an MPJPE@\(500\)mm of \(58.30\) mm on the Human3.6M validation set. These results show that the model trained on FreeMan has much better generalization ability, while the one trained on Human3.6M struggles in transfer testing.
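A minimal sketch of how the thresholded metrics in Tab. 4 can be computed from per-person errors is shown below; the matching between detections and ground truth is omitted, and the error values are placeholders.

```python
import numpy as np

def pose_metrics(per_person_mpjpe_mm, thresh=500.0):
    """Recall@thresh: fraction of ground-truth persons matched within thresh (in %).
    MPJPE@thresh: mean error (mm) of the persons matched within the threshold."""
    e = np.asarray(per_person_mpjpe_mm)
    matched = e < thresh
    recall = matched.mean() * 100.0
    mpjpe_at = e[matched].mean() if matched.any() else np.nan
    return recall, mpjpe_at

# toy example: errors (mm) for detected persons already matched to ground truth
errors = np.array([24.0, 31.5, 580.0, 45.2])
print(pose_metrics(errors))   # -> (75.0, ~33.6)
```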
### Neural Rendering of Human Subjects.
**Implementation Details.** We employ \(10\) scenes captured by FreeMan to train HumanNeRF [68], a deep neural network that aims to achieve high-quality human-centric novel view synthesis. To obtain human body segmentation annotations, we apply the SAM (Segment Anything) [27] algorithm using our bounding boxes as prompts. During training, we randomly select one view for each session and render the remaining \(7\) views as novel views for testing. We then calculate PSNR, SSIM, and LPIPS to evaluate the performance of the model. Please refer to the Appendix for results on data at \(60\) FPS.
**Results.** The reconstruction results for the \(10\) scenes are shown in Tab. 5. The best reconstruction achieves a high PSNR of \(30.11\) dB, which indicates that FreeMan contains content the model can learn and fit very well. While performance varies, the lowest PSNR of \(23.86\) dB shows that FreeMan also contains content that is challenging and outside the model's learning scope. That the \(10\) scenes include both familiar content the model handles well and more challenging new content demonstrates the diversity of our dataset.
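As a reference for these metrics, the sketch below computes PSNR directly and SSIM via scikit-image (older scikit-image versions take `multichannel=True` instead of `channel_axis`); LPIPS additionally requires a learned-feature package and is omitted. The image arrays are random placeholders.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def psnr(rendered, gt, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((rendered - gt) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# toy example with random float images (placeholders for rendered / ground-truth views)
gt = np.random.rand(256, 256, 3).astype(np.float32)
rendered = np.clip(gt + 0.02 * np.random.randn(*gt.shape).astype(np.float32), 0, 1)
print(psnr(rendered, gt))
print(ssim(rendered, gt, channel_axis=-1, data_range=1.0))
```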
## 7 Conclusion
We present FreeMan, a novel large-scale multi-view dataset with 3D human pose annotations. We develop a simple yet effective semi-automatic annotation pipeline to annotate frame-level 3D landmarks at a much lower cost, and build a comprehensive benchmark for 3D human pose estimation.
Extensive experimental results demonstrate the difficulty of testing under varied conditions and the strengths of the proposed FreeMan. As a large-scale human motion dataset, FreeMan addresses the existing gap between current datasets and real-scene applications, and we hope that it will catalyze the development of algorithms designed to model and sense human behavior in real-world scenes.
\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
Train & Test & AP@25mm (\%)\(\uparrow\) & AP@50mm (\%)\(\uparrow\) & AP@75mm (\%)\(\uparrow\) & AP@100mm (\%)\(\uparrow\) & Recall@500mm (\%) & MPJPE@500mm (mm) \(\downarrow\) \\
\hline
Human3.6M & Human3.6M & 32.32 & 97.47 & 98.61 & 98.99 & 100.00 & 25.29 \\
Human3.6M & FreeMan & 0.00 & 0.00 & 0.00 & 0.00 & 0.66 & 89.85 \\
Human3.6M & FreeMan (w/ GT Root) & 0.00 & 1.27 & 11.44 & 21.40 & 96.20 & 103.02 \\
\hline
FreeMan & FreeMan & 4.338 & 88.77 & 97.73 & 99.12 & 99.97 & 26.07 \\
FreeMan & Human3.6M & 0.00 & 5.77 & 82.85 & 92.62 & 96.68 & 61.29 \\
FreeMan & Human3.6M (w/ GT Root) & 0.00 & 6.60 & 87.91 & 95.38 & 100.00 & 58.30 \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Multi-view 3D pose estimation results of VoxelPose [60]. The ground-truth root position (GT Root) is not used unless specified. Recall@\(500\)mm gives the percentage of poses that fall within the threshold, and MPJPE@\(500\)mm indicates the average MPJPE within the threshold. Highlighted rows show the best setting in the cross-domain test.
\begin{table}
\begin{tabular}{c|c c c}
\hline \hline
Scene & PSNR\(\uparrow\) & SSIM\(\uparrow\) & LPIPS\({}^{*}\)\(\downarrow\) \\
\hline
Square & 25.98 & 0.9501 & 58.38 \\
Corridor & 24.57 & 0.9340 & **81.39** \\
Sports Port & 26.33 & 0.9662 & 30.09 \\
Park & 23.86 & 0.9439 & 73.61 \\
Courtyard & 28.56 & 0.9630 & 53.99 \\
Dance Room & **30.11** & 0.9658 & 43.34 \\
Library & 29.41 & **0.9665** & 31.53 \\
Platform & 26.79 & 0.9439 & 70.01 \\
Lobby & 25.41 & 0.9387 & 78.80 \\
Cafe & 27.32 & 0.9644 & 37.88 \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Neural rendering results using HumanNeRF [68] on \(10\) scenes of FreeMan. Note that LPIPS\({}^{*}\) = LPIPS \(\times 10^{3}\). The highest values are bolded and the underlined ones are the lowest.
**Limitations.** Prompts to the generation model require careful tuning to obtain high quality. In addition, the number of frames requiring manual correction still represents a considerable workload as the overall size of the dataset grows.
|
2309.03402 | A Wolf 359 in sheep's clothing: Hunting for substellar companions in the
fifth-closest system using combined high-contrast imaging and radial velocity
analysis | Wolf 359 (CN Leo, GJ 406, Gaia DR3 3864972938605115520) is a low-mass star in
the fifth-closest neighboring system (2.41 pc). Because of its relative youth
and proximity, Wolf 359 offers a unique opportunity to study substellar
companions around M stars using infrared high-contrast imaging and radial
velocity monitoring. We present the results of Ms-band (4.67 $\mu$m) vector
vortex coronagraphic imaging using Keck-NIRC2 and add 12 Keck-HIRES velocities
and 68 MAROON-X velocities to the radial velocity baseline. Our analysis
incorporates these data alongside literature radial velocities from CARMENES,
HARPS, and Keck-HIRES to rule out the existence of a close ($a < 10$ AU)
stellar or brown dwarf companion and the majority of large gas-giant
companions. Our survey does not refute or confirm the long-period radial
velocity candidate Wolf 359 b ($P\sim2900$ d) but rules out the candidate's
existence as a large gas-giant ($>4 M_{jup}$) assuming an age of younger than 1
Gyr. We discuss the performance of our high-contrast imaging survey to aid
future observers using Keck-NIRC2 in conjunction with the vortex coronagraph in
the Ms-band and conclude by exploring the direct imaging capabilities with JWST
to observe Jupiter-mass and Neptune-mass planets around Wolf 359. | Rachel Bowens-Rubin, Joseph M. Akana Murphy, Philip M. Hinz, Mary Anne Limbach, Andreas Seifahrt, Rocio Kiman, Maïssa Salama, Sagnick Mukherjee, Madison Brady, Aarynn L. Carter, Rebecca Jensen-Clem, Maaike A. M. van Kooten, Howard Isaacson, Molly Kosiarek, Jacob L. Bean, David Kasper, Rafael Luque, Gudmundur Stefánsson, Julian Stürmer | 2023-09-06T23:36:30Z | http://arxiv.org/abs/2309.03402v1 | A Wolf 359 in sheep's clothing: Hunting for substellar companions in the fifth-closest system using combined high-contrast imaging and radial velocity analysis
###### Abstract
Wolf 359 (_CN Leo, GJ 406, Gaia DR3 3864972938605115520_) is a low-mass star in the fifth-closest neighboring system (2.41 pc). Because of its relative youth and proximity, Wolf 359 offers a unique opportunity to study substellar companions around M stars using infrared high-contrast imaging and radial velocity monitoring. We present the results of Ms-band (4.67 \(\mu\)m) vector vortex coronagraphic imaging using Keck-NIRC2 and add 12 Keck-HIRES velocities and 68 MAROON-X velocities to the radial velocity baseline. Our analysis incorporates these data alongside literature radial velocities from CARMENES, HARPS, and Keck-HIRES to rule out the existence of a close (\(a<10\) AU) stellar or brown dwarf companion and the majority of large gas-giant companions. Our survey does not refute or confirm the long-period radial velocity candidate, Wolf 359 b (\(P\sim 2900\) d) but rules out the candidate's existence as a large gas-giant (\(>4M_{\rm Jup}\)) assuming an age of younger than 1 Gyr. We discuss the performance of our high-contrast imaging survey to aid future observers using Keck-NIRC2 in conjunction with the vortex coronagraph in the Ms-band and conclude by exploring the direct imaging capabilities with _JWST_ to observe Jupiter-mass and Neptune-mass planets around Wolf 359.
Coronagraphic imaging (313), Direct imaging (387), Exoplanets (498), M stars(985), Radial velocity(1332)
## 1 Introduction
Over 70% of the stars in our galaxy are M-dwarfs, yet we know little about the exoplanets that exist in these systems beyond the snow line (\(\gtrsim\)0.5 AU, Mulders et al., 2015). Most exoplanet detection methods and surveys are blind to this discovery space. The geometric probability of an exoplanet transit occurring for an exoplanet orbiting an M-dwarf beyond 1 AU is less than 0.1%. Astrometry and radial velocity surveys of M-dwarfs require lengthy baselines in order to observe a planet's full orbit because planets orbiting low-mass stars have longer periods for an equivalent separation.
Microlensing surveys have provided the first hint that cold gas giants, ice giants, and super-Earths could be common outside the snow line of M-dwarfs, with increasing prevalence for smaller planets. A survey from
Cassan et al. (2012) estimated that the majority of low-mass stars host a giant planet between 0.5-10 AU, with Jupiter-like planets (\(0.3-10\)\(M_{\rm Jup}\)) at an occurrence rate of \(17^{+6}_{-9}\)%, Neptune-like planets (\(10-30\)\(M_{\oplus}\)) at a rate of \(52^{+22}_{-29}\)%, and super-Earths (\(5-10\)\(M_{\oplus}\)) at a rate of \(62^{+35}_{-37}\)%. A microlensing survey by the Microlensing Observations in Astrophysics collaboration is consistent with these results and concluded that Neptune-sized planets are one of the most common types of planet seen outside the snow line (Suzuki et al., 2016). Poleski et al. (2021) used data from the Optical Gravitational Lensing Experiment to determine that nearly every star could host an ice-giant planet from 5-15 AU, measuring an occurrence rate of \(1.4^{+0.9}_{-0.6}\) ice giants per system.
Exoplanet direct imaging--where photons from an exoplanet are spatially resolved from those of its host star--is the only exoplanet detection technique that offers a pathway for characterizing the atmosphere, composition, and formation history of exoplanets orbiting beyond the snow line that are unlikely to transit. When directly imaging the closest stellar neighbors (\(d<5\) pc), the current generation of high-contrast imaging systems on 8-10 m telescopes can probe comparatively colder planets at angular separations corresponding to where the prevalence of exoplanets outside the snow line is expected to peak (1-10 AU; Fernandes et al., 2019). Stellar proximity makes companions appear at proportionally wider separation angles from their host star for a given orbit (\(\theta_{\rm sep}\propto a/d\)) and brightens the companion's apparent magnitude logarithmically with distance (\(m=M+5\log_{10}(d/10\,{\rm pc})\)). This makes companions that are dimmer in absolute magnitude and closer in orbital separation easier to detect than if they were in a more distant analogous system.
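A minimal sketch of this scaling is shown below; the companion absolute magnitude is a placeholder value, and a face-on circular orbit at maximum elongation is assumed.

```python
import numpy as np

def companion_observables(a_au, abs_mag, d_pc):
    """Maximum projected angular separation (arcsec) and apparent magnitude of a
    companion with semi-major axis a_au (AU) and absolute magnitude abs_mag,
    orbiting a star at distance d_pc (face-on circular orbit at max elongation)."""
    theta_arcsec = a_au / d_pc                        # small-angle: 1 AU at 1 pc subtends 1"
    app_mag = abs_mag + 5.0 * np.log10(d_pc / 10.0)   # distance modulus
    return theta_arcsec, app_mag

# e.g., a companion at 1.8 AU around Wolf 359 (d = 2.41 pc); abs_mag = 15 is a placeholder
print(companion_observables(1.8, 15.0, 2.41))
```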
The heritage of detecting exoplanets via the direct imaging technique has been to conduct blind surveys of hundreds of young-star systems in search of a rare set of large gas giant planets on long-period orbits that are bright enough to detect using short integration times. Thanks to the growing abundance of long-baseline exoplanet radial velocity (RV) data (e.g., Rosenthal et al., 2021; Trifonov et al., 2020; Ribas et al., 2023), we can now use RV data in tandem with high-contrast imaging (HCI) observations to tailor our imaging observations to conduct lengthier measurements around fewer systems. Information from RV data can be applied to select viable targets for imaging, choose the optimal imaging filters, predict how much integration time is required, and predict when a companion will be at its maximum separation from its host star. This targeted approach to HCI observing motivates the use of extended observing sequences which can expand our abilities to directly image colder (\(<500\) K) companions.
In many cases, we only need a hint of a companion's existence to curate an HCI observation using RV data. Cheetham et al. (2018) demonstrated this by leveraging RV data to directly image an ultra-cool brown dwarf, HD 4113C. Based on the CORALIE survey's detection of long-term RV trends (Udry et al., 2000), Rickman et al. (2019) conducted targeted direct imaging resulting in the discovery of three giant planets and two brown dwarfs. The TRENDS high-contrast imaging survey used long-baseline velocities from Keck-HIRES to target its search for white dwarf and substellar companions (e.g., Crepp et al., 2018, Crepp et al., 2016). Hinkley et al. (2022) used the VLTI/GRAVITY instrument to discover HD 206893 c by utilizing long-baseline RV data from the European Southern Observatory's High Accuracy Radial velocity Planet Searcher (HARPS, Pepe et al., 2002; Mayor et al., 2003) and correlating it with the _Gaia_-Hipparcos astrometric accelerations (Brandt, 2021) and orbital astrometry of the system's outer companion.
Conducting targeted HCI observations of nearby systems that span multiple nights is becoming an increasingly common observing strategy to probe for sub-Jupiter mass exoplanets. The surveys from Mawet et al. (2019) and Llop-Sayson et al. (2021) completed multi-night HCI campaigns of the nearby, youthful \(\varepsilon\) Eridani system (\(d=3.22\) pc, age \(=600\pm 200\) Myr) with the goal of directly detecting the RV-discovered exoplanet \(\varepsilon\) Eridani b. Combined, the 2017 and 2019 surveys collected nearly 16 hours of \(4.67\mu\)m imaging data over nine nights using the W. M. Keck Observatory's NIRC2 Imager (Keck-NIRC2; Wizinowich et al., 2000) but were not able to make an imaging detection of the planet. By combining the mass upper limits from HCI with RV and _Gaia_ accelerations, Llop-Sayson et al. (2021) constrained the mass of \(\varepsilon\) Eridani b to the sub-Jupiter domain, \(0.66^{+0.12}_{-0.09}\)\(M_{\rm Jup}\). Wagner et al. (2021) also demonstrated the advantage of searching for companions around nearby stars by performing a 100 hr HCI survey at 10-12.5 \(\mu\)m of the \(\alpha\) Centauri system (\(d=1.3\) pc, age \(=5.3\pm 0.3\) Gyr). They imaged one candidate and demonstrated that it was possible to achieve survey sensitivities down to warm sub-Neptune mass planets through the majority of the \(\alpha\) Centauri habitable zone. While these surveys were not able to make definitive direct detections, they demonstrated the possibilities of future ground-based mid-infrared HCI campaigns of nearby stars.
In this paper, we present the results of our joint HCI-RV survey to search for substellar companions around the solar-neighborhood star Wolf 359 (_CN Leo, GJ 406,
Gaia DR3 3864972938605115520)._ The paper is organized as follows: In the remainder of §1, we provide an overview of the Wolf 359 system. In §2, we report our observational and data reduction methods for the Keck-NIRC2 coronagraphic imaging survey and the RV measurements from the W. M. Keck Observatory High Resolution Echelle Spectrometer (Keck-HIRES; Vogt et al., 1994) and the Gemini-North MAROON-X spectrograph (Seifahrt et al., 2020). In §3, we estimate Wolf 359's stellar age, apply these age constraints to the HCI data to set companion mass upper bounds, and provide an updated RV analysis combining our measurements with the previously published RV data from HARPS, Keck-HIRES, and CARMENES. In §4, we discuss how our imaging performance with Keck-NIRC2 compares to the predicted performance and then explore what _JWST_ high-contrast imaging could reveal about the Wolf 359 system.
### The Wolf 359 System
Wolf 359 is a solar-metallicity M6V star (Pineda et al., 2021) and one of our nearest stellar neighbors1 (2.41 pc; Gaia Collaboration et al., 2022). Table 1 summarizes Wolf 359's stellar parameters.
Footnote 1: As one of our nearest neighbors, this system has captured the public’s interest and is a setting in many fictional stories including the Wolf 359 podcast and several episodes in the _Star Trek_ franchise.
Radial velocity surveys have been monitoring Wolf 359 for more than two decades. A preprint paper presented by Tuomi et al. (2019) identified two exoplanet candidates orbiting Wolf 359 using 63 RV measurements from Keck-HIRES and HARPS spanning 13 years. These planet candidates are summarized in Table 2. The shorter-period candidate (Wolf 359 c) was refuted by Lafarga et al. (2021) after determining that the RV signal matched the star's rotation period. The RV signal for the Wolf 359 b candidate could correspond to a cold, Neptune-like exoplanet on a wide orbit of approximately 8 years (\(P_{orb}=2938\pm 436\) d, \(a=1.845^{+0.289}_{-0.258}\) AU; Tuomi et al., 2019).
Wolf 359 has conflicting age estimates in the literature, but most indicate that the star is young (\(<1\) Gyr). The star is highly active, with stellar flares that occur approximately once every 2 hr (Lin et al., 2022). Wolf 359 has strong flare activity even among flaring M dwarfs (Lin et al., 2021), which is consistent with a youthful age estimate. An age estimate by Pavlenko et al. 2006 made by modeling the spectral energy distribution predicts that Wolf 359 could be as young as \(100-350\) Myr, which is consistent with its high activity. Wolf 359 also has a fast rotation period (\(P_{\rm rot}=2.705\pm 0.007\) d; Guinan and Engle, 2018), as confirmed with photometry from _K2_(Howell et al., 2014), among other observatories. The combination of the gyrochronological relationship from Engle and Guinan (2018) and Wolf 359's stellar rotation period suggests an age estimate of \(<500\) Myr. However, the star lies at the edge of Engle and Guinan (2018)'s rotation-activity-age relationship for M0-6 stars, and the rotation period cannot act as a direct proxy for age in this system in this context.
The combination of Wolf 359's proximity and potential youth make it an ideal system for searching for companions using infrared direct imaging. An exoplanet candidate like Wolf 359 b would not be possible to directly image around most star systems. However, because Wolf 359 is one of our nearest neighbors, the parameters of the Wolf 359 b candidate can be constrained using our current generation of HCI instruments operating at 8-10 m telescopes.
## 2 Observations and Data Reduction
### Keck-II NIRC2 Vortex Coronagraphy
We conducted high-contrast imaging observations of the Wolf 359 system with the W.M. Keck Observatory NIRC2 imager coupled with the vector vortex coronagraph (Serabyn et al., 2017). We completed our observations over three nights, as summarized in Table 3.
We conducted HCI observations using the fixedhex pupil stop with Keck's L/M-band vortex coronagraph. The telescope was operated in the vertical angle rotation mode (Sky PA = 4.43\({}^{\circ}\)) to enable angular differential imaging (ADI) analysis methods. The centering of the vortex was controlled using the in-house QACITS IDL software package (Huby et al., 2017). Each QACITS sequence consisted of a set of (1) three calibration images to acquire an off-axis star PSF and sky images, (2) three optimization images to center the star on the vortex and stabilize the tip/tilt in the adaptive optics system, and (3) a series of science images.
We operated the Keck-II adaptive optics system with the recently commissioned near-infrared pyramid wavefront sensor (PyWFS; Bond et al., 2020) in natural guide star mode. We selected the PyWFS over the facility Shack-Hartmann wavefront sensor because it is better suited to performing adaptive optics corrections when using an M-dwarf as a natural guide star, as it operates in H-band (1.633\(\mu m\), NIRC2 Filters) rather than R-band (0.641 \(\mu m\), Bessell, 2005). Wolf 359 is 5.2 magnitudes brighter in the H-band than in R-band
(Landolt, 2009; Cutri et al., 2003); we were thus able to take advantage of the improved AO quality afforded by the significantly greater flux available for wavefront correction.
Our HCI survey spanned three nights in 2021: February 22, February 23, and March 31 (UT). We collected images using the Ms filter (\(4.670\mu m\), NIRC2 Filters) with NIRC2 operated in narrow mode. The science images had a frame size of 512 \(\times\) 512 pixels (\(5.090^{\prime\prime}\times 5.090^{\prime\prime}\); pixel scale = 0.009942 \(\pm\) 0.00005 arcsec/pixel, Keck General Specs). The frames were taken with an integration time of 0.3 s with 90 coadds. We obtained a total of 664 science frames over 14 QACITS sequences, totaling \(4.98\) hr of science integration time.
We performed our data reduction using the _VIP: Vortex Imaging Processing_ python package (VIP) (Gomez Gonzalez et al., 2017). We pre-processed the NIRC2 data for bad pixels, flat-field correction, and sky background correction using the automated pipeline described in Xuan et al. (2018) with VIP version 0.9.9. Sky subtraction was completed using the PCA-based approach described in Hunziker et al. (2018) with VIP version 1.3.0. After pre-processing the science images, we removed the 5% lowest-quality science frames from each night using VIP's Pearson-correlation bad-frame detection.
To establish an anchor for our reported contrast, we measured the flux of Wolf 359 using the unobstructed PSF images taken at the start of the QACITS sequence. We created a PSF template by combining and then normalizing the 14 PSF images taken on 2021 March 31 (UT). The PSF frames were collected using an integration time per coadd of 0.015 s with 100 coadds. We performed the stellar photometry using the fit_2dgaussian function, as outlined in the VIP tutorial. We measured the full width at half maximum of the NIRC2 Ms PSF to be \(FWHM=9.67\) pixels (\(0.0962^{\prime\prime}\)).
We created the final reduced image using the combined image set from the three nights with the 631 images that passed bad-frame detection. We applied a highpass filter to each individual image using VIP's Gaussian highpass filter with size 2.25 \(FWHM\). The images were then derotated using the parallactic angle and median combined. We subtracted the stellar point spread function (PSF) using full-frame angular differential imaging principal component analysis (PCA) with VIP's pca module (following the methods of Soummer et al., 2012 and Amara and Quanz, 2012). We performed PCA optimization by injecting a fake companion 100 pixels from the star to determine the number of principal components that yielded the maximum signal-to-noise ratio of the fake companion. The three-night combined image set had an optimal number of principal components of PC = 18 (PC = 4 when highpass filtering was applied). While performing PCA stellar point spread function subtraction, we adopted a central mask of 2 \(FWHM\) and a parallactic exclusion angle of 1 \(FWHM\). The final reduced image from the highpass-filtered three-night combined image set is shown in Figure 1 along with its accompanying signal-to-noise threshold map. We detected no point source signals above a 2\(\sigma\) threshold using VIP's built-in detection function in log mode. We thus conclude that we did not detect any companions in the direct imaging portion of this survey.
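For illustration, the following is a stripped-down NumPy/SciPy sketch of the full-frame ADI-PCA idea (a low-rank model of the stellar PSF is subtracted from each frame before derotation and median combination); it is not VIP's implementation, and the image cube, parallactic angles, and rotation sign convention are placeholders.

```python
import numpy as np
from scipy.ndimage import rotate

def adi_pca(cube, parangs, ncomp=18):
    """Full-frame ADI-PCA: model the stellar PSF with the first `ncomp` principal
    components of the mean-subtracted image cube, subtract it, derotate by the
    parallactic angles, and median-combine the residual frames."""
    nfr, ny, nx = cube.shape
    X = cube.reshape(nfr, -1)
    mean = X.mean(axis=0)
    Xc = X - mean
    # principal components from the SVD of the stacked frames
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    modes = Vt[:ncomp]                                  # (ncomp, ny*nx)
    psf_model = (Xc @ modes.T) @ modes + mean
    residuals = (X - psf_model).reshape(nfr, ny, nx)
    derot = np.stack([rotate(f, -pa, reshape=False, order=1)
                      for f, pa in zip(residuals, parangs)])
    return np.median(derot, axis=0)

# toy example (placeholder data)
cube = np.random.randn(50, 64, 64)
parangs = np.linspace(0.0, 120.0, 50)
final_img = adi_pca(cube, parangs, ncomp=5)
```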
We calculated contrast curves using VIP's contrast_curve function, which computes the contrast as \(\sigma\times noise/throughput\) using fake planet injection with a Student-t distribution correction. We found that applying a highpass filter had little effect on our final sensitivity, so our contrast curves are reported using the images with no highpass filtering applied. The combined-night contrast curve was calculated by first processing the
\begin{table}
\begin{tabular}{c c}
\hline \hline
Property & Value \\
\hline
RA J2000 & 10 56 28.92 \\
DEC J2000 & \(+07\) 00 53.00 \\
Distance & \(2.4086\pm 0.0004\) pc \\
Parallax & \(415.18\pm 0.07\) mas \\
Spectral Type & solar-metallicity M6 \\
Mass & \(0.110\pm 0.003\)\(M_{\odot}\) \\
\(T_{\rm eff}\) & \(2749^{+44}_{-41}\) K \\
Radius & \(0.144\pm 0.004\)\(R_{\odot}\) \\
log(g) & \(5.5\) cgs \\
V mag & \(13.5\) \\
R mag & \(11.684\) \\
H mag & \(6.482\) \\
MKO Ms mag & \(5.85\pm 0.06\) \\
Rotation Period & \(2.705\pm 0.007\) d \\
Age range & \(100\) Myr–1.5 Gyr \\
\hline
\end{tabular}
\end{table}
Table 1: Properties of Wolf 359
sensitivity by separation for each night separately. The combined-nights sensitivity was then calculated using a weighted variance at each separation, \(\sigma_{\rm comb}(sep)=\left[\sigma_{n1}(sep)^{-2}+\sigma_{n2}(sep)^{-2}+\sigma_{n3}(sep)^{-2}\right]^{-1/2}\). The overall sensitivity of the HCI survey of Wolf 359 is plotted in Figure 2.
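A minimal sketch of this inverse-variance combination, assuming all nightly curves are sampled on a common separation grid, is shown below; the contrast values are placeholders.

```python
import numpy as np

def combine_contrast(curves_5sigma):
    """Inverse-variance combination of per-night 5-sigma contrast curves,
    all sampled on the same separation grid and given in linear flux units."""
    sig = np.vstack(curves_5sigma)
    return np.sqrt(1.0 / np.sum(sig ** -2, axis=0))

# toy example: three nights on a common separation grid (placeholder values)
sep = np.linspace(0.2, 1.7, 50)                 # arcsec
night1 = 1e-3 * (sep / 0.2) ** -1.5 + 2e-5      # synthetic 5-sigma contrast
night2 = 0.9 * night1
night3 = 0.8 * night1
combined = combine_contrast([night1, night2, night3])
```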
### Radial Velocity Observations
#### 2.2.1 Keck-HIRES
We present an additional 12 Keck-HIRES high-precision RV measurements gathered by the California Planet Search (CPS) team between Dec 25 2017 and Jan 13 2022 (UT). The Keck-HIRES velocities are available in Appendix A and online in a machine readable format. These measurements extend the baseline of the Keck-HIRES post-2004 velocities to over 17 years when combined with the 40 Keck-HIRES RVs included in Tuomi et al. (2019). The new Keck-HIRES exposures were collected with the C2 decker (14\({}^{\prime\prime}\times\)0\({}^{\prime\prime}\).86, \(R\) = 45,000) and had a median integration time of 1800 s, corresponding to a median SNR of 65 pix\({}^{-1}\) at 5500 A.
Observations were taken with a warm (50\({}^{\circ}\) C) cell of molecular iodine at the entrance slit (Butler et al., 1996), and RVs were determined following the procedures of Howard et al. (2010). The superposition of the iodine absorption lines on the stellar spectrum provides both a fiducial wavelength solution and a precise, observation-specific characterization of the instrument's PSF. Each RV spectrum was then modeled as the product of the deconvolved template spectrum and the FTS molecular iodine spectrum, convolved with the point-spread function. The chi-squared value of this model is minimized with the RV (\(Z\)) as one of the free parameters.
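A toy version of this forward model is sketched below: the template is Doppler shifted, multiplied by an iodine spectrum, convolved with a Gaussian stand-in for the PSF, and the chi-squared is scanned over trial velocities. The spectra, line shapes, and PSF width are placeholders, and the real CPS reduction additionally fits the PSF and wavelength solution.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

C_KMS = 299792.458

def forward_model(wave, template, iodine, rv_kms, psf_sigma_pix):
    """Doppler-shift the template, imprint the iodine spectrum, blur by the PSF."""
    shifted = np.interp(wave, wave * (1.0 + rv_kms / C_KMS), template)
    return gaussian_filter1d(shifted * iodine, psf_sigma_pix)

def best_rv(wave, template, iodine, observed, err, psf_sigma_pix, rv_grid):
    """Return the trial RV (km/s) that minimizes chi-squared against the observation."""
    chi2 = [np.sum(((observed - forward_model(wave, template, iodine, rv,
                                               psf_sigma_pix)) / err) ** 2)
            for rv in rv_grid]
    return rv_grid[np.argmin(chi2)]

# toy spectra (placeholders): one stellar absorption line plus an iodine comb
wave = np.linspace(5000.0, 5005.0, 2000)
template = 1.0 - 0.5 * np.exp(-0.5 * ((wave - 5002.0) / 0.05) ** 2)
iodine = 1.0 - 0.2 * (np.sin(2 * np.pi * wave / 0.08) > 0.9)
truth = forward_model(wave, template, iodine, 0.5, 2.0)     # injected 0.5 km/s
obs = truth + np.random.normal(0, 1e-3, wave.size)
print(best_rv(wave, template, iodine, obs, 1e-3, 2.0,
              np.linspace(-2.0, 2.0, 401)))
```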
RVs computed via the iodine cell method require a high-SNR iodine-free "template" of the stellar spectrum. Ideally, CPS aims for template spectra to have an SNR of about 200 pix\({}^{-1}\) at 5500 A in order to properly deconvolve the spectrum with the instrument's PSF, which is measured by observing rapidly rotating B stars immediately before and after the template exposure(s). In the case of Wolf 359, CPS acquired three consecutive iodine-free exposures of the star on 2005 Feb 27 with the B1 decker (3.5\({}^{\prime\prime}\times\)0\({}^{\prime\prime}\).574, \(R\) = 60,000). Each observation had an exposure time of 400 s corresponding to a combined SNR of 40 pix\({}^{-1}\) at 5500 A. Because Wolf 359 is relatively faint in \(V\)-band (\(V\) = 13.5 mag; Landolt, 2009), high SNR Keck-HIRES exposures quickly become prohibitively expensive (SNR of \(\sim\) 100 pix\({}^{-1}\) would take well over an hour of integration). Rather than attempt to acquire another, higher SNR template of Wolf 359, we searched for a best-match template from a library of over 300 stars with high-SNR, iodine-free Keck-HIRES spectra and bracketing B star observations following the methods of Dalba et al. (2020). Recomputing the RVs using the best-match template that we identified increased the RV errors by a factor of \(\sim\) 2, so we chose to continue to use the original CPS template. The poor match might be a consequence of Wolf 359's late spectral type--the library from Dalba et al. (2020) contains stars with \(T_{\rm eff}>3000\) K. Using the CPS template, RVs taken before the Keck-HIRES detector upgrade in 2004 have a median measurement error of 8.2 m/s and post-upgrade RVs have a median measurement error of 3.9 m/s.
#### 2.2.2 Maroon-X
We publish 68 measurements of Wolf 359 made with the MAROON-X spectrograph at Gemini Observatory. The MAROON-X velocities are available in Appendix A and online in a machine readable format. The MAROON-X data were acquired with both the red (649-920 nm) and blue (491-670 nm) arm simultaneously during 34 observing nights. These observations were taken over 5 observing runs during February 2021, April 2021, May 2021, November 2021, and April 2022.
Spectra were taken with a fixed exposure time of 30 min and showed an average peak SNR of 90 pix\({}^{-1}\) in the blue arm and 460 pix\({}^{-1}\) in the red arm. The data were reduced by the instrument team using a custom
\begin{table}
\begin{tabular}{l c c c c c}
\hline \hline
Candidate & Period (d) & \(m\sin i\) (\(M_{\oplus}\)) & a (AU) & Status & Note \\
\hline
Wolf 359 b & \(2938\pm 436\) & \(43.9^{+29.5}_{-23.9}\) & \(1.845^{+0.289}_{-0.258}\) & possible cold Neptune & investigated in this work \\
Wolf 359 c & \(2.6869^{+0.0004}_{-0.0003}\) & \(3.8^{+2.0}_{-1.6}\) & \(0.018\pm 0.002\) & false positive\({}^{\dagger}\) & RV signal is due to star rotation \\
\hline
\multicolumn{6}{l}{\({}^{\dagger}\) Wolf 359 c was refuted by Lafarga et al. (2021).} \\
\end{tabular}
\end{table}
Table 2: Exoplanet Candidates Identified by Tuomi et al. (2019)
Figure 1: Final reduced image of the Wolf 359 system from the Keck-NIRC2 high-contrast imaging survey: Our final reduced image of the Wolf 359 system was created using the highpass-filtered three-night combined image cube. The corresponding S/N map is shown in (b). The red circle shown in (a) corresponds to the predicted semi-major axis of the Wolf 359 b candidate. The stellar PSF was subtracted using full-frame PCA with VIP. No companion-like point sources were detected at more than 2\(\sigma\) above the background using VIP’s built-in detection function.
Figure 2: Contrast curves from the Keck-NIRC2 imaging survey: The contrast curves were created using the fullframe PCA algorithm in VIP with the images that were not highpass filtered. The solid black line represents the 5\(\sigma\) sensitivity achieved with the combined-nights cube.
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{Date (UT)} & Total Sci & Total Int & \multirow{2}{*}{PA Change} & PWV & Optimal PC & \(5\sigma\Delta Mag\) & \(5\sigma\Delta Mag\) & \(5\sigma\Delta Mag\) \\ & Frames & Time (hr) & & opacity & full fr. PCA & 0.2” (0.5 AU) & 0.98” (1 AU) & 1.7” (4 AU) \\ \hline
2021 Feb 22 & 181 & 1.36 & 77.35\({}^{\circ}\) & 0.08-0.20 & 8 & 6.0 & 7.3 & 7.5 \\
2021 Feb 23 & 200 & 1.50 & 76.88\({}^{\circ}\) & 0.06-0.10 & 28 & 6.1 & 7.5 & 8.3 \\
2021 Mar 31 & 283 & 2.12 & 126.29\({}^{\circ}\) & 0.10-0.16 & 14 & 6.4 & 7.7 & 8.6 \\ \hline Combined Nights & 664 & 4.98 & – & – & 18 & 6.8 & 8.2 & 8.9 \\ \hline \end{tabular}
* All science images were collected using the MKO Ms filter, \(int=0.3s\), \(coadd=90\), and \(subframesize=512\times 512\) with NIRC2 in narrow mode. The precipitable water vapor opacity was measured at 225 GHz by the Submillimeter Array and retrieved at [http://www.eao.hawaii.edu/weather/opacity/mk/archive/](http://www.eao.hawaii.edu/weather/opacity/mk/archive/). The optimal PC and \(5\sigma\) contrast are reported for the image sets with no highpass filter applied. To convert the listed \(\Delta Mag\) to MKO Ms apparent magnitude, add 5.85.
\end{table}
Table 3: High-Contrast Imaging Keck-NIRC2 Observing Summary
python3 data reduction pipeline to produce optimally extracted and wavelength-calibrated 1D spectra. The radial velocity analysis was performed using SERVAL (Zechmeister et al., 2018), a template-matching RV retrieval code, in a custom python3 implementation. On average, the RV uncertainty per datum was 1.0 m/s for the blue arm and 0.3 m/s for the red arm. MAROON-X uses a stabilized Fabry-Perot etalon for wavelength and drift calibration (Seifahrt et al., 2022) and can deliver 30 cm/s on-sky RV precision over short timescales (Trifonov et al., 2021), but it suffers from inter-run RV offsets with additional per-epoch uncertainties ranging from 0.5-1.5 m/s, corresponding to increased uncertainties of 1.4 m/s for the blue arm and 0.9 m/s for the red arm for signals on timescales longer than one month.
## 3 Analysis
### Stellar Age Estimation
We provide an updated analysis of the age of Wolf 359 in order to constrain the sensitivities of our high-contrast imaging survey. We correlate our age estimates to our HCI survey sensitivity using evolutionary cooling models in order to determine the maximum mass of an unseen companion in Section 3.2.
**Gyrochronology:** The relation between rotation period, age, and mass has been studied extensively for low-mass stars (e.g. Skumanich, 1972; Barnes and Kim, 2010; Irwin et al., 2011; Curtis et al., 2020). It has been shown that stars begin their life with a fast rotation period and spin down with time via magnetic braking. The particular shape of this relation and the time it takes a star to spin down depends on its mass. The gyrochronology relation for Sun-like stars is calibrated, so the rotation period can be used to estimate an age. However, this gyrochronology relationship for Sun-like stars does not hold for M dwarfs (e.g. Angus et al., 2019). While the relationship for low-mass stars has not been calibrated, it has been shown that rotation correlates with relative maturity (e.g. Popinchalk et al., 2021; Dungee et al., 2022; Pass et al., 2022).
We calculated Wolf 359's Rossby number to be \(Ro=0.02\) using the convective turnover time computed from Wright et al. (2011). We then compared our \(Ro\) value to Figure 6 in Newton et al. (2017) and find that Wolf 359 lies in the magnetically saturated portion of this plot. For Sun-like stars, being in the saturated regime means the star is young (\(<100\) Myr). However, M dwarfs remain fast rotators for longer, so a fast rotation period does not necessarily mean the star is that young (e.g. Irwin et al., 2011; Medina et al., 2022). Recently, Medina et al. (2022) estimated that fully convective M dwarfs transition from the saturated to the unsaturated regime at around \(2.4\pm 0.3\) Gyr, which provides an approximate upper limit to the age of Wolf 359 but is not very constraining. Below we combine the rotation period with kinematics to estimate a more constrained upper limit on the age of Wolf 359.
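The Rossby number is simply the rotation period divided by the convective turnover time; the sketch below uses a mass-based turnover-time fit that we take to follow Eq. 11 of Wright et al. (2011) (treat the coefficients as an assumption) and reproduces \(Ro\approx 0.02\) for Wolf 359.

```python
import numpy as np

def convective_turnover_time(mass_msun):
    """Empirical convective turnover time (days) as a function of stellar mass,
    per our reading of Wright et al. (2011), Eq. 11; coefficients assumed here."""
    logm = np.log10(mass_msun)
    return 10.0 ** (1.16 - 1.49 * logm - 0.54 * logm ** 2)

p_rot = 2.705                        # d (Guinan & Engle 2018)
tau = convective_turnover_time(0.110)
print(tau, p_rot / tau)              # turnover time ~1e2 d, Rossby number ~0.02
```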
**CMD age dating**: We compared the color magnitude diagram position of Wolf 359 against the 100 pc sample of M dwarfs from _Gaia_ and empirical sequences based on bona fide members of young associations of several ages (Gagne et al., 2021). From Figure 3, we conclude that Wolf 359 has already converged into the main sequence. This analysis suggests that Wolf 359 is older than the age of the Pleiades cluster (112 Myr) as the lowest mass stars in this cluster have not converged into the main sequence. From the CMD analysis, we conclude the age of Wolf 359 is older than 112 Myr.
**Isochrone age dating:** We used the MESA Isochrones and Stellar Tracks (MIST; Dotter, 2016; Choi et al., 2016) to estimate Wolf 359's age using a color-magnitude diagram. We adopt the MESA models associated with an M6 star (\(0.11M_{\odot}\)) with a metallicity of [Fe/H] = 0.25 dex (Mann et al., 2015) and rotation of \(0.4v/v_{\rm crit}\). We used _Gaia_ photometry (apparent magnitude \(g=11.038\pm 0.003\), absolute magnitude \(G=14.130\pm 0.003\), apparent magnitude \(g_{\rm bp}=13.770\pm 0.005\)) to compare with the MIST isochrones (Figure 4). Our isochrone age estimate is largely driven by the measurement of the _Gaia_ G magnitude.
Figure 3: CMD comparison with young moving groups: We plot the color-magnitude diagram for Wolf 359 with empirical sequences from young associations of ages 10 Myr, 24 Myr, 112 Myr, 562 Myr, and 750 Myr (Gagne et al., 2021) and the _Gaia_ 100 pc sample of M dwarfs. The red star represents the position of Wolf 359. The color-magnitude position of Wolf 359 is not in agreement with the youngest moving groups of 10–112 Myr. We find that Wolf 359 is in better agreement with the Coma Berenices (562 Myr) and Hyades (750 Myr) moving groups and the field sample. We conclude that Wolf 359 has converged on to the main sequence and that its age is older than 112 Myr.
While the MIST models can be unreliable for low-mass stars, they were recently shown to provide a good fit for stars like Wolf 359 with masses below \(0.25\,M_{\odot}\) and a metallicity of [Fe/H]=+0.25 using the Hyades single star sequence (Brandner et al., 2023). We predict an age of \(\sim 400\) Myr using the MIST models.
**Kinematic age dating:** We estimated Wolf 359's kinematic age to be \(1.53\pm 0.3\) Gyr following the methods outlined in Lu et al. (2021). Briefly, this method consists of estimating the vertical velocity dispersion of a group of stars with similar temperatures and similar rotation periods. Assuming that the evolution of the rotation period for stars with similar temperatures is the same, the stars in this group should have similar ages. Therefore, we can use an age-velocity relation to estimate the average age of the group from the vertical velocity dispersion. We obtained a group of stars with similar mass and rotation period to Wolf 359 from the MEarth sample in Newton et al. (2018). We combined their reported rotation periods, masses, and radial velocities with their proper motions and parallaxes retrieved from _Gaia_ eDR3 (Gaia Collaboration et al., 2021) in galpy2 (Bovy, 2015) to calculate their vertical velocities. We then created a bin in mass and rotation period around Wolf 359, selecting similar stars with similar ages. To define the size of the bin, we used a group of stars with similar mass and rotation period to one M dwarf in the MEarth sample which is co-moving with a white dwarf. We used wdwarfdate (Kiman et al., 2022) to get the age of the white dwarf from its effective temperature and surface gravity (retrieved from Gentile Fusillo et al. 2021), and set the bin size so the kinematic age of the group reproduced that age. We used the age-velocity relation from Yu and Liu (2018) to correlate the vertical velocity dispersion with ages and then performed a Monte Carlo propagation of the vertical velocity uncertainties to determine the uncertainty in the kinematic age of Wolf 359. The resulting distribution from the Monte Carlo simulation is shown in Figure 5. We obtained a kinematic age of \(1.5\pm 0.3\) Gyr. However, as most of the stars in the bin are in the saturated regime, their rotation periods still depend on their initial rotation periods, making the dispersion in age larger. Therefore, we adopt an age of \(1.5\) Gyr as an upper bound for Wolf 359's age.
Footnote 2: Galpy: [https://github.com/jobovy/galpy](https://github.com/jobovy/galpy)
**Age summary:** Our age estimate from the MIST isochrone comparison (\(400\) Myr) is consistent with our young association comparison (\(>112\) Myr). Our CMD comparison with young moving groups shows it is probable that Wolf 359 has converged onto the main sequence. While the \(2.7\) d rotation period cannot be used to provide an exact age using gyrochronology, Wolf 359's fast rotation is a relative indicator of youth (\(<2.4\) Gyr). We provide a better-constrained upper bound of \(\sim 1.5\pm 0.3\) Gyr from kinematic age dating.
For completeness through the remainder of this paper, we consider ages for Wolf 359 between \(100\) Myr and \(1.5\) Gyr in our HCI analysis. However, our analysis suggests that the ages estimated by Pavlenko et al. (2006) using the spectral energy distribution (\(\sim 100-350\) Myr) are less likely given Wolf 359's suspected convergence onto the main sequence. If we someday measure the dynamical mass and temperature of an exoplanet companion around Wolf 359 using infrared direct imaging, we may then be able to apply planetary-mass isochrones to refine this age estimate.
### High-Contrast Imaging Analysis
We used the Keck-NIRC2 contrast curves (Figure 2) to determine the final \(5\sigma\) sensitivity of our imaging survey across separations between \(0.23\) AU and \(4.18\) AU. We cannot make constraints on companions orbiting beyond separations of \(4.18\) AU on the night of observation because the field of view of the camera was limited to \(5.1\arcsec\times 5.1\arcsec\)(\(512\times 512\) pixel) to increase the speed of camera readout.
We then applied published isochrone models to predict the upper mass limits for companions ruled out by the HCI observations. In Figure 6a, we applied the
Figure 4: Isochrone age dating: We used the MIST isochrone models with the _Gaia_ eDR3 photometry in G and BP to estimate an age for Wolf 359. The blue line represents the MIST isochrone track for a star of \(0.11\,M_{\odot}\) with metallicity of \([Fe/H]=+0.25\) dex. Wolf 359 is represented by the red star, which lies closest to the isochrone point with an age of \(393\) Myr (between \(373\) Myr and \(414\) Myr). We estimate an age of \(400\) Myr from isochrone dating.
isochrone models created by Isabelle Baraffe3 to place constraints in the speckle-limited region at the tightest angular separations (\(<1\) AU). We used the BHAC15 models for the stellar regime (\(T_{\rm eff}>3000\) K; Baraffe et al., 2015), the DUSTY models for the brown dwarf regime (\(1700\) K \(<T_{\rm eff}<3000\) K; Chabrier et al., 2000), and the COND models for the planetary regime (\(T_{\rm eff}<1400\) K; Baraffe et al., 2003). The Baraffe models predict that companions with masses above the deuterium-burning limit (\(>13\)\(M_{\rm Jup}\)) and ages \(<1.5\) Gyr will be brighter than \(Ms=14.0\). Our survey reached a greater than \(5\sigma\) sensitivity to companions with \(Ms=14\) at separations greater than \(0.25\) AU. We therefore rule out any stellar and brown dwarf companions orbiting between \(0.25\) AU and \(4.18\) AU at the time of observation.
Footnote 3: The Baraffe isochron models were retrieved at [http://perso.ens-lyon.fr/isabelle.baraffe/](http://perso.ens-lyon.fr/isabelle.baraffe/).
In Figure 6b, we used the isochrone models presented by Linder et al. (2019) to set the mass upper limit in the background-limited regime from 1-4.18 AU, where the sensitivity is limited by the sky background rather than the stellar contrast. Our combined-night contrast curve averages a sensitivity of \(Ms=17.7\) in this region. This sensitivity rules out companions more massive than \(2.1\)\(M_{\rm Jup}\) (\(667\)\(M_{\oplus}\)) for ages younger than \(1.5\) Gyr. We cannot rule out companions to \(5\sigma\) with masses smaller than \(0.4\)\(M_{\rm Jup}\) (\(127\)\(M_{\oplus}\)) for any adopted age older than \(100\) Myr.
In order to estimate the completeness of the high-contrast imaging survey in mass and orbital semi-major axis, we utilized the Exoplanet Detection Map Calculator (Exo-DMC) package (Bonavita, 2020) (Figure 7). We converted the combined-night \(5\sigma\) Keck-NIRC2 contrast curves from apparent Ms magnitude into upper mass estimates adopting four ages: \(100\) Myr, \(300\) Myr, \(500\) Myr, and \(1\) Gyr (Figure 7a). We used the Linder and Ames-COND isochrone models for this conversion and averaged the estimated masses in areas where the models overlapped. The Ames-COND isochrones4 were accessed using the species package (Stolker et al., 2020).
Footnote 4: The Ames-COND models can be found at [https://phoenix.ens-lyon.fr/Grids/AMES-Cond/](https://phoenix.ens-lyon.fr/Grids/AMES-Cond/)
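Schematically, this conversion inverts an isochrone grid of absolute Ms magnitude versus mass at the adopted age, as in the sketch below; the tabulated grid values here are illustrative placeholders and not the Linder or Ames-COND models.

```python
import numpy as np

def contrast_to_mass(delta_mag, star_app_ms, dist_pc, iso_mass_mjup, iso_abs_ms):
    """Convert a 5-sigma contrast (delta mag) into a companion upper-mass limit by
    interpolating an isochrone grid of absolute Ms magnitude versus mass."""
    app_ms = star_app_ms + delta_mag
    abs_ms = app_ms - 5.0 * np.log10(dist_pc / 10.0)
    # interpolate mass as a function of absolute magnitude (grid sorted by magnitude)
    order = np.argsort(iso_abs_ms)
    return np.interp(abs_ms, iso_abs_ms[order], iso_mass_mjup[order])

# placeholder isochrone grid (NOT the published models): fainter -> less massive
iso_mass = np.array([0.5, 1, 2, 5, 10, 20, 50])            # M_Jup
iso_ms = np.array([19.5, 18.2, 16.9, 15.1, 13.8, 12.6, 10.9])
print(contrast_to_mass(8.2, 5.85, 2.41, iso_mass, iso_ms))
```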
The survey coverage exceeds \(10\%\) for semi-major axes between \(0.2\) AU and \(10\) AU, with the best coverage (\(>95\%\)) for semi-major axes between 1-3 AU. Assuming an age younger than \(1\) Gyr, we rule out companions above \(10\)\(M_{\rm Jup}\) with a semi-major axis of \(1\)-3 AU. While the semi-major axis predicted for the Wolf \(359\) b candidate (\(a=1.8\pm 0.2\) AU) is within this range, we do not reach the sensitivity to probe down to its predicted minimum mass (\(m\sin i\sim 0.14\)\(M_{\rm Jup}\)) regardless of age. For an age of \(1\) Gyr, we find that the Wolf \(359\) b candidate as described by Tuomi et al. (2019) cannot be more massive than \(4\,M_{\rm Jup}\); for an age of \(100\) Myr, it cannot be more massive than \(1\,M_{\rm Jup}\).
### Radial Velocity Analysis
Our RV analysis incorporates 275 velocities from four instruments: Calar Alto Observatory's CARMENES (Quirrenbach et al., 2016), ESO-HARPS (Mayor et al., 2003), Keck-HIRES, and Gemini-MAROON-X. The RV instruments and measurements used in our analysis of Wolf 359 are summarized in Table 4 and are available in full in machine readable format online. The CARMENES data were retrieved from the DR1 release which spans from 2016-2020 (Ribas et al., 2023). The MAROON-X, HIRES, and HARPS data were provided directly by the observing teams.
We elected to use the HARPS data as analyzed with the TERRA pipeline (Anglada-Escude and Butler, 2012) in order to remain consistent with the analysis presented in Tuomi et al. (2019). The 77 HARPS-TERRA velocities used in this analysis incorporate the velocities presented in the 2019 announcement.
The MAROON-X RVs were computed using both the red and blue arms of the spectrograph, producing two RV measurements per observation. We treat the MAROON-X red-arm and blue-arm measurements as coming from different instruments to account for different instrumental offsets and RV jitter amplitudes. We do the same for the Keck-HIRES velocities collected before
Figure 5: Kinematic age dating: Wolf 359’s kinematic age was measured using the methods outlined in Kiman et al. (2019). The Monte Carlo simulation results shown here yield a kinematic age of \(1.53\pm 0.3\) Gyr. We adopt this kinematic age as our upper bound on the age of Wolf 359.
and after a detector upgrade in 2004. Within each instrument, we bin observations collected within 0.1 d of one another.
We used the RVSearch5 python package (Rosenthal et al., 2021) to perform a blind planet search within our RV timeseries data (Figure 8). We detected the known signal associated with the rotation period of the star (2.71 d). Once the stellar-rotation activity signal was removed, we detected no additional signals above our False Alarm Probability threshold of 0.1%. We used the injection-recovery tools built into RVSearch to estimate the sensitivity of our RV survey to planets of specified \(m\sin i\) and semi-major axis and to create the completeness contour shown in Figure 9. The probability of detection for a planet with a minimum mass equivalent to a Neptune mass, a Jupiter mass, and the Wolf 359 b candidate is also shown in Figure 9. RVSearch yielded a 32% completeness at the \(m\sin i\) and semi-major axis of the Wolf 359 b candidate. Because we do not have significant completeness in this region of parameter space, we are not able to confirm or refute the candidacy of Wolf 359 b using RVSearch with our RV dataset.
Footnote 5: RVsearch: [https://github.com/California-Planet-Search/rvsearch](https://github.com/California-Planet-Search/rvsearch)
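A heavily simplified stand-in for this injection-recovery machinery is sketched below: circular Keplerian signals are injected into activity-cleaned residuals and counted as recovered when a significant Lomb–Scargle peak appears near the injected period. The residual time series is a placeholder, and RVSearch's iterative Keplerian fitting is replaced by a periodogram test.

```python
import numpy as np
from astropy.timeseries import LombScargle

def injection_recovery(t, resid, err, k_ms, per_d, n_trials=50, fap=1e-3):
    """Fraction of injected circular signals (semi-amplitude k_ms, period per_d)
    recovered as a significant Lomb-Scargle peak within 5% of the injected period."""
    recovered = 0
    rng = np.random.default_rng(0)
    for _ in range(n_trials):
        phase = rng.uniform(0, 2 * np.pi)
        injected = resid + k_ms * np.sin(2 * np.pi * t / per_d + phase)
        ls = LombScargle(t, injected, err)
        freq, power = ls.autopower(maximum_frequency=1.0)
        best = freq[np.argmax(power)]
        ok_period = abs(1.0 / best - per_d) / per_d < 0.05
        ok_fap = ls.false_alarm_probability(power.max()) < fap
        recovered += ok_period and ok_fap
    return recovered / n_trials

# placeholder residuals spanning ~20 yr with 3 m/s white noise
t = np.sort(np.random.uniform(0, 7300, 275))
resid = np.random.normal(0, 3.0, t.size)
print(injection_recovery(t, resid, np.full(t.size, 3.0), k_ms=5.0, per_d=2900.0))
```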
To further explore the candidacy of Wolf 359 b, we used the open-source software package radvel6 (Fulton et al., 2018) to model the RV data. We used the Tuomi et al. (2019) results for Wolf 359 b listed in Table 2 as priors. We employed fits with and without the Gaussian Process fitting module, which can be used to fit and remove signals due to stellar activity. We ran our radvel MCMCs using \(N_{walkers}=50\), \(N_{steps}=10000.0\), \(N_{ensembles}=6\), and \(MinAutoFactor=30.0\). In all radvel fits, the chains did not pass the convergence test indicating that the walkers were well mixed. Because the convergence criteria could not be met, we draw no conclusions about the properties of the Wolf 359 b candidate from our radvel fits.
Footnote 6: Radvel: [https://github.com/California-Planet-Search/radvel](https://github.com/California-Planet-Search/radvel)
We detected no new candidates. At 95% confidence, our RV analysis excludes planets with a minimum mass \(m_{\rm p}\sin i>13.5\ M_{\oplus}\) (0.04 \(M_{\rm Jup}\)) at \(a=0.1\) AU and planets with a minimum mass \(m_{\rm p}\sin i>147\ M_{\oplus}\) (0.46 \(M_{\rm Jup}\)) at \(a=1\) AU. We have over 50% completeness to exclude planets with \(m_{\rm p}\sin i\) equal to or greater than \(1\,M_{\rm Jup}\) within 5.3 AU and one Neptune mass within 0.52 AU. Our RV survey has little sensitivity to companions orbiting with a semi-major axis larger than \(a>10\) AU at any mass.
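These minimum-mass limits map onto RV semi-amplitudes through the standard Keplerian relation, as in the sketch below; the physical constants are rounded and a circular orbit is assumed.

```python
import numpy as np

G = 6.674e-11                      # m^3 kg^-1 s^-2
MSUN, MEARTH, AU = 1.989e30, 5.972e24, 1.496e11

def rv_semi_amplitude(m_planet_mearth, a_au, m_star_msun, ecc=0.0):
    """RV semi-amplitude (m/s) for a planet of minimum mass m_planet*sin(i)
    at semi-major axis a_au around a star of mass m_star_msun."""
    mp = m_planet_mearth * MEARTH
    ms = m_star_msun * MSUN
    per = 2 * np.pi * np.sqrt((a_au * AU) ** 3 / (G * (ms + mp)))   # Kepler's third law
    return ((2 * np.pi * G / per) ** (1 / 3) * mp
            / (ms + mp) ** (2 / 3) / np.sqrt(1 - ecc ** 2))

print(rv_semi_amplitude(147, 1.0, 0.11))    # the 0.46 M_Jup exclusion limit at 1 AU
print(rv_semi_amplitude(17.1, 0.52, 0.11))  # a Neptune-mass planet at 0.52 AU
```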
## 4 Discussion
### Performance of the direct imaging survey with Keck-NIRC2
Few Keck-NIRC2 HCI observations have been published that span multiple nights and utilize the Ms filter (4.67\(\mu\)m) in conjunction with the vector vortex coronagraph. Previously published deep surveys of this type
Figure 6: Isochrones overlaid with the 5\(\sigma\) constraints from the Keck-NIRC2 survey: The horizontal lines represent our imaging survey’s 5\(\sigma\) sensitivity at 1 and 4 AU of separation. (a) We use the BHAC15/DUSTY/COND models (Baraffe et al., 2015, Chabrier et al., 2000, Baraffe et al., 2003) to rule out all tight stellar and brown dwarf companions (\(>13\,M_{\rm Jup}\)) outside of 0.25 AU (0.1′′). (b) From 1–4.18 AU of separation, we apply the Linder et al. (2019) low-mass planetary cooling models to place upper mass limits on planetary companions. We rule out planets with masses \(>1.5M_{\rm Jup}\) to 5\(\sigma\) if Wolf 359 is younger than 500 Myr in this region.
have so far been limited to the Eps Eri results from Mawet et al. (2019) and Llop-Sayson et al. (2021). However, it is expected that surveys similar to the work presented here will become more common as data from indirect methods of exoplanet detection become more widely available and drive targeted direct imaging surveys towards studying colder companions. We document the expected performance of our imaging survey compared to our measured performance to aid in the planning of future multi-night Keck-NIRC2 HCI surveys completed with the Ms filter and the vortex coronagraph.
We report that our measured efficiency on the night with the greatest number of images (2021 March 31 UT) was 52%. This excludes the setup time and assumes the observing configuration described in Section 2.1. After our initial setup, we observed Wolf 359 for 4.05 hr and accumulated 2.12 hr of science integration time. We ran the majority of QACITS sequences with 50 science images (22.5 min total integration time) and experienced
Figure 7: Keck-NIRC2 High Contrast Imaging Survey Completeness: (a) The NIRC2 combined-nights 5\(\sigma\) contrast was converted to mass space using the Linder+2019 and Ames-COND isochrone models. (b-d) The NIRC2 survey completeness maps for the ages 100 Myr, 500 Myr, 1 Gyr were estimated using the Exoplanet Detection Map Calculator (Exo-DMC) python package from the mass-space combined-nights 5\(\sigma\) contrast curves. Our imaging survey has 10% coverage to companions with a semi-major axis of 0.2-10 AU and reaches 95% coverage for companions with a semi-major axis between 1-3 AU. The black star represents the Wolf 359 b semi-major axis and minimum mass as predicted by Tuomi et al.2019.
Figure 8: **RV timeseries & periodograms from analysis using RVSearch**_(a)_ We plot the RV timeseries using the available data from CARMENES, HARPS (TERRA pipeline), HIRES (pre- and post-2004), and MAROON-X (red and blue arm). The blue line represents the detected signal from the known rotation period (2.71 d; Guinan & Engle, 2018). _(b)_ The time series residuals after the stellar-rotation activity signal is removed. _(c)_ The folded timeseries for the rotation period signal. _(d)_ The periodogram before removing the rotation period signal. The highest peak corresponds to the rotation period signal (2.71 d), and the second peak corresponds with half the rotation period (1.4 d). _(e)_ We find that the quantification of the strength of the detection for the 2.7 d signal as a function of the number of observations monotonically increases as expected. _(f)_ The periodogram of the residuals after removing the 2.71 d signal from the stellar rotation period. We do not find evidence for any additional candidates above our False Alarm Probability threshold (0.1%). The local peak at 4370 d corresponds to a \(\Delta BIC=7.0\) (0.001 FAP corresponds with \(\Delta BIC=45.7\)).
\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
Instrument & Source & Spectral Range (nm) & \#Meas. & Baseline & Avg. RV Precision & Inst. Offset* \\
\hline
CARMENES & Retrieved from Ribas et al. (2023) & 550-1700 & 78 & 2.23 yr & 1.99 m/s & 0.05 m/s \\
ESO-HARPS & Directly from M. Tuomi & 378-691 & 77 & 15.3 yr & 3.09 m/s & -3.22 m/s \\
HIRES Pre-2004 & California Planet Search team & 300-1000 & 14 & 5.05 yr & 8.09 m/s & -9.68 m/s \\
HIRES Post-2004 & California Planet Search team & 300-1000 & 38 & 17.14 yr & 4.26 m/s & -3.28 m/s \\
MAROON-X blue & MAROON-X team & 499-663 & 34\({}^{\dagger}\) & 1.17 yr & 1.39 m/s & -6.47 m/s \\
MAROON-X red & MAROON-X team & 649-920 & 34\({}^{\dagger}\) & 1.17 yr & 0.88 m/s & -5.62 m/s \\
\hline
\end{tabular}
\({}^{\dagger}\) The MAROON-X blue and red data were collected simultaneously.
* The instrument offsets were calculated from the fit made using RVSearch when detecting the signal from the stellar rotation period.
\end{table}
Table 4: Wolf 359 RV data summary.
Figure 9: Radial Velocity Survey Completeness: We used the injection-recovery function within RVSearch to determine the completeness of our Wolf 359 RV survey as a function of the minimum planet mass and semi-major axis. Our analysis methods yield a 32% chance of recovering a signal that matched the Wolf 359b candidate as described by Tuomi et al. 2019 (green star).
no significant QACITS centering issues while collecting science data.
We adopt the predictions produced by the Keck Observatory's online NIRC2 SNR and Efficiency Calculator to quantify the expected SNR in the background limited regime of our contrast curves. These equations for the NIRC2 SNR Calculator are outlined in Appendix B. We do not consider the speckle-limited regime of our contrast curves from this comparison (\(sep<0.8^{\prime\prime}\)) as the NIRC2 SNR calculator cannot quantify the SNR in the speckle-limited region.
We evaluate the performance using one night of observations to avoid complications in the performance discussion from combining data across multiple nights. We elected to use 2021 March 31 (UT) because it is the night of our survey with the most available data. Our contrast curve for this night was generated using 269 of the 283 images taken with an exposure time of 0.3 s and coadd of 90, totalling approximately 2 hr of integration time. We measured an average \(5\sigma\) contrast in the background-limited region of the contrast curve (\(>0.8^{\prime\prime}\)) to be \(\Delta m_{s}\) = 8.53 (apparent magnitude of \(m_{s}=14.38\)).
Our measured \(5\sigma\) detection limit from 2021 March 31 is consistent with the performance on individual nights of the Eps Eri survey, where Llop-Sayson et al. (2021) used the pyramid wavefront sensor (pyWFS) to collect approximately 2 hours of integration time. The best sensitivity achieved by Llop-Sayson et al. (2021) was between separations of \(1.5^{\prime\prime}-1.75^{\prime\prime}\) and corresponds to an apparent magnitude of \(m_{s}=14.4\) (\(\Delta mag=12.7\)). Both this work and the Eps Eri surveys indicate it is improbable to detect a companion dimmer than \(m_{s}=14.4\) to \(5\sigma\) with this instrument configuration in one half-night of Keck-NIRC2 time when operating with the vortex coronagraph paired with the pyWFS.
We next checked our measured results against the prediction made by the NIRC2 SNR calculator using the parameters that matched our observing setup: 0.3 s integration time with 90 coadds, narrow mode, 2 reads, 269 images, and no telescope nodding. We assumed a Strehl ratio of 0.85, which is a conservative estimate associated with 300 nm of wavefront error. The NIRC2 SNR calculator assumes that the background flux and flux from the source will follow Poisson statistics. We find that the calculator predicts the \(5\sigma\) threshold to be at an apparent magnitude of \(m_{s}=16.07\), which is not consistent with our observed results. Our measured SNR was 1.69 magnitudes brighter than the performance predicted by the NIRC2 SNR calculator, meaning we were more restricted in the companions that we could detect at the background-limited wide separations than was predicted by the calculator.
We expect that the prediction by the NIRC2 calculator would be somewhat inconsistent with our results because the NIRC2 SNR calculator was not designed to predict observations when the vortex mask is used. To better refine our predicted performance estimate, we modified the equations used by the NIRC2 SNR calculator. These modifications are documented in Appendix B and incorporate a throughput penalty to the measured signal to account for the use of the fixhex pupil stop and the vortex mask at \(4.7\,\mu m\) (total throughput penalty, \(0.57\pm 0.03\)). We additionally offer a revision to the background flux counts when the vortex is used in M-band (17850 DN/s per pixel). When we apply our revised equations to estimate the predicted performance for our 2021 March 31 dataset, we find that our \(5\sigma\) detection threshold is predicted to be at \(m_{s}=15.49\). While this estimate better aligns with our measured performance, this method still predicts a \(5\sigma\) detection threshold 1.11 magnitudes fainter than the performance we measured that night (\(m_{s}=14.38\)).
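The size and direction of the throughput correction can be sanity-checked with the standard background-limited scaling, in which SNR is proportional to (throughput × source flux)/√(background), so the limiting flux at fixed SNR scales as √(background)/throughput. The sketch below applies only the throughput term; the revised background rate enters through an optional second term that requires the calculator's original background value, which is left as a hypothetical parameter. This is a schematic check, not the full Appendix B calculation.

```python
import math

def limit_shift(throughput, bkg_new=None, bkg_old=None):
    """Shift of the background-limited limiting magnitude, in mag.

    Negative values mean the limit becomes brighter (less sensitive).
    Derived from F_lim ~ sqrt(B) / throughput at fixed SNR.
    """
    dm = 2.5 * math.log10(throughput)
    if bkg_new is not None and bkg_old is not None:
        dm -= 1.25 * math.log10(bkg_new / bkg_old)
    return dm

# Throughput penalty from the vortex mask + fixhex pupil stop (from the text)
print(limit_shift(0.57))   # ~ -0.61 mag, close to the 16.07 -> 15.49 revision quoted above
```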
We ruled out the possibility that this performance gap was due to uncorrected non-uniform background counts spatially in the individual images through highpass filtering. We used VIP's internal highpass filtering function to determine the optimal highpass filtering by injecting a fake planet into each individual image, running six types of highpass filters on each image, and then using stellar photometry to recover the SNR of the injected planet. The optimal highpass filtering method was gaussiansubt with size \(2.25\times fwhm_{NIRC2}\). We then edited VIP's contrast curve function to include the highpass filtering step using the optimal highpass filter. The highpass filtering step was added after fake planet injection but before running PCA. There were slight differences between the contrast curves produced from the image sets with and without the highpass filter, but the differences did not affect the contrast achievable in the background limited region of the image. We thus conclude that the performance gap is not due to poorly corrected background structures in each frame.
To determine if the performance gap was due to the image background noise not obeying Poisson statistics temporally, we measured how the sky background noise over time compared to the statistics expected from photon noise. We measured the sky background noise by summing the counts inside four circular apertures with a diameter equal to the \(fwhm_{NIRC2}\) using the 2021 March 31 image set before and after sky subtraction was completed. The apertures were located 1.76 arcseconds from the image center in the direction of the image corners in order to avoid contamination from the star.
We found our measured background noise value using 20 frames from the image cube after the sky subtraction was applied. The 20 frames were chosen from the full cube where the conditions were stable (no background drift, average background counts in the raw frames are consistent, and similar adaptive optics correction). We plotted the aperture sum counts of each aperture and then took the standard deviation of the counts over time. The corresponding photon noise value was determined using the image cube before the background subtraction was made. We measured the sum of photons inside each aperture, averaged the sums, and then took the square root of the average sum to act as the expected photon noise. The ratio between our measured-noise to photon-noise contribution was 1.9 from the subset of the 20 stable frames. Across the full image cube, we found the ratio of measured/theoretical-photon noise to be 3.0. This corresponds to a flux difference of 0.69 and 1.2 magnitudes respectively. This range of values is consistent with the performance gap we see after accounting for the throughput loss from the vortex and pupil stop (\(\Delta mag=1.1\)).
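The aperture-based noise test described above is straightforward to reproduce; the sketch below implements it with plain NumPy on a generic image cube (the array shapes, aperture centers, and radius are placeholders, not the actual NIRC2 geometry).

```python
import numpy as np

def aperture_sum(frame, cx, cy, radius):
    """Sum counts inside a circular aperture centred on (cx, cy)."""
    yy, xx = np.indices(frame.shape)
    mask = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    return frame[mask].sum()

def noise_vs_poisson(cube_sub, cube_raw, centers, radius):
    """Ratio of measured temporal background noise to the Poisson expectation.

    cube_sub : sky-subtracted image cube, shape (n_frames, ny, nx)
    cube_raw : cube before sky subtraction, used for the photon-noise estimate
    centers  : list of (cx, cy) aperture centers away from the star
    """
    sums_sub = np.array([[aperture_sum(f, cx, cy, radius) for (cx, cy) in centers]
                         for f in cube_sub])
    measured = sums_sub.std(axis=0).mean()      # std over time, averaged over apertures

    sums_raw = np.array([[aperture_sum(f, cx, cy, radius) for (cx, cy) in centers]
                         for f in cube_raw])
    photon = np.sqrt(sums_raw.mean())           # sqrt of the mean aperture sum
    return measured / photon

# A noise ratio r maps onto a magnitude penalty of 2.5*log10(r):
# r = 1.9 -> ~0.7 mag and r = 3.0 -> ~1.2 mag, matching the values quoted above.
```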
We hypothesize that the background noise does not follow Poisson statistics because of short time-scale water vapor variations at timescales less than the length of our 30 s images. This hypothesis could be tested when upgrades to the NIRC2 electronics are completed in 2023 which will allow for faster readout and background corrections to be made at shorter timescales. If proven true, the limits of previous surveys may be improved upon by observing the target again using sub-second integration times in order to improve background correction.
### Prospects for Directly Imaging an Exoplanet around Wolf 359 using JWST
JWST offers an opportunity to directly image exoplanets in infrared wavelengths without the contamination from the Earth's atmosphere, allowing the telescope to probe for colder companions than ground-based telescopes can. In this section, we present simulations to explore the potential of JWST to directly image a cold giant planet orbiting Wolf 359 using the Near Infrared Camera (NIRCam) Coronagraphic Imaging mode and Mid-Infrared Instrument (MIRI) imaging. MIRI and NIRCam can be used in combination to span wider coverage for companions in orbital separations, cloudiness, and temperature. NIRCam can be used to achieve high contrasts at sub-arcsecond inner working angles at shorter infrared wavelengths (0.6-5\(\mu\)m, Rieke et al., 2023), which was demonstrated successfully during the Early Release Science Program to image the super-Jupiter mass exoplanet HIP 65426b (Carter et al., 2022). MIRI operates at longer infrared wavelengths (5-28\(\mu\)m, Wright et al., 2023), giving greater sensitivity to cold and cloudy companions.
Because of Wolf 359's proximity, a planet revealed through NIRCam or MIRI imaging has the potential to become the coldest directly imaged exoplanet that could be characterized with JWST spectroscopy. If such an exoplanet is detected, detailed characterization would allow the planet to become an anchor to test theories related to the atmosphere and formation of cold gas giant and ice giant planets.
#### 4.2.1 NIRCam Coronagraphic Imaging
We explore the possibilities of using the NIRCam Coronagraphic Imaging mode to directly image companions orbiting Wolf 359 by simulating contrast curves using the Pandeia Coronagraphy Advanced Kit for Extractions (PanCAKE) python package7 (Girard et al. 2018; Perrin et al. 2018; Carter et al. 2021). We considered observations in the F444W filter, as the broadest band covering the 4-5\(\mu\)m peak in brightness, in conjunction with the round coronagraphic mask MASK335R. We simulated integration times of 20 min, 1 hr, and 10 hr with ADI and RDI subtraction techniques. To simulate the ADI contrast curve, we assumed the total exposure time was split between two rolls (0\({}^{\circ}\) and 10\({}^{\circ}\)) when imaging the target. For the RDI simulations, we assumed a perfect reference with the same properties as Wolf 359 and used a 9-point circle dither pattern. PSFs were generated using the precomputed library rather than on-the-fly generation with wavefront evolution, to reduce computational intensity. As such, these contrast curves represent an optimistic estimate of the achievable performance. We allowed PanCAKE to optimize the readout parameters for dither pattern, number of groups, and number of integrations.
Footnote 7: Pandeia Coronagraphy Advanced Kit for Extractions; [https://github.com/spacetelescope/pandeia-coronagraphy](https://github.com/spacetelescope/pandeia-coronagraphy)
To estimate what types of exoplanets may be detectable, we generated atmospheric models for companions with masses between 20 \(M_{\oplus}\) and 1 \(M_{\rm Jup}\) for ages spanning 100 Myr - 1.5 Gyr using the PICASO 3.0 (Mukherjee et al., 2023; Batalha et al., 2019) radiative-convective-thermochemical equilibrium model to simulate cloud-free 1D atmospheres for such companions. We assumed solar metallicity and C/O ratio for our simulated atmospheres. To estimate the \(T_{\rm eff}\) and radius of a companion with a given mass at a certain age, we used the Linder et al. (2019) evolutionary tracks and linearly extrapolated along the age axis when needed. The Phoenix stellar models (Husser et al., 2013) were employed to generate the stellar model for Wolf 359 using a spectral type of M5V and the Vega mag scaled to \(2MASS\,k_{s}=6.084\). An example of the set of thermal emission spectra from our generated atmospheric models is shown in Figure 10.
Our simulated NIRCam contrast curves are shown in Figure 11. Table 5 summarizes the detectability of theoretical cloudless exoplanets with varying masses using NIRCam in ADI mode with 1 hour of total integration time. While our simulations span from 1-7 AU (0.4''-3''), the full NIRCam field of view from the MASK335R inner working angle (0.57'') to 20'' would correspond to 1.4 - 48.2 AU. For future observation planning purposes, we estimate that the region from 7 - 48.2 AU will be background limited and have the same contrast as the result at 7 AU.
One hour of NIRCam integration time would provide sensitivity to a cloudless Jupiter-mass companion outside of 0.62'' (1.5 AU) at any predicted age range. Cloudless Saturn-mass exoplanets (0.3 \(M_{\rm Jup}\)) would be detectable at small separations if Wolf 359 is in the youngest part of its age range and at wider background-limited separations for ages up to \(\sim\)1 Gyr. A Neptune-like exoplanet (17 \(M_{\oplus}\), 0.06 \(M_{\rm Jup}\)) would be detectable if it is orbiting at wider separations and Wolf 359 is in the youngest part of its age range. The detection of a cloudless sub-Neptune exoplanet is unlikely with 1 hr of NIRCam ADI at any separation within Wolf 359's age range.
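The mass limits in Table 5 follow from inverting the model magnitudes against the measured contrast curve. A minimal version of that inversion is sketched below, using only the three 500 Myr entries of Table 5 as an illustrative grid; a real analysis would interpolate the full Linder et al. 2019 tracks rather than these three points.

```python
import numpy as np

# 500 Myr cloudless models from Table 5: (mass [M_Jup], predicted F444W mag)
masses = np.array([0.157, 0.5, 1.0])     # 50 M_Earth is ~0.157 M_Jup
mags   = np.array([20.85, 17.08, 15.19])

def limiting_mass(limit_mag):
    """Smallest detectable mass for a given 5-sigma limiting F444W magnitude.

    Interpolates log10(mass) against magnitude: a fainter limit maps to a lower mass.
    """
    order = np.argsort(mags)             # np.interp needs increasing x values
    return 10 ** np.interp(limit_mag, mags[order], np.log10(masses[order]))

# e.g. a 5-sigma limit of 18.0 mag corresponds to roughly 0.38 M_Jup at 500 Myr
print(limiting_mass(18.0))
```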
#### 4.2.2 MIRI Imaging
Exoplanet gas giants with clear atmospheres are particularly bright in the emission band between 4-5\(\mu m\), often making them detectable by the JWST NIRCam instrument. However, gas giants with cloudier atmospheres have muted emission from 4-5\(\mu m\), instead emitting more at longer wavelengths (\(>\)15\(\mu m\)), as illustrated in Figure 12. This figure shows the emission differences between a cloudy (solid lines) and clear (dashed lines) young, sub-Saturn exoplanet (0.12\(M_{\rm Jup}\)). The cloudy and clear models used in this figure were generated using the method described in Limbach et al. (2022). This figure demonstrates that exoplanets with cloudy atmospheres may be more easily detected through JWST Mid-Infrared Instrument (MIRI) broadband imaging at 21\(\mu m\), while clear atmospheres are more readily detected through direct imaging with NIRCam at 4.5\(\mu m\).
We briefly explore the possibility of imaging exoplanets, like the Wolf 359b candidate from Tuomi et al. 2019, with MIRI. In the mid-IR, the planet's emission is increasing and the star's emission is decreasing. This results in a favorable planet-to-star contrast ratio of 1:1120 for a 100 Myr, 0.12\(M_{\rm Jup}\) exoplanet with moderate cloud cover (\(f_{SED}=2\)). However, the diffraction limit of JWST at 21 \(\mu\)m is 0.67'' (6 pix), which is comparable to the separation between Wolf 359b and the host star. Using the coronagraphic mask at 23 \(\mu\)m, which has an inner working angle of 3.3\(\lambda\)/D, would block exoplanets at separations \(<\)2.16 arcsec. Therefore, we instead consider directly imaging the system without a coronagraph and using KLIP (Soummer et al., 2012) in post-processing to recover the exoplanet. KLIP has the potential to improve contrast by approximately \(\sim 100\times\) (Rajan et al., 2015).
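Two of the numbers in this paragraph can be verified with quick arithmetic, taking the JWST primary to be 6.5 m:

```python
import math

# lambda/D at 21 microns for a 6.5 m aperture, in arcseconds
lam, D = 21e-6, 6.5
theta = (lam / D) * 206265.0
print(round(theta, 2))                    # ~0.67 arcsec, as quoted

# A planet-to-star contrast of 1:1120 expressed in magnitudes
print(round(2.5 * math.log10(1120), 1))   # ~7.6 mag
```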
Figure 13 shows the simulated MIRI contrast curve. To create this simulation, we used the pre-made set of point spread functions for JWST MIRI, based on the in-flight optical performance, from the WebbPSF tool9. We used the F2100W PSF that includes geometric optical distortions. The contrast curve for KLIP was calculated assuming performance similar to that described in Rajan et al. (2015).
Footnote 9: jwst-docs.stsci.edu/jwst-mid-infrared-instrument/miri-performance/miri-point-spread-functions
In Figure 13, the shaded blue region above the black dashed line indicates the detectable exoplanet parameter space. With 3 hours of observation and using KLIP, a 0.12 \(M_{\rm Jup}\) planet with an age of 100 Myr with moderate cloud cover would be detectable at separations greater
Figure 10: Simulated atmospheric models for Wolf 359 and a cloudless 1\(M_{\rm Jup}\) companion: The modeled companion spectra shown correspond to ages of 100 Myr (orange), 500 Myr (green), and 1.5 Gyr (red). The simulated spectrum of the Wolf 359 host star is shown in blue. The estimates of the flux between \(3.881\mu m-4.982\mu m\) were used to determine the expected brightness and expected SNR for each companion type to simulate a NIRCam observation with F444W + MASK335R using PanCAKE.
than 1.5 AU. With the same 3 hr integration time, an older (1 Gyr) exoplanet of this size would also be detectable if at wider separations (\(>4\) AU). This approach requires an integration time which could fit into the JWST small proposals program and has the potential to detect nearby exoplanets to remarkably low masses.
## 5 Conclusions
We conducted a joint high-contrast imaging survey and radial velocity survey with the goal of constraining long-period companions around the nearby M-dwarf star Wolf 359. We do not rule out or confirm the Wolf 359 b RV candidate as presented by Tuomi et al. 2019.
To define the companion mass upper limits placed by our imaging search, we performed an updated age analysis of Wolf 359 through kinematic age dating, CMD
Figure 11: Simulated JWST NIRCam Coronagraphic Imaging 5\(\sigma\) contrast curves with the F444W filter: _(left)_ We show contrast curves simulated using PanCAKE for three NIRCam exposure times in ADI and RDI mode. We predict that if a cloudless exoplanet existed with a mass greater than \(1M_{\rm Jup}\) outside of \(\sim 1.5\) AU, it would be detectable with 20 minutes of integration time. _(right)_ The NIRCam F444W 5\(\sigma\) ADI contrast curves were converted to mass space using the Linder et al. 2019 models with an adopted age of 500 Myr. We find that exoplanets larger than 1 Saturn mass will be detectable outside of 2 AU if Wolf 359 is in the younger part of its age range. A Neptune mass planet would be detectable beyond 6 AU if Wolf 359 is younger than 500 Myr and a \(>10\) hr exposure was used.
\begin{table}
\begin{tabular}{c c c c} \hline \hline \multirow{2}{*}{ Planet Mass} & \multirow{2}{*}{Age} & Predicted apparent & Sep. where Detectable \\ & & F444W mag & by NIRCam 1 hr ADI \\ \hline \(1M_{jup}\) & 100 Myr & 12.90 & \(>0.6AU\) \\ \(1M_{jup}\) & 300 Myr & 14.42 & \(>0.8AU\) \\ \(1M_{jup}\) & 500 Myr & 15.19 & \(>0.9AU\) \\ \(1M_{jup}\) & 1200 Myr & 17.05 & \(>1.4AU\) \\ \(1M_{jup}\) & 1500 Myr & 17.34 & \(>1.5AU\) \\ \hline \(0.5M_{jup}\) & 100 Myr & 14.32 & \(>0.8AU\) \\ \(0.5M_{jup}\) & 300 Myr & 16.09 & \(>1.1AU\) \\ \(0.5M_{jup}\) & 500 Myr & 17.08 & \(>1.4AU\) \\ \(0.5M_{jup}\) & 1200 Myr & 19.51 & \(>3.7AU\) \\ \(0.5M_{jup}\) & 1500 Myr & 20.37 & \(>4.7AU\) \\ \hline \(50~{}M_{\oplus}\) & 100 Myr & 17.06 & \(>1.4AU\) \\ \(50~{}M_{\oplus}\) & 300 Myr & 19.37 & \(>3.6AU\) \\ \(50~{}M_{\oplus}\) & 500 Myr & 20.85 & \(>5.7AU\) \\ \hline \(20~{}M_{\oplus}\) & 100 Myr & 19.70 & \(>3.9AU\) \\ \(20~{}M_{\oplus}\) & 300 Myr & 23.33 & Not Detectable \\ \hline \end{tabular}
\end{table}
Table 5: Summary of the NIRCam F444W Coronagraphic Imaging Detectability of Cloudless Companions.
young moving group comparisons, and a MIST stellar isochrone comparison. We draw a conclusion of relative youth from the star's rotation period, and adopt the kinematic age of \(1.53\pm 0.3\) Gyr as the upper bound for Wolf 359's age. We rule out age estimates that are younger than 112 Myr through the comparison with young moving groups. Our MIST isochrone analysis produced an age estimate of 400 Myr.
We conducted a high-contrast imaging survey using Keck-NIRC2 with the Ms filter (4.67 \(\mu m\)) in conjunction with the vector vortex coronagraph. We totalled 4.98 hr of integration time spread across 3 half-nights. The completeness of our imaging survey is highest (95%) for the semi-major axis range from 1-3 AU. Our HCI results rule out a stellar or brown dwarf companion with this semi-major axis range to 5\(\sigma\), and companions smaller than 0.4\(M_{\rm Jup}\) cannot be ruled out at any separation assuming an age older than 100 Myr. We compared our HCI survey's predicted performance as estimated by the NIRC2 SNR Calculator to our measured 5\(\sigma\) performance and found a discrepancy of 1.7 magnitudes for the night of 2021 March 31 (UT). This discrepancy can be partially accounted for by adjusting for the throughput loss when using the vortex at 4.7\(\mu m\) and the fixhex pupil stop. Our analysis suggests that the remaining performance discrepancy may be due to the background noise exceeding the expected Poisson-noise level over time, indicating that it may be possible to improve the sensitivity of future surveys using faster image readout to better compensate for changes in the sky background.
We performed an updated radial velocity analysis of Wolf 359 with the RVSearch and radvel python packages with data from four RV instruments: CARMENES, HARPS, Keck-HIRES, and MAROON-X. After removing the known RV signal caused by the stellar rotation, we detect no signals above a false alarm probability of 0.1%. To 2\(\sigma\), we exclude planets with a minimum mass larger than \(m_{\rm p}\sin i>13.5~{}M_{\oplus}\) (0.0425 \(M_{\rm Jup}\)) within a semi-major axis of \(a<0.1\) AU and planets with a minimum mass larger than \(m_{\rm p}\sin i>147~{}M_{\oplus}\) (0.46 \(M_{\rm Jup}\)) within a semi-major axis of \(a<1\) AU.
We simulated JWST NIRCam and MIRI observations to explore the potential of JWST to directly image ice giant and gas giant exoplanets orbiting Wolf 359. We predict that NIRCam Coronagraphic Imaging could detect a cloudless exoplanet with masses \(>1M_{\rm Jup}\) outside 1.5 AU and \(>0.5M_{\rm Jup}\) outside 4.7 AU with 1 hour of integration time (assuming an age younger than \(<1.5\) Gyr). Saturn and Neptune-mass exoplan
Figure 12: Emission from a cloudy (solid lines) and clear (dashed lines) sub-Saturn (0.12\(M_{\rm Jup}\)) at 100 Myr (blue) and 1 Gyr (red): The black line shows the emission from the star, Wolf 359, assuming an M5V spectral type. The black bars show the 3 hr, 5\(\sigma\) detection limits of NIRCam F444W and MIRI broadband imaging. At 21\(\mu m\), the contrast ratio between the star and a 100 Myr, 0.12\(M_{\rm Jup}\) exoplanet is only 1120\(\times\). For the older exoplanet, the contrast ratio is 15,800\(\times\).
Figure 13: Simulated contrast curve for JWST broadband imaging at 21\(\mu m\). The solid black line shows the contrast in the raw image, the dashed black line is the residual after KLIP (assuming it is possible to achieve performance comparable to Rajan et al., 2015), and the dotted black line shows the photon-noise limit from stellar flux. The dash-dotted black line shows the 5 \(\sigma\) noise limit due to background emission. The shaded blue region above the KLIP contrast line and background emission line indicates the parameter space where it should be possible to detect exoplanets. The apparent magnitude of cloudy 100 Myr and 1 Gyr, 0.12\(M_{\rm Jup}\) exoplanets is shown by the red lines. This shows that a cloudy 100 Myr, 0.12\(M_{\rm Jup}\) exoplanet should be detectable at separations \(>\) 1.5 AU, and a cloudy 1 Gyr, 0.12\(M_{\rm Jup}\) exoplanet is detectable at separations \(>\) 4 AU. For this simulation, we assumed [F2100W] = 5.3 mag, based on the star’s WISE band 4 (\(\lambda\)=22.2 \(\mu\)m) magnitude.
ets are accessible to NIRCam in certain age/separation spaces, and it is unlikely that NIRCam could detect a sub-Neptune mass exoplanet. While MIRI imaging does not perform as well at smaller inner working angles, MIRI is capable of detecting cloudy exoplanets at smaller masses. We predict that a cloudy companion with a mass of \(0.12M_{\rm Jup}\) could be directly imaged to \(5\sigma\) if orbiting outside 4 AU using 3 hours of integration time (assuming an age of younger than 1 Gyr).
This survey of Wolf 359 further establishes the methods needed to comprehensively characterize exoplanet systems using the intersection of multiple measurement techniques. As our future direct imaging instrumentation and RV surveys gain an increased sensitivity to ice giant exoplanets and super-Earths, the Wolf 359 system will continue to be a compelling target for understanding the cold planet population and planet formation outside the snow line of low-mass stars.
## 6 Acknowledgements
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
RBR would like to thank Mikko Tuomi and Ignasi Ribas for their collaboration to include the HARPS-TERRA and CARMENES radial velocity data products. RBR also thanks Ester Linder, Jonathan Fortney, Andrew Skemer, Jorge Llop-Sayson, Andrew Howard, Caroline Morley, Kevin McKinnon, Kevin Wagner, Steve Ertl, Jason Wang, and Zack Breismeister for lending their scientific expertise. RBR thanks Jules Fowler for their endless sound-boarding, python help, and title suggestion for this paper.
The authors would like to acknowledge the Keck staff who supported this observation including the observing assistants, Arina Rostopchina and Julie Renaud-Kim, and the instrument scientists, Carlos Alvarez and Greg Doppmann. We thank Charlotte Bond and Sam Ragland who supported operation of the pyramid wavefront sensor and the following observers for their contribution in collecting the HIRES velocities: Isabel Angelo, Corey Beard, Aida Behmard, Sarah Blunt, Fei Dai, Paul Dalba, Benjamin Fulton, Steven Giacalone, Rae Holcomb, Emma Louden, Jack Lubin, Andrew Mayo, Daria Pidhorodetska, Alex Polanski, Malena Rice, Emma Turtelboom, Dakotah Tyler, Lauren Weiss, and Judah Van Zandt.
The data presented were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the financial support of the W. M. Keck Foundation.
The University of Chicago group acknowledges funding for the MAROON-X project from the David and Lucile Packard Foundation, the Heising-Simons Foundation, the Gordon and Betty Moore Foundation, the Gemini Observatory, the NSF (award number 2108465), and NASA (grant number 80NSSC22K0117). We thank the staff of the Gemini Observatory for their assistance with the commissioning and operation of the instrument. The Gemini observations are associated with programs GN-2021A-Q-119, GN-2021B-Q-122, and GN-2022A-Q-119.
GS acknowledges support provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51519.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555.
J.M.A.M. is supported by the National Science Foundation (NSF) Graduate Research Fellowship Program under Grant No. DGE-1842400. J.M.A.M. acknowledges the LSSTC Data Science Fellowship Program, which is funded by LSSTC, NSF Cybertraining Grant No. 1829740, the Brinson Foundation, and the Moore Foundation; his participation in the program has benefited this work.
Software: QACITS IDL software package (Huby et al., 2017), VIP: Vortex Imaging Processing python package (Gomez Gonzalez et al., 2017), Species (Stolker et al., 2020), Exo-DMC (Bonavita, 2020), RVSearch (Rosenthal et al., 2021), radvel (Fulton et al., 2018), PanCAKE (Girard et al., 2018; Perrin et al., 2018; Carter et al., 2021), galpy (Bovy, 2015), Astropy (Astropy Collaboration et al., 2013, 2018, 2022).
|
2309.08557 | The thermodynamic cost of choosing | Choice can be defined in thermodynamical terms and be shown to have a
thermodynamic cost: choosing between a binary alternative at temperature T
dissipates an energy E > kT ln 2. | Carlo Rovelli | 2023-09-13T19:10:00Z | http://arxiv.org/abs/2309.08557v2 | # The thermodynamic cost of choosing
###### Abstract
Choice can be defined in thermodynamical terms and be shown to have a thermodynamic cost: choosing between a binary alternative at temperature \(T\) dissipates an energy \(E\geq kT\ln 2\).
In a celebrated paper, Landauer noticed that _erasing information_ requires dissipation [1]. The precise status of this observation is controversial (see a discussion and extended references in [2]), but the idea has had a large impact on our understanding of the thermodynamics involved in computation and information. Here I observe that there is a related connection between _choosing_ (which _generates_ information) and dissipation. This statement can be given a precise meaning under a definition of choice, discussed below. With this definition, a choice has a thermodynamic cost: choosing between a binary alternative at temperature \(T\) dissipates an energy \(E\geq kT\ln 2\), where \(k\) is the Boltzmann constant.
The verb "to choose" is used in a variety of contexts. One might say that a thermostat has chosen to turn the heating on right now. This is not the kind of choice that here I am interested in, because a thermostat functions on the basis of a predictable mechanism. Here I rather reserve the term "choice" for the situations when the outcome of the choice is not predictable.
I consider the classical context first, then the quantum one. In the context of classical mechanics, lack of predictability is necessarily associated with incomplete information. Therefore, when we talk about choice, we are always talking about our ignorance of an underlying dynamical process determining the outcome [3].
We can formalize this situation by considering a system \(S\) with a phase space \(\Gamma\) and an incomplete set of \(N\) variables \(A_{n}:\Gamma\to I\!\!R\), with \(n=1,...,N\), that capture our incomplete knowledge. Here 'incomplete' means that the map into \(I\!\!R^{N}\) defined by the set of the \(A_{n}\)'s is not injective.
This structure defines a thermodynamic-like context, because we can distinguish 'micro-states' \(s\in\Gamma\) from 'macro-states' \(a_{n}\), which are the possible values that the observables \(A_{n}\) can take. Each macro-state \(a_{n}\) defines a region \(R(a_{n})\) in \(\Gamma\) as its inverse image \(R(a_{n})=\{s\in\Gamma,A_{n}(s)=a_{n}\}\), and if the phase space has a natural associated volume \(V\), each macro-state has an associated entropy \(S(a_{n})\) defined by \(S=k\ln V(R(a_{n}))\). The entropy of a macro-state measures how many micro-states it contains, and therefore the amount of ignorance about the details, if only the macro-state is known.
A microscopic history of a system is described by a function \(s(t)\), where \(t\) is time. A macroscopic history of a system is described by the functions \(a_{n}(t)\). A given macro-history \(a_{n}(t)\) is the ensemble of all the micro-histories \(s(t)\) such that \(A_{n}(s(t))=a_{n}(t)\). Because of the determinism of classical mechanics, knowledge of \(s(t)\) for \(t<0\) is sufficient to determine \(s(t)\) uniquely also for \(t>0\). But not so for macroscopic histories. That is, it is possible to have two macroscopic histories \(a_{n}(t)\) and \(a^{\prime}_{n}(t)\), both compatible with the dynamics of the system such that
\[a_{n}(t) \neq a^{\prime}_{n}(t),\mbox{ for }t<-\epsilon<0,\] \[a_{n}(t) = a^{\prime}_{n}(t),\mbox{ for }t>\epsilon>0 \tag{1}\]
or
\[a_{n}(t) = a^{\prime}_{n}(t),\mbox{ for }t<-\epsilon<0,\] \[a_{n}(t) \neq a^{\prime}_{n}(t),\mbox{ for }t>\epsilon>0 \tag{2}\]
These two processes are represented in Figures 1 and 2, respectively. The first of these processes (Fig 1) represents a Landauer memory erasure: erasure is a process that merges two macro-histories. Information is not lost at the micro-level, where evolution is deterministic. Where information is lost is at the macro-level.
In the second process (Fig 2), the macroscopic system chooses between two different alternatives, both compatible with the dynamics, during the interval \([-\epsilon,\epsilon]\). This
Figure 2: A choice. A macro-state splits into two macro-states, each carrying a fraction of the micro-states.
Figure 1: Landauer memory erasure. Black lines represent micro-histories; thick grey lines macro-histories. The system is initially in one of two macro-states; time evolution merges the two.
is what happens when we make a choice by flipping a coin: the macroscopic future is determined by a small fluctuation in the air's density, which determines whether the coin ends up heads or tails. The micro-history, of course, is deterministic.
The thermodynamics of a Landauer erasure can be understood in this language. Consider Figure 1. Say that there are \(N_{1}>0\) micro-histories that start in the left macro-history and \(N_{2}>0\) micro-histories that start in the right macro-history. Say that the world is in one of the micro-histories, for instance one that starts in the left macro-state. Before the erasure, the entropy of the actual macro-state of the world is \(S_{i}=k\ln N_{1}\); after the erasure, the entropy of the actual macro-state of the world is \(S_{f}=k\ln\left(N_{1}+N_{2}\right)>S_{i}\). Hence erasure increases entropy. If \(N_{1}=N_{2}=1\), the increase of entropy is \(\Delta S=k\ln 2\). This is the Landauer principle, that states the minimal cost for erasing a memory. This statistical interpretation of the Landauer principle is ultimately equivalent to the thermodynamical ones.
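The entropy bookkeeping in this argument amounts to counting micro-histories; a short numerical version (with the SI value of the Boltzmann constant, purely for illustration) is:

```python
from math import log

k = 1.380649e-23   # Boltzmann constant, J/K

def erasure_entropy_increase(N1, N2):
    """Entropy increase when two macro-histories containing N1 and N2
    micro-histories merge, given the world started in the first one."""
    return k * log(N1 + N2) - k * log(N1)

print(erasure_entropy_increase(1, 1) / (k * log(2)))   # = 1.0, i.e. exactly k ln 2
```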
The simple argument above might lead one to expect that a choice _decreases_ entropy. This is _not_ the case, and this is the main point I make in this article.
The reason the entropy does not actually decrease in a choice is that something needs to stabilize the information transferred from the micro-physics to the macro-physics by the choice. This differentiates the erasure from the choice. In the erasure, the entropy grows, therefore the standard irreversibility implied by equilibration is sufficient to stabilize the erasure. In a choice, described by Figure 2, the entropy decreases, and therefore equilibration ("the second principle") tends to take it back immediately. That is, the two macroscopic states in which the process of Figure 2 ends up are immediately merged again by equilibration unless something stabilizes their outcome. In the absence of something that stabilizes the choice, Figure 2 only describes the first phase of an arbitrary (undetected) thermal fluctuation. The flipped coin needs to hit the ground and dissipate its kinetic energy in order to provide us with a choice. Let us see this phenomenon in detail in an example.
Consider the device depicted in Figure 3. It is formed by a single particle moving fast in a cavity with two small windows. At each window there is a pendulum. If the ball hits a pendulum, it may set it in motion. This can be considered a device that makes a choice, if we take the position and velocity of the particle to be micro-variables and the energy in the pendulums to be macro-variables. In the macro-world, evolution is unpredictable: it is determined by the unknown micro-history. If the pendulums are initially at rest, which one of the two is set in motion first is determined by the actual position and velocity of the particle, which play here the role of the flipping coin.
Let us analyze the _full_ thermodynamics of the process.
We can associate a temperature \(T_{h}\) to the particle (which has three degrees of freedom), defined by \(E=3\times\frac{1}{2}kT_{h}\), where \(E\) is the energy of the particle. If the pendulums are initially exactly at rest, they have zero energy, hence zero temperature. A collision transfers some energy from the higher-temperature system to the lower-temperature system, hence raises entropy. To be more realistic, imagine that the pendulums themselves have an ambient temperature \(T\). As long as \(T_{h}>T\), it is more probable that in the collision energy is transferred from the particle to the pendulum, because of energy equipartition. Therefore the pendulum hit first can receive energy and register the choice in the macro-world. For this to happen, the transferred energy \(\Delta E\) must be larger than the thermal energy of the pendulum \(E=\frac{1}{2}kT\); otherwise the oscillation of the pendulum caused by the collision cannot be distinguished from the thermal fluctuations.
\[\Delta S=\frac{\Delta E}{T}-\frac{\Delta E}{T+\Delta T} \tag{3}\]
where \(T_{h}=T+\Delta T\). The increase of information due to the outcome of the binary choice is a single bit. For this to be consistent with the second principle, the overall entropy increase must at least compensate it, namely \(\Delta S>k\ln 2\), giving
\[\Delta E\left(\frac{1}{T}-\frac{1}{T+\Delta T}\right)>k\ln 2 \tag{4}\]
By taking \(\Delta T\) arbitrary large we can minimize the parenthesis, leaving
\[\Delta E>kT\ln 2, \tag{5}\]
which is the relation we were seeking. That is, the minimal energy that must be dissipated in the binary choice at temperature \(T\) is \(\Delta E=kT\ln 2\). It is easy to see that the mechanism is general.
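Equation (4) only reaches the bound \(kT\ln 2\) in the limit of large \(\Delta T\); the short check below evaluates the minimal dissipation numerically (the temperatures are arbitrary illustrative values):

```python
from math import log

k = 1.380649e-23   # J/K
T = 300.0          # ambient temperature of the pendulums, K (arbitrary)

def min_dissipation(T, dT):
    """Smallest Delta E satisfying Eq. (4) for a given temperature gap dT."""
    return k * log(2) / (1.0 / T - 1.0 / (T + dT))

for dT in (10.0, 100.0, 1e3, 1e6):
    print(dT, min_dissipation(T, dT) / (k * T * log(2)))
# The printed ratio decreases toward 1: Delta E approaches k*T*ln(2) as Delta T grows.
```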
Choosing appears to be a phenomenon that is in a sense inverse of erasing information (because branching is the inverse of merging). Therefore one may be puzzled by the above result and rather be tempted to expect that if erasing macroscopic information raises thermodynamical entropy, then choosing would decrease it.
Figure 3: A mechanical choice device. The ball, whose exact position is not known, first hits one or the other of the two pendulums.
But this is wrong. The reason is that there is no reversibility in dissipative phenomena. A cyclic sequence that produces a choice and then erases it repeatedly is not reversible: it continuously increases entropy at every step, like most real macroscopic phenomena. Entropy is raised both in choosing and in erasing the choice. The information (negative entropy) gained by the split of the macroscopic state must be compensated by a larger entropy increase (diffusion, energy equipartition, ...), in order for the process to happen and stabilize.
Consider next quantum theory. At first sight, it seems that quantum theory can give us a free ride, circumventing the result above. For instance, a Stern-Gerlach apparatus can give rise to a binary choice. But a moment of reflection shows that this is wrong: it disregards the physics of the apparatus and the environment. This cannot be disregarded if we are concerned with the thermodynamical balance.
In fact, as nicely pointed out in [4], the conventional textbook description of a quantum measurement appears to violate all three laws of thermodynamics! Thermodynamical legality is restored by considering the thermal properties of the apparatus and the environment, where entropy raises in order to store the result of the measurement into the 'classical' macro-physics. See [4] for a detailed discussion of this point.
Equivalently, for an actual quantum measurement to be completed, it is necessary for the branches corresponding to the different outcomes to decohere, and this implies that information is lost into the environment, hence the entropy relative to the observer increases. See [7] for a similar discussion in the context of the Relational Interpretation and [8] for the role of dissipation in Bohmian Mechanics' measurement. I shall not pursue a quantitative analysis of the quantum case here, leaving it to future work, but the above notes are sufficient to show that quantum measurement cannot produce a choice without raising entropy either.
A general observation underpins the fact that a choice has a thermodynamical cost. Thermal fluctuations are not predictable because they refer to the dynamics of the microscopic degrees of freedom and these are by definition characterized by their general inaccessibility. If we _measure_ a specific thermal or quantum fluctuation, we get a way to extract novel information from a thermal or quantum system; but _any measurement implies dissipation_. The same basic physical principle (also rooted in the second principle of thermodynamics) underpins the thermodynamical cost of learning [6].
The first to notice the basic and, it seems to me, under-appreciated fact that any measurement is dissipative was, as far as I know, Reichenbach in [5]. In turn, the fact that measurement implies dissipation follows from an even simpler observation: a measurement records information _in the future_ of the measurement interaction. This breaks time reversal invariance, and the only physical source for the breaking of time reversal invariance is thermodynamical irreversibility, namely dissipation.
In summary, choosing can be modelled as bringing information from the micro (or quantum) physics to the macro (and classical) world. This process needs to be stabilized, and this can only happen via dissipation. The overall entropic balance is necessarily positive: the information gained in the choice is over-compensated by what is lost in the associated dissipation. The second law is not circumvented. As the Germans say, 'Wahl ist Leiden', to choose is to suffer: perhaps not suffering, but certainly dissipation.
***
This work was made possible through the support of the ID#62312 grant from the John Templeton Foundation, as part of the "The Quantum Information Structure of Spacetime (QISS)" Project (qiss.fr). I thank all the participants in the August 2023 FQXi retreat, where I found key ideas for completing this note.
|
2309.11006 | STARNet: Sensor Trustworthiness and Anomaly Recognition via Approximated
Likelihood Regret for Robust Edge Autonomy | Complex sensors such as LiDAR, RADAR, and event cameras have proliferated in
autonomous robotics to enhance perception and understanding of the environment.
Meanwhile, these sensors are also vulnerable to diverse failure mechanisms that
can intricately interact with their operation environment. In parallel, the
limited availability of training data on complex sensors also affects the
reliability of their deep learning-based prediction flow, where their
prediction models can fail to generalize to environments not adequately
captured in the training set. To address these reliability concerns, this paper
introduces STARNet, a Sensor Trustworthiness and Anomaly Recognition Network
designed to detect untrustworthy sensor streams that may arise from sensor
malfunctions and/or challenging environments. We specifically benchmark STARNet
on LiDAR and camera data. STARNet employs the concept of approximated
likelihood regret, a gradient-free framework tailored for low-complexity
hardware, especially those with only fixed-point precision capabilities.
Through extensive simulations, we demonstrate the efficacy of STARNet in
detecting untrustworthy sensor streams in unimodal and multimodal settings. In
particular, the network shows superior performance in addressing internal
sensor failures, such as cross-sensor interference and crosstalk. In diverse
test scenarios involving adverse weather and sensor malfunctions, we show that
STARNet enhances prediction accuracy by approximately 10% by filtering out
untrustworthy sensor streams. STARNet is publicly available at
\url{https://github.com/sinatayebati/STARNet}. | Nastaran Darabi, Sina Tayebati, Sureshkumar S., Sathya Ravi, Theja Tulabandhula, Amit R. Trivedi | 2023-09-20T02:20:11Z | http://arxiv.org/abs/2309.11006v1 | STARNet: Sensor Trustworthiness and Anomaly Recognition via Approximated Likelihood Regret for Robust Edge Autonomy
###### Abstract
Complex sensors such as LiDAR, RADAR, and event cameras have proliferated in autonomous robotics to enhance perception and understanding of the environment. Meanwhile, these sensors are also vulnerable to diverse failure mechanisms that can intricately interact with their operation environment. In parallel, the limited availability of training data on complex sensors also affects the reliability of their deep learning-based prediction flow, where their prediction models can fail to generalize to environments not adequately captured in the training set. To address these reliability concerns, this paper introduces STARNet, a Sensor Trustworthiness and Anomaly Recognition Network designed to detect untrustworthy sensor streams that may arise from sensor malfunctions and/or challenging environments. We specifically benchmark STARNet on LiDAR and camera data. STARNet employs the concept of approximated likelihood regret, a gradient-free framework tailored for low-complexity hardware, especially those with only fixed-point precision capabilities. Through extensive simulations, we demonstrate the efficacy of STARNet in detecting untrustworthy sensor streams in unimodal and multimodal settings. In particular, the network shows superior performance in addressing internal sensor failures, such as cross-sensor interference and crosstalk. In diverse test scenarios involving adverse weather and sensor malfunctions, we show that STARNet enhances prediction accuracy by approximately 10% by filtering out untrustworthy sensor streams. STARNet is publicly available at [https://github.com/sinatayebati/STARNet](https://github.com/sinatayebati/STARNet).
OOD Detection; LiDAR point clouds; Multimodal inference.
## I Introduction
The demand for improved perception and a deeper understanding of the environment in autonomous robotics has led to an increased reliance on complex sensors. These sensors, such as those capable of capturing data beyond the visible spectrum, provide robots with a more detailed view of their surroundings. For instance, LiDAR sensors offer precise depth perception and spatial resolution, enabling detailed 3D mapping, which is valuable for object detection, mapping, and localization in autonomous navigation. LiDAR is also effective in various lighting conditions, including nighttime and overcast situations, where cameras struggle [1]. Similarly, RADAR performs well in adverse weather conditions, reliably detecting objects in fog, rain, or snow and accurately measuring velocities. Consequently, there is a growing interest in utilizing combinations of these complex sensors to enhance object recognition, material differentiation, environmental monitoring, and related applications for _robust autonomy_ in diverse scenarios.
Meanwhile, acquiring sufficient training data for advanced sensors is challenging due to several factors. For instance, with the evolution of LiDAR technology, new sensor designs, such as optical phased array [2], yield data with distinct characteristics. Keeping training data up-to-date with these advancements is intricate. Similarly, reflections from stationary objects like buildings, trees, or parked vehicles can influence RADAR sensor readings, making target differentiation in cluttered environments a significant labeling challenge. Additionally, active sensors require more diverse training data due to the unique influences of various environments and lighting conditions on return signals. These factors complicate developing exhaustive training datasets that cover all potential scenarios a robot might face in its operational life.
Additionally, complex sensors are also susceptible to various failures. The demand for compact, energy-efficient sensors necessitates innovative fabrication techniques. As sensor complexity increases, the manufacturing process must achieve higher precision, which also raises the risk of internal component failures. For example, on-chip integration of edge-emitting lasers with nanometer node transistor technologies can result in complex failure mechanisms due to thermal over-stress [3], electrostatic discharge [4], fabrication variabilities [5], aging [6], _etc._ Likewise, in emerging multi-pixel LiDAR and RADAR sensors [7], interference from nearby pixels can result in cross-talk, thus corrupting their sensor readings.
With the growing integration of complex sensors and advanced deep learning in next-generation robotics, ensuring
Fig. 1: **Motivation for STARNet: (a)** Predictions for edge autonomy can be impacted by a lack of reliability in sensing, learning, and computing. Various failures may impact sensors, such as a beam missing in LiDAR. Hard-to-generalize environments impact learning reliability. Limited computing resources at the edge impact the reliability of computations such as those demanding high-precision operations. **(b)** In this work, we present STARNet, which feeds upon intermediate representations of the sensor streams, a function of sensor reliability and environment, to detect when the prediction failures are likely. STARNet is designed to operate under low precision, so edge computing constraints less impact it.
the continuous reliability of sensor data streams becomes crucial. To address the challenges of robust autonomy, we introduce **STARNet**: a **S**ensor **T**rustworthiness and **A**nomaly **R**ecognition **N**etwork. STARNet detects untrustworthy sensor features, which may arise from sensor malfunctions or difficult-to-generalize environments, to alert downstream decision-making processes of potential inaccuracies.
Within STARNet, we employ the concept of _approximated likelihood regret_, which is tailored for low-complexity hardware, especially those with only fixed-point precision capabilities, thus ensuring minimal impact from edge computing constraints. STARNet utilizes the generative capabilities of variational auto-encoders (VAE) to learn the distribution of trustworthy sensor data streams, which jointly depend on sensor reliability and the generalizability of the environment based on the training set. The likelihood of a sensor stream is computed using the pre-trained VAE (\(L_{\text{VAE}}\)) and then again after optimizing the VAE against the input sample (\(L_{\text{OPT}}\)) using the proposed gradient-free processing. The difference in likelihoods, \(L_{\text{OPT}}-L_{\text{VAE}}\), termed _likelihood regret_, effectively differentiates between trustworthy and untrustworthy streams: trustworthy streams display lower likelihood regret as they align closely with the learned distribution. In contrast, untrustworthy streams exhibit a pronounced regret. By continuously monitoring and filtering out untrustworthy sensor streams, we demonstrate that STARNet improves the prediction accuracy by \(\sim\)10% across various test cases of adverse weather and sensor malfunctioning.
## II Background
### _Sensor Failures_
In autonomous navigation systems, sensor reliability directly influences the efficacy of deep learning-based decision-making. While complex sensors are becoming prevalent in autonomous robotics, they complicate reliability verification due to their intricate failure mechanisms. For instance, LiDAR sensors can experience ghost readings from multi-path interference, reduced accuracy during adverse weather like fog, or even false negatives due to the absorption of signals by raindrops [8]. Infrared sensors might be impacted by ambient temperature fluctuations or direct exposure to bright sources, leading to noisy or even incorrect readings [9]. The emerging event-based cameras are impacted by rapid scene changes, causing a temporal information overload [10]. Modern solid-state RADARs can suffer from false reflections or clutter from non-moving objects [11]. Acoustic sensors, used for subterranean or underwater navigation, can face signal attenuation or reflection complications due to environmental inconsistencies [12]. MEMS-based inertial sensors can drift over time or get affected by sudden temperature shifts [13]. While precise modeling of these sensor faults is challenging, especially due to their complex interaction with the environment and autonomy task, even minor inaccuracies in the input data can lead to disproportionately large errors in the output due to the non-linearity of deep learning models.
### _Out-of-Distribution Detection (OOD)_
In STARNet, we utilize out-of-distribution (OOD) detection techniques to identify sensor streams that significantly deviate from the training data distribution. Various OOD detection methods exist. For instance, WarningNet [14] detects sensor anomalies due to alterations in image input patterns via deep learning, while the PAAD network [15] proactively signals robots about anomalies in unpredictable environments. Generative models, such as Variational Autoencoders (VAEs) [16] and Generative Adversarial Networks (GANs) [17], are employed for OOD detection, using VAEs' reconstruction errors and GANs' discriminator scores as OOD indicators. Recently, there has been a surge of self-supervised learning methods [18, 19] for OOD detection as well.
However, many of these established methods, including Softmax probability scores [20] and feature representations from the network's last layer [21], often fail to detect seemingly straightforward OOD scenarios, such as discriminating the distribution of CIFAR-10 dataset images from MNIST images. A key drawback of these methods is the potential overlap in the feature space of in-distribution and OOD data. If an OOD sample is proximate to the decision boundary in this space, it may be incorrectly identified as an in-distribution sample. Unlike earlier approaches, the likelihood regret (LR) measure [22] has shown a much-improved ability to detect OOD data by training a distribution learner (e.g., a VAE) on each incoming sample to ascertain the discrepancy between the predicted likelihood from the primary model and that from a model retrained on this new sample. The LR values for OOD samples are typically substantially different from those for in-distribution samples, aiding discrimination. However, the computational cost of LR, owing to the need for retraining on every sample, constrains its use in real-time edge devices. We address this using gradient-free computation techniques, discussed next.
### _Gradient-Free Optimization_
In STARNet, we use gradient-free optimization suitable for fixed-point precision hardware. Traditional gradient-based methods rely on floating-point precision hardware to accurately represent a broad spectrum of gradient values. This increases energy and footprint due to extra circuitry for the mantissa, exponent processing, and error correction. On the other hand, fixed-point precision hardware, which uses a simpler arithmetic logic unit, is more common in edge devices because deep learning inference often works well with fixed-point computation. Therefore, by tailoring STARNet's optimization problems to be gradient-free, we present a more resource-efficient and widely applicable framework.
Among the prominent gradient-free optimization methods, ZO-SGD [23] and ZO-SCD [24] stand out for optimizing unconstrained stochastic optimization with convergence rates of \(O(\sqrt{d}/\sqrt{T})\), where \(T\) represents the iteration count. However, their convergence efficiency is hampered by the variable dimension \(d\). The ZO stochastic mirror descent method (ZO-SMD) was introduced to mitigate this, establishing a tighter
bound with dimension-dependent factors [25]. Additionally, variance reduction techniques have bolstered ZO-SGD and ZO-SCD, leading to stochastic variance-reduced algorithms that boast enhanced convergence rates and iteration complexities [26]. Recent ZO optimization algorithms, such as ZO proximal SGD (ZO-ProxSGD) [27], ZO via conditional gradient (ZO-CG) [28], and online alternating direction method for multipliers (ZO-ADMM) [29], emphasize gradient-free constrained optimization. Another significant method in this domain is Simultaneous Perturbation Stochastic Approximation (SPSA), which efficiently estimates gradients by perturbing all dimensions simultaneously, allowing for scalability [30].
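For concreteness, SPSA estimates the gradient from just two function evaluations per step by perturbing all coordinates simultaneously with a random sign vector. A minimal NumPy version is sketched below; the step sizes and the toy objective are illustrative, not the configuration used in our experiments.

```python
import numpy as np

def spsa_step(f, x, a=1e-2, c=1e-2, rng=np.random.default_rng()):
    """One SPSA update of x to minimize f, using only two evaluations of f.

    The gradient estimate is (f(x + c*delta) - f(x - c*delta)) / (2*c*delta_i),
    with each delta_i drawn from {-1, +1} (a Rademacher perturbation).
    """
    delta = rng.choice([-1.0, 1.0], size=x.shape)
    g_hat = (f(x + c * delta) - f(x - c * delta)) / (2.0 * c * delta)
    return x - a * g_hat

# Example: minimize a simple quadratic without any analytic gradients.
f = lambda x: float(np.sum((x - 3.0) ** 2))
x = np.zeros(4)
for _ in range(500):
    x = spsa_step(f, x)
print(x)   # approaches [3, 3, 3, 3]
```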
## III Framework of STARNet
Fig. 2 presents the overview of STARNet. STARNet operates on the intermediate sensor stream representation generated for downstream tasks, such as object detection. These representations are fed to a VAE to learn their distribution from the training set. Notably, the generated distributions are a function of _both_ the sensor reliability and the operating environment. By leveraging the same representations for the downstream task, STARNet presents minimal overheads. The reliability of an incoming sensor stream is verified by computing its likelihood regret (LR) using the proposed gradient approximation methods to determine if the sensor features emanate from the learned distribution or are out-of-distribution, in which case, the downstream controller is alerted for possible misprediction.
We characterize STARNet in two settings: LiDAR-only processing and LiDAR+Camera processing. For the LiDAR-only case, the PointNet architecture [31] is employed. Instead of relying on data representations like volumetric grids or multiple 2D projections, PointNet processes raw point coordinates directly, ensuring invariance under permutations of the input set. The network structure in PointNet employs a combination of shared multi-layer perceptrons (MLPs) and max-pooling layers to capture local features of individual points and global contextual information of the entire point set. For LiDAR+camera processing, PointFusion [32] is employed for feature extraction and fusion. By aligning and integrating these heterogeneous data types, PointFusion exploits the spatial depth information from point clouds and the rich texture and color information from images. Even though we present our results on these two specific test cases, the framework of STARNet can be generalized to other extractors.
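The property of PointNet that matters here, a shared per-point MLP followed by a symmetric max-pooling so that the global feature is invariant to the ordering of the input points, can be summarized in a few lines of PyTorch (the layer sizes are illustrative and much smaller than the actual PointNet configuration):

```python
import torch
import torch.nn as nn

class TinyPointEncoder(nn.Module):
    """Shared per-point MLP + max-pool: a permutation-invariant global feature."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, pts):                    # pts: (batch, n_points, 3)
        per_point = self.mlp(pts)              # (batch, n_points, feat_dim)
        return per_point.max(dim=1).values     # (batch, feat_dim), order-independent

enc = TinyPointEncoder()
cloud = torch.randn(2, 1024, 3)
perm = cloud[:, torch.randperm(1024), :]
print(torch.allclose(enc(cloud), enc(perm)))   # True: invariant to point ordering
```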
Fig. 3: **Gradient-free Optimization: A comparative evaluation of three gradient-free optimization methods—Zero Order Sign, Zero Order Stochastic Gradient Descent, and Simultaneous Perturbation Stochastic Approximation (SPSA)—under two conditions: (a) heavy snow and (b) light snow. The graph shows noticeable fluctuations in the performance of Zero Order Sign and Zero Order Stochastic Gradient Descent as the number of iterations increases. At the same time, SPSA is highlighted for its enhanced robustness and stability.**
Fig. 2: **STARNet on Multimodal Data Streams: We demonstrate STARNet using multimodal sensor streams from cameras and LiDAR point clouds. However, STARNet’s framework is versatile and can be adapted to other multimodal scenarios. In the current configuration, STARNet feeds upon sensor feature representations generated for the primary task, specifically from PointNet and PointFusion. A VAE ingests these features to learn their typical distribution. During inference, STARNet calculates the likelihood regret through gradient-free optimization to identify the discrepancy between the sensed feature and learned feature distribution to alert the primary prediction mechanism of potential inaccuracies when discrepancies are excessive.**
### _Unsupervised Learning of Sensor Feature Distribution_
We utilize a variational autoencoder (VAE) to learn the distribution of extracted sensor features. This VAE models the observed variable \(x\) using a latent random variable \(z\). The generative model is represented as:
\[p(x)=\int p(x|z)p(z)dz \tag{1}\]
For a computationally efficient training of VAE, we focus on maximizing the evidence lower bound (ELBO), given by:
\[\log(p_{\theta}(x))\geq\mathbb{E}_{q_{\phi}(z|x)}\left[\log p_{\theta}(x|z) \right]-D_{KL}\left[q_{\phi}(z|x)||p(z)\right] \tag{2}\]
Here, \(q_{\phi}(z|x)\) is the variational approximation to the true posterior \(p_{\theta}(z|x)\), with global parameters \(\phi\) (encoder) and \(\theta\) (decoder). The Kullback-Leibler divergence, \(D_{KL}\), measures the difference between the prior and the approximating distribution. The main goal is to train the VAE to maximize the ELBO across training data, ensuring the VAE captures the data's inherent distribution and offers a useful representation in the latent space.
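In code, the ELBO of Eq. (2) with a standard-normal prior and a Gaussian encoder reduces to a reconstruction term plus a closed-form KL term. A minimal PyTorch loss is sketched below; the encoder and decoder networks themselves are omitted and assumed to produce `mu`, `logvar`, and a reconstruction of the input features.

```python
import torch

def negative_elbo(x, x_recon, mu, logvar):
    """-ELBO for a VAE with a standard-normal prior and a Gaussian encoder.

    x, x_recon : input features and their reconstruction, shape (batch, dim)
    mu, logvar : encoder outputs parameterizing q(z|x), shape (batch, z_dim)
    """
    # E_q[log p(x|z)] approximated by a unit-variance Gaussian reconstruction term
    recon = 0.5 * ((x - x_recon) ** 2).sum(dim=1)
    # KL[q(z|x) || N(0, I)] in closed form
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1.0).sum(dim=1)
    return (recon + kl).mean()

def reparameterize(mu, logvar):
    """z = mu + sigma * eps, keeping the sampling step differentiable."""
    return mu + torch.randn_like(mu) * (0.5 * logvar).exp()
```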
### _Likelihood Regret under Gradient Approximation_
LR captures the discrepancy between an input's likelihood under its individually optimized posterior and the likelihood predicted by the VAE, defined as \(\text{LR}=L_{\text{OPT}}-L_{\text{VAE}}\), where \(L_{\text{OPT}}\) is the optimum obtained by re-optimizing the posterior for the individual input example and \(L_{\text{VAE}}\) is the likelihood under the VAE trained on the dataset. In-distribution samples should have lower LR values than out-of-distribution samples. We explored gradient approximation techniques for the computation of LR, differentiating our approach from the original method in [22]. We specifically investigated several gradient approximation methods, including zero-order optimization with stochastic gradient descent (ZO_SGD) [33, 34], zero-order optimization with sign [34, 35], and simultaneous perturbation stochastic approximation (SPSA) [30].
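To illustrate how a gradient-free update can stand in for backpropagation in the per-sample optimization underlying LR, the sketch below performs a single SPSA step; the step sizes and the loss closure (e.g., a per-sample negative ELBO) are placeholders rather than the tuned values used in our experiments.

```python
# One SPSA step: estimate the gradient from two loss evaluations at randomly
# perturbed parameters (Rademacher +/-1 directions) and take a descent step.
import torch

def spsa_step(params, loss_fn, a=1e-3, c=1e-2):
    """params: list of tensors updated in place; loss_fn: () -> scalar loss tensor."""
    with torch.no_grad():
        deltas = [torch.randint(0, 2, p.shape, device=p.device).float() * 2 - 1
                  for p in params]
        for p, d in zip(params, deltas):
            p.add_(c * d)
        loss_plus = loss_fn()
        for p, d in zip(params, deltas):
            p.sub_(2 * c * d)
        loss_minus = loss_fn()
        for p, d in zip(params, deltas):
            p.add_(c * d)                                              # restore original parameters
            grad_est = (loss_plus - loss_minus) / (2 * c) * (1.0 / d)  # SPSA gradient estimate
            p.sub_(a * grad_est)                                       # descent step on the loss
```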
Fig. 3 shows the comparison among these methods using AUC (Area Under the Receiver Operating Characteristic curve) as the evaluation metric, a conventional metric for assessing the performance of OOD classification. Compared to the other methods, SPSA shows improved performance on the more challenging light-snow case and consistently benefits from more iterations of gradient-free model updates, making it our preferred choice. The method is also lightweight because it only requires evaluating the system at two perturbed points, regardless of the dimensionality of the problem. Considering the feature extraction flow of PointNet in Fig. 4(a), we also analyzed which abstraction level results in the best performance. PointNet comprises three components: point, feature, and encoder levels. Upon extracting features from each component, our analysis showed that encoder features yielded the highest AUC values, as presented in Fig. 4(b). These findings are utilized in our simulation results discussed next.
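The AUC evaluation itself is threshold-free; a minimal sketch with synthetic stand-in LR scores (not our measured values) is shown below.

```python
# Score OOD detection by AUC: likelihood-regret values of clean vs. corrupted
# streams are pooled and ranked; labels mark corrupted samples as positives.
import numpy as np
from sklearn.metrics import roc_auc_score

lr_in = np.random.normal(0.0, 1.0, 500)    # placeholder LR values on clean data
lr_out = np.random.normal(2.0, 1.0, 500)   # placeholder LR values on corrupted data

scores = np.concatenate([lr_in, lr_out])
labels = np.concatenate([np.zeros_like(lr_in), np.ones_like(lr_out)])  # 1 = OOD
print("AUC:", roc_auc_score(labels, scores))
```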
## IV Simulation Results
### _Benchmark Testsets_
To characterize the efficacy of STARNet, we developed benchmark testing datasets featuring eight distinct corruption types, each presented in two different severity levels (heavy and moderate). These datasets are divided into the following categories: (i) _Natural Corruptions:_ Environmental factors like fog, rain, and snow, (ii) _External Disruptions:_ Issues such as motion blur and beam missing (absence of laser beams), and (iii) _Internal Sensor Failures:_ Sensor-specific problems including crosstalk, incomplete echo responses, and cross-sensor interference. Though our approach utilizes LiDAR-camera fusion, the natural and external corruption scenarios affect both data types, whereas the internal sensor failures pertain solely to the LiDAR sensor. We evaluated our method using the KITTI-C dataset, which provides corruption sets for LiDAR and camera data [36]. Figure 5 showcases dataset samples.
We evaluated STARNet's efficacy in unimodal and multimodal settings considering these corruptions. In the unimodal setting, only the LiDAR point cloud was processed. The point cloud was combined with its corresponding image in the multimodal setting. In the following, we discuss our simulation results for both cases.
### _Results for LiDAR-only Processing_
We examined eight failure scenarios for LiDAR point clouds, each presented in heavy and moderate intensities. As shown in Fig. 6, our method performs well, especially when dealing with internal sensor issues like cross-sensor interference and crosstalk. Specifically, our method achieved an AUC value of 0.9658 in cases with intense crosstalk. For cross-sensor interference, the AUC value was 0.9938.
Table I provides a detailed comparison of our method against various Out-of-Distribution (OOD) detection techniques, including the original Likelihood Regret [22], Log-Likelihood, Likelihood Ratio [37], and the Input Complexity method [38]. The results in the table show that our approach,
Fig. 4: **Feature Extraction Analysis: (a) High-level architecture of the PointNet version used for feature extraction, detailing its three main components: point level, feature level, and encoder level. (b) Comparative analysis of AUC values derived from features extracted at each level, highlighting the superior performance of encoder features under heavy and light snow conditions.**
which uses an approximate gradient-free likelihood regret with SPSA, is almost as effective as the original method that relies on gradient descent. Additionally, these methods perform better than other techniques, such as the likelihood ratio and input complexity.
### _Results for Camera + LiDAR Processing_
In the context of multimodal scenarios, we undertook an analysis of four distinct failure situations concerning the fusion of LiDAR point clouds and images. Two levels of intensity characterized each of these scenarios. As illustrated in Fig. 7, our method's performance closely aligns with the original LR's. A comprehensive summary of our findings is presented in Table II. The table incorporates accuracy values based on the R2 score. This metric underscores the notion that, while there might be minor discrepancies in OOD detection, the overall task can still be executed with high precision. Our results indicate that integrating images and LiDAR data enhances the AUC values. For instance, in the presence of snow, the AUC increases by 0.0801 under heavy conditions and by 0.0939 under moderate conditions.

Fig. 5: **Visualization of Corruption sets: Our multimodal test-set contains the clean KITTI dataset and four different failure cases on both camera images and bird-view LiDAR point clouds.**

Fig. 6: **Unimodal results: Performance evaluation of STARNet on LiDAR point clouds for eight different failure cases. The figure highlights the superior performance of our method in addressing internal sensor failures, specifically cross-sensor interference and crosstalk. Notably, the AUC values for heavy crosstalk and cross-sensor interference are 0.9658 and 0.9938, respectively.**
## V Conclusion
Although deep learning models can be enhanced for robustness by augmenting training data with noisy sets and diverse operating environments, solely relying on augmentation is insufficient since it is hard to anticipate and gather enough training data to capture the complexity and diversity of environments encountered in practical scenarios. Comparatively, STARNet enables risk-conscious decision-making by continuously monitoring the trustworthiness of incoming sensor streams. In Fig. 8, we summarize the net significance of STARNet in improving robust decision-making. Utilizing a VAE trained solely on clean data for object detection, we observed its susceptibility to several environmental and sensor-based corruptions, including snow, cross-sensor noise, and motion blur. These disturbances result in a decline from its original prediction accuracy of approximately 87.32%. With STARNet's integration to detect and mitigate untrustworthy sensor data, the prediction accuracy in these challenging scenarios improved by \(\sim\)10%, effectively reverting to the level observed with clean data. Moreover, STARNet's reliance on computationally efficient gradient approximation techniques ensures its viability for continuous monitoring, even on resource-constrained edge devices.
TABLE II: **Fusion Results (Metric: AUC)**

| | **LR+SPSA** | **LR** | **Log-LL** | **ACC** |
| --- | --- | --- | --- | --- |
| H Fog | 0.9548 | 0.9502 | 0.2860 | 0.9663 |
| M Fog | 0.9479 | 0.922 | 0.2846 | 0.9667 |
| H Snow | 0.9164 | 0.905 | 0.3283 | 0.9756 |
| M Snow | 0.8707 | 0.8674 | 0.3974 | 0.97 |
| H Rain | 0.8745 | 0.8547 | 0.351 | 0.9798 |
| M Rain | 0.8590 | 0.8352 | 0.4042 | 0.9799 |
| H Motion Blur | 0.8902 | 0.8529 | 0.6058 | 0.9750 |
| M Motion Blur | 0.8428 | 0.7829 | 0.6238 | 0.9763 |

Accuracy (ACC) is reported as the R2 score. The model is pre-trained on the clean KITTI point cloud.
Fig. 8: **Object Detection Accuracy for KITTI dataset: We employed a VAE to process LiDAR point clouds and corresponding labels of various objects, including cars, pedestrians, and cyclists. The VAE then outputs bounding boxes for these objects. The network was tested on challenging conditions, including heavy, medium, and low snow and other corruption cases.**
TABLE I: **Comparison Results for Unimodal case (Metric: AUC)**

| | **Ours (LR+SPSA)** | **Likelihood Regret [22]** | **Likelihood** | **Likelihood Ratio [37]** | **Input Complexity [38]** |
| --- | --- | --- | --- | --- | --- |
| Fog | 0.7786 | 0.7804 | 0.4294 | 0.7612 | 0.4009 |
| Snow | 0.8363 | 0.8077 | 0.3613 | 0.7884 | 0.3587 |
| Rain | 0.9058 | 0.9138 | 0.4248 | 0.9216 | 0.3910 |
| Motion Blur | 0.8174 | 0.7839 | 0.3972 | 0.7407 | 0.3914 |
| Beam Missing | 0.7807 | 0.7421 | 0.4551 | 0.7531 | 0.4407 |
| Incomplete Echo | 0.7385 | 0.7384 | 0.3839 | 0.7079 | 0.3635 |
| Cross-sensor | 0.9938 | 0.9939 | 0.4180 | 0.8743 | 0.3734 |
| Crosstalk | 0.9658 | 0.969 | 0.3087 | 0.9235 | 0.2780 |

The model is pre-trained only on the clean KITTI point cloud; thus, the results are obtained under unsupervised learning.
Fig. 7: **Multimodal results: Performance evaluation of STARNet on LiDAR point clouds and images for four different failure cases.** |
2308.16440 | Localizing Transitions via Interaction-Induced Flat Bands | This paper presents a theory of interaction-induced band-flattening in
strongly correlated electron systems. We begin by illustrating an inherent
connection between flat bands and index theorems, and presenting a generic
prescription for constructing flat bands by periodically repeating local
Hamiltonians with topological zero modes. Specifically, we demonstrate that a
Dirac particle in an external, spatially periodic magnetic field can be cast in
this form. We derive a condition on the field to produce perfectly flat bands
and provide an exact analytical solution for the flat band wave functions.
Furthermore, we explore an interacting model of Dirac fermions in a spatially
inhomogeneous field. We show that certain Hubbard-Stratonovich configurations
exist that ``rectify'' the field configuration, inducing band flattening. We
present an explicit model where this localization scenario is energetically
favorable -- specifically in Dirac systems with nearly flat bands, where the
energy cost of rectifying textures is quadratic in the order parameter, whereas
the energy gain from flattening is linear. In conclusion, we discuss
alternative symmetry-breaking channels, especially superconductivity, and
propose that these interaction-induced band-flattening scenarios represent a
generic non-perturbative mechanism for spontaneous symmetry breaking, pertinent
to many strongly-correlated electron systems. | Alireza Parhizkar, Victor Galitski | 2023-08-31T04:04:05Z | http://arxiv.org/abs/2308.16440v1 | # Localizing Transitions via Interaction-Induced Flat Bands
###### Abstract
This paper presents a theory of interaction-induced band-flattening in strongly correlated electron systems. We begin by illustrating an inherent connection between flat bands and index theorems, and presenting a generic prescription for constructing flat bands by periodically repeating local Hamiltonians with topological zero modes. Specifically, we demonstrate that a Dirac particle in an external, spatially periodic magnetic field can be cast in this form. We derive a condition on the field to produce perfectly flat bands and provide an exact analytical solution for the flat band wave functions. Furthermore, we explore an interacting model of Dirac fermions in a spatially inhomogeneous field. We show that certain Hubbard-Stratonovich configurations exist that "rectify" the field configuration, inducing band flattening. We present an explicit model where this localization scenario is energetically favorable - specifically in Dirac systems with nearly flat bands, where the energy cost of rectifying textures is quadratic in the order parameter, whereas the energy gain from flattening is linear. In conclusion, we discuss alternative symmetry-breaking channels, especially superconductivity, and propose that these interaction-induced band-flattening scenarios represent a generic non-perturbative mechanism for spontaneous symmetry breaking, pertinent to many strongly-correlated electron systems.
_Introduction_ - Flat bands are remarkable phenomena in which electrons cease to propagate - their group velocity vanishes: \(\partial E(\mathbf{k})/\partial\mathbf{k}=0\). Hence for electrons in a flat band all other energy scales become relevant as they are now infinitely dominant over the kinetic energy. This makes the flat band a crucial subject of study in strongly correlated electron systems, where the comparison between interaction and kinetic scales often distinguishes distinct phases of matter [1]. In the case of a conventional quadratic dispersion relation, the notions of "infinite mass", \(m\rightarrow\infty\), and "flat band" can be used interchangeably. If the mass divergence occurs as a result of tuning the parameters of an interacting system, then it can be seen as a phase transition. In fact, there has been a long-lasting endeavor [2; 3; 4; 5; 6] of exploring such scenarios through interaction-induced renormalization of the effective mass. However, such phenomenological or perturbative approaches often fall short in reliably describing singularities in strongly correlated systems. Here, we present an alternative approach, rooted in non-perturbative topological arguments, to recognize flat bands and also provide examples where a transition involving localization or band-flattening occurs due to interactions.
The main idea of our work hinges on a connection between the flat bands and the index theorem [7; 8]. We propose a generic prescription for constructing flat bands by a periodic continuation of a local Hamiltonian with a zero mode, which grows into a flat band. We show that one explicit realization of such a construction is an electron in a periodic classical gauge field. For interacting electrons, interactions can be decoupled in terms of a fluctuating order parameter. There exist classical configurations of the spatially inhomogeneous order parameter that give rise to flat bands of the corresponding Bogoliubov quasiparticles. Spontaneous formation of such a band-flattening configuration corresponds to a new localization mechanism by many-body effects. Whether such a transition occurs and in what channel is determined by the energetics of the model. We show, however, that for systems where the bare electrons' band structure is already close to being flat, this scenario is energetically favorable because the energy gain of flattening the bands is linear in the order parameter while the energy cost of such inhomogeneity is quadratic, and hence the physics is dominated by the former.
_Index_--Zero modes of a Hamiltonian and its square are in a one-to-one correspondence. Given a Hamiltonian \(H\), if in some basis one can write
\[H^{2}=\left[\begin{array}{cc}D^{\dagger}D&0\\ 0&DD^{\dagger}\end{array}\right]\,, \tag{1}\]
then the non-zero eigenvalues of \(H^{2}\) in one block have an equal counterpart in the other block, since if \(D^{\dagger}D\left|\lambda_{+}\right\rangle=\lambda^{2}\left|\lambda_{+}\right\rangle\) then \(\left(DD^{\dagger}\right)D\left|\lambda_{+}\right\rangle=\lambda^{2}D\left| \lambda_{+}\right\rangle\) and thus \(\left|\lambda_{-}\right\rangle\equiv D\left|\lambda_{+}\right\rangle/\lambda\) is a properly normalized eigenvector of \(DD^{\dagger}\) with the same positive eigenvalue. This also means that for each energy level there is another with either the same energy or its negative. So a straightforward conclusion is that if the dimensions of \(D^{\dagger}D\) and \(DD^{\dagger}\), which we respectively call \(n_{+}\) and \(n_{-}\), do not match, the bigger matrix must be enlarged by zero eigenvalues. Therefore, if \(n_{+}-n_{-}\neq 0\), a zero energy mode necessarily appears.
A straightforward example of the above construction that possesses zero modes would be certain bipartite chains with an odd number of sites [9]. However, as we incorporate continuous models, differential operators, and gauge fields, the situation becomes more intricate. In this general case, \(n_{+}-n_{-}\) can be defined as the following index,
\[n_{+}-n_{-}=\lim_{M\rightarrow\infty}\,\left[\text{Tr}f\left(\frac{D^{\dagger}D}{M^ {2}}\right)-\text{Tr}f\left(\frac{DD^{\dagger}}{M^{2}}\right)\right]\,, \tag{2}\]
where \(f(x)\) is a generic function which is equal to \(1\) for \(x<1\) but goes to zero rapidly (but smoothly) for \(x>1\). Given that the number of non-zero eigenvalues of \(D^{\dagger}D\) and \(DD^{\dagger}\) is the same, they do not contribute to the above expression. The remaining value entirely depends on the difference in the number of their zero eigenvalues.
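As a minimal worked illustration of the bipartite-chain case mentioned above (our own toy example, not taken from Ref. [9]), consider a three-site chain with two \(A\) sites coupled to a single \(B\) site by hoppings \(t_{1},t_{2}\). In the basis \((A_{1},A_{2},B)\),
\[H=\begin{pmatrix}0&0&t_{1}\\ 0&0&t_{2}\\ t_{1}&t_{2}&0\end{pmatrix},\qquad H^{2}=\begin{pmatrix}t_{1}^{2}&t_{1}t_{2}&0\\ t_{1}t_{2}&t_{2}^{2}&0\\ 0&0&t_{1}^{2}+t_{2}^{2}\end{pmatrix},\]
so the two blocks of \(H^{2}\) have dimensions \(2\) and \(1\), the index is \(\pm 1\), and at least one zero mode is guaranteed for any \(t_{1},t_{2}\); explicitly, \(H\psi_{0}=0\) for \(\psi_{0}\propto(t_{2},-t_{1},0)^{T}\), which lives entirely on the majority sublattice.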
More generally, for any such Hamiltonian there exists an operator, \(\Gamma\), which maps the zero-mode subspace onto itself, \(\Gamma\{0\}\rightarrow\{0\}\), and has a zero trace on the complement subspace. Then the trace of that operator exclusively contains information about zero-modes, and can be used as a proxy for a more general index,
\[\text{Tr}\,\Gamma=\sum_{n}\bra{n}\Gamma\ket{n}=\sum_{n\in\{0\}}\bra{n}\Gamma\ket{n}+\sum_{n\notin\{0\}}\bra{n}\Gamma\ket{n}=\sum_{n\in\{0\}}\bra{n}\Gamma\ket{n}\,, \tag{3}\]
raise its strength. When the difference of the dimensions becomes an integer, \(n_{+}-n_{-}=1\), a zero mode appears. But when does this happen?
Looking at Eq. (2), or Eq. (4), and recalling that the \(D^{\dagger}D\) and \(DD^{\dagger}\) blocks only differ by \(\pm B_{z}\), we can guess that the difference in the dimensionality of the two blocks - the index - must be proportional to the magnetic flux. Let us evaluate this explicitly by working out the right hand side of Eq. (2). We focus on \(D^{\dagger}D\) first
\[\lim_{M\to\infty}{\rm Tr}f\left(\frac{D^{\dagger}D}{M^{2}}\right) =\lim_{M\to\infty}\,\int_{x}\int_{k}e^{i{\bf k}\cdot{\bf x}}f\left(\frac{D^{ \dagger}D}{M^{2}}\right)e^{-i{\bf k}\cdot{\bf x}}=\lim_{M\to\infty}M^{2}\int_{ x}\int_{k}f\left[\left({\bf k}+\frac{i\mathbf{\nabla}+e{\bf A}}{M}\right)^{2}- \frac{eB_{z}}{M^{2}}\right]= \tag{7}\] \[\lim_{M\to\infty}M^{2}\int_{x}\int_{k}\left[f({\bf k}^{2})+f^{ \prime}({\bf k}^{2})\left(2{\bf k}\cdot\frac{i\mathbf{\nabla}+e{\bf A }}{M}+\frac{(i\mathbf{\nabla}+e{\bf A})^{2}}{M^{2}}-\frac{eB_{z}}{M^{ 2}}\right)+\frac{1}{2}f^{\prime\prime}({\bf k}^{2})\frac{(2i{\bf k}\cdot\mathbf{\nabla}+2e{\bf k}\cdot{\bf A})^{2}}{M^{2}}\right]\,,\]
where we used \(\int_{x}\int_{k}\dots=\int d^{2}x\int\frac{d^{2}k}{(2\pi)^{2}}\dots\) for brevity. The first equality is the definition of the trace in the plane wave basis. The second equality is obtained by pulling \(e^{-i{\bf k}\cdot{\bf x}}\) through the regulator and rescaling \({\bf k}\to M{\bf k}\). The second line is the expansion of the regulator about \({\bf k}^{2}\), up to orders linear in \(1/M^{2}\). All higher orders vanish in the limit \(M\to\infty\). \({\rm Tr}f\left(\frac{DD^{\dagger}}{M^{2}}\right)\) can be obtained by changing the sign of the magnetic field \(B_{z}\to-B_{z}\) in the expression above for \({\rm Tr}f\left(\frac{D^{\dagger}D}{M^{2}}\right)\). Subtracting the two from each other yields
\[n_{+}-n_{-}=\frac{1}{2\pi}\int d^{2}x\,eB_{z}\,, \tag{8}\]
so we see that the number of zero-modes is indeed given by the flux. The integration above is over all space, but the same condition applies if we exclude the regions where \(B_{z}\) is zero. We can view the zero-modes as confined within \(B_{z}\neq 0\) regions. Consider for example a triangle-shaped region with a unit flux of \(B_{z}\), and designate it as \(\triangledown\) (we consider \(\triangledown\) for illustration purposes only, any other shape of a magnetic unit cell is equally valid). If the rest of the space is tiled by similar magnetic regions, we can think of \(\triangledown\) as having periodic boundary conditions (as in our previous example) and hence periodically-repeating the cell gives us a new quantum number labeling the band of zero-modes. Therefore a periodic magnetic field with an integer flux threading each cell, yields a flat band.
This flat band, as reflected in Eq. (8), has a connection to the chiral anomaly: The path-integral formulation of the system above is given by \(\int{\cal D}\bar{\psi}\psi\exp\{\int_{x}\bar{\psi}i{\not\!D}\psi\}\), with \([\bar{\psi},\psi]\) being independent fermionic fields and \({\not\!D}=i\partial_{t}-h\). Zero-modes of \({\not\!D}\) are not zero _energy_ modes, but instead they are _on-shell_ modes. The chiral anomaly counts the number of right-handed modes minus the number of left-handed modes and is given by Eq. (4) but with \({\not\!D}^{2}\) substituting \(H^{2}\)--the chiral anomaly penalises the off-shell modes rather than non-zero energy modes. However, there is an instance where the two coincide. As the system develops a flat band at some energy level, say by increasing \(B_{z}\), the "shell" is going to lie on that energy level so that off-shell and non-zero energy become synonymous. \({\not\!D}\) and \(H\) differ only in the time dimension and in fact they coincide exactly when the band becomes flat and the time dimension becomes irrelevant. A transition amplitude from any state \(|\xi_{i}\rangle\) to any other state \(|\xi_{f}\rangle\) within a flat band is independent of time, \(\langle\xi_{f}|\,e^{iHt}\,|\xi_{i}\rangle\sim\langle\xi_{f}\mid\xi_{i}\rangle\), so time can be removed from the path-integral, e.g. by setting \(\partial_{t}\psi=0\). Further, if the gauge field is periodic in tiles of \(\triangledown\), then we can reduce the path-integral over patches of \(\triangledown\). The anomaly for this "timeless" path-integral is given exactly by Eq. (8) with the spatial integral covering only \(\triangledown\). Since the chiral anomaly is the Jacobian of the chiral transformation, after a full chiral rotation, \(\psi\to e^{i2\pi\Gamma}\psi=\psi\) and \(\vec{\psi}\to e^{i2\pi\Gamma}\vec{\psi}=\vec{\psi}\), we must have,
\[\int_{\triangledown}{\cal D}\bar{\psi}{\cal D}\psi e^{\int_{x}\bar{\psi}i{ \not\!D}\psi}=\int_{\triangledown}{\cal D}\bar{\psi}{\cal D}\psi e^{\int_{x} \bar{\psi}i{\not\!D}\psi}e^{i2\pi(n_{+}-n_{-})}\,. \tag{9}\]
where the subscript \(\triangledown\) means that the path-integral is over one patch of \(\triangledown\), and the last exponential is the Jacobian of the chiral transformation with the exponent being the anomaly. But the above means the path-integral, and therefore the flat band, is realizable only when there is an integer flux through \(\triangledown\).
Our example of a continuous model can be mapped to many systems, among which optical lattices, graphene under periodic strain, and moire structures readily come to mind. It also has the desirable feature of being exactly solvable. For a magnetic field, which is periodic with lattice vectors \({\bf a}_{1,2}\), the zero energy flat band wave-functions are,
\[\psi_{\bf k}^{+}=\left(\begin{array}{c}\eta_{\bf k}(z)e^{+\phi}\\ 0\end{array}\right)\quad{\rm and}\quad\psi_{\bf k}^{-}=\left(\begin{array}{c}0 \\ \bar{\eta}_{\bf k}(\bar{z})e^{-\phi}\end{array}\right)\,, \tag{10}\]
with \(\mathbf{\nabla}^{2}\phi\equiv-B_{z}\) in Coulomb gauge, \(z=x+iy\), and
\[\eta_{\bf k}(z)=\frac{\sum e^{i\pi\left[\frac{a_{2}}{a_{1}}\left(n+\frac{{\bf k}\cdot{\bf a}_{1}}{2\pi}\right)^{2}+2\left(n+\frac{{\bf k}\cdot{\bf a}_{1}}{2\pi}\right)\left(\frac{z}{a_{1}}-\frac{{\bf k}\cdot{\bf a}_{2}}{2\pi}\right)\right]}}{\sum e^{i\pi\left[\frac{a_{2}}{a_{1}}n^{2}+2n\frac{z}{a_{1}}\right]}} \tag{11}\]
are elliptic functions. The sum runs over \(n\in\mathbb{Z}\), \(a_{1,2}\equiv a_{1,2}^{x}+ia_{1,2}^{y}\), and \(\bar{\eta}\) is defined as \(\eta\) but with \(a_{1,2}\) replaced by \(\bar{a}_{1,2}\). If we exchange \(\eta\) and \(\bar{\eta}\) for the identity in the above equations, we get only two zero-modes, \(\psi_{0}^{\pm}\). By introducing \(\eta\) we are introducing the additional quantum number. Notice that \(\eta_{\mathbf{k}}(z)\) is only a function of \(z\) and thus, since \(\partial_{\bar{z}}\eta_{\mathbf{k}}(z)=0\), it does not alter the zero-mode equation for \(\psi_{\mathbf{k}}^{+}\). Since the function \(\eta_{\mathbf{k}}(z)\), and consequently the wave-function \(\psi_{\mathbf{k}}^{+}\), is quasi-periodic with respect to the lattice vectors, \(\eta_{\mathbf{k}}(z+a_{1,2})=\eta_{\mathbf{k}}(z)e^{i\mathbf{k}\cdot\mathbf{a}_{1,2}}\), this additional quantum number is the lattice momentum. Each \(\psi_{\mathbf{k}}^{\pm}\) satisfies \(h\psi_{\mathbf{k}}^{\pm}=0\) and has a lattice momentum of \(\mathbf{k}\). If the magnetic field is composed of equally strong positive and negative patches across space, then the flat band is two-fold degenerate. On the contrary, if for example there is a constant magnetic field background, then \(\phi\) becomes unbounded, leading to only one of the wave-functions in Eq. (10) being normalizable.
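The quasi-periodicity \(\eta_{\mathbf{k}}(z+a_{1,2})=\eta_{\mathbf{k}}(z)e^{i\mathbf{k}\cdot\mathbf{a}_{1,2}}\) can be spot-checked numerically with a truncated sum; the sketch below (our own check with arbitrary test values for the lattice vectors and \(\mathbf{k}\), not code accompanying this work) assumes \(\mathrm{Im}(a_{2}/a_{1})>0\) so that the sums converge.

```python
# Numerically verify eta_k(z + a_j) = eta_k(z) * exp(i k . a_j) for Eq. (11)
# using a truncated sum over n; lattice vectors are stored as complex numbers.
import numpy as np

a1, a2 = 1.0 + 0.0j, 0.2 + 1.0j                  # test lattice vectors, Im(a2/a1) > 0
k = np.array([0.7, -0.3])                        # test lattice momentum
dot = lambda kvec, a: kvec[0] * a.real + kvec[1] * a.imag
ns = np.arange(-40, 41)

def eta(z):
    k1, k2 = dot(k, a1) / (2 * np.pi), dot(k, a2) / (2 * np.pi)
    num = np.sum(np.exp(1j * np.pi * ((a2 / a1) * (ns + k1) ** 2
                                      + 2 * (ns + k1) * (z / a1 - k2))))
    den = np.sum(np.exp(1j * np.pi * ((a2 / a1) * ns ** 2 + 2 * ns * z / a1)))
    return num / den

z0 = 0.13 + 0.31j
for a in (a1, a2):
    print(np.allclose(eta(z0 + a), eta(z0) * np.exp(1j * dot(k, a))))  # expect True, True
```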
_Interaction-induced localizing transitions_--We now turn our attention to the emergence of flat bands due to interactions. As the most straightforward example naturally following from the previous discussion, consider electrons with current-current interactions:
\[I = \int\!\mathcal{D}\bar{\psi}\mathcal{D}\psi\exp\!\int\!d^{3}x\bigg{[}\bar{\psi}i\gamma^{\mu}\partial_{\mu}\psi+\frac{\lambda^{2}}{2}\bar{\psi}\gamma^{\mu}\psi\bar{\psi}\gamma_{\mu}\psi\bigg{]}\] \[= \int\!\mathcal{D}\bar{\psi}\mathcal{D}\psi\mathcal{D}a\exp\!\int\!d^{3}x\bigg{[}\bar{\psi}i\gamma^{\mu}(\partial_{\mu}-i\lambda a_{\mu})\psi-\frac{1}{2}a_{\mu}a^{\mu}\bigg{]}, \tag{12}\]
where in the second line we have decoupled the currents via a Hubbard-Stratonovich field, which depends on space and time. We will focus on classical, spatially inhomogeneous configurations only, in which case the action reduces to the previous situation of a Dirac particle in a magnetic field. The on-shell value of the Hubbard-Stratonovich field is \(a_{\mu}=\lambda j_{\mu}\equiv\lambda\bar{\psi}\gamma_{\mu}\psi\). Our periodic magnetic field example readily demonstrates the existence of certain configurations of \(a_{\mu}\) that create flat bands. A non-vanishing expectation value of \(\left\langle a_{\mu}(x)\right\rangle\) implies an ordered symmetry-broken phase of electron currents \(j_{\mu}\). We know from our discussion in the previous section that any configuration of periodic \(a_{\mu}(x)\) that satisfies Eq. (8) for each Hubbard-Stratonovich unit cell (e.g., \(\triangledown\) for the triangular cell) generates a flat band of fermions for this particular realization. If we set \(a_{0}=0\) and demand \(\int_{\triangledown}d^{2}x\,\nabla\times\left\langle\mathbf{a}(x)\right\rangle=1\), it would imply a phase of spontaneously generated periodic loop current textures with a flat band for the corresponding Bogoliubov excitations.
However, there is no a priori reason to choose this or any other particular configuration of the order parameter \(\lambda\left\langle\bar{\psi}\gamma_{\mu}\psi\right\rangle\) among infinitely many possibilities. Ultimately, the relevant configuration(s) are determined by the energetics of the model. We now argue that there exists a simple general class of models where such a band-flattening transition is natural and indeed energetically favorable.
Let us reintroduce an external magnetic field, \(A_{\mu}(x)\), into the interacting model. Consider specifically the case where the magnetic field strength is close but not exactly equal to the magic value where the flat band emerges. Denote the corresponding magic gauge field configuration as \(\bar{A}_{\mu}(x)\). The corresponding path-integral reads:
\[I = \!\!\int\!\mathcal{D}\mu\exp\!\int\!\!d^{3}x\bigg{[}\bar{\psi}i \gamma^{\mu}(\partial_{\mu}-ieA_{\mu}-i\lambda a_{\mu})\psi-\frac{1}{2}a_{\mu} a^{\mu}\bigg{]}\,,\]
In contrast to Eq. (12), what \(a_{\mu}\) needs to provide in order to generate a flat band is reduced to what \(A_{\mu}(x)\) has failed to provide. To "correct" the gauge field so that the constraint (8) is satisfied, the order parameter needs to acquire only a small average value,
\[\lambda\left\langle\bar{\psi}(x)\gamma_{\mu}\psi(x)\right\rangle=\bar{A}_{\mu }(x)-A_{\mu}(x)\,,\]
In this case, the quadratic term \(a_{\mu}a^{\mu}\) - the energy cost of such a texture in Eq. (12) - is negligible compared to the energy benefit of flattening the band, which depends only linearly on \(a_{\mu}\). Hence, a transition that flattens the band can happen for \(A_{\mu}(x)\) close enough to the magic configuration, \(\bar{A}_{\mu}(x)\). Note that this argument is independent of the particular gauge field configuration or the physical nature of the gauge field.
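The competition just described can be condensed into a one-parameter caricature (our schematic notation, purely illustrative): writing the energy per cell as a function of the order-parameter amplitude \(a\),
\[E(a)\simeq E_{0}-g\,|a|+\tfrac{1}{2}a^{2},\qquad g>0,\]
the minimum sits at \(|a_{*}|=g\) with \(E(a_{*})-E_{0}=-g^{2}/2<0\). An arbitrarily small linear gain from flattening therefore outweighs the quadratic cost of the texture, provided \(A_{\mu}\) is close enough to \(\bar{A}_{\mu}\) that the required \(a_{*}\) remains small.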
Moire systems [18; 19; 20; 21; 22; 23; 24] provide a natural physical platform where this scenario can happen, and in fact such behavior has been reported in experiment [25]. In the case of twisted bilayer graphene [26; 27; 28; 29; 30; 31; 32; 33], moire patterns give rise to emergent classical gauge fields, which play the role of the background gauge field. At magic twist angles, these fields generate flat bands. The recipe for a spontaneous formation of the flat band (potentially involving spontaneous symmetry breaking) is similar to the previous example: we introduce an "error", either by letting the gauge fields be slightly off their magic value or by breaking the chiral symmetry that enables exact flat bands in the first place, as happens in actual experiments. Then, if the error is small enough, it can become spontaneously corrected by an ordered phase which regenerates the flat band [34]. A simple example of this scenario would be to keep the twist angle slightly away from its magic value (see Supplemental Material for a discussion of this qualitative example). Another experimentally-relevant scenario of weakly perturbing the flat-band quantization condition may occur due to deformations of the graphene layers and the moire pattern due to lattice relaxation.
Note that the interaction-induced localization described above generally applies for other types of interactions, where band-flattening would happen due to a spontaneous symmetry breaking in a different channel or channels, with the latter corresponding to a co-existence of different quantum phases. In this context, the superconducting Cooper channel is of particular interest.
As our last and complementary discussion, let us consider the possibility of a flat band of Bogoliubov quasiparticles in a superconducting phase. Consider the action of interacting electrons moving across a two-dimensional Dirac material with the superconducting field, \(\Delta\), written in the Nambu space as follows
\[S = \!\!\int\!d^{3}x\!\left(\!\bar{\Psi}i\!\left[\begin{array}{cc} \partial_{t}-\mathbf{\sigma}\cdot\mathbf{\nabla}&-i\mathbf{\sigma}\cdot\mathbf{\Delta}\\ -i\mathbf{\sigma}\cdot\bar{\mathbf{\Delta}}&\partial_{t}+\mathbf{\sigma}\cdot\mathbf{\nabla} \end{array}\right]\!\Psi-\frac{2}{g}\bar{\mathbf{\Delta}}\cdot\mathbf{\Delta}\!\right)\!,\]
with \(\bar{\Psi}\equiv\left[\begin{array}{cc}\bar{\psi}_{\uparrow}&\psi_{\downarrow }^{T}\end{array}\right]\), \(\Psi\equiv\left[\begin{array}{cc}\psi_{\uparrow}&\bar{\psi}_{\downarrow}^{T} \end{array}\right]^{T}\) and \(\Uparrow
\(D^{\dagger}_{N\times M}\) is proportional to \(e^{i\mathbf{k}\cdot\mathbf{a}_{j}}\) the corresponding bond is connecting the \(n\)-th site of the cell to the \(m\)-th site of its neighboring cell; the one to the \(\mathbf{a}_{j}\) neighborhood of the original cell. In the language of periodic boundary condition, these components let the cell return to itself along the \(\mathbf{a}_{j}\) direction.
* Aharonov and Casher (1979)Y. Aharonov and A. Casher, Physical Review A **19**, 2461 (1979).
* Andrei _et al._ (2021)E. Y. Andrei, D. K. Efetov, P. Jarillo-Herrero, A. H. MacDonald, K. F. Mak, T. Senthil, E. Tutuc, A. Yazdani, and A. F. Young, Nature Reviews Materials **6**, 201 (2021).
* Mak and Shan (2022)K. F. Mak and J. Shan, Nature Nanotechnology **17**, 686 (2022).
* Xu and Balents (2018)C. Xu and L. Balents, Phys. Rev. Lett. **121**, 087001 (2018).
* Crepel _et al._ (2023)V. Crepel, N. Regnault, and R. Queiroz, The chiral limits of moire semiconductors: origin of flat bands and topology in twisted transition metal dichalcogenides homobilayers (2023), arXiv:2305.10477 [cond-mat.mes-hall].
* Popov and Tarnopolsky (2023)F. K. Popov and G. Tarnopolsky, Magic angles in equal-twist trilayer graphene (2023), arXiv:2303.15505 [cond-mat.str-el].
* Guerci _et al._ (2023)D. Guerci, Y. Mao, and C. Mora, Chern mosaic and ideal flat bands in equal-twist trilayer graphene (2023), arXiv:2305.03702 [cond-mat.mes-hall].
* Parhizkar and Galitski (2022)A. Parhizkar and V. Galitski, Moire gravity and cosmology (2022), arXiv:2204.06574 [hep-th].
* Choi _et al._ (2021)Y. Choi, H. Kim, C. Lewandowski, Y. Peng, A. Thomson, R. Polski, Y. Zhang, K. Watanabe, T. Taniguchi, J. Alicea, and S. Nadj-Perge, Nature Physics **17**, 1375 (2021).
* Tarnopolsky _et al._ (2019)G. Tarnopolsky, A. J. Kruchkov, and A. Vishwanath, Phys. Rev. Lett. **122**, 106405 (2019).
* Bistritzer and MacDonald (2011)R. Bistritzer and A. H. MacDonald, Proceedings of the National Academy of Sciences **108**, 12233 (2011), [https://www.pnas.org/content/108/30/12233.full.pdf](https://www.pnas.org/content/108/30/12233.full.pdf).
* Lopes dos Santos _et al._ (2007)J. M. B. Lopes dos Santos, N. M. R. Peres, and A. H. Castro Neto, Phys. Rev. Lett. **99**, 256802 (2007).
* Cao _et al._ (2018)Y. Cao, V. Fatemi, A. Demir, S. Fang, S. L. Tomarken, J. Y. Luo, J. D. Sanchez-Yamagishi, K. Watanabe, T. Taniguchi, E. Kaxiras, _et al._, Nature **556**, 80 (2018).
* Cao _et al._ (2018)Y. Cao, V. Fatemi, S. Fang, K. Watanabe, T. Taniguchi, E. Kaxiras, and P. Jarillo-Herrero, Nature **556**, 43 (2018).
* Balents (2019)L. Balents, SciPost Phys. **7**, 48 (2019).
* Parhizkar and Galitski (2022)A. Parhizkar and V. Galitski, Phys. Rev. Research **4**, L022027 (2022).
* Parhizkar and Galitski (2023)A. Parhizkar and V. Galitski, A generic topological criterion for flat bands in two dimensions (2023), arXiv:2301.00824 [cond-mat.mes-hall].
* (34)If the moire gauge field or/and external gauge field configuration is far from the magic value, symmetry breaking into a flat band can still occur, but its recognition requires an analysis of the energetics of the specific model.
* Coleman (2007)P. Coleman, Handbook of Magnetism and Advanced Magnetic Materials (2007).
# Localizing Transitions via Interaction-Induced Flat Bands
Supplemental Material
Alireza Parhizkar and Victor Galitski
Joint Quantum Institute, Department of Physics, University of Maryland, College Park 20742
###### Abstract
I. Periodic Magnetic Field
II. Bilayer Graphene
## I Periodic Magnetic Field
A spin-\(\frac{1}{2}\) particle has two components. Thus in \((2+1)\) dimensions its Hamiltonian is given by \(h^{2}\) if its dispersion is quadratic and \(h\) if it is linear, while \(h\) is given by the following:
\[h\equiv i\sigma^{j}\left(\partial_{j}-ieA_{j}\right)\] (S1)
with \(j=\{x,y\}\). For ease of mind we should note that \(h^{2}\) can also be written in the familiar form below [1]:
\[h^{2} \equiv-\left(\mathbf{\nabla}-ie\mathbf{A}\right)^{2}-eB_{z}\sigma^{z}\] (S2) \[=[i\left(\partial_{x}-ieA_{x}\right)+\sigma^{z}\left(\partial_{y} -ieA_{y}\right)]\times[i\left(\partial_{x}-ieA_{x}\right)-\sigma^{z}\left( \partial_{y}-ieA_{y}\right)]\] (S3)
where \(B_{z}\equiv\partial_{x}A_{y}-\partial_{y}A_{x}\). Therefore, solving the zero-mode problem of the quadratic system is the same as the linear one. The zero-mode equation, \(h\psi\equiv h(\psi^{+},\psi^{-})^{T}=0\), for one of the components, \(\psi^{-}\), of the wave-function is given as,
\[\left[\partial_{x}-i\partial_{y}-ieA_{x}-eA_{y}\right]\psi^{-}=0\] (S4)
or
\[\left[\left(\partial_{x}-eA_{y}\right)-i(\partial_{y}+eA_{x})\right]\psi^{-}=0\] (S5)
Therefore \(\psi^{-}=e^{-\phi}\) will be a solution to the above equation if,
\[\partial_{x}\phi=-eA_{y}\quad\text{and}\quad\partial_{y}\phi=eA_{x}\quad \text{or}\quad\mathbf{\nabla}^{2}\phi=-B\] (S6)
where \(B\) is the external magnetic field normal to the material plane (see Supplementary material for more details). [2] The other component of the wave-function, \(\psi^{+}\), must satisfy,
\[\left[\left(\partial_{x}+eA_{y}\right)+i(\partial_{y}-eA_{x})\right]\psi^{+}=0\] (S7)
thus \(\psi^{+}=e^{+\phi}\) will be the corresponding solution for this component. So the general solution to \(h\Psi=0\) is as follows,
\[h\Psi\equiv h\left(\begin{array}{c}\psi^{+}\\ \psi^{-}\end{array}\right)=h\left(\begin{array}{c}f_{+}(x+iy)e^{+\phi}\\ f_{-}(x-iy)e^{-\phi}\end{array}\right)=h\left(\begin{array}{c}f_{+}(z)e^{+ \phi}\\ f_{-}(\bar{z})e^{-\phi}\end{array}\right)=0\] (S8)
where we used the fact that \(\partial_{z}f(\bar{z})=\partial_{\bar{z}}f(z)=0\), with \(z\equiv x+iy\). The functions \(f_{\pm}\) must leave \(\Psi\) normalizable. Note that one can always choose either of \(f_{\pm}\) to be zero. This becomes important when \(|\phi|\) is asymptotically unbounded. Then one can just eliminate the troubling component while the remaining wave-function will necessarily be bounded for a well-behaved magnetic field \(B\). For an example of unbounded \(\phi\) one can point to \(\phi=\alpha(x-x_{0})^{2}+\beta(y-y_{0})^{2}\)
which gives a constant magnetic field of \(B=2(\alpha+\beta)\). The fact that in the presence of a constant magnetic field one of the components is unbounded means that such a magnetic field breaks the degeneracy of the ground state.
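For a periodic field with zero net flux per cell, Eq. (S6) can be solved directly in Fourier space; the sketch below (our own illustration with a placeholder test field and grid) does this and builds the two candidate zero-mode envelopes of Eq. (S8). For a nonzero flux per cell, a non-periodic, Landau-like piece must be added to \(\phi\) on top of the periodic part.

```python
# Solve grad^2 phi = -B on a periodic cell via FFT and form e^{+phi}, e^{-phi}.
import numpy as np

N, L = 128, 1.0
x = np.linspace(0, L, N, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
B = 10.0 * np.cos(2 * np.pi * X / L) * np.cos(2 * np.pi * Y / L)  # zero net flux

kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(kx, kx, indexing="ij")
K2 = KX**2 + KY**2
K2[0, 0] = 1.0                                   # placeholder to avoid 0/0 for the mean mode
phi_hat = np.fft.fft2(B) / K2                    # from  -K2 * phi_hat = -B_hat
phi_hat[0, 0] = 0.0                              # fix the free additive constant of phi
phi = np.real(np.fft.ifft2(phi_hat))

psi_plus, psi_minus = np.exp(+phi), np.exp(-phi) # zero-mode envelopes of Eq. (S8)
```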
So far we have found zero energy eigenstates for two dimensional electrons under inhomogeneous magnetic field. Next we wish to find a whole band at zero energy. To have a band structure we need periodicity, which gives rise to an additional quantum number--the crystal momentum. We can create this periodicity by making the external magnetic field periodic with lattice vectors \({\bf a}_{1,2}\). So what we want is to find a set of wave-functions, \(\psi_{\bf k}\), which satisfy two conditions, (i) they have zero energy: \(h\psi_{\bf k}=0\), (ii) they have crystal momentum of \({\bf k}\):
\[\psi_{\bf k}({\bf r}+{\bf a}_{1,2})=\psi_{\bf k}({\bf r})e^{i{\bf k}.{\bf a}_{1,2}}\,.\] (S9)
Condition (i) is satisfied as long as \(\psi\) has the same form as in Eq. (S8). In order to satisfy condition (ii) note that \(\phi\) has the same periodicity as the magnetic field insured by \(\mathbf{\nabla}^{2}\phi=-B\). So the quasi-periodicity of the wave-function, Eq. (S9) must come from the functions \(f_{\pm}\)[S3, S4]. Consider the following,
\[\eta_{\bf k}(z)=\frac{\sum_{n\in{\cal Z}}\exp i\pi\left[\frac{a_{2}}{a_{1}} \left(n+\frac{{\bf k}\cdot{\bf a}_{1}}{2\pi}\right)^{2}+2\left(n+\frac{{\bf k} \cdot{\bf a}_{1}}{2\pi}\right)\left(\frac{z}{a_{1}}-\frac{{\bf k}\cdot{\bf a} _{2}}{2\pi}\right)\right]}{\sum_{n\in{\cal Z}}\exp i\pi\left[\frac{a_{2}}{a_{1 }}n^{2}+2n\frac{z}{a_{1}}\right]}\] (S10)
with \(a_{1,2}\equiv a_{1,2}^{x}+ia_{1,2}^{y}\). It is left to the reader to see that \(\eta(z+a_{1,2})=\eta(z)e^{i{\bf k}\cdot{\bf a}_{1,2}}\). Thus there is potentially a two-fold degenerate flat-band:
\[\psi_{\bf k}^{+}=\left(\begin{array}{c}\eta_{\bf k}(z)e^{+\phi}\\ 0\end{array}\right)\,,\quad\mbox{and}\quad\psi_{\bf k}^{-}=\left(\begin{array} []{c}0\\ \bar{\eta}_{\bf k}(\bar{z})e^{-\phi}\end{array}\right)\,.\] (S11)
where \(\bar{\eta}\) is defined as \(\eta\) but with \(a_{1,2}\) replaced by \(\bar{a}_{1,2}\). The normalizability criterion here reduces to normalizability over a unit cell. However, the flat-bands are not always realizable. The obstacle comes from the chiral anomaly or, in other words, the Atiyah-Singer index [S5, S6]. This obstacle is surpassed only when the flux through a unit cell is an integer multiple of the flux quantum.
## II Bilayer graphene
As shown in a previous work [S7], the physics of spinless free electrons residing in a system of bilayer graphene while coupled to an external electromagnetic field \(A_{j}\), can be described by the path-integral \(\int{\cal D}[\bar{\psi},\psi]\,e^{iS_{0}}\), where the free action \(S_{0}\) is given by,
\[S_{0}=\int d^{3}x\,\bar{\psi}i\not{D}\psi,\quad\not{D}\equiv\gamma^{\mu}\left( \partial_{\mu}-ieA_{\mu}+i{\cal A}_{\mu}\gamma_{5}+i{\cal S}_{\mu}i\gamma_{3} \right),\] (S12)
with \(\gamma^{\mu}\) being the four-by-four gamma matrices. In the path-integral above the four component spinor fields \(\bar{\psi}\) and \(\psi\) are treated as independent, also \(\mu\in\{0,1,2\}\) and \({\cal A}_{\mu}\) and \({\cal S}_{\mu}\) are two emergent gauge field with the following components,
\[{\cal A}_{0} = -\frac{v}{\nu_{F}}{\rm Re}[V({\bf r})]\,,\quad{\cal A}_{1}=\frac{ u}{2\nu_{F}}{\rm Re}[U({\bf r})+U(-{\bf r})]\,,\quad{\cal A}_{2}=\frac{u}{2\nu_{F}} {\rm Im}[U({\bf r})+U(-{\bf r})]\,,\] \[{\cal S}_{0} = +\frac{v}{\nu_{F}}{\rm Im}[V({\bf r})]\,,\quad{\cal S}_{1}=\frac{ u}{2\nu_{F}}{\rm Im}[U({\bf r})-U(-{\bf r})]\,,\quad{\cal S}_{2}=\frac{u}{2\nu_{F}} {\rm Re}[U(-{\bf r})-U({\bf r})]\,,\] (S13)
with \(\nu_{F}\), \(v\) and \(u\) respectively being the Fermi velocity of graphene, \(AA\) and \(AB\) interlayer hopping amplitudes,
\[V({\bf r})=\sum_{j=0}^{2}e^{-i{\bf q}_{j}\cdot{\bf r}}\,,\quad U({\bf r})=\sum _{j=0}^{2}e^{-i{\bf q}_{j}\cdot{\bf r}}e^{i2\pi j/3}\,,\] (S14)
and \({\bf q}_{j}\) being the three vectors generating the hexagonal structure of the moire reciprocal lattice of the bilayer graphene. In other words, \({\bf q}_{j}\) are the momentum distances between the corresponding Dirac cones of the two layers. For certain values of \({\bf q}_{j}\)--the magic values--the system forms a flat-band, at which the other bands are maximally pushed away from the flat-band [S3]. When the same-site, \(AA\), hopping is turned off and the only mutual deformation of the layers of graphene is that of a uniform twist, for the first magic angle (of twist) we have \({\bf q}_{0}\approx-\sqrt{3}u/\nu_{F}\hat{y}\) while \({\bf q}_{1}\) and \({\bf q}_{2}\)
are given by successive \(2\pi/3\) counter-clockwise rotations. The interlayer hopping energy, \(u\), is responsible for creating a gap between the otherwise disengaged Dirac cones of each layer. A larger \(u\) is more successful in generating a large gap pushing the band towards flatness. When the first flat-band occurs the energy scale of the moire Brillouin zone, the gap required for flatness, is comparable to that of the interlayer hopping \(u\approx\nu_{F}|{\bf q}_{j}|/\sqrt{3}\). For the next flat-bands, \(\nu_{F}|{\bf q}_{j}|\) becomes smaller and smaller as the moire Brillouin zone shrinks. While the interlayer hopping energy must be sufficient for an extended range of \({\bf q}_{j}\), the obstacle for forming a flat-band comes from the chiral anomaly (see Ref. [8]) and it is removed when the gauge fields satisfy a topological condition, namely being able to fit into a moire cell by having a whole winding number within the cell.
Therefore, if the bilayer graphene is twisted away from the magic angle or if \(v\) becomes non-zero, the "flat"-band ceases to be flat and acquires some curvature. However this could be compensated for by applying external fields [8] and thus "correcting" the band into flatness. Here we ask a similar question about interactions: Can introducing interactions somehow make up for stepping away from flatness?
A distortion of the graphene monolayer behaves as an emergent gauge field.
An instance of the above occurs for the chiral model of the twisted bilayer graphene [S3] when the twist angle is slightly away from the magic angle. To compensate for this error, we only need non-zero \(a_{1,2}\) and \(s_{1,2}\), and as discussed above there exists a configuration of \(a_{1,2}\) and \(s_{1,2}\) that both does the job and is small. The electrons are coupled to \(\mathcal{A}_{\mu}-\lambda_{5}a_{\mu}\) and \(\mathcal{S}_{\mu}-\lambda_{3}s_{\mu}\); thus, in fact, whatever can be known about the whole band structure (including the density of states) is a function of these two terms. Thus for the energy of the whole system at zero temperature, \(E\), constituted by the energy of the fermions, \(\mathcal{E}_{\nu}\), which also depends on the chemical potential \(\nu\), and the energy cost of the (auxiliary) gauge fields, we have,
\[E =\mathcal{E}_{\nu}[\mathcal{A}_{\mu}-\lambda_{5}a_{\mu},\mathcal{S }_{\mu}-\lambda_{3}s_{\mu}]+\frac{1}{2}a_{\mu}a^{\mu}+\frac{1}{2}s_{\mu}s^{\mu}\] (S20) \[\approx\mathcal{E}_{\nu}\Bigg{|}_{\mathcal{A}_{\mu},\mathcal{S}_ {\mu}}-\lambda_{5}\frac{\delta\mathcal{E}}{\delta\mathcal{A}^{\mu}}\Bigg{|}_{ \mathcal{A}_{\mu},\mathcal{S}_{\mu}}a^{\mu}-\lambda_{3}\frac{\delta\mathcal{E} }{\delta\mathcal{S}^{\mu}}\Bigg{|}_{\mathcal{A}_{\mu},\mathcal{S}_{\mu}}s^{ \mu}+\frac{1}{2}a_{\mu}a^{\mu}+\frac{1}{2}s_{\mu}s^{\mu}\,,\]
so for a small enough deviation from flatness (or, in other words, curvature) and consequently small enough \(a_{\mu}\) and \(s_{\mu}\), a transition to the flat-band can happen. For the case of the example at hand, we know that at the magic angle the other bands are at a maximum distance from the flat-band [S3]. As the hopping potential pushes the band to its extreme (perfect flatness), it also maximally pushes the other bands away from it. This means that the linear part of Eq. (S20) can have a winning energy benefit for certain values of the chemical potential.
|
2309.11760 | Second Hankel determinant for logarithmic inverse coefficients of
strongly convex and strongly starlike functions | In this paper, we obtain the sharp bounds of the second Hankel determinant of
logarithmic inverse coefficients for the strongly starlike and strongly convex
functions of order alpha. | Vasudevarao Allu, Amal Shaji | 2023-09-21T03:46:24Z | http://arxiv.org/abs/2309.11760v1 | Second Hankel determinant for logarithmic inverse coefficients of strongly convex and strongly starlike functions
###### Abstract.
In this paper, we obtain the sharp bounds of the second Hankel determinant of logarithmic inverse coefficients for the strongly starlike and strongly convex functions of order alpha.
Key words and phrases:Univalent functions, Logarithmic coefficients, Hankel determinant, Strongly convex, Strongly Starlike 2010 Mathematics Subject Classification: 30C45, 30C50, 30C55
## 1. Introduction
Let \(\mathcal{H}\) denote the class of analytic functions in the unit disk \(\mathbb{D}:=\{z\in\mathbb{C}:\,|z|<1\}\). Here \(\mathcal{H}\) is a locally convex topological vector space endowed with the topology of uniform convergence over compact subsets of \(\mathbb{D}\). Let \(\mathcal{A}\) denote the class of functions \(f\in\mathcal{H}\) such that \(f(0)=0\) and \(f^{\prime}(0)=1\). Let \(\mathcal{S}\) denote the subclass of \(\mathcal{A}\) consisting of functions which are univalent (_i.e., one-to-one_) in \(\mathbb{D}\). If \(f\in\mathcal{S}\) then it has the following series representation
\[f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n},\quad z\in\mathbb{D}. \tag{1.1}\]
The _Logarithmic coefficients_\(\gamma_{n}\) of \(f\in\mathcal{S}\) are defined by,
\[F_{f}(z):=\log\frac{f(z)}{z}=2\sum_{n=1}^{\infty}\gamma_{n}z^{n},\quad z\in \mathbb{D}. \tag{1.2}\]
The logarithmic coefficients \(\gamma_{n}\) play a central role in the theory of univalent functions. A very few exact upper bounds for \(\gamma_{n}\) seem to have been established. The significance of this problem in the context of Bieberbach conjecture was pointed by Milin [21] in his conjecture. Milin [21] has conjectured that for \(f\in\mathcal{S}\) and \(n\geq 2\),
\[\sum_{m=1}^{n}\sum_{k=1}^{m}\left(k|\gamma_{k}|^{2}-\frac{1}{k}\right)\leq 0,\]
which led De Branges, by proving this conjecture, to the proof of the Bieberbach conjecture [7]. For the Koebe function \(k(z)=z/(1-z)^{2}\), the logarithmic coefficients are \(\gamma_{n}=1/n\). Since the Koebe function \(k\) plays the role of the extremal function for most of the extremal problems in the class \(\mathcal{S}\), it is expected that \(|\gamma_{n}|\leq 1/n\) holds for functions in \(\mathcal{S}\). But this is not true in general.
Let \(f(z)=z+\sum_{n=2}^{\infty}a_{n}z^{n}\) be a function in the class \(\mathcal{S}\). Since \(f(f^{-1}(w))=w\) and using (1.4), it follows that
\[A_{2} =-a_{2},\] \[A_{3} =-a_{3}+2a_{2}^{2},\] \[A_{4} =-a_{4}+5a_{2}a_{3}-5a_{2}^{3}. \tag{1.5}\]
The notion of logarithmic inverse coefficients, _i.e._, logarithmic coefficients of the inverse of \(f\), was proposed by Ponnusamy _et al._[26]. The _logarithmic inverse coefficients_ \(\Gamma_{n}\), \(n\in\mathbb{N}\), of \(f\) are defined by the equation
\[F_{f^{-1}}(w):=\log\frac{f^{-1}(w)}{w}=2\sum_{n=1}^{\infty}\Gamma_{n}w^{n}, \quad|w|<1/4. \tag{1.6}\]
By differentiating (1.6) together with (1.5), we get
\[\Gamma_{1} =-\frac{1}{2}a_{2},\] \[\Gamma_{2} =-\frac{1}{2}a_{3}+\frac{3}{4}a_{2}^{2},\] \[\Gamma_{3} =-\frac{1}{2}a_{4}+2a_{2}a_{3}-\frac{5}{3}a_{2}^{3}. \tag{1.7}\]
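The algebra behind (1.5) and (1.7) is mechanical and can be reproduced symbolically; the sketch below (illustrative only, not part of the original derivation) reverts the series of \(f\) and expands the logarithm.

```python
# Recover the inverse coefficients A_n and the logarithmic inverse coefficients
# Gamma_n of f(z) = z + a2 z^2 + a3 z^3 + a4 z^4 + ... symbolically.
import sympy as sp

w = sp.symbols('w')
a2, a3, a4, A2, A3, A4 = sp.symbols('a2 a3 a4 A2 A3 A4')

f = lambda t: t + a2*t**2 + a3*t**3 + a4*t**4
g = w + A2*w**2 + A3*w**3 + A4*w**4                      # ansatz for f^{-1}(w)

expr = sp.expand(f(g) - w)                               # enforce f(f^{-1}(w)) = w
sol = sp.solve([expr.coeff(w, n) for n in (2, 3, 4)], (A2, A3, A4), dict=True)[0]
print(sol)   # expect A2 = -a2, A3 = 2a2^2 - a3, A4 = -5a2^3 + 5a2a3 - a4, i.e. (1.5)

ginv = w + sol[A2]*w**2 + sol[A3]*w**3 + sol[A4]*w**4
logpart = sp.expand(sp.series(sp.log(sp.expand(ginv / w)), w, 0, 4).removeO())
print([sp.simplify(logpart.coeff(w, n) / 2) for n in (1, 2, 3)])   # expect the Gammas of (1.7)
```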
In [26] Ponnusamy _et al._ found the sharp upper bound for the logarithmic inverse coefficients for the class \(\mathcal{S}\). In fact, Ponnusamy _et al._[26] proved that when \(f\in\mathcal{S}\),
\[|\Gamma_{n}|\leq\frac{1}{2n}\begin{pmatrix}2n\\ n\end{pmatrix},\quad n\in\mathbb{N}\]
and equality holds only for the Koebe function or one of its rotations. Further, Ponnusamy _et al._[26] obtained sharp bounds for the initial logarithmic inverse coefficients for some of the important geometric subclasses of \(\mathcal{S}\).
Recently, Kowalczyk and Lecko [17] together have proposed the study of the Hankel determinant whose entries are logarithmic coefficients of \(f\in\mathcal{S}\), which is given by
\[H_{q,n}(F_{f}/2)=\begin{vmatrix}\gamma_{n}&\gamma_{n+1}&\cdots&\gamma_{n+q-1} \\ \gamma_{n+1}&\gamma_{n+2}&\cdots&\gamma_{n+q}\\ \vdots&\vdots&\ddots&\vdots\\ \gamma_{n+q-1}&\gamma_{n+q}&\cdots&\gamma_{n+2(q-1)}\end{vmatrix}.\]
Kowalczyk and Lecko [17] have obtained the sharp bound of the second Hankel determinant of \(F_{f}/2\), _i.e.,_\(H_{2,1}(F_{f}/2)\) for starlike and convex functions. The problem of computing the sharp bounds of \(H_{2,1}(F_{f}/2)\) has been considered by many authors for various subclasses of \(\mathcal{S}\) (See [4, 5, 16, 22]).
In this paper, we consider the notion of the second Hankel determinant for logarithmic inverse coefficients. Let \(f\in\mathcal{S}\) given by (1.1), then the second Hankel determinant of \(F_{f^{-1}}/2\) by using (1.7), is given by
\[\begin{split} H_{2,1}(F_{f^{-1}}/2)&=\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}\\ &=\frac{1}{4}\left(A_{2}A_{4}-A_{3}^{2}+\frac{1}{4}A_{2}^{4}\right)\\ &=\frac{1}{48}\left(13a_{2}^{4}-12a_{2}^{2}a_{3}-12a_{3}^{2}+12a_{2}a_{4}\right).\end{split} \tag{1.8}\]
It is now appropriate to remark that \(H_{2,1}(F_{f^{-1}}/2)\) is invariant under rotation, since for \(f_{\theta}(z):=e^{-i\theta}f\left(e^{i\theta}z\right),\theta\in\mathbb{R}\) when \(f\in\mathcal{S}\) we have
\[H_{2,1}(F_{f_{\theta}^{-1}}/2)=\frac{e^{4i\theta}}{48}\left(13a_{2}^{4}-12a_{2}^{2}a_{3}-12a_{3}^{2}+12a_{2}a_{4}\right)=e^{4i\theta}H_{2,1}(F_{f^{-1}}/2). \tag{1.9}\]
In this paper we find sharp upper bounds for \(|H_{2,1}(F_{f^{-1}}/2)|\) when \(f\) belongs to the class of strongly convex or strongly starlike functions of order \(\alpha\). Given \(\alpha\in(0,1]\), a function \(f\in\mathcal{A}\) is called strongly convex of order \(\alpha\) if
\[\left|\arg\left(1+\frac{zf^{\prime\prime}(z)}{f^{\prime}(z)}\right)\right|< \frac{\pi\alpha}{2} \tag{1.10}\]
The set of all such functions is denoted by \(\mathcal{K}_{\alpha}\). Also, a function \(f\in\mathcal{A}\) is called strongly starlike of order \(\alpha\) if
\[\left|\arg\left(\frac{zf^{\prime}(z)}{f(z)}\right)\right|<\frac{\pi\alpha}{2} \tag{1.11}\]
The set of all such functions is denoted by \(\mathcal{S}_{\alpha}^{*}\). The notion of strongly starlike functions was introduced by Stankiewicz [29] and independently by Brannan and Kirwan [8]. An external geometric characterisation of strongly starlike functions was proposed by Stankiewicz [30]. Brannan and Kirwan [8] found a geometrical condition called \(\delta\)-visibility which is sufficient for functions to be starlike.
## 2. Preliminary Results
In this section, we present the key lemmas which will be used to prove the main results of this paper. Let \(\mathcal{P}\) denote the class of all analytic functions \(p\) having positive real part in \(\mathbb{D}\), with the form
\[p(z)=1+c_{1}z+c_{2}z^{2}+c_{3}z^{3}+\cdots. \tag{2.1}\]
A member of \(\mathcal{P}\) is called a Caratheodory function. It is known that \(|c_{n}|\leq 2\), \(n\geq 1\), for a function \(p\in\mathcal{P}\). By using (1.10) and (1.11), functions in the classes \(\mathcal{S}_{\alpha}^{*}\) and \(\mathcal{K}_{\alpha}\) can be represented in terms of functions in the Caratheodory class \(\mathcal{P}\). To prove our main results, we need some preliminary lemmas. The first one is known as Caratheodory's lemma and the second one is due to Libera and Zlotkiewicz.
**Lemma 2.1**.: _[_12_]_ _For a function \(p\in\mathcal{P}\) of the form (2.1), the sharp inequality \(|c_{n}|\leq 2\) holds for each \(n\geq 1\). Equality holds for the function \(p(z)=(1+z)/(1-z).\)_
**Lemma 2.2**.: _[_18, 19_]_ _If \(p\in\mathcal{P}\) is of the form (2.1) with \(c_{1}\geq 0\), then there exist \(z,w\in\mathbb{D}\) such that_
\[2c_{2}=c_{1}^{2}+(4-c_{1}^{2})z\]
_and_
\[4c_{3}=c_{1}^{3}+2(4-c_{1}^{2})c_{1}z-c_{1}(4-c_{1}^{2})z^{2}+2(4-c_{1}^{2})(1-|z|^{2})w.\]
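For instance, if \(c_{1}=2\), then \(4-c_{1}^{2}=0\) and the two formulae force \(c_{2}=c_{3}=2\), in accordance with the extremal function \(p(z)=(1+z)/(1-z)\), for which every coefficient equals \(2\).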
Next we recall the following well-known result due to Choi _et al._[11]. Lemma 2.3 plays an important role in the proof of our main results.
**Lemma 2.3**.: _Let \(A,B,C\) be real numbers and_
\[Y(A,B,C):=\max_{z\in\mathbb{D}}\left(\left|A+Bz+Cz^{2}\right|+1-|z|^{2}\right).\]
_(i) If \(AC\geq 0\), then_
\[Y(A,B,C)=\begin{cases}|A|+|B|+|C|,&|B|\geq 2(1-|C|),\\ 1+|A|+\dfrac{B^{2}}{4(1-|C|)},&|B|<2(1-|C|).\end{cases}\]
_(ii) If \(AC<0\), then_
\[Y(A,B,C)=\begin{cases}1-|A|+\dfrac{B^{2}}{4(1-|C|)},&-4AC\left(C^{-2}-1\right) \leq B^{2}\wedge|B|<2(1-|C|),\\ 1+|A|+\dfrac{B^{2}}{4(1+|C|)},&B^{2}<\min\left\{4(1+|C|)^{2},-4AC\left(C^{-2}- 1\right)\right\},\\ R(A,B,C),&\text{otherwise,}\end{cases}\]
_where_
\[R(A,B,C)=\begin{cases}|A|+|B|+|C|,&|C|(|B|+4|A|)\leq|AB|,\\ -|A|+|B|+|C|,&|AB|\leq|C|(|B|-4|A|),\\ (|A|+|C|)\sqrt{1-\dfrac{B^{2}}{4AC}},&\text{ otherwise.}\end{cases}\]
## 3. Main Results
We prove the following sharp inequality for the second Hankel determinant of logarithmic inverse coefficients for the class \(\mathcal{K}_{\alpha}\).
**Theorem 3.1**.: _Let \(f\in\mathcal{K}_{\alpha}\) be given by (1.1). Then_
\[|H_{2,1}(F_{f^{-1}}/2)|\leq\begin{cases}\dfrac{\alpha^{2}}{36},&0<\alpha\leq 1/ 3,\\ \dfrac{\alpha^{2}(17+18\alpha+13\alpha^{2})}{144(4+6\alpha+\alpha^{2})},&1/3< \alpha\leq 1.\end{cases} \tag{3.1}\]
_The inequality is sharp._
Proof.: Fix \(\alpha\in(0,1]\) and let \(f\in\mathcal{K}_{\alpha}\) be of the form (1.1). Then by (1.10),
\[1+\dfrac{zf^{\prime\prime}(z)}{f^{\prime}(z)}=(p(z))^{\alpha} \tag{3.2}\]
for some \(p\in\mathcal{P}\) of the form (2.1). By comparing the coefficients on both the sides of (3.2), we obtain
\[a_{2} =\dfrac{\alpha}{2}\,c_{1},\] \[a_{3} =\dfrac{\alpha}{12}(2c_{2}+(3\alpha-1)c_{1}^{2}),\] \[a_{4} =\dfrac{\alpha}{144}\left(12c_{3}+6(5\alpha-2)c_{1}c_{2}+(17\alpha^{2}-15\alpha+4)c_{1}^{3}\right). \tag{3.3}\]
Hence by (1.8), we have
\[H_{2,1}(F_{f^{-1}}/2)=\dfrac{\alpha^{2}}{2304}\left[24c_{1}c_{3}-16c_{2}^{2}-4 (2+3\alpha)c_{1}^{2}c_{2}+(4+6\alpha+\alpha^{2})c_{1}^{4}\right].\]
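The last identity can be checked symbolically; the following SymPy sketch (an added verification, not part of the argument, and any computer algebra system would do) substitutes (3.3) into (1.8) and confirms that the difference vanishes.

```python
import sympy as sp

alpha, c1, c2, c3 = sp.symbols('alpha c1 c2 c3')

# coefficients (3.3) of a strongly convex function of order alpha
a2 = alpha*c1/2
a3 = sp.Rational(1, 12)*alpha*(2*c2 + (3*alpha - 1)*c1**2)
a4 = sp.Rational(1, 144)*alpha*(12*c3 + 6*(5*alpha - 2)*c1*c2
                                + (17*alpha**2 - 15*alpha + 4)*c1**3)

# H_{2,1}(F_{f^{-1}}/2) from (1.8)
H = sp.Rational(1, 48)*(13*a2**4 - 12*a2**2*a3 - 12*a3**2 + 12*a2*a4)

# claimed closed form in terms of c1, c2, c3
claimed = alpha**2/2304*(24*c1*c3 - 16*c2**2
                         - 4*(2 + 3*alpha)*c1**2*c2
                         + (4 + 6*alpha + alpha**2)*c1**4)

print(sp.expand(H - claimed))   # prints 0
```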
Since the class \(\mathcal{K}_{\alpha}\) is invariant under rotation and (1.9) holds, without loss of generality we can assume that \(c_{1}=c\), where \(0\leq c\leq 2\). Now, using Lemma 2.2 and a straightforward computation, we obtain
\[H_{2,1}(F_{f^{-1}}/2)= \dfrac{\alpha^{2}}{2304}\left[(2+\alpha^{2})c^{4}-6\alpha(4-c^{2 })c^{2}z-2(4-c^{2})(8+c^{2})z^{2}\right.\] \[\left.+12c(4-c^{2})(1-|z|^{2})w\right]. \tag{3.4}\]
Now we consider different cases on \(c\).
**Case 1.** Suppose that \(c=0\). Then from (3.4), for \(\alpha\in(0,1]\),
\[|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}|=\dfrac{\alpha^{2}}{36}\,|z|^{2}\leq \dfrac{\alpha^{2}}{36} \tag{3.5}\]
**Case 2.** Suppose that \(c=2\). Then from (3.4), for \(\alpha\in(0,1]\),
\[|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}|\leq\dfrac{\alpha^{2}(2+\alpha^{2})}{144} \tag{3.6}\]
**Case 3.** Suppose that \(c\in(0,2)\). Since \(|w|\leq 1\), from (3.4) we obtain
\[|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}| \leq\frac{\alpha^{2}}{2304}\left[\left|(2+\alpha^{2})c^{4}-6\alpha(4 -c^{2})c^{2}z-2(4-c^{2})(8+c^{2})z^{2}\right|\right. \tag{3.7}\] \[\left.+12c(4-c^{2})(1-|z|^{2})w\right]\] \[=\frac{\alpha^{2}}{192}(c(4-c^{2}))\left[|A+Bz+Cz^{2}|+1-|z|^{2}\right]\]
where
\[A:=\frac{c^{3}(2+\alpha^{2})}{12(4-c^{2})},\quad B:=-\frac{c\alpha}{2},\quad C:=-\frac{8+c^{2}}{6c}.\]
Since \(AC<0\), we apply case (ii) of Lemma 2.3.
**3(a).** Note that the inequality
\[-4AC\left(\frac{1}{C^{2}}-1\right)\leq B^{2}\]
is equivalent to
\[\frac{c^{4}(7\alpha^{2}-4)+8c^{2}(8+13\alpha^{2})}{8+c^{2}}\geq 0,\]
which evidently holds for \(c\in(0,2)\). Moreover, the inequality \(|B|<2(1-|C|)\) is equivalent to \(16-12c+c^{2}(2+3\alpha)<0\), which is false for \(c\in(0,2)\).
**3(b).** Since
\[4(1+|C|)^{2}=\frac{c^{4}+52c^{2}+64}{9c^{2}}>0,\]
and
\[-4AC\left(C^{-2}-1\right)=-\frac{(2+\alpha^{2})c^{2}(16-c^{2})}{18(8+c^{2})}<0.\]
We see that
\[\min\left\{4(1+|C|)^{2},-4AC\left(C^{-2}-1\right)\right\}=-4AC\left(C^{-2}-1 \right),\]
and from case \(3(a)\), we know that
\[-4AC\left(C^{-2}-1\right)\leq B^{2}.\]
Therefore, the inequality \(B^{2}<\min\left\{4(1+|C|)^{2},-4AC\left(C^{-2}-1\right)\right\}\) does not hold for \(0<c<2\).
**3(c).** Next observe that the inequality,
\[|C|(|B|+4|A|)-|AB|=\frac{192\alpha^{2}+8(8-3\alpha+4\alpha^{2})c^{2}+(8-12 \alpha+4\alpha^{2}-3\alpha^{3})c^{4}}{4-c^{2}}\leq 0\]
is equivalent to
\[\varphi_{1}(c^{2})\leq 0 \tag{3.8}\]
where
\[\varphi_{1}(t)=192\alpha^{2}+8(8-3\alpha+4\alpha^{2})t+(8-12\alpha+4\alpha^{2}-3\alpha^{3})t^{2},\quad t\in(0,4).\]
Note that \(\varphi_{1}(0)>0\) and \(\varphi_{1}(4)>0\). Also, it is easily seen that \(\varphi_{1}^{\prime}(t)>0\) for \(t\in(0,4)\). So \(\varphi_{1}\) is increasing and hence \(\varphi_{1}(t)>0\) for \(t\in(0,4)\). Thus we deduce that the inequality (3.8) is false.
**3(d).** Next note that the inequality
\[|AB|-|C|(|B|-4|A|)=\frac{\alpha c^{4}(2+\alpha^{2})}{24(4-c^{2})}-\frac{8+c^{2}}{6c}\bigg{(}\frac{\alpha c}{2}-\frac{c^{3}(2+\alpha^{2})}{3(4-c^{2})}\bigg{)}\leq 0 \tag{3.9}\]
is equivalent to
\[\varphi_{2}(c^{2})\leq 0, \tag{3.10}\]
where
\[\varphi_{2}(u)=(8+12\alpha+4\alpha^{2}+3\alpha^{3})u^{2}+8(8+3\alpha+4\alpha^{ 2})u-192\alpha,\quad\,u\in(0,4).\]
We see that the discriminant \(\Delta:=64(64+144\alpha+217\alpha^{2}+72\alpha^{3}+52\alpha^{4})>0.\) Thus we consider,
\[u_{1,2}=\frac{4\left(-(4\alpha^{2}+3\alpha+8)\mp\sqrt{52\alpha^{4}+72\alpha^{3}+217\alpha^{2}+144\alpha+64}\right)}{3\alpha^{3}+4\alpha^{2}+12\alpha+8}.\]
It is clear that \(u_{1}<0\). Moreover \(0<u_{2}<4\) holds. Indeed both inequalities \(u_{2}>0\) and \(u_{2}<4\) are equivalent to the evidently true inequalities
\[8+12\alpha+4\alpha^{2}+3\alpha^{3}>0\]
and
\[192+33\alpha+264\alpha^{2}+264\alpha^{3}+102\alpha^{4}+48\alpha^{5}+9\alpha^{ 6}>0\]
Thus (3.10), and hence (3.9), holds only when
\[0<c\leq\tilde{c}=\sqrt{u_{2}}.\]
Thus by (3.7) and Lemma 2.3,
\[\begin{split}|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}|& \leq\frac{\alpha^{2}}{192}c(4-c^{2})(-|A|+|B|+|C|)\\ &=\frac{\alpha^{2}}{2304}(64-(8-24\alpha)c^{2}-(4+6\alpha+\alpha ^{2})c^{4})\\ &=\Phi(c^{2})\end{split} \tag{3.11}\]
where
\[\Phi(t)=\frac{\alpha^{2}}{2304}(64-(8-24\alpha)t-(4+6\alpha+\alpha^{2})t^{2} ),\quad\,t\in(0,u_{2}).\]
Since
\[\Phi^{\prime}(t)=-\frac{\alpha^{2}}{2304}\left((8-24\alpha)+2(4+6\alpha+\alpha^{2})t\right),\quad t\in(0,u_{2}),\]
we see that for \(0<\alpha\leq 1/3\), the function \(\Phi\) is decreasing and so
\[\Phi(t)\leq\Phi(0)=\frac{\alpha^{2}}{36},\ \ \ 0\leq t\leq u_{2}\]
In the case \(1/3<\alpha\leq 1\),
\[t_{0}=\frac{4(3\alpha-1)}{4+6\alpha+\alpha^{2}}\]
is a unique critical point of \(\Phi\). Clearly \(t_{0}>0\). It remains to check whether \(t_{0}<u_{2}\), which is equivalent to
\[\delta(\alpha)= 117\alpha^{8}+240\alpha^{7}-149\alpha^{6}-1212\alpha^{5}-4344 \alpha^{4}\] \[-6288\alpha^{3}-4464\alpha^{2}-1920\alpha-448<0,\ \ \ \alpha\in(1/3,1],\]
and since
\[\delta(\alpha)\leq-149\alpha^{6}-1212\alpha^{5}-4344\alpha^{4}-6288\alpha^{3} -4464\alpha^{2}-1920\alpha-91<0\]
for \(\alpha\in(1/3,1]\), we deduce that \(t_{0}<u_{2}\).
Thus for \(1/3<\alpha\leq 1\), we have
\[\Phi(t)\leq\Phi(t_{0})=\frac{\alpha^{2}(17+18\alpha+13\alpha^{2})}{144(4+6 \alpha+\alpha^{2})},\ \ \ 0<t<u_{2}.\]
We can conclude that, for \(0<c\leq\tilde{c}\)
\[|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}|\leq\begin{cases}\frac{ \alpha^{2}}{36},&0<\alpha\leq 1/3,\\ \frac{\alpha^{2}(17+18\alpha+13\alpha^{2})}{144(4+6\alpha+\alpha^{2})},&1/3< \alpha\leq 1.\end{cases} \tag{3.12}\]
**3(e).** We now consider the last case in Lemma 2.3, which in view of 3(d) holds for \(\tilde{c}<c<2\). Then by (3.7), we have
\[|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}| \leq\frac{\alpha^{2}}{192}\:c\:(4-c^{2})(|C|+|A|)\sqrt{1-\frac{B^ {2}}{4AC}}\] \[=\frac{\alpha^{2}}{2304\sqrt{2(2+\alpha^{2})}}\big{(}64-8c^{2}+ \alpha^{2}c^{4}\big{)}\:\sqrt{\frac{32+52\alpha^{2}+c^{2}(4-7\alpha^{2})}{8+ c^{2}}}\] \[=\frac{\alpha^{2}}{2304\sqrt{2(2+\alpha^{2})}}g_{1}(c^{2})\sqrt {g_{2}(c^{2})} \tag{3.13}\]
where
\[g_{1}(x)=64-8x+\alpha^{2}x^{2}\]
and
\[g_{2}(x)=\frac{32+52\alpha^{2}+(4-7\alpha^{2})x}{8+x}.\]
It is easily seen that \(g_{1}\) and \(g_{2}\) are decreasing on \((u_{2},4)\), and so from (3.13) we obtain
\[|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}|\leq\frac{\alpha^{2}}{2304\sqrt{2(2+\alpha^{2})}}g_{1}(u_{2})\sqrt{g_{2}(u_{2})}=:\Psi(u_{2}). \tag{3.14}\]
**Case 4.** It remains to compare the bounds in (3.5), (3.6), (3.12) and (3.14). The inequality
\[\frac{\alpha^{2}(2+\alpha^{2})}{144}\leq\frac{\alpha^{2}}{36},\ \ \ \alpha\in(0,1]\]
is trivial. Since the function \(\Phi\) is decreasing on \((u_{2},4)\) and \(\Phi(u_{2})=\Psi(u_{2})\), the inequality
\[\frac{\alpha^{2}(2+\alpha^{2})}{144}\leq\frac{\alpha^{2}(17+18\alpha+13\alpha^ {2})}{144(4+6\alpha+\alpha^{2})}\]
is true for \(1/3<\alpha\leq 1\). Finally the inequality
\[\frac{\alpha^{2}}{36}\leq\frac{\alpha^{2}(17+18\alpha+13\alpha^{2})}{144(4+6 \alpha+\alpha^{2})},\ \ \alpha\in(1/3,1]\]
is equivalent to
\[9\alpha^{2}-6\alpha+1\geq 0\]
which is evidently true for \(1/3<\alpha\leq 1\).
Thus summarizing the results in cases 1-4, we see that (3.1) holds.
We now prove that the bounds in (3.1) are sharp. When \(\alpha\in(0,1/3]\), equality holds for the function \(f\in\mathcal{A}\) given by (3.2) with \(p\) given by
\[p(z)=\frac{1+z}{1-z},\ \ \ \ z\in\mathbb{D}.\]
When \(\alpha\in(1/3,1]\), set
\[\tau:=\sqrt{t_{0}}=\sqrt{\frac{4(3\alpha-1)}{(4+6\alpha+\alpha^{2})}}.\]
Since \(\tau\leq 2\), the function
\[\tilde{p}(z)=\frac{1-\tau z+z^{2}}{1-z^{2}},\ \ \ z\in\mathbb{D},\]
belongs to \(\mathcal{P}\). Thus the function \(f\) given by (3.2), where \(p\) is replaced by \(\tilde{p}\), belongs to \(\mathcal{K}_{\alpha}\) and is an extremal function for the second inequality in (3.1). This completes the proof.
For \(\alpha=1\), we get the following result [6].
**Corollary 3.2**.: _If \(f\in\mathcal{K}\), then_
\[|H_{2,1}(F_{f^{-1}}/2)|\leq\frac{1}{33}.\]
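Indeed, setting \(\alpha=1\) in the second case of (3.1) gives

\[\frac{17+18+13}{144(4+6+1)}=\frac{48}{1584}=\frac{1}{33}.\]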
We next find the sharp bound for the second Hankel determinant of logarithmic inverse coefficients of functions in \(\mathcal{S}_{\alpha}^{*}\).
**Theorem 3.3**.: _Let \(f\in\mathcal{S}_{\alpha}^{*}\) be given by (1.1). Then_
\[|H_{2,1}(F_{f^{-1}}/2)|\leq\begin{cases}\dfrac{\alpha^{2}}{4},&0< \alpha<1/5,\\ \dfrac{\alpha^{2}(2+5\alpha+15\alpha^{2})}{7+30\alpha+35\alpha^{2}},&1/5\leq \alpha\leq\alpha^{\prime},\\ \dfrac{\alpha^{2}}{36}(4+35\alpha^{2}),&\alpha^{\prime}<\alpha\leq 1.\end{cases} \tag{3.15}\]
_where \(\alpha^{\prime}=0.390595...\) is the unique root in \((0,1)\) of the equation \(44+60\alpha+155\alpha^{2}-1050\alpha^{3}-1225\alpha^{4}=0\). The inequality (3.15) is sharp._
Proof.: Fix \(\alpha\in(0,1]\) and let \(f\in\mathcal{S}_{\alpha}^{*}\) be of the form (1.1). Then by (1.11),
\[\dfrac{zf^{\prime}(z)}{f(z)}=(p(z))^{\alpha} \tag{3.16}\]
for some \(p\in\mathcal{P}\) of the form (2.1). By comparing the coefficients on both sides of (3.16), we obtain
\[\begin{split} a_{2}&=\alpha c_{1},\\ a_{3}&=\dfrac{\alpha}{4}(2c_{2}+(3\alpha-1)c_{1}^{2 }),\\ a_{4}&=\dfrac{\alpha}{36}\left(12c_{3}+6(5\alpha-2)c_ {1}c_{2}+(17\alpha^{2}-15\alpha+4)c_{1}^{3}\right).\end{split} \tag{3.17}\]
Hence by (1.8), we have
\[H_{2,1}(F_{f^{-1}}/2)=\dfrac{\alpha^{2}}{576}\left((7+30\alpha+35\alpha^{2})c_{1}^{4}-12(1+5\alpha)c_{1}^{2}c_{2}-36c_{2}^{2}+48c_{1}c_{3}\right).\]
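For instance, substituting \(c_{1}=c_{2}=c_{3}=2\), which corresponds to \(p(z)=(1+z)/(1-z)\), gives

\[H_{2,1}(F_{f^{-1}}/2)=\frac{\alpha^{2}}{576}\big{(}16(7+30\alpha+35\alpha^{2})-96(1+5\alpha)-144+192\big{)}=\frac{\alpha^{2}}{36}(4+35\alpha^{2}),\]

which is precisely the bound (3.20) arising in Case 2 below and the value in the last case of (3.15).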
Since the class \(\mathcal{S}_{\alpha}^{*}\) is invariant under rotation, without loss of generality, we can assume that \(c_{1}=c\), where \(0\leq c\leq 2\). Therefore, by Lemma 2.2, for some \(c\in[0,2]\) and \(z,w\in\mathbb{D}\) we have
\[\begin{split} H_{2,1}(F_{f^{-1}}/2)=&\dfrac{\alpha^ {2}}{576}\left[(4+35\alpha^{2})c^{4}-30\alpha(4-c^{2})c^{2}z-3(4-c^{2})(12+c^ {2})z^{2}\right.\\ &\left.+24c(4-c^{2})(1-|z|^{2})w\right].\end{split} \tag{3.18}\]
Now we have the following cases on \(c\).
**Case 1:** Suppose that \(c=0\). Then by (3.18), we obtain
\[|H_{2,1}(F_{f^{-1}}/2)|=\dfrac{1}{4}|z^{2}|\alpha^{2}\leq\dfrac{\alpha^{2}}{4}. \tag{3.19}\]
**Case 2:** Suppose that \(c=2\). Then by (3.18), we obtain
\[|H_{2,1}(F_{f^{-1}}/2)|=\dfrac{\alpha^{2}}{36}(4+35\alpha^{2}). \tag{3.20}\]
**Case 3:** Suppose that \(c\in(0,2)\). Applying the triangle inequality in (3.18) and by using the fact that \(|w|\leq 1\), we obtain
\[\begin{split}|H_{2,1}(F_{f^{-1}}/2)|&\leq\frac{\alpha^{2}}{576}\left[\left|(4+35\alpha^{2})c^{4}-30\alpha(4-c^{2})c^{2}z-3(4-c^{2})(12+c^{2})z^{2}\right|\right.\\ &\left.+24c(4-c^{2})(1-|z|^{2})\right]\\ &\leq\frac{\alpha^{2}}{24}c(4-c^{2})\left[\left|A+Bz+Cz^{2}\right|+1-|z|^{2}\right],\end{split} \tag{3.21}\]
where
\[A:=\frac{c^{3}(4+35\alpha^{2})}{24(4-c^{2})},\quad B:=-\frac{5}{4}\alpha c, \quad C:=-\frac{12+c^{2}}{8c}.\]
Since \(AC<0\), we can apply case (ii) of Lemma 2.3.
**3(a).** Note that the inequality
\[-4AC\left(\frac{1}{C^{2}}-1\right)\leq B^{2}\]
is equivalent to
\[\frac{c^{2}(36+540\alpha^{2}+c^{2}(10\alpha^{2}-1))}{12+c^{2}}\geq 0\]
which evidently holds for \(c\in(0,2).\) Moreover, the inequality \(|B|<2(1-|C|)\) is equivalent to \(12+c^{2}(1+5\alpha)-8c<0\) which is not true for \(c\in(0,2).\)
**3(b).** Since
\[-4AC\left(C^{-2}-1\right)=\frac{c^{2}(-36+c^{2})(4+35\alpha^{2})}{48(12+c^{2}) }<0,\]
and
\[4(1+|C|)^{2}=\frac{(12-8c+c^{2})^{2}}{16c^{2}}>0.\]
So we get
\[\min\left\{4(1+|C|)^{2},-4AC\left(\frac{1}{C^{2}}-1\right)\right\}=-4AC\left( \frac{1}{C^{2}}-1\right),\]
and from \(3(a)\), we know that
\[-4AC\left(\frac{1}{C^{2}}-1\right)\leq B^{2}.\]
Therefore, the inequality \(B^{2}<\min\left\{4(1+|C|)^{2},-4AC\left(\frac{1}{C^{2}}-1\right)\right\}\) does not hold for \(0<c<2.\)
**3(c).** Note that the inequality
\[|C|(|B|+4|A|)\leq|AB|\]
is equivalent to
\[720\alpha+24c^{2}(4-5\alpha+35\alpha^{2})-c^{4}(-8+35\alpha-70\alpha^{2}+175\alpha ^{3})\leq 0.\]
Consider the function \(\phi_{1}:(0,4)\to\mathbb{R}\) defined by
\[\phi_{1}(x)=720\alpha+24x(4-5\alpha+35\alpha^{2})-x^{2}(-8+35\alpha-70\alpha^{2} +175\alpha^{3})\]
Clearly \(\phi_{1}(0)=720\alpha>0\) and \(\phi_{1}(4)=16(32-20\alpha+280\alpha^{2}-175\alpha^{3})>0\). It can be shown that \(\phi_{1}^{\prime}(x)>0\). Hence \(\phi_{1}\) is increasing and consequently we conclude that \(\phi_{1}(x)>0\). Therefore the inequality \(|C|(|B|+4|A|)\leq|AB|\) is false.
**3(d).** Note that the inequality
\[\begin{split}&|AB|-|C|(|B|-4|A|)\\ &=\frac{5c^{4}\alpha(4+35\alpha^{2})}{96(4-c^{2})}-\frac{12+c^{2}}{8c}\bigg{(}\frac{5c\alpha}{4}-\frac{c^{3}(4+35\alpha^{2})}{6(4-c^{2})}\bigg{)}\leq 0\end{split} \tag{3.22}\]
is equivalent to
\[\delta(c^{2})\geq 0, \tag{3.23}\]
where
\[\delta(t)=720\alpha-24(4+5\alpha+35\alpha^{2})t-(8+35\alpha+70\alpha^{2}+175 \alpha^{3})t^{2},\,\,\,t\in(0,4)\]
We see that for \(\alpha\in(0,1]\)
\[4+5\alpha+35\alpha^{2}>0,\,\,8+35\alpha+70\alpha^{2}+175\alpha^{3}>0,\]
and the discriminant \(\Delta:=2304(4+20\alpha+120\alpha^{2}+175\alpha^{3}+525\alpha^{4})>0\) for \(\alpha\in(0,1]\).
Thus we consider
\[t_{1,2}=-\frac{12\left(4+5\alpha+35\alpha^{2}\pm 2\sqrt{4+20\alpha+120\alpha^{2 }+175\alpha^{3}+525\alpha^{4}}\right)}{8+35\alpha+70\alpha^{2}+175\alpha^{3}}\]
It is clear that \(t_{1}<0\) and so it remains to check if \(0<t_{2}<4\). The inequality \(t_{2}>0\) is equivalent to
\[4+5\alpha+35\alpha^{2}-2\sqrt{4+20\alpha+120\alpha^{2}+175\alpha^{3}+525 \alpha^{4}}<0\]
which is true for \(\alpha\in(0,1]\). Further the inequality \(t_{2}<4\) can be written as
\[6\sqrt{4+20\alpha+120\alpha^{2}+175\alpha^{3}+525\alpha^{4}}<5(4+10\alpha+35 \alpha^{2}+35\alpha^{3}),\]
which is also true for \(\alpha\in(0,1]\). Therefore (3.23), and hence (3.22), holds for
\[0<c\leq c^{*}:=\sqrt{t_{2}}.\]
Then by Lemma 2.3 and (3.22), for \(0<c\leq c^{*}\), we have
\[|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}| \leq\frac{\alpha^{2}}{24}c(4-c^{2})\left(-|A|+|B|+|C|\right)\] \[=\frac{\alpha^{2}}{576}\big{(}144-24(1-5\alpha)c^{2}-(7+30\alpha+35\alpha^{2})c^{4}\big{)}\] \[=\frac{\alpha^{2}}{576}\phi_{2}(c^{2})\]
where
\[\phi_{2}(s)=144-24(1-5\alpha)s-(7+30\alpha+35\alpha^{2})s^{2},\ \ 0<s\leq t_{2}.\]
We note that \(\phi_{2}^{\prime}(s)=0\) only when \(s=s_{0}:=12(5\alpha-1)/(7+30\alpha+35\alpha^{2})\), and it is easy to see that \(s_{0}>0\) only if \(\alpha>1/5\). Further, \(s_{0}\leq t_{2}\) is true if \(\beta(\alpha)<0\), where
\[\beta(x)= -96-1060x-7040x^{2}-29975x^{3}-77525x^{4}\] \[-124250x^{5}-82250x^{6}+153125x^{7}+459375x^{8},\ \ 0<x\leq 1\]
So we get \(0<s_{0}<t_{2}\) if and only if \(1/5<\alpha\leq\alpha_{0}\), where \(\alpha_{0}\) is the unique positive real root of the polynomial \(\beta(x)\) lying between \(0\) and \(1\). Since the leading coefficient of \(\phi_{2}\) is negative, for \(1/5<\alpha<\alpha_{0}\), we get
\[\begin{split}|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}|& \leq\frac{\alpha^{2}}{576}\phi_{2}(s_{0})\\ &=\alpha^{2}\left(\frac{2+5\alpha+15\alpha^{2}}{7+30\alpha+35 \alpha^{2}}\right).\end{split} \tag{3.24}\]
For \(0<\alpha\leq 1/5\), it is clear that \(\phi_{2}\) is decreasing, so we get
\[\begin{split}|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}|& \leq\frac{\alpha^{2}}{576}\;\phi_{2}(0)\\ &=\frac{\alpha^{2}}{4}.\end{split} \tag{3.25}\]
For \(\alpha_{0}<\alpha\leq 1\), it is clear that \(\phi_{2}\) is increasing, so we get
\[|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}|\leq\frac{\alpha^{2}}{576}\;\phi_{2}(t_{2}) \tag{3.26}\]
**3(e).** Next suppose that \(c\in(c^{*},2)\). Then by Lemma 2.3, we have
\[|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}| \leq\frac{\alpha^{2}}{576}c(4-c^{2})(|C|+|A|)\sqrt{1-\frac{B^{2}}{ 4AC}}\] \[=\frac{\alpha^{2}}{288\sqrt{4+35\alpha^{2}}}\left(144-24c^{2}+(1+ 35\alpha^{2})c^{4}\right)\sqrt{\frac{12+180\alpha^{2}+(1-10\alpha^{2})c^{2}}{ 12+c^{2}}}\] \[=\frac{\alpha^{2}}{288\sqrt{4+35\alpha^{2}}}\ h_{1}(c^{2})\sqrt{ h_{2}(c^{2})} \tag{3.27}\]
where \(h_{1}\) and \(h_{2}\) are defined by
\[h_{1}(x)=144-24x+(1+35\alpha^{2})x^{2}\]
and
\[h_{2}(x)=\frac{12+180\alpha^{2}+(1-10\alpha^{2})x}{12+x}.\]
Now we consider the function \(F(x)=h_{1}(x)\sqrt{h_{2}(x)}\) on \((t_{2},4)\) and we show that \(F\) is a convex function. It is enough to show that \(F^{\prime\prime}(x)\geq 0\).
Since
\[F^{\prime\prime}(x)(h_{2}(x))^{3/2} =h_{1}^{\prime\prime}(x)(h_{2}(x))^{2}+h_{1}^{\prime}(x)h_{2}(x)h _{2}^{\prime}(x)-\frac{1}{4}h_{1}(x)(h_{2}^{\prime}(x))^{2}+\frac{1}{2}h_{1}( x)h_{2}(x)h_{2}^{\prime\prime}(x)\] \[=\frac{1}{(12+x)^{4}}\big{[}2592(8+820\alpha^{2}+14075\alpha^{4}+ 63000\alpha^{6})\] \[\quad-432(-16-890\alpha^{2}-4525\alpha^{4}+31500\alpha^{6})x\] \[\quad-54(-16-540\alpha^{2}+1475\alpha^{4}+31500\alpha^{6})x^{2}\] \[\quad-\ 6(8+195\alpha^{2}-2925\alpha^{4}+1750\alpha^{6})x^{3}+(1-1 0\alpha^{2})^{2}(1+35\alpha^{2})x^{4}\big{]}\] \[=\frac{1}{(12+x)^{2}}G(x)\]
It is easy to see that \(h_{2}\) is a positive decreasing function in \((t_{2},4)\). We show that our assertion is true by proving that \(G(x)\geq 0\) for \(x\in(t_{2},4)\).
Let \(x\in(t_{2},4)\) be fixed and
\[H(\alpha)=m_{0}+m_{1}\alpha^{2}+m_{2}\alpha^{4}+m_{3}\alpha^{6} \tag{3.28}\]
where
\[\begin{split}& m_{0}:=20736+6912x+864x^{2}+48x^{3}+x^{4},\\ & m_{1}:=2125440+384480x+29160x^{2}+1170x^{3}+15x^{4},\\ & m_{2}:=36482400+1954800x-79650x^{2}-17550x^{3}-600x^{4},\\ & m_{3}:=163296000-13608000x-146750x^{2}+10500x^{3}+3500x^{4}. \end{split} \tag{3.29}\]
Clearly \(m_{0}>0\) and \(m_{1}>0\). For \(x\in(t_{2},4)\), we have
\[\begin{split} m_{2}&=36482400+1954800x-79650x^{2}-1755 0x^{3}-600x^{4},\\ &>1200(28276+1629x)>0\end{split} \tag{3.30}\]
Similarly, we have
\[\begin{split} m_{3}&=163296000-13608000x-146750x^{2}+ 10500x^{3}+3500x^{4}.\\ &>3500(24408+3x^{3}+x^{4})>0\end{split} \tag{3.31}\]
So we get \(H(\alpha)>0\) for \(\alpha\in(0,1]\) and \(x\in(t_{2},4)\), which therefore shows that \(G(x)\geq 0\). So we can conclude that
\[\begin{split} F(x)&\leq\ \max\{F(t_{2}),F(4)\}\\ &\leq\begin{cases}F(t_{2}),&0<\alpha\leq\alpha^{*}\\ F(4),&\alpha^{*}\leq\alpha\leq 1\end{cases}\end{split}\]
So we get, for \(c\in(c^{*},2)\)
\[|\Gamma_{1}\Gamma_{3}-\Gamma_{2}^{2}|\leq\begin{cases}\dfrac{\alpha^{2}}{288\sqrt{4+35\alpha^{2}}}\ F((c^{*})^{2}),&0<\alpha\leq\alpha^{*},\\ \dfrac{\alpha^{2}}{36}(4+35\alpha^{2}),&\alpha^{*}\leq\alpha\leq 1,\end{cases} \tag{3.32}\]
**Case 4.** Now we compare the bounds in (3.24),(3.25),(3.26) and (3.32). The inequality
\[\dfrac{\alpha^{2}}{36}(4+35\alpha^{2})\leq\dfrac{\alpha^{2}}{4}\]
is true for \(0<\alpha\leq 1/\sqrt{7}\). So we can conclude that for \(0<\alpha<1/5\),
\[|H_{2,1}(F_{f^{-1}}/2)|\leq\dfrac{\alpha^{2}}{4}.\]
The inequality
\[\dfrac{\alpha^{2}}{4}\leq\dfrac{\alpha^{2}(2+5\alpha+15\alpha^{2})}{7+30 \alpha+35\alpha^{2}}\]
is equivalent to the evidently true inequality \((1-5\alpha)^{2}\geq 0.\) A tedious long calculation shows that the following inequality
\[\dfrac{\alpha^{2}}{288\sqrt{4+35\alpha^{2}}}\ F((c^{*})^{2})\leq\dfrac{\alpha ^{2}(2+5\alpha+15\alpha^{2})}{7+30\alpha+35\alpha^{2}},\]
is true for \(0<\alpha\leq 1\). The inequality
\[\dfrac{\alpha^{2}}{36}(4+35\alpha^{2})\leq\dfrac{\alpha^{2}(2+5\alpha+15 \alpha^{2})}{7+30\alpha+35\alpha^{2}}\]
is equivalent to
\[44+60\alpha+155\alpha^{2}-1050\alpha^{3}-1225\alpha^{4}\geq 0,\]
which is true for \(0\leq\alpha\leq\alpha^{\prime},\) where \(\alpha^{\prime}\) is the unique positive real root of the polynomial \(44+60\alpha+155\alpha^{2}-1050\alpha^{3}-1225\alpha^{4}\) lies between \(0\) and \(1\). With further long computations, we can show that the inequality
\[\frac{\alpha^{2}}{576}\,\phi_{2}(t_{2})\leq\frac{\alpha^{2}}{36}(4+35\alpha^{2})\]
is true for \(2/3\leq\alpha\leq 1\). Summarizing the above cases, we conclude that (3.15) holds.
We now show that the inequalities in (3.15) are sharp. For \(0<\alpha<1/5\), consider the function
\[p(z)=\frac{1+z^{2}}{1-z^{2}},\quad z\in\mathbb{D}.\]
It is obvious that \(p\) is in \(\mathcal{P}\) with \(c_{1}=c_{3}=0\) and \(c_{2}=2\). The corresponding function \(f\in\mathcal{S}_{\alpha}^{*}\) is described by (3.16). Hence by (3.17), it follows that \(a_{2}=a_{4}=0\) and \(a_{3}=\alpha\). From (1.8), we obtain
\[|H_{2,1}(F_{f^{-1}}/2)|=\frac{\alpha^{2}}{4}.\]
For \(1/5\leq\alpha\leq\alpha^{\prime},\) consider the function \(f\in\mathcal{A}\) given by (3.16) with \(p\) given by
\[p(z)=\frac{1-\zeta z+z^{2}}{1-z^{2}},\quad z\in\mathbb{D},\]
where \(\zeta:=\sqrt{s_{0}}=\sqrt{12(5\alpha-1)/(7+30\alpha+35\alpha^{2})}.\) By simple computation we can show that
\[|H_{2,1}(F_{f^{-1}}/2)|=\frac{\alpha^{2}(2+5\alpha+15\alpha^{2})}{7+30\alpha+ 35\alpha^{2}}.\]
For the last case, \(\alpha^{\prime}<\alpha\leq 1\), equality holds for the function \(f\in\mathcal{A}\) of the form (3.16) with \(p(z)=(1+z)/(1-z)\). This completes the proof.
For \(\alpha=1\), we get the estimate for the class \(\mathcal{S}^{*}\) of starlike functions [4].
**Corollary 3.4**.: _If \(f\in\mathcal{S}^{*}\), then_
\[|H_{2,1}(F_{f^{-1}}/2)|\leq\frac{13}{12}.\]
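Indeed, setting \(\alpha=1\) in the last case of (3.15) gives \(\frac{1}{36}(4+35)=\frac{39}{36}=\frac{13}{12}\), the value attained, for example, by the Koebe function \(k(z)=z/(1-z)^{2}\), as noted after (1.8).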
**Acknowledgment:** The research of the first named author is supported by SERB-CRG, Govt. of India and the second named author's research work is supported by UGC-SRF.
|
2309.08231 | Chance Constrained Probability Measure Optimization: Problem
Formulation, Equivalent Reduction, and Sample-based Approximation | Choosing decision variables deterministically (deterministic decision-making)
can be regarded as a particular case of choosing decision variables
probabilistically (probabilistic decision-making). It is necessary to
investigate whether probabilistic decision-making can further improve the
expected decision-making performance than deterministic decision-making when
chance constraints exist. The problem formulation of optimizing a probabilistic
decision under chance constraints has not been formally investigated. In this
paper, for the first time, the problem formulation of Chance Constrained
Probability Measure Optimization (CCPMO) is presented to realize optimal
probabilistic decision-making under chance constraints. We first prove the
existence of the optimal solution to CCPMO. It is further shown that there is
an optimal solution of CCPMO with the probability measure concentrated on two
decisions, leading to an equivalently reduced problem of CCPMO. The reduced
problem still has chance constraints due to random variables. We then extend
the sample-based smooth approximation method to solve the reduced problem.
Samples of model uncertainties are used to establish an approximate problem of
the reduced problem. Algorithms for general nonlinear programming problems can
solve the approximate problem. The solution of the approximate problem is an
approximate solution of CCPMO. A numerical example of controlling a quadrotor
in turbulent conditions has been conducted to validate the proposed
probabilistic decision-making under chance constraints. | Xun Shen, Yuhu Wu, Satoshi Ito, Jun-ichi Imura | 2023-09-15T08:02:22Z | http://arxiv.org/abs/2309.08231v1 | Chance Constrained Probability Measure Optimization: Problem Formulation, Equivalent Reduction, and Sample-based Approximation
###### Abstract
Choosing decision variables deterministically (_deterministic decision-making_) can be regarded as a particular case of choosing decision variables probabilistically (_probabilistic decision-making_). It is necessary to investigate whether probabilistic decision-making can further improve the expected decision-making performance than deterministic decision-making when chance constraints exist. The problem formulation of optimizing a probabilistic decision under chance constraints has not been formally investigated. In this paper, for the first time, the problem formulation of _Chance Constrained Probability Measure Optimization_ (CCPMO) is presented towards realizing optimal probabilistic decision-making under chance constraints. We first prove the existence of the optimal solution to CCPMO. It is further shown that there is an optimal solution of CCPMO with the probability measure concentrated on two decisions, leading to an equivalently reduced problem of CCPMO. The reduced problem still has chance constraints due to uncertain disturbance. We then propose the sample-based smooth approximation method to solve the reduced problem. Samples of model uncertainties are used to establish an approximate problem of the reduced problem. Algorithms for general nonlinear programming problems can solve the approximate problem. The solution of the approximate problem is an approximate solution of CCPMO. A numerical example of controlling a quadrotor in turbulent conditions has been conducted to validate the proposed probabilistic decision-making under chance constraints.
_Keywords:_ Sample approximation; Function approximation; Chance constraint
## 1 Introduction
Efficient and robust decision-making algorithms are crucial to ensure the reliability of autonomous systems against uncertainties from disturbances and model misspecifications. One efficient way to ensure the robustness of the decision-making algorithms against uncertainties is to impose chance constraints [1, 2]. Decision-making with chance constraints has been extensively studied. For instance, reinforcement learning (RL) takes a trial-and-error way to obtain the optimal
policy, a feedback decision-making law. In safety-critical applications, the trial-and-error way may direct the system to a fatal error, which is unacceptable. In [3], expected risk-return constraints are imposed when updating the policy, which is a significant breakthrough for the safety-critical application of RL. Constrained RL can ensure that the exploration process is carried out within a safe region defined by the expected risk-return constraints. With enough training data, constrained RL can train neural networks to give optimal decisions that satisfy the imposed expected risk-return constraints [4, 5]. On the other hand, stochastic model predictive control (SMPC) can give an implicit policy satisfying chance constraints by recursively solving a finite-time optimal control problem with chance constraints [6, 7]. However, SMPC requires a precise model for implementation. A recent step towards data-driven SMPC and safe RL is to combine the advantages of both MPC and RL by embedding SMPC into RL [8, 9]. In SMPC-embedded RL, RL is applied to learn the parameters in the objective function, model, and constraints of SMPC, and SMPC is implemented to guide RL to deploy safe exploration.
The above decision-making methods focus on the policy that chooses decision variables deterministically. Choosing decision variables deterministically (_deterministic decision-making_) can be regarded as a particular case of choosing decision variables probabilistically (_probabilistic decision-making_). It is necessary to investigate whether probabilistic decision-making can further improve the expected decision-making performance than deterministic decision-making when chance constraints exist. In this paper, we use the terminology _probabilistic decision_ when the decision variables are chosen probabilistically. In previous research on data-driven SMPC and safe RL, probabilistic decision is adopted for searching the unknown region to improve the controller with newly obtained data [10]. Whether the probabilistic decision outperforms the deterministic decision when chance constraints exist has not been investigated. The problem of optimizing a probabilistic decision under chance constraints has not been formally given yet. The lack of the theoretical foundation of probabilistic decision-making under chance constraints motivates this study.
To obtain a deterministic decision, a chance constrained optimization in the finite-dimensional space of the control input must be solved [11]. This paper calls it _chance constrained optimization for deterministic decision-making_. On the other hand, if a probabilistic decision is to be optimized, the probability measure on the decision domain needs to be optimized considering chance constraints. In this paper, we formulate the problem, '_Chance Constrained Probability Measure Optimization_' (CCPMO), for optimizing probability measures with chance constraints. To the best of our knowledge, this is the first time the problem formulation of CCPMO has been given, and there is still no existing research on methods for solving it. We briefly review the research on chance constrained optimization for deterministic decision-making, including the cases with finite-dimensional and infinite-dimensional decision variables.
Chance constrained optimization for deterministic decision-making with finite-dimensional decision variables has been intensively studied in stochastic programming and control engineering over the last five decades [12]. It is generally NP-hard due to the non-convexity of the feasible set and intractable reformulations [12, 13]. Thus, the majority of current research falls into two major streams: (a) assume that the constraint functions or the distribution of the random variables have some special structure, for example, linear or convex constraint functions [14], a finite sample space of the random variables [15], or elliptically symmetric Gaussian-similar distributions [16]; or (b) extract samples to approximate the chance constraints [17, 18, 19, 20, 21, 22], the so-called sample-based method. The sample-based method aims to handle non-convex constraints and general distributions and thus does not adopt the methods in (a). Among sample-based methods, the most famous approach in the control field is the scenario approach [17, 18, 19, 23]. The scenario approach generates a deterministic optimization problem as an approximation of the original one by extracting samples from the sample space of the random variables. The probability of feasibility of the approximate solution rapidly increases to one as the sample number increases. In another sample-based method, the so-called sample-average approach [15, 21, 20], both feasibility and optimality of the approximate solution are presented. However, neither the scenario approach nor the sample-average approach can be directly used to solve CCPMO. In CCPMO, the optimization is carried out in an infinite-dimensional space, whereas in both the scenario approach and the sample-average approach the dimension of the decision variable must be finite for the convergence to be deduced. Optimization with chance/robust constraints in a finite-dimensional decision variable space, where the number of chance constraints is infinite, has also been intensively studied [24, 25, 26]. In [24], the generalized differentiation of the probability function of infinitely many constraints is investigated, and the optimality condition with explicit formulae for the subdifferentials is given. In [25], variational tools are applied to formulate the generalized differentiation of chance/robust constraints, and a method for obtaining explicit outer estimates of the subdifferentials from data is also established. An adaptive grid refinement algorithm is developed to solve optimization with chance/robust constraints in [26]. However, the above research on optimization with chance/robust constraints in finite-dimensional vector spaces still only proves convergence when the dimension of the decision variable is finite.
Recently, chance constraints in infinite-dimensional decision variable space have attracted much attention [27, 28, 29]. In those papers, the infinite-dimensional decision variable means the space of a continuous function defined on a compact set or, more specifically, a bounded time interval. Solving the optimization with chance constraints in the functional space obtains a solution that is a single point in an infinite-dimensional functional space, which still gives a
deterministic decision. In CCPMO, the infinite-dimensional space is the probability measure space on a compact set of finite-dimensional decision variables. CCPMO optimizes the probability measure to give an optimal probabilistic decision whose motivation and problem formulation differ from those of previous research in [27; 28; 29].
In this paper, towards improving the expected decision-making performance with chance constraints by probabilistic decision, the CCPMO problem is formulated on Borel probability measure space on a metric space of decision variables (Section 2.3). This problem formulation is an essential step toward establishing the theory of the chance constrained probabilistic decision design and could inspire further study of CCPMO. CCPMO is an intractable problem. We prove the existence of the optimal solution of CCPMO under a mild assumption (Section 3). Besides, there exists an optimal solution of CCPMO that has the probability measure concentrated on two decisions, which leads to a reduced problem with finite-dimensional decision variables (Section 4). The sample-based smooth approximation method in [20] is extended to solve the reduced problem. We give the proposed approximation method's uniform convergence in the sense of almost everywhere and feasibility analysis (Section 5).
## 2 Problem Formulation
This section describes the origin of the problem and then presents the problem formulation. After introducing chance-constrained optimization for deterministic decision-making, we give a simple example demonstrating how probabilistic decision-making can improve the expected performance. Then, optimizing the probabilistic decision under chance constraints is formulated as a problem of optimizing a probability measure under chance constraints.
### Chance constrained optimization for deterministic decision-making
First, we review the problem formulation of chance-constrained optimization for deterministic decision-making. Let \(x\in\mathcal{X}\subset\mathbb{R}^{n}\) be the decision variable, where \(\mathcal{X}\) is a compact set. The objective function \(J:\mathcal{X}\rightarrow\mathbb{R}\) is a scalar function. This paper assumes that the objective function \(J(x)\) is continuous on \(\mathcal{X}\). The constraint function involves random variables. Let \(\xi\) be an \(s-\)dimensional continuous random vector, and it is assumed to have a known joint continuous probability density function \(p(\xi)\) with support \(\Xi\subseteq\mathbb{R}^{s}\). Besides, we use \(\mathsf{Pr}_{\xi}\{\cdot\}\) to represent a probability measure of a set \(\Xi_{\mathsf{Leb}}\subseteq\Xi\), written by
\[\mathsf{Pr}_{\xi}\{\xi\in\Xi_{\mathsf{Leb}}\}:=\int_{\Xi_{\mathsf{Leb}}}p(\xi )\mathsf{d}\xi.\]
The constraint function \(h:\mathcal{X}\times\Xi\rightarrow\mathbb{R}^{m}\) is a continuous vector-valued function. Define \(\mathbb{P}:\mathcal{X}\rightarrow[0,1]\) by
\[\mathbb{P}(x):=\int_{\Xi}\mathbb{I}\{\bar{h}(x,\xi)\}p(\xi)\mathsf{d}\xi, \tag{1}\]
where \(\bar{h}:\mathcal{X}\times\Xi\rightarrow\mathbb{R}\) is defined by
\[\bar{h}(x,\xi)=\max_{i\in[m]}h_{i}(x,\xi), \tag{2}\]
and \(\mathbb{I}\{y\}\) presents the indicator function with \(\mathbb{I}\{y\}=1\) if \(y\leq 0\) and \(\mathbb{I}\{y\}=0\) otherwise. Note that \(\mathbb{P}(x)\) is the probability that \(\bar{h}(x,\xi)\leq 0\) holds for a given \(x\). Then, chance constrained optimization problem for deterministic decision-making is formulated as
\[\min_{x\in\mathcal{X}} J(x)\] ( \[Q_{\alpha}\] ) \[\mathsf{s.t.} \mathbb{P}(x)\geq 1-\alpha,\]
where \(\alpha\in[0,1]\) is the violation probability threshold. Some notations for Problem \(Q_{\alpha}\) are given here. Define the feasible set of Problem \(Q_{\alpha}\) by
\[\mathcal{X}_{\alpha}:=\{x\in\mathcal{X}:\mathbb{P}(x)\geq 1-\alpha\}. \tag{3}\]
Define the optimal objective value \(J_{\alpha}^{*}\) and optimal solution set \(X_{\alpha}\) of \(Q_{\alpha}\) by
\[J_{\alpha}^{*}: = \min\{J(x):x\in\mathcal{X}_{\alpha}\}, \tag{4}\] \[X_{\alpha}: = \{x\in\mathcal{X}_{\alpha}:J(x)=J_{\alpha}^{*}\}, \tag{5}\]
respectively. By solving Problem \(Q_{\alpha}\), we can obtain an optimal deterministic decision \(x^{*}\in X_{\alpha}\).
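In general, \(\mathbb{P}(x)\) in (1) has no closed form; a plain Monte Carlo estimate can be sketched as follows (an added illustration, where the constraint function `h` and the sampler `sample_xi` are problem-specific placeholders; this is not the sample-based smooth approximation used later in the paper).

```python
import numpy as np

def estimate_P(x, h, sample_xi, N=10_000, seed=0):
    """Monte Carlo estimate of P(x) = Pr{max_i h_i(x, xi) <= 0} as in (1)-(2)."""
    rng = np.random.default_rng(seed)
    xi = sample_xi(N, rng)                 # N i.i.d. draws of xi, shape (N, s)
    hbar = np.max(h(x, xi), axis=1)        # \bar h(x, xi) for each sample, shape (N,)
    return float(np.mean(hbar <= 0))       # fraction of samples satisfying the constraint
```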
Notice that if there exists \(\underline{\alpha}\in[0,1]\) such that \(\mathcal{X}_{\underline{\alpha}}\neq\emptyset\), then \(X_{\alpha}\neq\emptyset\) for all \(\alpha\in[\underline{\alpha},1]\). Hence, without loss of generality, we assume that \(X_{\alpha}\neq\emptyset\) for all \(\alpha\in[0,1]\) in this paper.
### Heuristic example of probabilistic decision
Compared to a deterministic decision, a probabilistic decision can improve performance when chance constraints exist. A simple example is presented here to show the potential improvement of the performance using the probabilistic decision.
**Example 1**.: _The compact set \(\mathcal{X}\) is defined by \(\mathcal{X}=[-2,2]\subset\mathbb{R}\). The cost function \(J(x)\) is_
\[J(x)=-(x+0.6)^{2}+2. \tag{6}\]
_The constraint function \(h(x,\xi)\) is_
\[h(x,\xi)=x-1.4+\xi \tag{7}\]
_where \(\xi\sim\mathcal{N}(m_{\xi},\Sigma_{\xi})\) with \(m_{\xi}=0\), and \(\Sigma_{\xi}=1\). The probability threshold \(\alpha\) is \(0.25\). The profiles of the objective function \(J(x)\) and the violation probability \(1-\mathbb{P}(x)\) are plotted in Figure 1 (a) and (b), respectively. In this example, as \(x\) increases, the violation probability \(1-\mathbb{P}(x)\) increases while the objective function \(J(x)\) decreases. Therefore, the optimal solution \(x_{\alpha}^{*}\) is at the boundary with \(1-\mathbb{P}(x_{\alpha}^{*})=0.25\). Since \(\xi\) is a random variable that obeys normal distribution and \(h(x,\xi)\) is linear, we can obtain \(x_{\alpha}^{*}=0.7259\) from checking the standard normal distribution table, and the corresponding optimal objective value is \(J_{\alpha}^{*}=0.2420\)._
Figure 1: Example in one dimension: (a) profile of \(J(x)\) and optimal solution of chance constrained optimization; (b) profile of \(1-\mathbb{P}(x)\); (c) optimized discrete measure.

_Then, we explain how probabilistic decisions can further improve the expected objective value. Let \(\mathcal{C}_{2}=\{x^{(1)},x^{(2)}\}\) be a set with two samples from \(\mathcal{X}=[-2,2]\), where \(x^{(1)}=-1\) and \(x^{(2)}=2\). The objective function values of these two samples are \(J(x^{(1)})=1.84\) and \(J(x^{(2)})=-4.76\). The violation probabilities are \(\mathbb{P}_{\mathsf{vio}}(x^{(1)})=1-\mathbb{P}(x^{(1)})=0.008\) and \(\mathbb{P}_{\mathsf{vio}}(x^{(2)})=1-\mathbb{P}(x^{(2)})=0.729\), respectively. Define a discrete probability measure \(\mu\) on \(\mathcal{X}=[-2,2]\) with_
\[\left\{\begin{array}{l}\mu\left(\left\{x^{(1)}\right\}\right)=0.67,\\ \mu\left(\left\{x^{(2)}\right\}\right)=0.33.\end{array}\right. \tag{8}\]
_If we choose input probabilistically according to \(\mu\), then \(x^{(1)}\) is chosen with probability \(0.67\) and \(x^{(2)}\) is chosen with probability \(0.33\). The expected objective value satisfies that_
\[J(x^{(1)})\mu\left(\left\{x^{(1)}\right\}\right)+J(x^{(2)})\mu\left(\left\{x^ {(2)}\right\}\right)=-0.338<J_{\alpha}^{*},\]
_and the expected violation probability is_
\[\mathbb{P}_{\mathsf{vio}}(x^{(1)})\mu\left(\left\{x^{(1)}\right\}\right)+ \mathbb{P}_{\mathsf{vio}}(x^{(2)})\mu\left(\left\{x^{(2)}\right\}\right)=0.2459 <\alpha,\]
_which implies a probabilistic decision that satisfies the chance constraint can improve the expected control performance compared with the deterministic optimal decision \(x_{\alpha}^{*}\)._
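The numerical values in Example 1 can be reproduced directly; the following sketch (an added illustration using NumPy/SciPy, with values rounded to the precision quoted above) evaluates the boundary point \(x_{\alpha}^{*}\) and the two-point probabilistic decision defined by (8).

```python
import numpy as np
from scipy.stats import norm

alpha = 0.25
J = lambda x: -(x + 0.6)**2 + 2             # objective (6)
P = lambda x: norm.cdf(1.4 - x)             # P(x) = Pr{x - 1.4 + xi <= 0}, xi ~ N(0, 1)

# boundary point x_alpha^* reported in the example: P(x) = 1 - alpha
x_b = 1.4 - norm.ppf(1 - alpha)
print(x_b, J(x_b))                          # ~0.726 and ~0.242

# two-point probabilistic decision from (8)
xs = np.array([-1.0, 2.0])
mu = np.array([0.67, 0.33])
print(mu @ J(xs))                           # expected objective ~ -0.34 < 0.242
print(mu @ (1 - P(xs)))                     # expected violation ~ 0.246 <= alpha
```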
### Problem formulation of CCPMO
Let \(\mathscr{B}(\mathcal{X})\) be Borel \(\sigma\)-algebra on \(\mathcal{X}\). This paper uses \(\mathscr{B}(\cdot)\) to denote the Borel \(\sigma\)-algebra on a metric space. Notice that \((\mathcal{X},\mathscr{B}(\mathcal{X}))\) is a measurable space (Borel space). Let \(\mu\) be a Borel probability measure on \(\mathscr{B}(\mathcal{X})\). Besides, let \(d\) be Euclidean distance. The ordered pair \((\mathcal{X},d)\) represents a metric space. Let \(M(\mathcal{X})\) be the space of Borel probability measures on metric space \((\mathcal{X},d)\).
Define a space of continuous \(\mathbb{R}\)-valued functions on a compact set \(\mathcal{X}\) by
\[\mathscr{C}(\mathcal{X}):=\{f:\mathcal{X}\rightarrow\mathbb{R}\,|f\text{ is continuous}\}. \tag{9}\]
It is able to define a metric on \(\mathscr{C}(\mathcal{X})\) by
\[\tau(f,f^{\prime}):=\|f-f^{\prime}\|_{\infty},\;\forall f,f^{\prime}\in\mathscr{C}(\mathcal{X}), \tag{10}\]
where \(\|f\|_{\infty}\) is defined as \(\|f\|_{\infty}:=\sup_{x\in\mathcal{X}}|f(x)|.\) The metric \(\tau(\cdot,\cdot)\) turns \(\mathscr{C}(\mathcal{X})\) into a complete metric space. Define the pseudo-distance (pseudo-metric) between any two probability measures \(\mu,\nu\in M(\mathcal{X})\) associated with \(f\in\mathscr{C}(\mathcal{X})\) by
\[\mathcal{W}_{f}(\mu,\nu)=\left|\int_{\mathcal{X}}f(x)\mathsf{d}\mu-\int_{ \mathcal{X}}f(x)\mathsf{d}\nu\right|. \tag{11}\]
The weak\({}^{*}\) convergence of probability measures is defined as follows [30].
**Definition 1**.: _Let \(\{\mu_{k}\}_{k=0}^{\infty}\) be a sequence in \(M(\mathcal{X})\). We say that \(\{\mu_{k}\}_{k=0}^{\infty}\) converges weakly\({}^{*}\) to \(\mu\) if_
\[\lim_{k\rightarrow\infty}\mathcal{W}_{f}(\mu_{k},\mu)=0,\;\forall f\in\mathscr{ C}(\mathcal{X}). \tag{12}\]
_Besides, we say that \(\{\mu_{k}\}_{k=0}^{\infty}\) converges to \(\mu\) associated with \(f\in\mathscr{C}(\mathcal{X})\) if_
\[\lim_{k\rightarrow\infty}\mathcal{W}_{f}(\mu_{k},\mu)=0. \tag{13}\]
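For example, if \(x_{k}\to x\) in \(\mathcal{X}\), then the sequence of Dirac (point-mass) measures \(\delta_{x_{k}}\) converges weakly\({}^{*}\) to \(\delta_{x}\), since \(\mathcal{W}_{f}(\delta_{x_{k}},\delta_{x})=|f(x_{k})-f(x)|\to 0\) for every \(f\in\mathscr{C}(\mathcal{X})\).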
Notice that \(h(x,\xi)\) is a random variable for a given \(x\in\mathcal{X}\). We introduce an assumption on \(h(x,\xi)\) for the continuity analysis of \(\mathbb{P}(x)\). Let \(\mathsf{cl}\{Z\}\) be the closure of a set \(Z\). Let \(\mathsf{supp}\,p:=\mathsf{cl}\{\xi\in\Xi:p(\xi)>0\}\), and, for each \(x\in\mathcal{X}\),
\[\Xi^{\mathsf{supp}}(x):=\{\xi\in\mathsf{supp}\,p:\bar{h}(x,\xi)=0\}.\]
The following assumption holds throughout the paper.
**Assumption 1**.: _For each \(x\in\mathcal{X}\), suppose the following holds:_
\[\mathsf{Pr}_{\xi}\{\Xi^{\mathsf{supp}}(x)\}=0.\]
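For instance, Assumption 1 holds in Example 1: for each fixed \(x\), the set \(\{\xi\in\mathbb{R}:x-1.4+\xi=0\}\) is the single point \(\xi=1.4-x\) and hence has probability zero under the Gaussian density.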
Assumption 1 implies the continuity of \(\mathbb{P}(x)\)[21, 31], which guarantees that the feasible set \(\mathcal{X}_{\alpha}\) defined by (3) and optimal solution set \(X_{\alpha}\) given by (5) are measurable sets for all \(\alpha\in[0,1]\).
With the above notation and discussion, a Chance-Constrained Probability Measure Optimization (CCPMO) problem \(P_{\alpha}\) associated with Problem \(Q_{\alpha}\) can be well formulated as:
\[\underset{\mu\in M(\mathcal{X})}{\text{min}} \int_{\mathcal{X}}J(x)\mathsf{d}\mu\] ( \[P_{\alpha}\] ) \[\text{s.t.} \int_{\mathcal{X}}\mathbb{P}(x)\mathsf{d}\mu\geq 1-\alpha,\]
where \(\mathbb{P}(x)\) is defined by (1). In Problem \(P_{\alpha}\), the uncertainty of \(h(x,\xi)\) comes from both \(x\) and \(\xi\), since \(x\) obeys the distribution dominated by probability measure \(\mu\). Let \(M_{\alpha}(\mathcal{X})\) be the feasible set of Problem \(P_{\alpha}\) by
\[M_{\alpha}(\mathcal{X}):=\Big{\{}\mu\in M(\mathcal{X}):\int_{\mathcal{X}} \mathbb{P}(x)\text{d}\mu\geq 1-\alpha\Big{\}}. \tag{14}\]
The optimal objective value and the optimal solution set of Problem \(P_{\alpha}\) are written as
\[\mathcal{J}_{\alpha}^{*} := \inf_{\mu\in M_{\alpha}(\mathcal{X})}\int_{\mathcal{X}}J(x)\text {d}\mu \tag{15}\] \[A_{\alpha} := \Big{\{}\mu\in M_{\alpha}(\mathcal{X}):\int_{\mathcal{X}}J(x) \text{d}\mu=\mathcal{J}_{\alpha}^{*}\Big{\}}. \tag{16}\]
A probability measure \(\mu_{\alpha}^{*}\in A_{\alpha}\) is called an optimal probability measure for Problem \(P_{\alpha}\). The optimal probabilistic decision generates a decision at every trial from \(\mathcal{X}\) probabilistically according to the optimal probability measure \(\mu_{\alpha}^{*}\). If the decision process repeats many times, the mean value of the objective value is minimal.
## 3 Existence of the optimal solution
In this section, we show that there exists at least an optimal solution to Problem \(P_{\alpha}\).
**Theorem 1**.: _Suppose that Assumption 1 holds. Then, we have \(A_{\alpha}\neq\emptyset\)._
Proof.: The optimal objective value \(\mathcal{J}_{\alpha}^{*}\) of Problem \(P_{\alpha}\) is defined by (15). We want to prove that there exists at least one \(\mu^{*}\in M_{\alpha}(\mathcal{X})\) such that \(\int_{\mathcal{X}}J(x)\text{d}\mu^{*}=\mathcal{J}_{\alpha}^{*}\).
For any \(\mu\in M(\mathcal{X})\),
\[\int_{\mathcal{X}}J(x)\text{d}\mu\geq\int_{\mathcal{X}}\min_{x\in\mathcal{X}} \;J(x)\text{d}\mu=\min_{x\in\mathcal{X}}\;J(x),\]
which implies that \(\mathcal{J}_{\alpha}^{*}>-\infty\). Then, there exists a sequence \(\{\mu_{k}\}_{k=1}^{\infty}\subset M_{\alpha}(\mathcal{X})\) such that \(\int_{\mathcal{X}}J(x)\text{d}\mu_{k}\to\mathcal{J}_{\alpha}^{*}\). Since \(\mathcal{X}\) is compact, by the Prokhorov Theorem (Theorem 8.6.2 of [32]), there exists a subsequence \(\{\mu_{k_{s}}\}_{s=1}^{\infty}\subset\{\mu_{k}\}_{k=1}^{\infty}\) that converges weakly\({}^{*}\) to \(\mu^{*}\in M(\mathcal{X})\) in the sense of Definition 1. Since \(\int_{\mathcal{X}}J(x)\text{d}\mu_{k_{s}}\to\int_{\mathcal{X}}J(x)\text{d}\mu^{*}\) by \(J(\cdot)\in\mathscr{C}(\mathcal{X})\), we have
\[\int_{\mathcal{X}}J(x)\text{d}\mu^{*}=\lim_{s\to\infty}\int_{\mathcal{X}}J(x)\text{d}\mu_{k_{s}}=\mathcal{J}_{\alpha}^{*}. \tag{17}\]
We have \(\mathbb{P}(\cdot)\in\mathscr{C}(\mathcal{X})\) by Assumption 1. In addition, by \(\{\mu_{k}\}_{k=1}^{\infty}\subset M_{\alpha}(\mathcal{X})\), we have \(\int_{\mathcal{X}}\mathbb{P}(x)\text{d}\mu_{k_{s}}\geq 1-\alpha\), for all \(s\geq 1\). As a result, since \(\mu_{k_{s}}\) converges weakly\({}^{*}\) to \(\mu^{*}\),
\[\int_{\mathcal{X}}\mathbb{P}(x)\text{d}\mu^{*}(x)=\lim_{s\to\infty}\int_{ \mathcal{X}}\mathbb{P}(x)\text{d}\mu_{k_{s}}(x)\geq 1-\alpha, \tag{18}\]
which implies \(\mu^{*}\in M_{\alpha}(\mathcal{X})\). By (17) and (18), \(\mu^{*}\) is an optimal solution of Problem \(P_{\alpha}\) and \(A_{\alpha}\neq\emptyset\) holds.
Theorem 1 shows that the optimal solution set \(A_{\alpha}\) of Problem \(P_{\alpha}\) is not an empty set. The "\(\inf\)" in (15) can be replaced by "\(\min\)". Furthermore, we give the following proposition about \(\mathcal{J}_{\alpha}^{*}\) and \(J_{\alpha}^{*}\).
**Proposition 1**.: _Suppose that Assumption 1 holds. Then,_
\[\mathcal{J}_{\alpha}^{*}\leq J_{\alpha}^{*},\]
_where \(\mathcal{J}_{\alpha}^{*}\) and \(J_{\alpha}^{*}\) are the optimal objective values of Problems \(P_{\alpha}\) and \(Q_{\alpha}\), respectively._
Proof.: Since the optimal solution set \(X_{\alpha}\), defined by (5), of Problem \(Q_{\alpha}\) is measurable, it is possible to find a probability measure \(\tilde{\mu}_{\alpha}\in M(\mathcal{X})\) such that \(\tilde{\mu}_{\alpha}(X_{\alpha})=1\). Then, we have
\[\int_{\mathcal{X}}J(x)\text{d}\tilde{\mu}_{\alpha} = \int_{X_{\alpha}}J(x)\text{d}\tilde{\mu}_{\alpha}=J_{\alpha}^{*}, \tag{19}\] \[\int_{\mathcal{X}}\mathbb{P}(x)\text{d}\tilde{\mu}_{\alpha} = \int_{X_{\alpha}}\mathbb{P}(x)\text{d}\tilde{\mu}_{\alpha}\geq 1-\alpha. \tag{20}\]
Thus, \(\tilde{\mu}_{\alpha}\) is a feasible solution for \(P_{\alpha}\) with objective value \(J_{\alpha}^{*}\), namely, \(\tilde{\mu}_{\alpha}\in M_{\alpha}(\mathcal{X})\) and \(\mathcal{J}_{\alpha}^{*}\leq J_{\alpha}^{*}\).
Proposition 1 shows that the optimal objective value \(\mathcal{J}_{\alpha}^{*}\) of Problem \(P_{\alpha}\) might be smaller than the optimal objective value \(J_{\alpha}^{*}\) of Problem \(Q_{\alpha}\). It is necessary to formally investigate under what conditions we have \(J_{\alpha}^{*}=\mathcal{J}_{\alpha}^{*}\) and how to solve Problem \(P_{\alpha}\) when \(J_{\alpha}^{*}>\mathcal{J}_{\alpha}^{*}\). We will present Corollary 2 in Section 4.1, which gives two sufficient conditions for \(J_{\alpha}^{*}=\mathcal{J}_{\alpha}^{*}\). For solving Problem \(P_{\alpha}\), we first give an equivalent reduction by Theorem 3 in Section 4.1. Then, we extend the sample-based smooth approximation to solve the reduced problem in Section 5.
## 4 Problem Reduction
In this section, we show that there exists an optimal probabilistic decision for Problem \(P_{\alpha}\) whose probability measure is concentrated on two points, which leads to an equivalently reduced problem of Problem \(P_{\alpha}\).
### Reduced problem of Problem \(P_{\alpha}\)
We give an assumption on the optimal solution set \(A_{\alpha}\).
**Assumption 2**.: _There exists an optimal solution \(\mu^{*}\in A_{\alpha}\) of Problem \(P_{\alpha}\) such that for any \(\delta>0\) there exists a probability measure \(\mu\), different from \(\mu^{*}\), such that \(\int_{\mathcal{X}}\mathbb{P}(x)\mathrm{d}\mu>1-\alpha\) and \(\mathcal{W}_{J}(\mu,\mu^{*})\leq\delta\). Here, \(\mathcal{W}_{J}(\mu,\mu^{*})\) is the pseudo-distance (pseudo-metric) between \(\mu\) and \(\mu^{*}\) associated with \(J\)._
Assumption 2 implies that there exists a sequence \(\{\mu_{k}\}_{k=1}^{\infty}\subseteq M(\mathcal{X})\) that converges to an optimal solution \(\mu^{*}\) of Problem \(P_{\alpha}\) associated with \(J\), such that \(\int_{\mathcal{X}}\mathbb{P}(x)\mathrm{d}\mu_{k}(x)>1-\alpha\).
Let \(\mathcal{C}_{S}=\big{(}x^{(1)},...,x^{(S)}\big{)}\) be an element of the augmented space \(\mathcal{X}^{S}\), where \(S\in\mathbb{N}\). For an arbitrary \(\mathcal{C}_{S}\), we can define a set of discrete probability measures by
\[\mathcal{U}_{S}:=\Big{\{}\mu_{S}\in[0,1]^{S}:\sum_{i=1}^{S}\mu_{S}(i)=1\Big{\}}. \tag{21}\]
The set \(\mathcal{C}_{S}\) becomes a sample space with finite samples if it is equipped with a discrete probability measure \(\mu_{S}\in\mathcal{U}_{S}\), where the \(i\)-th element \(\mu_{S}(i)\) denotes the probability of taking decision \(x^{(i)}\), i.e., \(\mu_{S}(x^{(i)})=\mu_{S}(i),i\in[S]\). Thus, by choosing a set \(\mathcal{C}_{S}\) and assigning a discrete probability measure \(\mu_{S}\in\mathcal{U}_{S}\) to \(\mathcal{C}_{S}\), a probabilistic decision is determined, which chooses decision variables from \(\mathcal{C}_{S}\) randomly according to \(\mu_{S}\). For the probabilistic decision associated with \(\mathcal{C}_{S}\) and \(\mu_{S}\), the expectations of objective and probability values are \(\sum_{i=1}^{S}J(x^{(i)})\mu_{S}(i)\) and \(\sum_{i=1}^{S}\mathbb{P}(x^{(i)})\mu_{S}(i)\), respectively. We can optimize \(\sum_{i=1}^{S}J(x^{(i)})\mu_{S}(i)\) under a chance constraint \(\sum_{i=1}^{S}\mathbb{P}(x^{(i)})\mu_{S}(i)\geq 1-\alpha\) taking \((\mu_{S},\mathcal{C}_{S})\) as decision variable, which is formulated as
\[\min_{\mu_{S}\in\mathcal{U}_{S},\mathcal{C}_{S}\in\mathcal{X}^{S }}\sum_{i=1}^{S}J(x^{(i)})\mu_{S}(i)\] ( \[\tilde{P}_{\alpha}(S)\] ) \[\text{s.t.}\quad\sum_{i=1}^{S}\mathbb{P}(x^{(i)})\mu_{S}(i)\geq 1-\alpha,\] \[x^{(i)}\in\mathcal{C}_{S},\;\forall i\in[S].\]
Define the feasible set \(\tilde{\mathcal{U}}_{\alpha}(S)\) of Problem \(\tilde{P}_{\alpha}(S)\) by
\[\tilde{\mathcal{U}}_{\alpha}(S):=\Big{\{}(\mu_{S},\mathcal{C}_{S}):\sum_{i=1}^ {S}\mathbb{P}(x^{(i)})\mu_{S}(i)\geq 1-\alpha\Big{\}}. \tag{22}\]
Since the constraint function \(\sum_{i=1}^{S}\mathbb{P}(x^{(i)})\mu_{S}(i):\mathcal{U}_{S}\times\mathcal{X}^{S}\to\mathbb{R}\) is continuous and its domain \(\mathcal{U}_{S}\times\mathcal{X}^{S}\) is compact, the feasible set \(\tilde{\mathcal{U}}_{\alpha}(S)\) of Problem \(\tilde{P}_{\alpha}(S)\) is also a compact set. As a result, an optimal solution of Problem \(\tilde{P}_{\alpha}(S)\) exists. Define the optimal objective value \(\tilde{\mathcal{J}}_{\alpha}(S)\) of Problem \(\tilde{P}_{\alpha}(S)\) by
\[\tilde{\mathcal{J}}_{\alpha}(S):=\min_{(\mu_{S},\mathcal{C}_{S})\in\tilde{ \mathcal{U}}_{\alpha}(S)}\sum_{i=1}^{S}J(x^{(i)})\mu_{S}(i). \tag{23}\]
The optimal solution set of Problem \(\tilde{P}_{\alpha}(S)\) is written as
\[\tilde{A}_{\alpha}(S):=\Big{\{}(\mu_{S},\mathcal{C}_{S})\in\tilde{\mathcal{U} }_{\alpha}(S):\sum_{i=1}^{S}J(x^{(i)})\mu_{S}(i)=\tilde{\mathcal{J}}_{\alpha}(S )\Big{\}}.\]
For \(\tilde{\mathcal{J}}_{\alpha}(S)\), we have the following theorem.
**Theorem 2**.: _Suppose that Assumptions 1 and 2 hold. We have_
\[\lim_{S\to\infty}\tilde{\mathcal{J}}_{\alpha}(S)=\mathcal{J}_{ \alpha}^{*}. \tag{24}\]
The proof of Theorem 2 is presented in Appendix A. If the number \(S\to\infty\), the optimal solution \(\tilde{\mu}_{S}\in\tilde{A}_{\alpha}(S)\) of Problem \(\tilde{P}_{\alpha}(S)\) can be used as an approximate optimal solution of Problem \(P_{\alpha}\). However, solving Problem \(\tilde{P}_{\alpha}(S)\) becomes computationally impractical when \(S\to\infty\). We will further show that the optimal objective value \(\tilde{\mathcal{J}}_{\alpha}(S)\) equals \(\mathcal{J}_{\alpha}^{*}\) when \(S=2\).
Let
\[\mathcal{U}_{\mathsf{m}}:=\Big{\{}\mu_{\mathsf{m}}\in[0,1]^{2}: \sum_{i=1}^{2}\mu_{\mathsf{m}}(i)=1\Big{\}}. \tag{25}\]
Let \(\mathbf{z}_{\mathsf{m}}:=(\mu_{\mathsf{m}}(1),\mu_{\mathsf{m}}(2),x^{(1)},x^{(2)})\) be a variable in the set \(\mathcal{Z}_{\mathsf{m}}:=\mathcal{U}_{\mathsf{m}}\times\mathcal{X}^{2}\). Then, consider an optimization problem on \(\mathbf{z}_{\mathsf{m}}\) written as
\[\min_{\mathbf{z}_{\mathsf{m}}\in\mathcal{Z}_{\mathsf{m}}} \sum_{i=1}^{2}J(x^{(i)})\mu_{\mathsf{m}}(i)\] ( \[W_{\alpha}\] ) s.t. \[\sum_{i=1}^{2}\mu_{\mathsf{m}}(i)\mathbb{P}(x^{(i)})\geq 1-\alpha.\]
Define the feasible set \(\mathcal{Z}_{\mathsf{m},\alpha}\) of Problem \(W_{\alpha}\) by
\[\mathcal{Z}_{\mathsf{m},\alpha}:=\Big{\{}\mathbf{z}_{\mathsf{m}}\in \mathcal{Z}_{\mathsf{m}}:\sum_{i=1}^{2}\mathbb{P}(x^{(i)})\mu_{\mathsf{m}}(i) \geq 1-\alpha\Big{\}}.\]
In addition, define the optimal objective value \(\mathcal{J}_{\alpha}^{\mathsf{w}}\) and optimal solution set \(D_{\alpha}\) of Problem \(W_{\alpha}\) by
\[\mathcal{J}_{\alpha}^{\mathsf{w}} :=\min\Big{\{}\sum_{i=1}^{2}J(x^{(i)})\mu_{\mathsf{m}}(i):\mathbf{z}_ {\mathsf{m}}\in\mathcal{Z}_{\mathsf{m},\alpha}\Big{\}}, \tag{26}\] \[D_{\alpha} :=\Big{\{}\mathbf{z}_{\mathsf{m}}\in\mathcal{Z}_{\mathsf{m},\alpha}: \sum_{i=1}^{2}J(x^{(i)})\mu_{\mathsf{m}}(i)=\mathcal{J}_{\alpha}^{\mathsf{w}} \Big{\}}, \tag{27}\]
respectively. Notice that Problem \(W_{\alpha}\) is a special case of Problem \(\tilde{P}_{\alpha}(S)\) with \(S=2\). We redefine it as Problem \(W_{\alpha}\) to simplify the notation in Section 5 for the convenience of discussing the approximate problem established by extracting samples of \(\xi\). The following theorem shows that the optimal objective values of Problem \(W_{\alpha}\) and Problem \(P_{\alpha}\) are equal.
**Theorem 3**.: _Suppose Assumptions 1-2 hold. Then, we have \(\mathcal{J}_{\alpha}^{*}=\mathcal{J}_{\alpha}^{\mathsf{w}}\)._
The proof of Theorem 3 is summarized in Section 4.2. Theorem 3 is the main result for establishing a reduced problem \(W_{\alpha}\) equivalent to Problem \(P_{\alpha}\). Namely, to obtain an optimal probabilistic decision, instead of solving Problem \(P_{\alpha}\), an optimization problem in an infinite-dimensional space, we could solve Problem \(W_{\alpha}\), whose domain \([0,1]^{2}\times\mathcal{X}^{2}\) lies in a \((2n+2)\)-dimensional space.
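For illustration (an added toy example, not the method developed in this paper), Problem \(W_{\alpha}\) for the setting of Example 1 can be handed to a generic local NLP solver, using the closed-form \(\mathbb{P}(x)\) available there; Section 5 instead builds a sample-based smooth approximation of \(\mathbb{P}\) from samples of \(\xi\), and a local solver only returns a local optimum.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

alpha = 0.25
J = lambda x: -(x + 0.6)**2 + 2          # objective of Example 1
P = lambda x: norm.cdf(1.4 - x)          # closed-form P(x) of Example 1

def neg_expected_J(v):                   # v = (mu1, x1, x2), with mu2 = 1 - mu1
    mu1, x1, x2 = v
    return -(mu1*J(x1) + (1.0 - mu1)*J(x2))

cons = [{'type': 'ineq',                 # mu1 P(x1) + mu2 P(x2) >= 1 - alpha
         'fun': lambda v: v[0]*P(v[1]) + (1.0 - v[0])*P(v[2]) - (1.0 - alpha)}]
bnds = [(0.0, 1.0), (-2.0, 2.0), (-2.0, 2.0)]
res = minimize(neg_expected_J, x0=[0.5, -1.0, 1.5],
               bounds=bnds, constraints=cons, method='SLSQP')
print(res.x, -res.fun)                   # local solution (mu1, x1, x2) and its expected objective
```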
Let \(\mathbf{z}_{\mathsf{m}}^{*}\in D_{\alpha}\) be an optimal solution of Problem \(W_{\alpha}\). The solution \(\mathbf{z}_{\mathsf{m}}^{*}\) is written as \(\mathbf{z}_{\mathsf{m}}^{*}:=\Big{(}\mu_{\mathsf{m}}^{*}(1),\mu_{\mathsf{m}}^{*}(2),x_{*}^{(1)},x_{*}^{(2)}\Big{)}\), where the weights \(\mu_{\mathsf{m}}^{*}(1),\mu_{\mathsf{m}}^{*}(2)\) are assigned to \(x_{*}^{(1)},x_{*}^{(2)}\), respectively. Define a mapping \(\Phi:D_{\alpha}\to M(\mathcal{X})\) from the optimal solution set \(D_{\alpha}\) of Problem \(W_{\alpha}\) to the domain \(M(\mathcal{X})\) of Problem \(P_{\alpha}\): for any \(\mathbf{z}_{\mathsf{m}}^{*}\in D_{\alpha}\), we obtain a probability measure
\[\tilde{\mu}_{\mathbf{z}_{\mathsf{m}}^{*}}=\Phi(\mathbf{z}_{\mathsf{m}}^{*})\in M( \mathcal{X})\]
that satisfies \(\tilde{\mu}_{\mathbf{z}_{\mathsf{m}}^{*}}(\{x_{*}^{(1)}\})=\mu_{\mathsf{m}}^{*}(1)\) and \(\tilde{\mu}_{\mathbf{z}_{\mathsf{m}}^{*}}(\{x_{*}^{(2)}\})=\mu_{\mathsf{m}}^{*}(2)\). By Theorem 3, we have
\[\int_{\mathcal{X}}J(x)\text{d}\tilde{\mu}_{\mathbf{z}_{\mathsf{m}}^{*} }(x)=\sum_{i=1}^{2}J(x_{*}^{(i)})\mu_{\mathsf{m}}^{*}(i)=\mathcal{J}_{\alpha}^{ *}. \tag{28}\]
In addition, if \(\mathbf{z}_{\text{m}}^{*}\in D_{\alpha}\) is an optimal solution of Problem \(W_{\alpha}\), we have
\[\int_{\mathcal{X}}\mathbb{P}(x)\text{d}\tilde{\mu}_{\mathbf{z}_{\text{m}}^{*}}(x)= \sum_{i=1}^{2}\mathbb{P}(x_{*}^{(i)})\mu_{\text{m}}^{*}(i)\geq 1-\alpha. \tag{29}\]
From (28) and (29), we have that \(\tilde{\mu}_{\mathbf{z}_{\text{m}}^{*}}\) lies in the optimal set \(A_{\alpha}\) of Problem \(P_{\alpha}\). Namely, by solving Problem \(W_{\alpha}\), we can obtain an optimal solution \(\tilde{\mu}_{\mathbf{z}_{\text{m}}^{*}}\) of Problem \(P_{\alpha}\) through the mapping \(\Phi\).
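In implementation terms, the mapping \(\Phi\) simply packages the two optimal decisions and their weights as a discrete measure, from which a decision is drawn each time the policy is executed. The sketch below shows this step; the numerical values are placeholders rather than solutions of any problem in the paper.

```python
import numpy as np

# Placeholder optimal solution z*_m = (mu1, mu2, x1, x2) of Problem W_alpha
# (illustrative numbers only; here the decision space has dimension n = 2).
mu_star = np.array([0.7, 0.3])
x_star = np.array([[0.2, 0.1],
                   [0.9, 0.4]])

rng = np.random.default_rng(0)

def sample_decision():
    """Execute the probabilistic decision: pick x*(i) with probability mu*(i)."""
    i = rng.choice(2, p=mu_star)
    return x_star[i]

print(np.array([sample_decision() for _ in range(5)]))
```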
Besides, we summarize the properties of the optimal solutions of Problem \(W_{\alpha}\) in the following corollary.
**Corollary 1**.: _Let \(\mathbf{z}_{\text{m}}^{*}=\left(\mu_{\text{m}}^{*}(1),\mu_{\text{m}}^{*}(2),x_{*}^ {(1)},x_{*}^{(2)}\right)\in D_{\alpha}\) be an optimal solution of Problem \(W_{\alpha}\). Let \(\tilde{\alpha}^{(1)}=1-\mathbb{P}(x_{*}^{(1)})\) and \(\tilde{\alpha}^{(2)}=1-\mathbb{P}(x_{*}^{(2)})\). We have \(x_{*}^{(1)}\in X_{\tilde{\alpha}^{(1)}},x_{*}^{(2)}\in X_{\tilde{\alpha}^{(2)}}\)._
The proof of Corollary 1 is presented in Appendix B. By Corollary 1, the optimal solution of Problem \(W_{\alpha}\) contains two points in \(\mathcal{X}\) that are optimal solutions of \(Q_{\tilde{\alpha}^{(1)}}\) and \(Q_{\tilde{\alpha}^{(2)}}\), respectively. The following corollary summarizes the connections between Problem \(P_{\alpha}\) and Problem \(Q_{\alpha}\).
**Corollary 2**.: _If \(J_{\tilde{\alpha}}^{*}\) is a convex function of \(\tilde{\alpha}\) or if \(\alpha=0\), we have \(\mathcal{J}_{\alpha}^{*}=J_{\alpha}^{*}\)._
The proof of Corollary 2 is presented in Appendix C. Corollary 2 shows that it is not necessary to consider chance-constrained probability measure optimization for improving the expected performance under the chance constraint when \(J_{\tilde{\alpha}}^{*}\) is a convex function of \(\tilde{\alpha}\) or when \(\alpha=0\). In general, we cannot verify in advance whether \(J_{\tilde{\alpha}}^{*}\) is a convex function of \(\tilde{\alpha}\). When \(\alpha=0\), the constraint \(\bar{h}(x,\xi)\) must be satisfied with probability 1 (w.p.1); in this case, a deterministic decision achieves the optimal value, and a probabilistic decision is unnecessary.
### Proof of Theorem 3
For a given number \(S\in\mathbb{N}\), let \(\mathcal{E}_{S}:=\left(\tilde{\alpha}^{(1)},...,\tilde{\alpha}^{(S)}\right)\) be an element of \([0,1]^{S}\), defined as a set of violation probabilities, where each \(\tilde{\alpha}^{(i)}\) is a threshold of violation probability in Problem \(Q_{\tilde{\alpha}^{(i)}}\) (Problem \(Q_{\alpha}\) with \(\alpha=\tilde{\alpha}^{(i)}\)). For a violation probability set \(\mathcal{E}_{S}\), we have a corresponding set of optimal objective values \(\{J_{\tilde{\alpha}^{(1)}}^{*},...,J_{\tilde{\alpha}^{(i)}}^{*},...,J_{\tilde{\alpha}^{(S)}}^{*}\}\), where \(J_{\tilde{\alpha}^{(i)}}^{*}\) is the optimal objective value of Problem \(Q_{\tilde{\alpha}^{(i)}}\) for a given \(\tilde{\alpha}^{(i)}\), by (4). Let \(\mathcal{V}_{S}:=\{\nu_{S}\in[0,1]^{S}:\sum_{i=1}^{S}\nu_{S}(i)=1\}\) be the set of discrete probability measures defined on \(\mathcal{E}_{S}\). By determining a violation probability set \(\mathcal{E}_{S}\) and assigning a discrete probability \(\nu_{S}\) to \(\mathcal{E}_{S}\), we obtain a probabilistic decision in which the threshold of violation probability is randomly drawn from \(\mathcal{E}_{S}\) according to the discrete probability \(\nu_{S}\). The corresponding expectation of the optimal objective value is \(\sum_{i=1}^{S}J_{\tilde{\alpha}^{(i)}}^{*}\nu_{S}(i)\). This expectation can be optimized under the chance constraint \(\sum_{i=1}^{S}(1-\tilde{\alpha}^{(i)})\nu_{S}(i)\geq 1-\alpha\), taking \((\nu_{S},\mathcal{E}_{S})\) as the decision variable, which is formulated as
\[\min_{\nu_{S}\in\mathcal{V}_{S},\mathcal{E}_{S}\in[0,1]^{S}} \sum_{i=1}^{S}J_{\tilde{\alpha}^{(i)}}^{*}\nu_{S}(i)\] ( \[\tilde{V}_{\alpha}(S)\] ) \[\text{s.t.}\quad\sum_{i=1}^{S}(1-\tilde{\alpha}^{(i)})\nu_{S}(i) \geq 1-\alpha,\] \[\tilde{\alpha}^{(i)}\in\mathcal{E}_{S},\;\forall i\in[S].\]
In the cost function of Problem \(\tilde{V}_{\alpha}(S)\), the optimal objective value \(J_{\tilde{\alpha}^{(i)}}^{*}\) of Problem \(Q_{\tilde{\alpha}^{(i)}}\) is used for each \(\tilde{\alpha}^{(i)}\), playing the same role as \(J(x^{(i)})\) for each \(x^{(i)}\) in Problem \(\tilde{P}_{\alpha}(S)\). For simplicity of notation, we write \(\mathbf{\theta}_{S}:=(\nu_{S},\mathcal{E}_{S})\) and \(\Theta_{S}:=\mathcal{V}_{S}\times[0,1]^{S}\). Define the feasible set \(\Theta_{S,\alpha}\) of \(\tilde{V}_{\alpha}(S)\) by
\[\Theta_{S,\alpha}:=\Big{\{}\mathbf{\theta}_{S}\in\Theta_{S}:\sum_{i=1}^{S}(1- \tilde{\alpha}^{(i)})\nu_{S}(i)\geq 1-\alpha\Big{\}}.\]
Different from Problem \(\tilde{P}_{\alpha}(S)\), Problem \(\tilde{V}_{\alpha}(S)\) optimizes over finite combinations of violation probability thresholds to achieve the minimal mean optimal objective value under the chance constraint. We summarize the connection between \(\tilde{P}_{\alpha}(S)\) and \(\tilde{V}_{\alpha}(S)\) in Proposition 2.
**Proposition 2**.: _Suppose Assumption 1 holds. For all \(S\in\mathbb{N}\), there exists a feasible solution \(\mathbf{\theta}_{S}:=(\nu_{S},\tilde{\alpha}^{(1)},...,\tilde{\alpha}^{(S)})\in \Theta_{S,\alpha}\) of Problem \(\tilde{V}_{\alpha}(S)\) that satisfies_
\[\sum_{i=1}^{S}J_{\tilde{\alpha}^{(i)}}^{*}\nu_{S}(i)=\tilde{\mathcal{J}}_{\alpha}( S), \tag{30}\]
_where \(\tilde{\mathcal{J}}_{\alpha}(S)\) is the optimal objective value of Problem \(\tilde{P}_{\alpha}(S)\) defined by (23)._
The proof of Proposition 2 is presented in Appendix D. By using Proposition 2, we can further obtain the following lemma.
**Lemma 1**.: _Suppose Assumptions 1-2 both hold. There exists a feasible solution \(\mathbf{\theta}_{\text{m}}:=\{\nu_{\text{m}}(1),\nu_{\text{m}}(2),\tilde{\alpha}_{ \text{m}}^{(1)},\tilde{\alpha}_{\text{m}}^{(2)}\}\in\Theta_{2,\alpha}\) of Problem \(\tilde{V}_{\alpha}(2)\) that satisfies_
\[\sum_{i=1}^{2}J_{\tilde{\alpha}_{\text{m}}^{(i)}}^{*}\nu_{\text{m}}(i)= \mathcal{J}_{\alpha}^{*}. \tag{31}\]
The proof of Lemma 1 is presented in Appendix E. With Theorem 2 and Lemma 1 as the preparation, the proof of Theorem 3 is summarized as follows:
Proof.: By (43) in the proof of Theorem 2 (Appendix A), we have \(\mathcal{J}_{\alpha}^{*}\leq\mathcal{J}_{\alpha}^{\text{w}}\), where \(\mathcal{J}_{\alpha}^{\text{w}}=\tilde{\mathcal{J}}_{\alpha}(S)\) with \(S=2\). It remains to show that \(\mathcal{J}_{\alpha}^{*}\geq\mathcal{J}_{\alpha}^{\text{w}}\). By Lemma 1, we can find a feasible solution \(\mathbf{\theta}_{\text{m}}:=\{\nu_{\text{m}}(1),\nu_{\text{m}}(2),\tilde{\alpha}_{\text{m}}^{(1)},\tilde{\alpha}_{\text{m}}^{(2)}\}\in\Theta_{2,\alpha}\) of Problem \(\tilde{V}_{\alpha}(S)\) with \(S=2\) that satisfies (31), namely, \(\mathcal{J}_{\alpha}^{*}=\sum_{i=1}^{2}J_{\tilde{\alpha}_{\text{m}}^{(i)}}^{*}\nu_{\text{m}}(i)\). Choose \(\mu_{\text{m}}=\nu_{\text{m}}\) and \(x_{*}^{(1)}\in X_{\tilde{\alpha}_{\text{m}}^{(1)}}\), \(x_{*}^{(2)}\in X_{\tilde{\alpha}_{\text{m}}^{(2)}}\). Since \(\mathbf{\theta}_{\text{m}}\) is a feasible solution of Problem \(\tilde{V}_{\alpha}(2)\), we have
\[\sum_{i=1}^{2}\mathbb{P}(x_{*}^{(i)})\mu_{\text{m}}(i)\geq\sum_{i=1}^{2}\left( 1-\tilde{\alpha}_{\text{m}}^{(i)}\right)\nu_{\text{m}}(i)\geq 1-\alpha. \tag{32}\]
Notice that (32) implies that \(\left(\mu_{\text{m}}(1),\mu_{\text{m}}(2),x_{*}^{(1)},x_{*}^{(2)}\right)\) is a feasible solution of Problem \(W_{\alpha}\). By Lemma 1, we have
\[\mathcal{J}_{\alpha}^{*}=\sum_{i=1}^{2}J_{\tilde{\alpha}_{\text{m}}^{(i)}}^{ *}\nu_{\text{m}}(i)=\sum_{i=1}^{2}J(x_{*}^{(i)})\mu_{\text{m}}(i)\geq\mathcal{ J}_{\alpha}^{\text{w}}. \tag{33}\]
Thus, we have \(\mathcal{J}_{\alpha}^{\text{w}}=\mathcal{J}_{\alpha}^{*}\).
Here, we give a simple numerical example to demonstrate the result of Theorem 3 as follows:
**Example 2**.: _The cost function \(J(x)\) and the constraint function \(h(x,\xi)\) are given by (6) and (7) in Example 1, respectively. For \(\alpha=0.25\), we compute \(\tilde{\mathcal{J}}_{\alpha}(S)\) for \(S=1,2,5,10,20,30,50\). The result is plotted in Figure 2. As illustrated by Figure 2, the optimal probabilistic decision with the probability measure concentrated on two points achieves the same cost performance as the ones with the probability measure concentrated on more points, consistent with Theorem 3. Problem \(\tilde{P}_{\alpha}(S)\) is solved by the sample-based smooth approximation method presented in Section 5. Although only the case \(S=2\) is discussed in Section 5, the conclusions are consistent for any finite integer \(S\)._
Figure 2: Demonstration of the result of Theorem 3.
## 5 Sample-based smooth approximation for Solving Problem \(W_{\alpha}\)
Section 4 has shown that Problem \(P_{\alpha}\) has an optimal probabilistic decision whose probability measure is concentrated on two points. By solving Problem \(W_{\alpha}\), whose domain is \((2n+2)\)-dimensional, we can obtain an optimal solution of Problem \(P_{\alpha}\). However, due to the chance constraint involving the random variable \(\xi\) (refer to (1) and (2)), Problem \(W_{\alpha}\) is still intractable. This section presents an approximate method for solving Problem \(W_{\alpha}\) by extracting samples from \(\Xi\).
### Uniform convergence of optimal objective values
As in [20], we make the following assumption on Problem \(Q_{\alpha}\) for every \(\alpha\in(0,1]\).
**Assumption 3**.: _For all \(\alpha\in(0,1]\), there exists a globally optimal solution of \(Q_{\alpha}\), \(\bar{x}_{\alpha}\), such that for any \(\eta>0\) there is \(x\in\mathcal{X}\) such that \(\|x-\bar{x}_{\alpha}\|\leq\eta\) and \(\mathbb{P}(x)>1-\alpha\)._
Assumption 3 implies that there exists a sequence \(\{x_{k}\}_{k=1}^{\infty}\subset\mathcal{X}\) that converges to an optimal solution \(\bar{x}_{\alpha}\) of Problem \(Q_{\alpha}\) such that \(\mathbb{P}(x_{k})>1-\alpha\). Recall that the constraint function \(h(x,\xi)\) of Problem \(Q_{\alpha}\) involves the random variable \(\xi\). The sample space of \(\xi\) is \(\Xi\). Let \(\mathcal{D}_{N}=\{\xi^{(1)},...,\xi^{(N)}\}\) be a set of samples, where \(\xi^{(j)},j\in[N]\) is extracted from \(\Xi\), the support of \(\xi\). With data set \(\mathcal{D}_{N}\), we can define a sample-based approximation of \(\mathbb{P}(x)\) by
\[\hat{\mathbb{P}}_{N}(x):=\frac{1}{N}\sum_{j=1}^{N}\mathbb{I}\left(h(x,\xi^{(j) })\right). \tag{34}\]
One drawback of the sample-based approximation is that it is not differentiable, due to the discontinuous indicator function \(\mathbb{I}(\cdot)\).
As in [20], for a positive parameter \(\varepsilon\), a function \(\Lambda_{\varepsilon}(\cdot)\) is defined by
\[\Lambda_{\varepsilon}(y)=\left\{\begin{array}{ll}0,&y\geq\varepsilon,\\ \lambda_{\varepsilon}(y),&-\varepsilon<y<\varepsilon,\\ 1,&y\leq-\varepsilon,\end{array}\right. \tag{35}\]
where \(\lambda_{\varepsilon}:[-\varepsilon,\varepsilon]\to[0,1]\) is a symmetric and strictly decreasing smooth function that makes \(\Lambda_{\varepsilon}(\cdot)\) continuously differentiable. For a smoothing parameter \(\varepsilon>0\), a sample number \(N>0\) and a positive parameter \(\gamma>0\), we define \(\hat{\mathbb{P}}_{N}^{\gamma,\varepsilon}(x)\) by
\[\hat{\mathbb{P}}_{N}^{\gamma,\varepsilon}(x):=\frac{1}{N}\sum_{j=1}^{N} \Lambda_{\varepsilon}\left(h(x,\xi^{(j)})+\gamma\right). \tag{36}\]
The function \(\hat{\mathbb{P}}_{N}^{\gamma,\varepsilon}(x)\) is a sample-based smooth approximation of \(\mathbb{P}(x)\). Compared to the sample-based approximation \(\hat{\mathbb{P}}_{N}(x)\), \(\hat{\mathbb{P}}_{N}^{\gamma,\varepsilon}(x)\) is differentiable and gives a smooth approximation of the feasible region's boundary.
For a given set \(\mathcal{D}_{N}\) and parameters \(\alpha^{\prime}\in[0,\alpha)\) and \(\gamma,\varepsilon>0\), a sample-based smooth approximate problem of \(W_{\alpha}\) is formulated as:
\[\min_{\mathbf{z}_{\mathbf{m}}}\sum_{i=1}^{2}J(x^{(i)})\mu_{ \mathbf{m}}(i) (\tilde{W}_{\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N}))\] \[\text{s.t.}\quad\sum_{i=1}^{2}\hat{\mathbb{P}}_{N}^{\gamma, \varepsilon}(x^{(i)})\mu_{\mathbf{m}}(i)\geq 1-\alpha^{\prime}.\]
Let \(\mathcal{J}_{\mathbf{m}}(\mathbf{z}_{\mathbf{m}}):=\sum_{i=1}^{2}J(x^{(i)})\mu_{\mathbf{m}}(i)\) and \(\mathcal{P}_{\mathbf{m}}(\mathbf{z}_{\mathbf{m}}):=\sum_{i=1}^{2}\hat{\mathbb{P}}_{N}^{\gamma,\varepsilon}(x^{(i)})\mu_{\mathbf{m}}(i)\). The feasible region \(\mathcal{Z}_{m,\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N})\), the optimal objective value \(\tilde{\mathcal{J}}_{\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N})\), and the optimal solution set \(\tilde{D}_{\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N})\) of Problem \(\tilde{W}_{\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N})\) are defined by
\[\mathcal{Z}_{m,\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N}) =\left\{\mathbf{z}_{\mathbf{m}}\in\mathcal{Z}_{m}:\mathcal{P}_{ \mathbf{m}}(\mathbf{z}_{\mathbf{m}})\geq 1-\alpha^{\prime}\right\} \tag{37}\] \[\tilde{\mathcal{J}}_{\alpha^{\prime}}^{\gamma,\varepsilon}( \mathcal{D}_{N}) =\min_{\mathbf{z}_{\mathbf{m}}\in\mathcal{Z}_{m,\alpha^{\prime}}^{ \gamma,\varepsilon}(\mathcal{D}_{N})}\mathcal{J}_{\mathbf{m}}(\mathbf{z}_{ \mathbf{m}})\] (38) \[\tilde{D}_{\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N}) =\left\{\mathbf{z}_{\mathbf{m}}\in\mathcal{Z}_{m,\alpha^{\prime}}^{ \gamma,\varepsilon}(\mathcal{D}_{N}):\mathcal{J}_{\mathbf{m}}(\mathbf{z}_{ \mathbf{m}})=\tilde{\mathcal{J}}_{\alpha^{\prime}}^{\gamma,\varepsilon}( \mathcal{D}_{N})\right\}.\]
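To make the construction concrete, the sketch below assembles \(\hat{\mathbb{P}}_{N}^{\gamma,\varepsilon}\) for one admissible choice of \(\lambda_{\varepsilon}\) (a smooth cubic ramp; the paper only requires \(\lambda_{\varepsilon}\) to be symmetric, strictly decreasing and smooth) and solves the resulting two-point problem with a standard solver. The constraint function \(h\), the sampling distribution of \(\xi\) and all numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Assumed toy problem: constraint h(x, xi) <= 0 with xi ~ N(0, 0.1), cost J(x).
def h(x, xi):
    return x - 0.6 + xi

def J(x):
    return (x - 0.9) ** 2

xis = rng.normal(0.0, 0.1, size=500)        # sample set D_N drawn from the support of xi
alpha, alpha_p = 0.25, 0.20                 # alpha' < alpha tightens the approximate problem
gamma, eps = 0.01, 0.05                     # smoothing parameters

def Lambda(y):
    """One admissible Lambda_eps: equals 1 for y <= -eps, 0 for y >= eps, smooth ramp between."""
    t = np.clip((y + eps) / (2.0 * eps), 0.0, 1.0)
    return 1.0 - (3.0 * t ** 2 - 2.0 * t ** 3)

def P_hat(x):
    """Sample-based smooth approximation of P(x) = Pr{h(x, xi) <= 0}, cf. (36)."""
    return np.mean(Lambda(h(x, xis) + gamma))

def objective(z):                           # z = (mu1, mu2, x1, x2)
    return z[0] * J(z[2]) + z[1] * J(z[3])

constraints = [
    {"type": "ineq", "fun": lambda z: z[0] * P_hat(z[2]) + z[1] * P_hat(z[3]) - (1 - alpha_p)},
    {"type": "eq",   "fun": lambda z: z[0] + z[1] - 1.0},
]
res = minimize(objective, x0=[0.5, 0.5, 0.3, 0.7],
               bounds=[(0, 1)] * 4, constraints=constraints, method="SLSQP")
print("z_m =", np.round(res.x, 3), " objective =", round(float(res.fun), 4))
```

Tightening \(\alpha^{\prime}\) below \(\alpha\) in this sketch mirrors the feasibility argument developed in Section 5.2.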
The uniform convergence in the sense of almost everywhere on \(\tilde{\mathcal{J}}_{\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N})\) and \(\tilde{D}_{\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N})\) is summarized in the following theorem.
**Theorem 4**.: _Suppose Assumptions 1-3 hold. If \(\gamma=0\), as \(N\rightarrow\infty\), and \(\varepsilon\to 0\), we have \(\tilde{\mathcal{J}}_{\alpha}^{\gamma,\varepsilon}(\mathcal{D}_{N}) \rightarrow\mathcal{J}_{\alpha}^{*}\) and \(\mathbb{D}\left(\tilde{D}_{\alpha}^{\gamma,\varepsilon}(\mathcal{D}_{N}),D_{ \alpha}\right)\to 0\) w. p. 1, where_
\[\mathbb{D}(Z_{1},Z_{2}):=\sup_{z\in Z_{1}}\inf_{z^{\prime}\in Z_{2}}\|z-z^{ \prime}\|\]
_is the deviation of set \(Z_{1}\) from set \(Z_{2}\)._
The proof of Theorem 4 is presented in Appendix F.
### Feasibility with finite samples
We now consider the condition under which an optimal solution of Problem \(\tilde{W}_{\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N})\) is feasible for Problem \(W_{\alpha}\). Define \(\hat{\mathbb{P}}^{\gamma,\varepsilon}(x)\) for \(x\in\mathcal{X}\) by
\[\hat{\mathbb{P}}^{\gamma,\varepsilon}(x):=\int_{\Xi}\Lambda_{ \varepsilon}\left(h(x,\xi)+\gamma\right)p(\xi)\text{d}\xi. \tag{39}\]
We have the probabilistic feasibility guarantee as stated in the following theorem.
**Theorem 5**.: _Let \(\tilde{\mathbf{z}}_{\text{m},\alpha^{\prime}}\in\tilde{D}_{\alpha^{\prime}}^{ \gamma,\varepsilon}(\mathcal{D}_{N})\) be an optimal solution of Problem \(\tilde{W}_{\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N})\), which is specified as \(\tilde{\mathbf{z}}_{\text{m},\alpha^{\prime}}=\left(\tilde{\mu}_{\text{m}}(1), \tilde{\mu}_{\text{m}}(2),\tilde{x}^{(1)},\tilde{x}^{(2)}\right)\). Then, if \(\alpha^{\prime}<\alpha\) we have_
\[\mathsf{Pr}\{\tilde{\mathbf{z}}_{\text{m},\alpha^{\prime}}\notin \mathcal{Z}_{\text{m},\alpha}\}\leq\exp\{-2N\left(R_{\alpha^{\prime}}^{\gamma, \varepsilon}\right)^{2}\}, \tag{40}\]
_where \(R_{\alpha^{\prime}}^{\gamma,\varepsilon}\) is defined by_
\[R_{\alpha^{\prime}}^{\gamma,\varepsilon} =\inf_{\mathbf{z}_{\text{m}}\in\mathcal{Z}_{\text{m}}}\sum_{i=1}^{2} \left(\mathbb{P}(x^{(i)})-\hat{\mathbb{P}}^{\gamma,\varepsilon}(x^{(i)}) \right)\mu_{\text{m}}(i)+(\alpha-\alpha^{\prime}). \tag{41}\]
The proof of Theorem 5 is presented in Appendix G. Theorem 5 shows that, if the violation probability threshold of the approximate Problem \(\tilde{W}_{\alpha^{\prime}}^{\gamma,\varepsilon}(\mathcal{D}_{N})\) is decreased appropriately to \(\alpha^{\prime}<\alpha\), then an optimal solution \(\tilde{\mathbf{z}}_{\text{m},\alpha^{\prime}}\) of the approximate problem is infeasible for Problem \(W_{\alpha}\) only with a probability that decreases exponentially with the sample size \(N\).
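As a quick numerical reading of the bound in (40), the following lines evaluate the right-hand side for several sample sizes and compute the sample size needed to push it below a target level; the margin \(R_{\alpha^{\prime}}^{\gamma,\varepsilon}\) is an assumed placeholder value, since in practice it depends on the smoothing parameters and on how much \(\alpha^{\prime}\) is tightened relative to \(\alpha\).

```python
import numpy as np

R = 0.03                       # assumed placeholder for the margin R^{gamma,eps}_{alpha'}
for N in (100, 500, 2000, 10000):
    print(N, np.exp(-2 * N * R ** 2))     # bound (40) on Pr{approximate solution infeasible}

delta = 1e-3                   # target bound on the failure probability
N_required = int(np.ceil(np.log(1.0 / delta) / (2 * R ** 2)))
print("samples needed for bound <= delta:", N_required)
```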
## 6 Numerical Validation
In this section, numerical validations are implemented to compare the performance of the proposed probabilistic decision method with several existing deterministic decision methods. The chosen case study is a chance-constrained trajectory planning problem with obstacles, a common industrial problem in trajectory planning for autonomous driving [34]. Note that we use the simplified model from [35] for this demonstration.
### Model and simulation settings
The numerical example considers a quadrotor system control problem in turbulent conditions. The control problem is expressed as follows:
\[\min_{\mu\in M(\mathcal{U}^{N})}\mathcal{J}(\mu)=\mathbb{E}\{ \ell^{s}(s)+\ell^{u}(u)\}\] ( \[P_{\text{QSC}}\] ) \[\text{s.t.}\quad s_{t+1}=As_{t}+B(m)u_{t}+d(s_{t},\varphi)+\omega_ {t},\;u\sim M(\mathcal{U}^{N}),\] \[\quad\quad t=0,1,...,N-1,\] \[\quad\quad\mathsf{Pr}\{\left(\wedge_{t=1}^{N-1}s_{t}\notin \mathcal{O}\right)\wedge(s_{N}\in\mathcal{S}_{\text{goal}})\}\geq 1-\alpha,\]
where \(A\), \(B(m)\), \(d(s_{t},\varphi)\) are written by
\[A=\begin{bmatrix}1&\Delta t&0&0\\ 0&1&0&0\\ 0&0&1&\Delta t\\ 0&0&0&1\end{bmatrix},\;B(m)=\frac{1}{m}\begin{bmatrix}\frac{\Delta t^{2}}{2}&0\\ \Delta t&0\\ 0&\frac{\Delta t^{2}}{2}\\ 0&\Delta t\end{bmatrix},\]
\(d(s_{t},\varphi)=-\varphi[\frac{\Delta t^{2}|v_{x}|v_{x}}{2}\ \ \Delta t|v_{x}|v_{x}\ \ \frac{\Delta t^{2}|v_{y}|v_{y}}{2}\ \ \Delta t|v_{y}|v_{y}]^{\top}\), and \(\Delta t=1\) is the sampling time. The state of the system is denoted as \(s_{t}=[p_{x,t},v_{x,t},p_{y,t},v_{y,t}]\in\mathbb{R}^{4}\), the control input of the system is \(u_{t}=\{u_{x},u_{y}\}\), and the state and control trajectories are denoted as \(s=(s_{t})_{t=1}^{N}\) and \(u=(u_{t})_{t=1}^{N-1}\). The system starts from the initial point \(s_{0}=[-0.5,0,-0.5,0]\). The system is expected to reach the destination set \(\mathcal{S}_{\text{goal}}=\{(p_{x},v_{x},p_{y},v_{y}):\|(p_{x}-10,p_{y}-10)\|\leq 2,v_{x}=0,v_{y}=0\}\) at time \(N=10\) while avoiding two polytopic obstacles \(\mathcal{O}\) shown in Fig. 3. The dynamics are parametrized by the uncertain parameter vector \(\xi_{t}=[m,\varphi]^{\top}\), where \(m>0\) represents the system's mass and \(\varphi>0\) is an uncertain drag coefficient. The parameter vector \(\xi\) consists of uncorrelated random variables such that \((m-0.75)/0.5\sim\text{Beta}(2,2)\) and \((\varphi-0.4)/0.2\sim\text{Beta}(2,5)\), where \(\text{Beta}(a,b)\) denotes a Beta distribution with shape parameters \((a,b)\). \(\omega_{t}\in\mathbb{R}^{4}\) is the uncertain disturbance at time step \(t\), which obeys a multivariate normal distribution with zero mean and a diagonal covariance matrix \(\Sigma\) with diagonal elements \(\Sigma(1,1)=0.01,\ \Sigma(2,2)=0.75,\ \Sigma(3,3)=0.01,\ \Sigma(4,4)=0.75\). The cost function includes
* \(\ell^{s}(s)=\frac{1}{N}\sum_{t=0}^{N-1}\big{(}(p_{x,t+1}-p_{x,t})^{2}+(p_{y,t+ 1}-p_{y,t})^{2}\big{)}\);
* \(\ell^{u}(u)=\frac{0.1}{N}\sum_{t=0}^{N-1}\big{(}u_{1,t}^{2}+u_{2,t}^{2}\big{)}\).
By solving Problem \(P_{\text{QSC}}\), we can obtain a probability measure \(\mu\) that implements a probabilistic decision. For a deterministic decision, the following problem is solved instead.
\[\min_{u\in\mathcal{U}^{N}}\ J(u)=\mathbb{E}\{\ell^{s}(s)+\ell^{u}(u)\}\] ( \[P_{\text{QSC}}^{\text{det}}\] ) \[\text{s.t.}\quad s_{t+1}=As_{t}+B(m)u_{t}+d(s_{t},\varphi)+\omega_{t},\] \[\quad\quad t=0,1,...,N-1,\] \[\quad\quad\mathsf{Pr}\{\left(\wedge_{t=1}^{N-1}s_{t}\notin\mathcal{O}\right)\wedge(s_{N}\in\mathcal{S}_{\text{goal}})\}\geq 1-\alpha.\]
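For concreteness, the sketch below simulates the quadrotor dynamics above for a fixed open-loop control sequence and estimates, by Monte Carlo, the probability appearing in the chance constraint. The control sequence, the single axis-aligned box standing in for the polytopic obstacles \(\mathcal{O}\), the omission of the velocity conditions of \(\mathcal{S}_{\text{goal}}\), and sampling \(m\) and \(\varphi\) once per trajectory are simplifying assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, N = 1.0, 10
A = np.array([[1, dt, 0, 0],
              [0, 1,  0, 0],
              [0, 0,  1, dt],
              [0, 0,  0, 1]], dtype=float)

def B(m):
    return (1.0 / m) * np.array([[dt ** 2 / 2, 0],
                                 [dt, 0],
                                 [0, dt ** 2 / 2],
                                 [0, dt]])

def drag(s, phi):
    vx, vy = s[1], s[3]
    return -phi * np.array([dt ** 2 * abs(vx) * vx / 2, dt * abs(vx) * vx,
                            dt ** 2 * abs(vy) * vy / 2, dt * abs(vy) * vy])

def rollout(u):
    """One trajectory with (m, phi) and the disturbances drawn from the stated distributions."""
    m = 0.75 + 0.5 * rng.beta(2, 2)
    phi = 0.4 + 0.2 * rng.beta(2, 5)
    cov = np.diag([0.01, 0.75, 0.01, 0.75])
    s = np.array([-0.5, 0.0, -0.5, 0.0])
    traj = [s]
    for t in range(N):
        s = A @ s + B(m) @ u[t] + drag(s, phi) + rng.multivariate_normal(np.zeros(4), cov)
        traj.append(s)
    return np.array(traj)

def satisfies(traj, box=(4.0, 6.0, 4.0, 6.0)):
    """Avoid one illustrative box for t = 1..N-1 and end within distance 2 of (10, 10) at t = N."""
    inside = ((traj[1:-1, 0] > box[0]) & (traj[1:-1, 0] < box[1]) &
              (traj[1:-1, 2] > box[2]) & (traj[1:-1, 2] < box[3]))
    goal = np.hypot(traj[-1, 0] - 10, traj[-1, 2] - 10) <= 2
    return (not inside.any()) and goal

u = np.tile([2.0, 2.0], (N, 1))      # placeholder open-loop controls, not an optimized policy
rate = np.mean([satisfies(rollout(u)) for _ in range(2000)])
print("estimated Pr{constraint satisfied}:", rate)
```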
### Results and discussions
Results of one specific case are shown in Fig. 3 for four different methods, with \(\alpha\) set to \(15\%\) and the number of \(\omega_{t}\) samples set to \(2000\) for all four methods. Besides, the parameters \(\varepsilon\) and \(\gamma\) are both \(0.01\) for **SCA** and **Proposed**. Fig. 3 shows \(10,000\) Monte-Carlo (MC) simulations of the quadrotor system using the open-loop policies computed by the four different methods. The deterministic decision generated by **Scenario** leads to a very safe trajectory, with a violation probability of only \(0.31\%\) in the MC simulations. However, the expected cost \(\mathcal{J}\) is high. **Scenario** was originally developed for robust optimization and thus gives a much more conservative solution than the required violation probability threshold. The deterministic
\begin{table}
\begin{tabular}{c|c|c|c|c} \hline Methods & \(N\) & (a) \(\mathbb{E}\{\mathcal{J}\}\) & (b) \(\mathsf{Pr}\{\alpha>0.15\}\) & (c) \(\mathbb{E}\{T_{c}\}\) (s) \\ \hline \multirow{8}{*}{**Scenario**} & \(100\) & 4763.1 & 0.023 & **0.048** \\ & \(200\) & 4928.1 & 0.0042 & **0.079** \\ & \(500\) & 5189.1 & 0.0012 & **0.128** \\ & \(800\) & 5281.7 & 0 & **0.176** \\ & \(1000\) & 5402.3 & 0 & **0.288** \\ & \(1500\) & 5562.9 & 0 & **0.378** \\ & \(2000\) & 5693.5 & 0 & **0.496** \\ \hline \multirow{8}{*}{**SAA**} & \(100\) & 4088.2 & 0.0394 & 9.76 \\ & \(200\) & 4176.9 & 0.0129 & 14.67 \\ & \(500\) & 4257.4 & 0.0029 & 23.62 \\ & \(800\) & 4319.8 & 0.0023 & 34.22 \\ & \(1000\) & 4352.6 & 0.0016 & 49.38 \\ & \(1500\) & 4398.3 & 0 & 68.92 \\ & \(2000\) & 4432.1 & 0 & 97.84 \\ \hline \multirow{8}{*}{**SCA**} & \(100\) & 4092.3 & 0.0379 & 0.063 \\ & \(200\) & 4198.7 & 0.0125 & 0.085 \\ & \(500\) & 4268.5 & 0.0018 & 0.142 \\ & \(800\) & 4342.9 & 0.0015 & 0.213 \\ & \(1000\) & 4381.1 & 0.0011 & 0.327 \\ & \(1500\) & 4429.2 & 0 & 0.479 \\ & \(2000\) & 4465.3 & 0 & 0.631 \\ \hline \multirow{8}{*}{**Proposed**} & \(100\) & **3785.1** & 0.0387 & 0.153 \\ & \(200\) & **3895.5** & 0.0128 & 0.221 \\ \cline{1-1} & \(500\) & **3979.1** & 0.0022 & 0.365 \\ \cline{1-1} & \(800\) & **4025.9** & 0.0016 & 0.578 \\ \cline{1-1} & \(1000\) & **4089.7** & 0.0012 & 0.867 \\ \cline{1-1} & \(1500\) & **4106.1** & 0 & 1.151 \\ \cline{1-1} & \(2000\) & **4120.8** & 0 & 1.720 \\ \hline \end{tabular}
\end{table}
Table 1: Statistical analysis of the performance. Results (mean) of 1000 trials are reported.
Figure 3: Solutions from different methods for the tolerable failure probability threshold \(\alpha=15\%\). Blue trajectories from Monte-Carlo (MC) simulations denote feasible trajectories that reach the goal set \(\mathcal{S}_{goal}\) and avoid obstacles \(\mathcal{O}\). Red trajectories mean that either constraint is violated in the simulation. The expression \(\mathsf{MC}=14.22\%\) means the probability of having red trajectories in the MC simulations. (a) **Scenario**; (b) **SAA**; (c) **SCA**; (d) **Proposed**.
control policies generated by **SAA** and **SCA** give almost the same trajectories, which reduce the cost \(\mathcal{J}\) much more than **Scenario** while still satisfying the required chance constraint. Since a mixed-integer program needs to be solved in **SAA**, the computation time \(T_{c}\) is much longer than for **SCA** and **Scenario**. When the probabilistic decision generated by **Proposed** is implemented, the cost \(\mathcal{J}\) is further improved with some sacrifice in computation time, while keeping the risk at the same level. This shows that stochastic policies can improve the expectation of the objective function.
A more comprehensive statistical analysis has been implemented by conducting 1000 Monte-Carlo trials with the simulation parameters fixed. In each trial, \(10,000\) Monte-Carlo (MC) simulations of the quadrotor system are implemented to calculate the violation probability and the expected cost. In this validation, the sample number of \(\alpha^{(i)}\) is set to 50 for **Proposed**. The results are summarized in Table 1 and plotted in Figure 4 for visualization.
The deterministic policy obtained by **Scenario** shows high robustness to uncertainties but also incurs a higher cost. Note that the deterministic decision obtained by **Scenario** corresponds to the completely robust decision, which cannot be improved upon if complete robustness is required. However, in our setting, we allow a risk probability of \(15\%\), and it is possible to obtain better performance using a probabilistic (measurable) policy. The measurable policy obtained by **Proposed** sacrifices some computation time to obtain a better mean objective value while satisfying the chance constraint. These observations are consistent with the example in Figure 3.
The computation time of the proposed method is still too high for implementation in implicit MPC for real-time applications. However, we could embed the proposed method in explicit MPC, in which the policy is calculated offline and the calculated policy is then applied online. Regarding the violation probability, even with 100 samples, the probability of violating the chance constraint is very low for all methods. Besides, Table 1 shows that the sample number of \(\xi\) is a key factor in the computation time. In future work, we will study how to improve computational efficiency by reducing the sample size. A sample-free method for solving chance-constrained optimization is also a possible direction.
In this paper, we use a trajectory planning and control problem to show that the proposed probabilistic decision method outperforms the existing deterministic decision methods. The method is effective not only for trajectory planning and control problems: other planning problems with uncertainties can also be addressed by the proposed probabilistic decision method, including optimal economic dispatch with renewable energy generation, such as photovoltaic and wind power. We leave these extensions of our method for future work.
## 7 Conclusions and Future Work
In a probabilistic decision-making problem under a chance constraint, decision variables are probabilistically chosen to optimize the expected objective value under the chance constraint. We formulate the probabilistic decision-making problem under chance constraints as chance-constrained probability measure optimization. We prove the existence of an optimal solution to the chance-constrained probability measure optimization. Furthermore, we show that there exists an optimal solution whose probability measure is concentrated on two decisions. Then, an equivalent reduced problem of the chance-constrained probability measure optimization is established. The sample-based smooth approximation method is extended to solve the reduced problem, and the analysis of uniform convergence and feasibility is given. A numerical example of a quadrotor system control problem has been implemented to validate the performance of the proposed probabilistic decision-making under chance constraints. Future work will focus on implementing the proposed method to design optimal feedback probabilistic control policies that satisfy the chance constraints.
## Acknowledgment
The authors would like to thank Professor Xiaoping Xue from the Department of Mathematics at Harbin Institute of Technology for helping to organize the proof of Theorem 1, and discussing the proof of Theorems 2 and 3.
|
2309.12269 | The Cambridge Law Corpus: A Dataset for Legal AI Research | We introduce the Cambridge Law Corpus (CLC), a dataset for legal AI research.
It consists of over 250 000 court cases from the UK. Most cases are from the
21st century, but the corpus includes cases as old as the 16th century. This
paper presents the first release of the corpus, containing the raw text and
meta-data. Together with the corpus, we provide annotations on case outcomes
for 638 cases, done by legal experts. Using our annotated data, we have trained
and evaluated case outcome extraction with GPT-3, GPT-4 and RoBERTa models to
provide benchmarks. We include an extensive legal and ethical discussion to
address the potentially sensitive nature of this material. As a consequence,
the corpus will only be released for research purposes under certain
restrictions. | Andreas Östling, Holli Sargeant, Huiyuan Xie, Ludwig Bull, Alexander Terenin, Leif Jonsson, Måns Magnusson, Felix Steffek | 2023-09-21T17:24:40Z | http://arxiv.org/abs/2309.12269v4 | # The Cambridge Law Corpus: A Dataset for Legal AI Research
###### Abstract
We introduce the Cambridge Law Corpus (CLC), a corpus for legal AI research. It consists of over 250 000 court cases from the UK. Most cases are from the 21st century, but the corpus includes cases as old as the 16th century. This paper presents the first release of the corpus, containing the raw text and meta-data. Together with the corpus, we provide annotations on case outcomes for 638 cases, done by legal experts. Using our annotated data, we have trained and evaluated case outcome extraction with GPT-3, GPT-4 and RoBERTa models to provide benchmarks. We include an extensive legal and ethical discussion to address the potentially sensitive nature of this material. As a consequence, the corpus will only be released for research purposes under certain restrictions.
## 1 Introduction
In recent years, transformer-based neural networks (Vaswani et al., 2017) have transformed the field of textual data analysis. These models have reached or surpassed human performance on many classical natural language processing tasks (Devlin et al., 2019; Liu et al., 2019). These recent developments have made it possible to train token prediction models at a larger scale than ever before, using extensive textual corpora gathered from sources such as social media, books, newspapers, web links from Reddit, and Wikipedia (Brown et al., 2020; Zhang et al., 2022). In late 2022, OpenAI released ChatGPT, a chatbot based on the GPT-3.5 language model, to the public. Just a few months later, OpenAI made the more powerful GPT-4 model available, both via ChatGPT and through an API (OpenAI, 2023). GPT-4 has shown promising results on a large variety of tests designed for humans, such as AP tests, the LSAT and the Uniform Bar Exam (Martinez, 2023).
_Legal artificial intelligence_ is emerging as a rapidly-developing area of machine learning (Zhong et al., 2020), which focuses on problems such as case outcome prediction (O'Sullivan and Beel, 2019; Chalkidis et al., 2019), legal entity recognition (Leitner et al., 2019; Dozier et al., 2010), prediction of relevant previous cases or relevant statutes (Liu et al., 2015), contract reviews (Hendrycks et al., 2021), and more recently, passing legal exams (Choi et al., 2023; Katz et al., 2023). Using machine learning models to solve such tasks automatically presents a new way for citizens and businesses to interact with many aspects of the legal system.
The release of sizeable, specialised datasets has been pivotal to machine learning research progress. Most famously, Deng et al. (2009) introduced the ImageNet dataset, containing over 1 million images spanning 1000 classes, crucial for the early development of deep convolutional neural networks (LeCun et al., 2015). Similarly, large datasets such as the Google Book corpus (Davis, 2011) have played an important role in the development of transformers and the popularisation of bread-and-butter techniques like byte-pair encoding (Gage, 1994). Just as critically, in parallel with these large-scale
general-interest resources, many other specialised datasets have proven fruitful for research (Rasmy et al., 2021; Deng et al., 2009; Deng, 2012; Lin et al., 2015). Therefore, high-quality datasets for legal artificial intelligence research are needed to facilitate research progress.
Legal language tends to be more specialised than other kinds of natural language. Legal terms and idioms, which are often difficult for a lay person to understand, have strong semantics and are often the keystone of legal reasoning (Nazarenko and Wyner, 2017; Dale, 2017). For example, the term _stare decisis_ is a legal "term of art" that has a specific legal meaning. The translation from Latin is "to stand by things decided": this term refers to the legal principle of _precedent_, that is, that courts must adhere to previous judgments of the same court (or judgments of higher courts) while resolving a case with comparable facts. We discuss this example and related points in Section 2.1.
Specialised large language models, such as recent legal versions of the BERT model (Chalkidis et al., 2020; Masala et al., 2021; Zheng et al., 2021), have been shown to perform better when fine-tuned on legal corpora. Suitable training data is needed in order to make the development of such models possible. At present, large research-quality legal corpora are not as widely available as analogous resources in other areas of machine learning. There is a lack of general large legal corpora focused on the United Kingdom, especially in machine-readable formats and with relevant annotations to facilitate the development of methods for solving key machine-learning tasks.
Legal data involves unique challenges, including the need to comply with regulations such as the GDPR legislation and to appropriately address privacy and other ethical matters. These challenges also include the need to digitise cases whose presentation is not in any manner standardised, such as those that are hundreds of years old but which constitute a legal precedent and whose content defines the law in use today. Handling details such as these creates a need for maintaining an appropriate level of dataset quality.
This work introduces a corpus containing, in its present form, 258,146 legal cases from the UK, available for research. We release the corpus and annotations of case outcomes for 638 cases.
#### Prior Work and Current State of Affairs
Corpora containing legal decisions in English-language jurisdictions other than the UK are becoming increasingly available, with examples such as MultiLegalPile (Niklaus et al., 2023) and LEXTREME (Chalkidis et al., 2023) providing court decisions in the United States, Canada, and Europe. Notably, LeXFiles (Chalkidis et al., 2023) contains legislation and case law from six countries--including the UK, which is our focus--albeit at a limited scale. Complementing case judgments, corpora such as CUAD (Hendrycks et al., 2021) provide expert-annotated legal contracts.
The availability of legal judgment data varies by jurisdiction and is influenced by (i) the degree to which legal judgments are anonymised, (ii) the degree to which they are standardised, and (iii) the degree of privacy protection offered by the jurisdiction. For example, Hwang et al. (2022) have recently released a corpus of Korean legal judgments. These judgments are anonymised when they are written, in accordance with the Korean legal system, making such a release more straightforward compared to other jurisdictions which do not anonymise. Other examples of court decision corpora include Xiao et al. (2018), who released a large corpus of Chinese criminal cases, and Poudyal et al. (2020), who published a corpus from the European Court of Human Rights.
Compared to these examples, the availability of UK data is more limited. In terms of the aforementioned factors, the UK (i) does not anonymise its legal decisions or (ii) present them in a standardised format, while simultaneously offering (iii) comparatively strong privacy protections. Recently, the UK's National Archives have started a service making case law available. At present, however, this service is limited to cases decided after 2003, prohibits bulk download and does not allow computational analysis under its applicable licence (The National Archives, 2023).
## 2 The Cambridge Law Corpus
The Cambridge Law Corpus (CLC) is a corpus containing legal cases, also called judgments or decisions, from the combined United Kingdom jurisdiction, and England and Wales. Courts from Scotland and Northern Ireland are currently excluded. Further details on content are available in Appendix A, and in this work's companion datasheet (Gebru et al., 2021). Before summarising the
corpus's content, we begin by reviewing some of the legal background needed to understand the material. Legal and ethical implications are addressed in Section 3.
### The United Kingdom's Legal System
The UK does not have one single unified legal system--instead, it has three distinct jurisdictions: England and Wales, Scotland and Northern Ireland. There are exceptions to this rule, where some courts or tribunals hear cases from across the three jurisdictions. The common law of England and Wales is one of the major global legal traditions. It has been adopted or adapted in numerous countries worldwide.
Each jurisdiction is a common law jurisdiction-- that is, authoritative law includes both legislation and court judgments. Legislation comprises Acts of Parliament (under the authority of the Crown) and regulations, that is, rules that govern how we must or must not behave and contain the consequences of our actions. Judges apply legislation and case law principles decided in previous cases to explain their decisions. Judges in different courts may decide cases with different principles across these three jurisdictions. Court judgments may be appealed to the highest court--the UK Supreme Court.
Court decisions can operate as an independent primary source of substantive law (case law), meaning that courts can legally justify their decisions by applying case law in the same way they apply legislation (Raz, 2009). Lawyers often call past cases _precedents_, and in many contexts, precedents at least influence the decisions of courts when relevantly similar disputes arise. In Anglo-American legal systems, this legal principle is known as _stare decisis_. Courts in the UK apply _stare decisis_ relatively strictly.
The authority of the court will depend on the judgment court's place in the court hierarchy (see Figure 1). Precedent may have binding authority if the judgment has been upheld by a higher court, or persuasive authority if the judgment is from a lower court (Cross and Harris, 1991; Lewis, 2021). For example, the Supreme Court (formerly the House of Lords) is the highest court in the United Kingdom, and its decisions bind all courts in the jurisdiction (Lewis, 2021). This makes common law jurisdictions different from civil law jurisdictions. In civil law jurisdictions, prior judicial decisions are not considered "law" (Merryman and Pérez-Perdomo, 2019).
A person or party may only be judged by the law and punished for a breach of the law (Dicey, 1915). Therefore, judges are not able to refuse to apply legislation; only Parliament can change the substantive rules in legislation. However, judges are able to interpret the appropriateness of law as it has evolved in light of the aims of previous lawmakers and may acknowledge that they make a policy-influenced decision.
Different courts deal with criminal and civil law (civil law cases here are to be distinguished from civil law jurisdictions). Civil cases consider remedies to a party who has an enforceable legal right arising from tortious claims, contract disputes, family matters, company articles and similar. Criminal cases concern the law of crime and punishment, where the Crown prosecutes an accused whom the police have investigated.
### Corpus Content
The corpus consists of 258 146 cases, spanning 53 courts, with cases as old as the 16th century, such as _Walsingham's Case_ (1573). Most cases are from the 20th and 21st centuries. Detailed figures on the development of the number of cases can be found in Appendix A. In total, these cases include approximately 5GB of legal text, consisting of roughly 0.8B tokens.
Figure 1: A simplified view of the UK court and tribunal structure (Judiciary UK, 2023).
In the Cambridge Law Corpus, each case is stored as a single XML file by court and year. We chose XML in order to be able to annotate both whole cases and parts of cases in a structured, easy-to-use fashion, and in order to support many character encodings, comments, user-defined tags and namespaces. Appendix B shows an example of what a stored file might look like.
In addition to the case files, we also store additional relevant information in separate tables, for example, the categorical outcome of an individual case. In this way, the XML files mainly store the textual content of the case, and the CSV files store additional features separately. This simplifies adding new information and features to the corpus. Further, the corpus comes with a Python library for quickly converting the XML files into formats commonly used in machine learning settings, such as the Hugging Face datasets class.
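As a sketch of what such a conversion might look like in practice (the XML tag names, the directory layout and the use of `datasets.Dataset.from_dict` below are illustrative assumptions, not the actual schema or API of the accompanying library):

```python
# Minimal sketch: read per-case XML files and build a Hugging Face Dataset.
# Tag names ("header", "body") and the directory layout are assumptions for
# illustration; the CLC's actual schema and helper library may differ.
from pathlib import Path
import xml.etree.ElementTree as ET

from datasets import Dataset

def iter_cases(root_dir):
    """Yield one record per case XML file found under root_dir."""
    for path in sorted(Path(root_dir).glob("**/*.xml")):
        case = ET.parse(path).getroot()
        yield {
            "file": path.name,
            "header": (case.findtext("header") or "").strip(),
            "body": (case.findtext("body") or "").strip(),
        }

records = list(iter_cases("clc_xml"))
dataset = Dataset.from_dict({key: [r[key] for r in records] for key in ["file", "header", "body"]})
print(dataset)
```

Outcome labels stored in the separate CSV tables could then be joined onto such a dataset by case identifier.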
The corpus contains both legal information and additional technical information. Each case contains a case header that contains general information on the case, such as the judge, the claimant and defendant, the date of the judgment, and similar. The case header also includes information on the court and the date of either the final hearing or judgment delivery.
In addition to the case header, the corpus also contains the case body, which contains the body text of the case. The case body contains complete sentences of legal text. Usually, it starts with a summary of the facts of the case, followed by arguments and a decision, although there is not necessarily any formal structure, and therefore the content of the body varies between cases and over time.
The case outcomes are stored separately from the cases and follow a hierarchical structure with aggregate outcomes, such as claimant wins or claimant loses, and more detailed case outcomes, such as damages awarded or permission to appeal granted. For details on definitions of case outcomes and a detailed list, see Appendix D.
Each case in the corpus is assigned a unique identifier (Cambridge Law Corpus identifier, CLC-ID). For this, we have created universally unique identifiers (UUID), which are identifiers that can guarantee uniqueness across space and time (Leach et al., 2005). In addition, a case's metadata can contain legal identifiers, which are used by the legal community to identify cases. These allow cases to be found in a manner mirroring their indexing in law reports. We also include neutral case citations, which were introduced in the United Kingdom in January 2001 to facilitate identifying a case in a manner independent from citations assigned by the commercial providers of the law reports. For example, the cases decided by the Civil Division of the England and Wales Court of Appeal are neutrally cited as follows: [year] EWCA Civ [number].
### Corpus Creation and Curation Process
The original cases of the Cambridge Law Corpus were supplied by the legal technology company _CourtCorrect_1 in raw form, including Microsoft Word and PDF files. The Word files were cleaned and transformed into an XML format. PDF files were converted to textual form via optical character recognition (OCR) using the _Tesseract_ engine (Kay, 2007, v4.1.1)--an estimate of the OCR error rate is given in Appendix A. The resulting text files were then converted to the XML standard format of the corpus. The original documents are stored separately for quality control purposes.
Footnote 1: [https://courtcorrect.com](https://courtcorrect.com).
Due to the size of the Cambridge Law Corpus, it is not feasible to annotate or curate the whole corpus manually. Instead, we use a process inspired by Voormann and Gut (2008), whose principles can be summarised as follows:
1. Replace sequential corpus creation phases with an encompassing, cyclic process model and employ a query-driven approach.
2. Recognise general error potentials and take measurements for immediate modifications in each part of the corpus creation process.
3. Minimal effort principle: slim annotation schema and little upfront annotation.
These principles have inspired our corpus creation and curation process in two ways. First, our process focused on improving the corpus in many small iterative steps. These include adding new annotations, new metadata, new cases and correcting identified errors, such as OCR errors. Second, we adopt a release model consisting of many small corpus releases, following the general ideas of
semantic versioning (Preston-Werner, 2013). We treat both the actual content of the corpus--the XML and CSV files--and the accompanying Python library as two complementary parts of the corpus's application programming interface (API).
These two principles have multiple benefits both for us and for the users of the corpus. First, rapid releases keep the corpus up to date in that new releases are made as soon as corrections or additions have been made to the corpus. This also encourages users to report errors in the data back to us in a structured way. Second, semantic versioning helps communicate the effect of the release to the users of the corpus and when to expect a change in the corpus API. Changes to the corpus are quality controlled through a random sample of the edits made.
### Case Outcome Annotations
To enable researchers to study case outcomes, we added manual annotations tagging the tokens which contain the outcomes for a subset of cases. A case's outcome describes which party or parties are (partly) successful with their application(s) and which parties are not (partly) successful. In addition, the outcome contains the legal consequence decided by the court or tribunal. For example, a party may be ordered to pay a certain sum of money to the other party, or a person may be sentenced to a certain number of months in prison. Legal research and legal practice are particularly interested in the outcome of court cases. For instance, before going to court, it is helpful for parties to understand the expected outcome. After courts have decided a case, parties, advisers and researchers may be interested in knowing whether the decision is correct.
While it is easy to identify the outcome in the text of some cases, other cases are more difficult to understand. Some judgments report the speeches of multiple judges and it may be difficult to identify the final decision of the collective. As there are no formal rules or conventions on the writing of court decisions, the words and sentences defining the outcome vary considerably. In some cases, readers may even struggle to understand which legal outcome the judge aimed to lay down.
We started the annotation process by annotating a stratified random sample (by court) of legal cases. The annotations were made by legal experts at the Faculty of Law of the University of Cambridge following annotation instructions that can be found in Appendix D. The annotation instructions distinguish between the general outcome (in particular, whether a party is successful in court) and the detailed case outcome (for example, what remedy one party owes the other party). Following the ideas of Voormann and Gut (2008), we continuously updated the schema and instructions during the annotation process as problems arose.
The annotation process was undertaken by four lawyers (all of whom have a law degree, and three of whom are qualified solicitors and have graduate degrees in law) and one further legal expert (a Professor at the University of Cambridge's Faculty of Law, who also has higher qualifications in law) overseeing the annotation process. The four annotators and the legal expert met after a certain number of cases had been annotated to discuss problems that had arisen in the meantime. The main challenge was to align the annotation practice of the four annotators and to deal with new kinds of detailed case outcomes arising from the cases. As the corpus contains cases from various courts and tribunals as well as various areas of law, very specific detailed outcomes needed to be integrated into the annotation guidelines to capture such variability. While there is some research into formalising court outcomes, there is currently no accepted standard taxonomy of court outcomes the annotations could have referred to. Against this background, another contribution of this research is the first set of standardised annotation guidelines for UK case outcomes.
## 3 Legal and Ethical Considerations
The legality and ethics of collecting, processing and releasing the corpus is of paramount importance. Therefore, we have considered relevant safeguards to ensure legal compliance and the ethical design of this project. We now discuss these matters in more detail.
UK legislation and court decisions are subject to Crown copyright and licensed for use under the Open Government Licence. The Open Government Licence grants a worldwide, royalty-free, perpetual and non-exclusive licence (The National Archives, 2014).
The Data Protection Act 2018 (_DPA_) complements the UK implementation of the European Union's General Data Protection Regulation (_GDPR_), here referred to as the UK GDPR. Compliance with the DPA and UK GDPR has been the basis of the legality of this project as affirmed by the ethics approval from the University of Cambridge.
There are limitations on when personal data can be collected, processed and stored. The personal data in this corpus was not collected directly from data subjects and is processed only for research purposes in the public interest. Both these circumstances offer exemptions from obligations in the UK GDPR (DPA sch 2, pt 6, paras 26, 27; UK Information Commissioner's Office, 2022).
Given the practically impossible and disproportionate task of informing all individuals mentioned in this corpus, and given that these cases are publicly available and are being processed exclusively for research purposes, this corpus is exempt from notification requirements (DPA sch 2, pt 6, paras 26, 27; GDPR art 14(5)). Further, research in the public interest is privileged as regards restrictions on secondary processing and on the processing of sensitive categories of data (GDPR art 6, rec 47, 157, 159). In particular, this aids the protection of the integrity of research datasets.
There are still important safeguards and considerations relevant to processing data of this sensitive nature. We apply safeguards in compliance with legal and ethical requirements and ensure that:
* Appropriate technical and organisational safeguards exist to protect personal data (DPA s 19; GDPR art 89).
* Processing will not result in measures being taken in respect of individuals and no automated decision-making takes place (DPA s 14; GDPR art 22).
* There is no likelihood of substantial damage or distress to individuals from the processing.
* Users who access the corpus must agree to comply with the DPA and the UK GDPR in addition to any local jurisdiction.
* Any individual may request the removal of a case or certain information which will be immediately removed (GDPR art 17; DPA s 47).
* The corpus will not pose any risks to people's rights, freedoms or legitimate interests (DPA s 19; GDPR art 89).
* Access to the corpus will be restricted to researchers based at a university or other research institution whose Faculty Dean (or equivalent authority) confirms, inter alia, that ethical clearance for the research envisaged has been received.
* Researchers using the corpus must agree to not undertake research that identifies natural persons, legal persons or similar entities. They must also guarantee they will remove any data if requested to do so.
We have considered the potential impact of processing such data. Given the public availability of all cases in the dataset in other repositories, the principle of open justice, the prohibition of research identifying individuals, the requirement of ethical clearance and our mechanisms for the erasure of data, we believe there is unlikely to be any substantial damage to individuals.
In the UK, court cases are not anonymised. Parties should, therefore, expect to be named in judgments because courts uphold the principle of open justice, promote the rule of law and ensure public confidence in the legal system (Judiciary UK, 2022). However, the court will anonymise a party if non-disclosure is necessary to secure the proper administration of justice and to protect the interests of that party (CPR 39.2(4)). This decision involves weighing the interests and rights of a party against the need for open justice. For example, individual names in asylum or other international protection claims are often anonymised by the court (Judiciary UK, 2022). Also, legislation sometimes prohibits naming certain individuals, such as victims of a sexual offence or trafficking (SOAA s 1, 2), or children subject to family law proceedings (CA s 97(2)).
While considering the ethics of this project, we examined the approaches of other jurisdictions to evaluate any relevant ethical and comparative legal considerations. Drawing from a jurisdiction with the opposite approach to the UK, in 2019, France amended its Justice Reform Act to prohibit the publication of judge analytics (Law 2019-22). The French restriction is based on express rules and a legal culture that varies considerably from the English common law system. France is a civil law system with different rules on how judges engage in cases, the binding nature of decisions and anonymisation (Cornu, 2014). For example, in the French legal system, individuals' names are
usually anonymised. The French Justice Reform Act extends this restriction such that the "identity data of judges and court clerks may not be reused for the purpose or effect of evaluating, analysing, comparing or predicting their real or supposed professional practices" (Tashea, 2019). We have not identified any similar rules in the UK.
At present, the UK GDPR protects against automated decision-making that produces a legal or substantive effect on any individual. Researchers have to agree to comply with these and all GDPR rules to access the corpus, as well as any relevant obligations in their local jurisdiction. Taking a cautious approach, access to our corpus requires the researchers' promise not to use it for research that identifies natural persons, legal persons and similar entities.
We have considered the importance of using this corpus in a way that is in the public interest. Ethics approval has helped shape our safeguards and consider relevant risks. The emerging field of AI ethics evidences a growing interest in ensuring fair and just outcomes, in addition to legal compliance, as more personal data is used in algorithmic processes. Based on our legal and ethical evaluation of the project, it does not raise material risks to individuals' rights, freedoms or interests. Our safeguards also act to protect against any risks in further use of the corpus. The purpose, design and method of the corpus avoids such risks and prioritises positive outcomes that may be achieved through improving access to and research of case law. We will evaluate the requirements for access to the dataset on an ongoing basis to potentially further widen access, in particular, if the national and international legal framework becomes more permissive.
## 4 Experiments
As part of the curation of the Cambridge Law Corpus, building upon the corpus creation process and the additional annotations, we have undertaken a thorough exploration of two research tasks using English judgments, namely case outcome extraction and topic model analysis. These tasks offer a comprehensive range of possible use cases for leveraging the CLC in legal analysis.
### Case Outcome Extraction
Court rulings can be complex and exceedingly lengthy, making them less accessible to non-legal professionals. To explore how the corpus can potentially improve such accessibility, we formulated a text classification task to identify text segments within a court decision that explicitly indicate the judges' decision regarding the final case outcome. This is done in two stages: we first classify whether or not each token in the case concerns the outcome, and then, given the result of this classification, we extract what the actual outcome is. A full, detailed description of this task is given in Appendix C.
Quality annotations on case outcomes are essential for this task in order to produce and evaluate output that is useful for legal analysis. We further provide manually annotated labels regarding case outcomes, including general case outcomes and detailed outcomes. The fine-grained annotations enable the extraction of tokens that enclose case outcomes from variable lengths of legal ruling texts. An example of the case outcome annotations is illustrated in Appendix D.
We sampled a total number of 638 cases (173,654 sentences) within the CLC to be manually annotated by legal experts, which are split into train/validation/test sets consisting of 388/100/150 cases, respectively. The number of cases and segments for each data split is summarised in Table 5.
| **Model** | **Acc** | **F1** | **WER** | **BERTScore** | **BLEU** | **ROUGE\({}_{L}\)** |
| --- | --- | --- | --- | --- | --- | --- |
| End-to-end RoBERTa | 0.997 | 0.185 | 0.536 | 0.207 | 0.010 | 0.185 |
| RoBERTa pipeline | 0.739 | 0.012 | 0.776 | 0.030 | 0.015 | 0.028 |
| GPT-3.5 | - | - | 4.281 | 0.788 | 0.217 | 0.300 |
| GPT-4 | - | - | 3.396 | 0.840 | 0.282 | 0.517 |

Table 1: Evaluation results for the case outcome extraction task. The RoBERTa-based models are fine-tuned on the CLC annotations, while the GPT-based models are zero-shot, that is, they are evaluated without fine-tuning. We calculate accuracy and F1 scores for the RoBERTa-based models, and word error rate (WER), BERTScore, BLEU and ROUGE\({}_{L}\) for all baseline models.
To extract case outcomes, we explored a number of approaches: (i) end-to-end RoBERTa, which uses the RoBERTa model to directly predict tokens containing case outcomes, (ii) a two-step RoBERTa pipeline, where we first extract the true sentences containing case outcome, then apply token-level RoBERTa classification on only those sentences, (iii) zero-shot GPT-3.5-turbo, which asks OpenAI's GPT-3.5-turbo language model to extract the exact text containing the case outcome from its full text, and (iv) zero-shot GPT-4, similar to (iii) except that we use the larger, more-powerful GPT-4 model. Note that the RoBERTa models are fine-tuned using our annotations, whereas the GPT models are not. Further details on our setup can be found in Appendix F.
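As an illustration of the token-level classification interface underlying approaches (i) and (ii), a minimal sketch using the Hugging Face `transformers` library is given below. The checkpoint, the two-label scheme and the example sentence are illustrative assumptions rather than the configuration actually used (which is described in Appendix F), and the model would of course need to be fine-tuned on the CLC annotations before its predictions are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical two-label scheme: label 1 = token lies inside the case-outcome span.
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForTokenClassification.from_pretrained("roberta-base", num_labels=2)

sentence = "For the reasons given above, I would dismiss these appeals."
encoding = tokenizer(sentence, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**encoding).logits              # shape: (1, sequence_length, 2)

labels = logits.argmax(dim=-1)[0]                  # per-token predictions
outcome_tokens = [
    tokenizer.convert_ids_to_tokens(int(token_id))
    for token_id, label in zip(encoding["input_ids"][0], labels)
    if label == 1
]
print(outcome_tokens)
```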
Table 1 presents the evaluation results of the four baselines. Both RoBERTa-based models obtain relatively high accuracies, but comparatively low F1 scores, with end-to-end RoBERTa favouring a tradeoff more in the direction of higher accuracy. Both GPT models have a word error rate higher than \(3\), indicating that they produce too much text. Based on annotator feedback, this can be preferable, since it can be easier to work with output which contains the correct case outcome together with irrelevant sentences, compared to output that does not capture the outcome in enough detail.
Since, in real-world court ruling statements, the case outcome is described with very few words in relation to the total number of words in a case, the case outcome extraction task involves a large class imbalance. Moreover, the variability in how judgments are presented is very high: an ablation study, which uses different train-test splits to understand some of the variability, is given in Appendix C. Given these properties, models can simply learn to classify every sentence as not containing case outcome information, thus resulting in a very high accuracy but a very low F1 score. This may explain why, compared to the GPT models, the RoBERTa-based models exhibit lower scores in metrics such as BERTScore, BLEU, and ROUGE\({}_{L}\). This imbalance is an inherent challenge in the case outcome extraction task.
GPT-based models do not directly perform classification for individual tokens, but instead, given a prompt, produce a text in natural language. We constructed a prompt for case outcome extraction as follows. First, we set the _system message_ so that it asks the model to be a legal research assistant. Then, in the main prompt, we asked the model to carefully read and then quote back exact text from the case text given, with certain extra formatting details. Full details on prompts can be found in Appendix F.
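A zero-shot call of this kind could, for example, be issued through the OpenAI chat completion API roughly as sketched below. The system message and instruction are assumed paraphrases of the prompt described above, not the exact wording used in the paper (see Appendix F), and the snippet uses the pre-1.0 interface of the `openai` Python package.

```python
import openai

case_text = "..."  # placeholder: full text of a judgment from the CLC

response = openai.ChatCompletion.create(
    model="gpt-4",
    temperature=0,
    messages=[
        # Assumed paraphrase of the system message described above.
        {"role": "system", "content": "You are a legal research assistant."},
        {"role": "user", "content": (
            "Read the following judgment carefully and quote back, verbatim, "
            "the exact sentences that state the final case outcome.\n\n" + case_text
        )},
    ],
)
print(response["choices"][0]["message"]["content"])
```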
To evaluate the natural-language output produced by the GPT-based models, we use the word error rate (Morris et al., 2004) evaluation metric to measure how close the generated text is to the text segmented by annotators. Examples of the models' outputs and our gold standard annotations are demonstrated in Table 2, showing strong performance, especially for the larger GPT-4 model. Our results therefore suggest that GPT-based models can produce output which mirrors human annotators.
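Word error rate is the usual word-level edit distance divided by the reference length; a small self-contained sketch (not the evaluation code used in the paper) makes explicit why values above 1, as for the GPT models in Table 1, simply indicate that the generated text is much longer than the annotated span.

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j]: edit distance between the first i reference and first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          substitution)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("I would dismiss these appeals.",
                      "Lord Justice: I also agree."))
```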
| **GPT-3.5-turbo** | **GPT-4** | **Annotator Reference** |
| --- | --- | --- |
| Lord Justice: I also agree. | _I would dismiss these appeals._ | I would dismiss these appeals. |
| _The Claimant did comply with the service requirements of the Share Purchase Agreement._ | _The Claimant did comply with the service requirements of the Share Purchase Agreement._ | The Claimant did comply with the service requirements of the Share Purchase Agreement. |
| _For the reasons I have given on each of the questions charterers are unsuccessful in their appeal._ | _For the reasons I have given on each of the questions charterers are unsuccessful in their appeal._ | For the reasons I have given on each of the questions charterers are unsuccessful in their appeal. |

Table 2: Examples of outputs from GPT-3.5-turbo and GPT-4 and annotated references. Text highlighted in _green italics_ denotes model output content that correctly corresponds to annotations.
These examples illustrate that the CLC can be a useful resource for benchmarking, fine-tuning and potentially pre-training large language models for legal tasks.
### Topic Model Analysis
The areas of law courts deal with change over time, reflecting social, technological, economic and other societal changes. Understanding the areas of law courts are dealing with can enable one to understand, for instance, where conflicts in society arise and what resources and competencies are needed in the administration of justice. Practically, understanding the areas and numbers of cases going to court supports court managers in creating the necessary infrastructure. At the same time, a better understanding of the types of cases decided by the judges helps to identify areas where legal remedies offered in the court system are not effective. This would be the case, for example, if legal conflicts relevant to a large number of citizens and businesses are not reflected in court cases.
To illustrate a possible use case for how the CLC can be used to follow and analyse changes in topics of law over time, we ran a latent Dirichlet allocation (Blei et al., 2003) topic model using the parallel partially-collapsed Gibbs sampling algorithm (Magnusson et al., 2017; Terenin et al., 2019). We trained for 1200 iterations with hyper-parameters \(\alpha=0.1\), \(\beta=0.1\), \(K=100\) topics, a rare word threshold of \(20\) occurrences, and the default stop word list provided by Mallet (McCallum, 2002). The model reached a log-likelihood of \(-3.0832\times 10^{9}\) after \(1200\) iterations, at which point we obtained a sample of topic indicators from its output.
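For readers who wish to reproduce a comparable analysis without Mallet, a rough sketch using scikit-learn is shown below. Note the differences from the setup above: scikit-learn fits LDA by variational inference rather than the parallel partially-collapsed Gibbs sampler, and `min_df=20` filters by document frequency rather than by total occurrence count, so the snippet is only an approximate analogue; the `load_clc_texts` helper is a placeholder.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = load_clc_texts()          # placeholder: one string per judgment

# Approximate analogue of the rare-word threshold and stop-word filtering above.
vectorizer = CountVectorizer(min_df=20, stop_words="english")
counts = vectorizer.fit_transform(documents)

lda = LatentDirichletAllocation(
    n_components=100,                 # K = 100 topics
    doc_topic_prior=0.1,              # alpha
    topic_word_prior=0.1,             # beta
    max_iter=50,
    random_state=0,
)
doc_topics = lda.fit_transform(counts)    # per-document topic proportions

# Top words per topic, as a starting point for manual labelling by legal scholars.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_[:5]):
    print(k, [vocab[i] for i in weights.argsort()[-10:][::-1]])
```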
To analyse the topics over time, we aggregated the topic indicators produced on a per-year basis, which were then labelled by legal scholars of the Faculty of Law at the University of Cambridge. Figure 2 shows the development of four areas of law distinguishing three time periods: 1573-2020, 1950-2020, and 2000-2020. The top word tokens for these topics can be found in Appendix E.
From these results, we see that the _contract_ topic is generally relevant at all times. This is expected, as a contract is a fundamental legal mechanism which allows natural and legal persons to privately order their cooperation. Interestingly, the relevance of contractual disputes in the courts drops somewhat from around 1990. This may reflect that businesses and consumers increasingly turn to other mechanisms than court litigation, such as alternative dispute resolution and in-house solutions to settle contractual conflicts.
The _financial services_ topic, on the other hand, was hardly present in the courts until the 1990s. Its growing relevance in the courts may be caused by both increased public financial investment and the introduction of legal rules protecting consumers in financial services. The more recent drop in cases in the courts may also be a result of the rise of alternative dispute resolution, in particular, the dispute resolution services offered by the Financial Ombudsman Service.
One can similarly map rises and falls in the _immigration_ and _employment_ topics to historical events in the respective time periods. In total, this short analysis shows the potential of the CLC's use not only for legal AI, but also for legal research using computational methods.
Figure 2: Proportion of words in documents belonging to the listed topics. A word can belong to more than one topic. Left: Aggregated to a one-year period spanning 1950-2020. Centre: Aggregated to a one-year period spanning 2000-2020. Right: Aggregated to a ten-year period spanning 1573-2020.

As a final note, the cases available within the CLC are those cases that are available from the courts. In the UK, not all judgments are consistently published or otherwise made available. Courts have some discretion as to what to publish, may decide whether or not to publish in a manner that is not fully transparent and may change their practice over time. This introduces a selection bias into the corpus: we provide additional details and discussion on this in Appendix A. From this viewpoint, the topic model analysis presented exemplifies a general way to determine what cases are available within the corpus.
## 5 Conclusion
We present the Cambridge Law Corpus, a collection of over 250 000 cases. We provide the corpus in an easy-to-work-with format, collected and converted from multiple sources, and we release it using semantic versioning to simplify its continuous improvement. Due to the sensitive nature of the material, great care has been and will be taken in the ethical management and distribution of the corpus. We also provide a discussion on how to treat the data legally and ethically. Finally, we give two examples of how to use the corpus for research in both legal AI and legal scholarship, namely case outcome extraction and topic model analysis, and we provide a first benchmark on case outcome extraction.
## Acknowledgements
The work on the corpus is part of the UK Economic and Social Research Council (ESRC) and JST (Japan Science and Technology Agency) funded project on _Legal Systems and Artificial Intelligence_. The support of the ESRC and JST is gratefully acknowledged. We are grateful to Narine Lalafaryan, Joana Ribeiro de Faria and Lucy Thomas for excellent research assistance. We thank the participants of the Faculty of Law Research Seminar at the University of Cambridge for their helpful comments.
## General References
* Blei et al. (2003) D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet Allocation. _Journal of Machine Learning Research_, 2003. Cited on page 1.
* Brown et al. (2020) T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language Models are Few-shot Learners. In _Advances in Neural Information Processing Systems_, 2020. Cited on page 1.
* Byrom (2023) NIBRS Deepening Unequal Access to Legal Information. Financial Times, July 2023. URL: [https://www.FT.COM/content/2aba82C0-A24b-485F-82D9-eED72D2B1011](https://www.FT.COM/content/2aba82C0-A24b-485F-82D9-eED72D2B1011). Cited on page 1.
* Chalkidis et al. (2020) I. Chalkidis, I. Androutsopoulos, and N. Aletras. Neural Legal Judgment Prediction in English. In _Association for Computational Linguistics_, 2019. Cited on page 1.
* I. Chalkidis, M. Fergadiotis, P. Malakasiotiis, N. Aletras, and I. Androutsopoulos. LEGAL-BERT: The Muppets Straight out of Law School. In _Conference on Empirical Methods in Natural Language Processing_, 2020. Cited on page 1.
* I. Chalkidis, N. Garneau, C. Goanta, D. M. Katz, and A. Sogaard. LeXFiles and LegalLAMA: Facilitating English Multinational Legal Language Model Development. In _Association for Computational Linguistics_, 2023. Cited on page 1.
* J. Choi, K. Hickman, A. Monahan, and D. Schwarcz. ChatGPT Goes to Law School. _Journal of Legal Education (Forthcoming)_, 2023. Cited on page 1.
* G. Cornu. _Vocabulaire juridique_. Presses universitaires de France, 2014. Cited on page 1.
* R. Cross and J. W. Harris. _Precedent in English Law_. Clarendon Press, 1991. Cited on page 1.
* K. Dale. Legal Corpus Linguistics: Gambling to Gaming Language Powers and Probabilities Notes. _UNLV Gaming Law Journal_, 8(2):233-252, 2017. Cited on page 1.
M. Davis. Google Books (American English) Corpus, 2011. URL: [https://varrieng.helsinki.fi/CoRD/corpora/GoogleBooks/](https://varrieng.helsinki.fi/CoRD/corpora/GoogleBooks/). Cited on page 1.
* J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei. Imagenet: A Large-scale Hierarchical Image Database. In Computer Vision and Pattern Recognition, Cited on pages 1, 2.
* L. Deng (2012)The MNIST Database of Handwritten Digit Images for Machine Learning Research. IEEE Signal Processing Magazine. Cited on page 2.
* J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, 2019. Cited on pages 1, 19.
* A. V. Dicey (1915)Introduction to the study of the law of the constitution. Macmillan. Cited on page 3.
* C. Dozier, R. Kondadadi, M. Light, A. Vachher, S. Veeramachaneni, and R. Wudali (2010)Named Entity Recognition and Resolution in Legal Text. In Semantic Processing of Legal Texts, Cited on page 1.
* P. Gage (1994)A New Algorithm for Data Compression. C Users Journal. Cited on page 1.
* T. Gebru, J. Morgenstern, B. Vecchione, J. W. Vaughan, H. Wallach, H. D. Iii, and K. Crawford (2021)Datasheets for Datasets. Communications of the ACM. Cited on page 2.
* D. Hendrycks, C. Burns, A. Chen, and S. Ball (2021)CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review. In Advances in Neural Information Processing Systems, Cited on pages 1.
* W. Hwang, D. Lee, K. Cho, H. Lee, and M. Seo (2022)A Multi-task Benchmark for Korean Legal Language Understanding and Judgement Prediction. Advances in Neural Information Processing Systems. Cited on page 2.
* Anonymisation of Parties to Asylum and Immigration Cases in the Court of Appeal. Cited on page 3.
* Judiciary UK (2023)Structure of Courts and Tribunals System. Cited on page 3.
* D. M. Katz, M. J. Bommarito, S. Gao, and P. Arredondo (2023)GPT-4 Passes the Bar Exam. SSRN Preprint. Cited on page 1.
* A. Kay (2007)Tesseract: An Open-Source Optical Character Recognition Engine. Linux Journal. Cited on page 4.
* P. J. Leach, R. Salz, and M. H. Mealling (2005)A Universally Unique IDentifier (UUID) URN Namespace. External Links: [https://www.ietf.org/RFC/RFC4122.txt](https://www.ietf.org/RFC/RFC4122.txt) Cited on page 4.
* Y. LeCun, Y. Bengio, and G. Hinton (2015)Deep Learning. Nature. Cited on page 1.
* E. Leitner, G. Rehm, and J. Moreno-Schneider (2019)Fine-Grained Named Entity Recognition in Legal Documents. In International Conference on Semantic Systems, Cited on page 1.
* S. Lewis (2021)Precedent and the Rule of Law. Oxford Journal of Legal Studies41 (4), pp. 873-898. Cited on page 3.
* T. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollar (2015)Microsoft COCO: Common Objects in Context. In European Conference on Computer Vision, Cited on page 2.
* Y. Liu, Y. Chen, and W. Ho (2015)Predicting Associated Statutes for Legal Problems. Information Processing & Management. Cited on page 1.
* Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov (2019)RoBERTa: A Robustly Optimized BERT Pretraining Approach. Cited on page 1.
M. Magnusson, L. Jonsson, M. Villani, and D. Broman. Sparse Partially Collapsed MCMC for Parallel Inference in Topic Models. _Journal of Computational and Graphical Statistics_, 2017. Cited on page 9.
* E. Martinez (2023)Re-Evaluating GPT-4's Bar Exam Performance. _SSRN Preprint_, 2023. Cited on page 1.
* M. Masala, R. C. A. Iacob, A. S. Uban, M. Cidota, H. Veliciu, T. Rebedea, and M. Popescu. jurBERT: A Romanian BERT Model for Legal Judgement Prediction. In _Natural Legal Language Processing Workshop_, 2021. Cited on page 2.
* A. K. McCallum (2002)MALLET: A Machine Learning for Language Toolkit. External Links: [https://mimno.github.io/Mallet/](https://mimno.github.io/Mallet/). Cited on page 9.
* J. H. Merryman and R. Perez-Perdomo (2019)The Civil Law Tradition: An Introduction to the Legal Systems of Europe and Latin America. Stanford University Press. Cited on page 3.
* A. Morris, V. Maier, and P. Green (2004)From WER and RIL to MER and WIL: Improved Evaluation Measures for Connected Speech Recognition. In _International Conference on Spoken Language Processing_, 2004. Cited on page 8.
* A. Nazarenko and A. Wyner (2017)Legal NLP Introduction. _Traitement Automatique des Langues_, 58(2):7-19, 2017. Cited on page 2.
* J. Niklaus, V. Matoshi, M. Sturmer, I. Chalkidis, and D. E. Ho (2023)MultiLegalPile: A 689GB Multilingual Legal Corpus. In _ICML Workshop on Data-centric Machine Learning Research_, 2023. Cited on page 2.
* C. O'Sullivan and J. Beel (2019)Predicting the Outcome of Judicial Decisions Made by the European Court of Human Rights. In _Irish Conference on Artificial Intelligence and Cognitive Science_, 2019. Cited on page 1.
* OpenAI (2023)GPT-4 Technical Report. Cited on page 1.
* P. Poudyal, J. Savelka, A. Ieven, M. F. Moens, T. Goncalves, and P. Quaresma (2020)ECHR: Legal Corpus for Argument Mining. In _Workshop on Argument Mining_, 2020. Cited on page 2.
* T. Preston-Werner (2013)Semantic Versioning. External Links: [http://semver.org/](http://semver.org/) Cited on page 5.
* C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. Liu (2019)Exploring the Limits of Transfer Learning with a Unified Text-to-text Transformer. External Links: 1910.10683 Cited on page 1.
* L. Rasmy, Y. Xiang, Z. Xie, C. Tao, and D. Zhi (2021)Med-BERT: Pretrained Contextualized Embeddings on Large-scale Structured Electronic Health Records for Disease Prediction. _npj Digital Medicine_, 2021. Cited on page 2.
* J. Raz (2009)The Authority of Law: Essays on Law and Morality. Oxford University Press. Cited on page 3.
* C. Somers-Joee, D. Hoadley, and E. Nemsic (2022)How Public is Public Law? The Current State of Open Access to Administrative Court Judgments. _Judicial Review_, 27(2):95-98, 2022. Cited on page 1.
* J. Tashea (2019)France Bans Publishing of Judicial Analytics and Prompts Criminal Penalty. American Bar Association Journal. Cited on page 7.
* A. Terenin, M. Magnusson, L. Jonsson, and D. Draper (2019)Pólya Urn Latent Dirichlet Allocation: A Doubly Sparse Massively Parallel Sampler. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2019. Cited on page 9.
* The National Archives (2023). External Links: [https://caselaw.nationalarchives.gov.uk/](https://caselaw.nationalarchives.gov.uk/) Cited on page 2.
* The National Archives (2014). Open Government Licence. External Links: [https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/](https://www.nationalarchives.gov.uk/doc/open-government-licence/version/3/). Cited on page 5.
* UK Information Commissioner's Office (2022)UK Information Commissioner's Office. Guide to Data Protection: Research Provisions. Cited on page 6.
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is All You Need. In _Advances in Neural Information Processing Systems_, 2017. Cited on page 1.
* H. Voormann and U. Gut (2008)Agile Corpus Creation. _Corpus Linguistics and Linguistic Theory_, 2008. Cited on pages 4, 5.
* C. Xiao, H. Zhong, Z. Guo, C. Tu, Z. Liu, M. Sun, Y. Feng, X. Han, Z. Hu, H. Wang, and J. Xu (2018)CAIL2018: A Large-scale Legal Dataset for Judgment Prediction. _arXiv Preprint 1807.02478v1_, 2018. Cited on page 2.
* S. Zhang, S. Roller, N. Goyal, M. Artetxe, M. Chen, S. Chen, C. Dewan, M. Diab, X. Li, X. V. Lin, T. Mihaylov, M. Ott, S. Shleifer, K. Shuster, D. Simig, P. S. Koura, A. Sridhar, T. Wang, and L. Zettlemoyer (2022)OPT: Open Pre-trained Transformer Language Models. Technical report, Meta AI, 2022. Cited on page 1.
* L. Zheng, N. Guha, B. R. Anderson, P. Henderson, and D. E. Ho (2021)When Does Pretraining Help? Assessing Self-supervised Learning for Law and the Casehold Dataset of 53,000+ Legal Holdings. In International Conference on Artificial Intelligence and Law, Cited on page 2.
* H. Zhong, C. Xiao, C. Tu, T. Zhang, Z. Liu, and M. Sun (2020)How Does NLP Benefit Legal System: A Summary of Legal Artificial Intelligence. In Association for Computational Linguistics, Cited on page 1.
|
2309.05715 | Cubic* criticality emerging from a quantum loop model on triangular
lattice | Quantum loop and dimer models are archetypal examples of correlated systems
with local constraints. Obtaining generic solutions for these models is
difficult due to the lack of controlled methods to solve them in the
thermodynamic limit. Nevertheless, these solutions are of immediate relevance
to both statistical and quantum field theories, as well as the rapidly growing
experiments in Rydberg atom arrays and quantum moir\'e materials, where the
interplay between correlation and local constraints gives rise to a plethora of
novel phenomena. In a recent work [X. Ran, Z. Yan, Y.-C. Wang, et al,
arXiv:2205.04472 (2022)], it was found through sweeping cluster quantum Monte
Carlo (QMC) simulations and field theory analysis that the triangular lattice
quantum loop model (QLM) hosts a rich ground state phase diagram with lattice
nematic, vison plaquette (VP) crystals, and the $\mathbb{Z}_2$ quantum spin
liquid (QSL) close to the Rokhsar-Kivelson point. Here, we focus on the
continuous quantum critical point separating the VP and QSL phases and
demonstrate via both static and dynamic probes in QMC simulations that this
transition is of the (2+1)D cubic* universality. In this transition, the
fractionalized visons in QSL condense to give rise to the crystalline VP phase,
while leaving their trace in the anomalously large anomalous dimension exponent
and pronounced continua in the dimer and vison spectra compared with those at
the conventional cubic or O(3) quantum critical points. | Xiaoxue Ran, Zheng Yan, Yan-Cheng Wang, Junchen Rong, Yang Qi, Zi Yang Meng | 2023-09-11T18:00:05Z | http://arxiv.org/abs/2309.05715v3 | # Cubic* criticality emerging from quantum loop model on triangular lattice
###### Abstract
Quantum loop and dimer models are archetypal examples of correlated systems with local constraints. Their generic solutions for different lattice geometries and parameter regimes are difficult to obtain due to the lack of controlled methods to solve them in the thermodynamic limit, yet such solutions are of immediate relevance to both statistical and quantum field theories and to the fast-growing experiments in Rydberg atom arrays and quantum moire materials, where the interplay between correlation and local constraints gives rise to a plethora of novel phenomena. In a recent work [1], it was found via sweeping cluster quantum Monte Carlo (QMC) simulations and field theory analysis that the triangular lattice quantum loop model (QLM) hosts a rich ground state phase diagram with lattice nematic (LN), vison plaquette (VP) crystals and the \(\mathbb{Z}_{2}\) quantum spin liquid (QSL) close to the Rokhsar-Kivelson (RK) point. Here, we focus on the continuous quantum critical point separating the VP and QSL phases, and demonstrate via both static and dynamic probes in QMC simulations that this transition is of the (2+1)d Cubic* universality, in which the fractionalized visons in the QSL condense to give rise to the crystalline VP phase, while leaving their trace in the anomalously large anomalous dimension exponent and pronounced continua in the dimer and vison spectra compared with those at the conventional Cubic or O(3) quantum critical points. An experimental proposal for the detection of such phenomena is discussed.
_Introduction.--_ In our recent work [1], the ground state phase diagram of the quantum loop model (QLM) on the triangular lattice [2; 3; 4; 5] was mapped out with the sweeping cluster quantum Monte Carlo (QMC) algorithm [6; 7; 8; 9; 10; 11] (shown in Fig. 1). The physics questions revealed therein are profound and manifold, such as the nature of the hidden vison plaquette (VP) phase - invisible from dimer correlations - sandwiched between the lattice nematic order and the \(\mathbb{Z}_{2}\) quantum spin liquid (QSL) close to the RK point [12; 13; 14], and the structure of the phase diagram when extended to finite temperature - expected to be richer compared with its square lattice loop or dimer model cousins [15; 16; 17]. But perhaps the most interesting question, related to the quantum critical aspect of the model, concerns the (2+1)d Cubic* transition from the VP phase to the \(\mathbb{Z}_{2}\) QSL, where the VP order parameter, emerging from the underlying resonance of the dimer pairs, fractionalizes into vison order parameters, which are the O(3)/Cubic CFT primary field operators; the condensation of such fractionalized excitations, in return, gives rise to a strong enhancement of the scaling dimension of the VP order parameter at the transition, in an unconventional manner [18; 19; 20; 21; 22; 23].
It is therefore our motivation in this work to elucidate the precise nature of this intriguing unconventional Cubic* quantum critical point (QCP), with both static and dynamic probes, and to achieve its full understanding by combining field-theoretical interpretation with state-of-the-art QMC simulations. Here we show, via large-scale sweeping cluster QMC, that the Cubic* transition acquires an anomalously large anomalous dimension if one views the critical scaling through the \(t\)-term correlation functions (the resonance term in the QLM Hamiltonian, explained below), as they are composite objects of the fractionalized visons and correspond to the rank-2 tensor (or tensorial magnetization) of the (2+1)d O(3)/Cubic universality with the large anomalous dimension \(\eta_{T}\approx 1.42\)[24; 25; 26; 27; 28; 29; 30]. On the other hand, if one measures the correlation of the vison operator, the observed anomalous dimension is consistent with the conventional small value \(\eta\approx 0.04\) of the (2+1)d O(3)/Cubic universality [24; 25; 26; 27; 28; 29; 30]. Such a sharp contrast clearly reveals the unconventional nature of the Cubic* transition separating the unconventional VP phase, hidden from dimer measurements, and the \(\mathbb{Z}_{2}\) QSL, in which the visons are the anyonic particles of the underlying topological order.
Besides these purely theoretical motivations, the QLM studied here has been widely treated as the low-energy effective model for many frustrated magnets and blockaded cold-atom arrays [2; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54] in condensed matter and cold atom experiments. In Rydberg arrays, the static characteristics can be obtained easily via the snapshot technique [47; 55; 56; 57], while the dynamic information can be measured through real-time evolution [58; 59; 60; 61]. Similar static and dynamic information for quantum magnets can be detected by neutron scattering or nuclear magnetic resonance experiments [33; 37; 38; 62; 63; 33], and our computational scheme of QMC + stochastic analytic continuation (SAC) [64; 65; 66; 67] for frustrated spin, QDM and QLM models has provided consistent static and dynamic information that has been used to explain experiments [9; 10; 11; 22; 23; 36; 37; 68; 69; 70]. Based on these previous experiences, in this work our QMC static correlations reveal the different scaling dimensions at the Cubic* QCP corresponding to the different constituent operators in the CFT data for \(\eta_{T}\) and \(\eta\), while our QMC+SAC dynamic measurements exhibit the continua of the dimer and vison spectra as the dynamic signature of the \(\mathbb{Z}_{2}\) topological order and its associated vison condensation in the vicinity of the Cubic* QCP, in an experimentally detectable manner.
_Model and Methods.--_ The Hamiltonian of the QLM on a triangular lattice is defined as
\[H=-t\sum_{\alpha}\left(\left|\parallel_{\alpha}\right\rangle\left\langle=_{\alpha}\right|+\text{h.c.}\right)+V\sum_{\alpha}\left(\left|\parallel_{\alpha}\right\rangle\left\langle\parallel_{\alpha}\right|+\left|=_{\alpha}\right\rangle\left\langle=_{\alpha}\right|\right), \tag{1}\]
where \(\alpha\) denotes all the rhombi (with three orientations) on the triangular lattice as shown in Fig. 1(a), and \(\left|\parallel_{\alpha}\right\rangle\), \(\left|=_{\alpha}\right\rangle\) denote the two parallel dimer coverings of rhombus \(\alpha\). The local constraint of the fully packed QLM requires two dimers to touch every site in any configuration. The kinetic term is controlled by \(t\), which generates the dimer pair resonance on every flippable plaquette while respecting the local constraint, and \(V\) is the repulsion (\(V>0\)) or attraction (\(V<0\)) between dimers facing each other. The RK point is located at \(V=t=1\) and has an exact \(\mathbb{Z}_{2}\) QSL solution [2]. We set \(t=1\) as the energy unit and perform the simulations for system sizes \(L=8,12,16,20,24\) with the inverse temperature \(\beta=\frac{1}{T}=L\) via the sweeping cluster QMC method [6; 71; 8; 9; 10; 67], and utilize the SAC scheme [9; 10; 19; 23; 36; 37; 68; 69; 70; 72] to obtain both the dimer and vison spectral functions in real frequency for the \(L=12\) system from imaginary time correlation functions with \(\tau\in[0,\beta=100]\).
According to Ref. [1], the order parameter of the VP phase is given by the real space \(t\)-term correlation function
\[\langle T(0)T(\mathbf{r})\rangle=\frac{1}{3}[\langle t_{1}(0)t_{1}(\mathbf{r} )\rangle+\langle t_{2}(0)t_{2}(\mathbf{r})\rangle+\langle t_{3}(0)t_{3}( \mathbf{r})\rangle], \tag{2}\]
where \(\langle t_{\alpha}(0)t_{\alpha}(\mathbf{r})\rangle\) (\(\alpha=1,2,3\)) represent correlators on the three rhombus directions in our triangular lattice with distance \(\mathbf{r}\) between two rhombi. The vison correlation function, constructed from the dimer configurations, is
\[\langle V(0)V(\mathbf{r})\rangle=\frac{1}{2}[\langle v_{1}(0)v_{1}(\mathbf{r })\rangle+\langle v_{2}(0)v_{2}(\mathbf{r})\rangle], \tag{3}\]
where \(v_{\gamma}\) (\(\gamma=1,2\)) denotes the vison on the A (lower triangle) and B (upper triangle) sublattices in one rhombus. To obtain the vison configuration from the dimer configuration, one needs to fix a gauge by setting the reference vison in the plaquette \((0,0)\) and sublattice A to \(v_{1}(0)=1\), as shown in the schematic plot of Fig. 1 (b). Then we map the dimer pattern to the vison configuration through \(v_{1}(0)v_{\gamma}(\mathbf{r})\!=\!(-1)^{N_{P}}\), with \(N_{P}\) being the number of dimers cut along the path \(P\) between the triangles at \(0\) and \(\mathbf{r}\), which refers to the green dashed line in Fig. 1 (b). Therefore, the vison in each triangle takes the value \(\pm 1\), as denoted by the red (\(+1\)) and grey (\(-1\)) triangles in the schematic plots of Fig. 1 (b).
In the field theoretical description [1], the Cubic* CFT of the VP-QSL transition can be described with three scalars coupled together. The Lagrangian is
\[\mathcal{L}_{int}=m^{2}(\sum_{i}\phi_{i}^{2})+u(\sum_{i}\phi_{i}^{2})^{2}+v(\sum_{i}\phi_{i}^{4})+\cdots, \tag{4}\]
together with kinetic terms for the scalars, where the scalar order parameter describing the vison modes [73; 74; 5; 75] is given by
\[\phi_{j}=\sum_{\mathbf{r}}(v_{1}(\mathbf{r}),v_{2}(\mathbf{r}))\cdot\mathbf{ u}_{j}e^{i\mathbf{M}_{j}\cdot\mathbf{r}},\quad j=1,2,3, \tag{5}\]
with \(\mathbf{M}_{j=1,2,3}\) the three \(\mathbf{M}\) points of the Brillouin zone as shown in the inset of Fig. 4 (b) and \(v_{1,2}(\mathbf{r})\) the vison fields in Eq. (3). The vector \(\boldsymbol{\phi}\!=\!(\phi_{1},\phi_{2},\phi_{3})\) encapsulates the (2+1)d Cubic order parameters of the visons. The mass term can be roughly identified as \(m^{2}\sim V-V_{c}\), and the phase transition happens at \(m^{2}=0\). Conformal field theory tells us the correlation of \(\phi\) fields follows a power law behavior. At the phase transition, the
quantum fluctuation of the vison field is dominated by its modes at the \(\mathbf{M}\) points. The vison correlation in Eq. (3) will therefore follow the same power law (with spatial modulation). Similarly, the \(t\)-term operators \(t_{i}\) can be identified with field theory operators. In particular, \(\{t_{1},t_{2},t_{3}\}\sim\{\phi_{1}\phi_{2},\phi_{2}\phi_{3},-\phi_{1}\phi_{3}\}\). The symmetry group of the CFT is the Cubic(3)=\(S_{3}\rtimes(Z_{2})^{3}\) group; the group elements of Cubic(3) can be identified with lattice symmetries. The precise identification of the \(t\)-operators is fixed by the symmetries that they break. Here we are following the convention of Ref. [1]. The Cubic(3) group is a subgroup of O(3). It is known, based on various theoretical works [24; 30; 76], that the O(3) CFT and the Cubic(3) CFT are connected by a very short renormalization group flow, therefore their operators have similar anomalous dimensions. In particular, the O(3) group has a rank-2 symmetric traceless tensor representation, formed by \(\{\phi_{1}\phi_{2},\phi_{2}\phi_{3},-\phi_{1}\phi_{3}\}\) and \(\{\phi_{1}^{2}-\phi_{2}^{2},\phi_{2}^{2}-\phi_{3}^{2}\}\), which is five dimensional. From the viewpoint of the subgroup Cubic(3), the triple \(\{\phi_{1}\phi_{2},\phi_{2}\phi_{3},-\phi_{1}\phi_{3}\}\) forms a three dimensional irreducible representation of the Cubic(3) group. We can safely use the well-known value of the critical exponent \(\eta_{T}\approx 1.42\) of the O(3) CFT to approximate its value at the Cubic CFT. The subscript "\(T\)" reminds us that it corresponds to the rank-2 tensor of O(3). Interestingly, the off-diagonal correlator \(\langle t_{1}(0)t_{2}(\mathbf{r})\rangle\) decays much faster than the diagonal ones \(\langle t_{1}(0)t_{1}(\mathbf{r})\rangle\), which is also a CFT prediction; we show these results in the Supplemental Materials (SM) [77]. The anomalous dimension of the scalars \(\{\phi_{1},\phi_{2},\phi_{3}\}\) of the O(3) CFT, i.e. the vison \(v_{1,2}\) correlation in Eq. (3), is on the other hand of very small value, \(\eta\approx 0.04\)[24; 30].

Figure 1: **Quantum loop model on the triangular lattice and its phase diagram.** (a) \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\) are the triangular lattice primitive vectors. The \(t\) and \(V\) terms are the kinetic and potential terms in the Hamiltonian Eq. (1), respectively. (b) The transition between the LN and VP crystals is first order [1]. The transition between VP and the \(\mathbb{Z}_{2}\) QSL is continuous and of (2+1)d Cubic* universality [1]. The correlation functions of the VP and vison order parameters around the Cubic* QCP (\(V_{c}=0.59(2)\)) are shown in Figs. 2 and 3. The schematic plot in the VP phase is the real-space vison correlations with the red (grey) color conveying its positive (negative) value in each triangle. The schematic plot in the QSL phase shows two visons connected by a string (the green dashed line), which represents the path \(P\) of the vison-vison correlation function \(v_{\gamma}(0)v_{\gamma}(r)\!=\!(-1)^{N_{P}}\), with \(N_{P}\) the number of dimers cut along \(P\). Here we set the vison in the lower triangle \(v_{1}(0)=1\) as the reference to fix the gauge.
We also compute the dynamic dimer correlation function
\[D(\mathbf{k},\tau)=\frac{1}{3N}\sum_{\begin{subarray}{c}i,j\\ \alpha=1,2,3\end{subarray}}^{L^{2}}e^{i\mathbf{k}\cdot\mathbf{r}_{ij}}\left( \langle n_{i,\alpha}(\tau)n_{j,\alpha}(0)\rangle-\langle n_{i,\alpha}\rangle \langle n_{j,\alpha}\rangle\right), \tag{6}\]
where \(n_{i,\alpha}\) is the dimer number operator on bond \(i\) and \(\alpha\) stands for the three bond orientations, and the vison dynamic correlation function
\[v(\mathbf{k},\tau)=\frac{1}{2N}\!\!\sum_{\begin{subarray}{c}i,j\\ \gamma=1,2\end{subarray}}^{L^{2}}\!\!\!e^{i\mathbf{k}\cdot\mathbf{r}_{ij}} \left(\langle v_{i,\gamma}(\tau)v_{j,\gamma}(0)\rangle-\langle v_{i,\gamma} \rangle\langle v_{j,\gamma}\rangle\right), \tag{7}\]
which averages the correlation functions of visons on the A and B sublattices. Since the value of the vison in each triangle is \(\pm 1\), the second term \(\langle v_{i,\gamma}\rangle\) in Eq. (7) is expected to be zero, i.e., no background needs to be subtracted.
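As a schematic illustration (not the sweeping cluster code itself) of how such equal-wavevector imaginary-time correlators can be accumulated from sampled configurations, assuming arrays of per-site values on each time slice:

```python
import numpy as np

def dynamic_correlation(samples, positions, k):
    """Estimate C(k, tau) = (1/N) sum_{i,j} e^{i k.(r_i - r_j)} <dn_i(tau) dn_j(0)>.

    samples  : (n_configs, n_tau, N) values of one field (e.g. dimer occupations of a
               single bond orientation, or visons) on each site and time slice
    positions: (N, 2) real-space coordinates r_i
    k        : (2,) wavevector
    """
    n_sites = positions.shape[0]
    phases = np.exp(1j * positions @ k)            # e^{i k . r_i}
    background = samples.mean(axis=(0, 1))         # <n_i>, the term subtracted in Eq. (6)
    fluctuations = samples - background
    fk = fluctuations @ phases                      # (n_configs, n_tau) Fourier amplitude
    # f_k(tau) * conj(f_k(0)) reproduces the double sum over sites i, j
    corr = (fk * np.conj(fk[:, :1])).mean(axis=0).real / n_sites
    return corr                                     # C(k, tau) for the sampled time slices
```

For the vison correlator in Eq. (7), the background term vanishes as noted above, so subtracting it is harmless.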
_Numerical results._-- Figs. 2 and 3 show \(\langle T(0)T(\mathbf{r})\rangle\) and \(\langle V(0)V(\mathbf{r})\rangle\) across the Cubic* QCP with system sizes up to \(L=24\). The distance is taken along \(\mathbf{r}=(x,0)\) with \(x\) up to 12 due to the periodic boundary condition. The real-space decay behavior is examined for both correlators in three regions: (i) the VP phase with \(V=0.3\); (ii) the Cubic* QCP \(V_{c}=0.59(2)\); (iii) the RK point \(V=1\).
In the VP phase, both \(\langle T(0)T(\mathbf{r})\rangle\) and \(\langle V(0)V(\mathbf{r})\rangle\) exhibit strong even-odd oscillations and with amplitude decaying with the distance \(x\). The oscillations derive from the hidden vison order and eventually vanish as \(V\) goes to 1 as shown in Figs. 2 (a) and 3 (a). We note the even-odd oscillation still exists at the transition point due to the finite-size effect. The oscillations of all \(V\) are symmetric with respect to \(\langle T(0)T(\mathbf{r})\rangle=0\), therefore, we illustrate \(|\langle T(0)T(\mathbf{r})\rangle|\) in log-log scale in Fig. 2 (b) and (c). Moreover, due to the gauge choice we set manually to construct the vison configuration, the oscillations of the vison correlation are asymmetrical with respect to \(\langle V(0)V(\mathbf{r})\rangle=0\) for different values of \(V\). Thus we only use the odd value of the distance to fit the data of \(|\langle V(0)V(\mathbf{r})\rangle|\) in log-log scale, as shown in Fig. 3 (b) and (c).
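The exponents discussed below can be extracted from such log-log data by a straight-line fit; a minimal sketch with placeholder correlator values chosen only for illustration:

```python
import numpy as np

# |C(x)| sampled at odd separations x, as for the vison correlator (placeholder values)
x = np.array([1, 3, 5, 7, 9, 11], dtype=float)
corr = np.array([0.21, 0.068, 0.040, 0.028, 0.022, 0.018])

slope, intercept = np.polyfit(np.log(x), np.log(corr), 1)
eta = -slope - 1.0          # from |C(x)| ~ 1/x^(1+eta)
print(f"fitted anomalous dimension eta = {eta:.2f}")
```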
We found that in the VP phase both correlation functions decay to a constant value, while they exhibit power-law decay at the Cubic* QCP. Interestingly, these two correlators decay with obviously different exponents. The \(t\)-term correlation \(\langle T(0)T(\mathbf{r})\rangle\sim 1/x^{1+\eta_{T}}\) is consistent
with the anomalously large anomalous dimension of the rank-2 tensor of the Cubic CFT, \(\eta_{T}=1.42\); the vison correlation \(\langle V(0)V({\bf r})\rangle\sim 1/x^{1+\eta}\) is consistent with \(\eta=0.04\), which is the (2+1)d O(3)/Cubic value of \(\eta\) for the order parameter. To access the thermodynamic limit, we depict correlators with different system sizes at the Cubic* QCP in Figs. 2 (c) and 3 (c), and put the small-system-size data for other values of \(V\) in the SM [77]; all these results reveal \(\eta_{T}=1.42\) for \(\langle T(0)T({\bf r})\rangle\) and \(\eta=0.04\) for \(\langle V(0)V({\bf r})\rangle\). On the other hand, inside the QSL phase, such as at the RK point \(V=1\), both correlators decay exponentially, as shown in Figs. 2 (b) and 3 (b).

Figure 2: **The static \(t\)-term correlation.** (a) \(\langle T(0)T(\mathbf{r})\rangle\) as a function of the distance \(\mathbf{r}=(x,0)\) with the system size \(L=24\), the largest system size achieved. The log-log plot for the absolute values of the data in (a) is shown in (b). We also show the log-log plot of the \(t\)-term correlators with different system sizes in (c) to demonstrate the finite-size effect of the decay behavior. In (a) and (b), we show the correlators in the VP phase with \(V=0.3\), at the transition point with \(V_{c}\) (we use \(V=0.6\) here), and at the RK point when \(V=1\). The dark solid line shown in (b) and (c) is proportional to \(1/x^{1+\eta_{T}}\) with the large anomalous dimension \(\eta_{T}=1.42\), which corresponds to the rank-2 tensor field of the (2+1)d O(3)/Cubic universality [24; 30].
A large anomalous dimension means a large scaling dimension, \(\Delta_{T}=\frac{1+\eta_{T}}{2}\) for the rank-2 tensor and \(\Delta=\frac{1+\eta}{2}\) for the scalar operators of the Cubic/O(3) CFT. Our results therefore mean that at the Cubic* QCP, the \(t_{1,2,3}\) terms are composites of the fractionalized visons \(v_{1,2}\) rather than well-defined critical modes, and it is the proliferated visons \(v\) that give rise to the large anomalous dimension of \(t\), which serves as a defining signature of the Cubic* transition, different from the conventional Cubic/O(3) QCPs. Similar behavior has been observed in the (2+1)d XY* transition between the \(\mathbb{Z}_{2}\) QSL and the U(1) symmetry-breaking superfluid phase [23; 20; 18; 21].
Such a fractionalization signature is also vividly seen in the dynamic probes. We measure the dynamic correlation functions in Eqs. (6) and (7) and obtain the dimer and vison spectra via QMC+SAC (details of the scheme are given in the SM [77]). Fig. 4 shows the obtained spectra across the Cubic* transition. Inside the QSL phase, shown in panels (b) and (d), both spectra exhibit gapped behavior and substantial continua over a large fraction of the momenta along the high-symmetry path. It is interesting to note that the minimal dimer gap is larger than the minimal vison gap, due to the fact that a dimer is a composite of a pair of visons [78; 9].
At the Cubic* QCP, the dimer spectra remain gapped as shown in Fig. 4 (a), especially at the \(\mathbf{M}_{j=1,2,3}\) points (the seemingly small gap at the \(\Gamma\) point is due to the finite-size effect), but as shown in Fig. 4 (c), the vison spectra develop a clear gapless mode close to the \(\mathbf{M}\) points. Since the \(\mathbf{M}\) points are the ordering wavevectors of the VP phase (as explained in Eq. (5)), this critical and gapless vison mode is the dynamic signature of the vison condensation at the Cubic* transition. The contrast between Fig. 4 (a) and (c) explains why the dimer correlation cannot see the "hidden" VP order and only the vison
spectra reveal the translational symmetry breaking of the VP phase. Similar dynamic signatures of the \(\mathbb{Z}_{2}\) topological order in the QSL and of the condensation of fractionalized anyons have also been shown in the (2+1)d XY* transition [18; 19; 20; 22; 79].

Figure 3: **The static vison correlation.** (a) \(\langle V(0)V({\bf r})\rangle\) as a function of the distance \({\bf r}=(x,0)\) with the fixed system size \(L=24\). (b) The log-log plot of only the odd values of \(x\) in (a). Similar to the \(t\)-term correlators, we show the vison correlators in the VP phase (\(V=0.3\)), at the transition point \(V_{c}\) (use \(V=0.61\) here), and at the RK point when \(V=1\). Panel (c) shows the critical decay behavior with different system sizes. Different from the \(t\)-term correlators in Figs. 2 (b) and (c), the dark solid line shown in (b) and (c) here is proportional to \(1/x^{1+\eta}\) with the anomalous dimension \(\eta=0.04\) for the (2+1)d O(3)/Cubic scalar order parameter [24; 30].

Figure 4: **The dynamic dimer and vison spectra.** We display the spectra at the Cubic* QCP \(V_{c}\) with \(V=0.6\), and at the RK point with \(V=1\) for the \(L=12\) system. The \(\beta\) used in the simulations is 100 and we employ the QMC+SAC scheme to generate the real frequency data. The inset in (b) shows the high-symmetry path in the Brillouin zone along which the spectra are presented. For the dimer spectra in the upper row (a) and (b), all the spectra exhibit broad continua due to the fractionalization. At the \(\mathbf{M}\) point, the spectra are gapped, indicating the dimer correlator is unable to detect the transition between the VP and QSL phases. For the vison spectra, the gap at the \(\mathbf{M}\) point closes at the Cubic* QCP in panel (c), and reopens at the RK point in panel (d). The vison gap at the RK point in (d) is smaller than the dimer gap in (b).
_Discussions._-- Via the combined numeric and analytic approaches, we find static and dynamic signatures of the Cubic* transition from the \(\mathbb{Z}_{2}\) QSL to the VP crystal in the QLM on the triangular lattice. Both correlations and spectra show that at the transition the fractionalized visons in the QSL condense to give rise to the crystalline VP phase, while leaving their trace in the anomalously large anomalous dimension exponent and pronounced continua in the dimer and vison spectra compared with those at the conventional Cubic or O(3) quantum critical points. Many new directions open up from here, such as the underlying reason why the \(t\)-term correlation is exactly the rank-2 symmetric traceless tensor of the Cubic/O(3) CFT, and why the VP phase is invisible to dimer correlations after the vison condensation process. But the static and dynamic signatures discovered here can already be used to guide further experiments in frustrated quantum magnets and blockaded cold-atom arrays, in which unconventional quantum matter and quantum phase transitions are being realized at an astonishing speed [10; 11; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 49].
_Acknowledgements_--We thank Ning Su and Fabien Alet for insightful discussions on the O(3) and cubic fixed points and the phase diagram of the triangular lattice QLM. XXR, ZY and ZYM acknowledge support from the Research Grants Council (RGC) of Hong Kong Special Administrative Region (SAR) of China (Projects Nos. 17301420, 17301721, AoE/P-701/20, 17309822 and HKU C7037-22G), the ANR/RGC Joint Research Scheme sponsored by the RGC of Hong Kong SAR of China and French National Research Agency (Project No. A_HKU703/22), and the K. C. Wong Education Foundation (Grant No. GJTD-2020-01). YQ acknowledges support from the National Natural Science Foundation of China (Grant Nos. 11874115 and 12174068). JR is supported by the Huawei Young Talents Program at IHES. Y.C.W. acknowledges support from Zhejiang Provincial Natural Science Foundation of China (Grant No. LZ23A040003). We thank the High Performance Computing Centre of Beihang Hangzhou Innovation Institute Yuhang, the HPC2021 system under the Information Technology Services and the Blackbody HPC system at the Department of Physics, University of Hong Kong for providing computational resources that have contributed to the research results in this paper.
|
2303.18079 | Dispersion entropy: A Measure of Irregularity for Graph Signals | We introduce a novel method, called Dispersion Entropy for Graph Signals,
$DE_G$, as a powerful tool for analysing the irregularity of signals defined on
graphs. We demonstrate the effectiveness of $DE_G$ in detecting changes in the
dynamics of signals defined on synthetic and real-world graphs, by defining
mixed processing on random geometric graphs or those exhibiting with
small-world properties. Remarkably, $DE_G$ generalises the classical dispersion
entropy for univariate time series, enabling its application in diverse domains
such as image processing, time series analysis, and network analysis, as well
as in establishing theoretical relationships (i.e., graph centrality measures,
spectrum). Our results indicate that $DE_G$ effectively captures the
irregularity of graph signals across various network configurations,
successfully differentiating between distinct levels of randomness and
connectivity. Consequently, $DE_G$ provides a comprehensive framework for
entropy analysis of various data types, enabling new applications of dispersion
entropy not previously feasible, and revealing relationships between graph
signals and its graph topology. | John Stewart Fabila-Carrasco, Chao Tan, Javier Escudero | 2023-03-31T14:16:08Z | http://arxiv.org/abs/2303.18079v1 | # Dispersion entropy: A Measure of Irregularity for Graph Signals
###### Abstract
We introduce a novel method, called Dispersion Entropy for Graph Signals, \(\mathrm{DE_{G}}\), as a powerful tool for analysing the irregularity of signals defined on graphs. We demonstrate the effectiveness of \(\mathrm{DE_{G}}\) in detecting changes in the dynamics of signals defined on synthetic and real-world graphs, by defining mixed processes on random geometric graphs or on graphs exhibiting small-world properties. Remarkably, \(\mathrm{DE_{G}}\) generalises the classical dispersion entropy for univariate time series, enabling its application in diverse domains such as image processing, time series analysis, and network analysis, as well as in establishing theoretical relationships (e.g., graph centrality measures, spectrum). Our results indicate that \(\mathrm{DE_{G}}\) effectively captures the irregularity of graph signals across various network configurations, successfully differentiating between distinct levels of randomness and connectivity. Consequently, \(\mathrm{DE_{G}}\) provides a comprehensive framework for entropy analysis of various data types, enabling new applications of dispersion entropy not previously feasible, and revealing relationships between graph signals and the underlying graph topology.
**Introduction.** Entropy is a fundamental tool for assessing irregularity and non-linear behaviour in data. Permutation entropy (PE) is an effective algorithm for capturing dynamics in time series (1D data) [1] and has been widely used in finance, physics, and biology [2]. However, PE considers only the order of values, discarding important amplitude information. Dispersion Entropy (DE) was introduced to overcome this limitation [3], and has since been applied to EEG analysis [4] and rotary machines [5].
The growing availability of data defined on complex networks, such as social networks [6], transportation systems [7], and industrial processes [8], has driven interest in extending entropy metrics from time series to more general domains. Recently, PE has been extended to analyse images (2D data) [9] and irregular domains (graphs) [10]. While DE has been defined for 2D data [11], there is no existing DE algorithm for analysing data defined on graphs. Such an extension would enable analysis of real-world systems with graph-based structure where classical DE was not previously applicable, providing a powerful framework for data analysis across a wide range of applications in Graph Signal Processing (GSP) [7].
Smoothness is a fundamental property extensively studied in GSP [7; 12; 13], typically through the use of the combinatorial Laplacian's quadratic form. Intuitively, a graph signal is considered smooth if connected vertices exhibit similar values [13]. Nonetheless, this definition may not fully capture the complex dynamics of graph signals due to its relationship with the spectrum [14]. To address this limitation, we propose in this letter a novel method, based on classical DE for time series, which effectively captures the irregularity of graph signals, providing critical insights into the underlying graph structure and data.
To evaluate our method's performance, we employed synthetic and real-world graphs, including random geometric graphs (used to model wireless sensor networks [15]) and small-world networks (observed widely in biological systems [16], social networks [17], and complex systems [18]). In our analysis, we generalised the mix process \(\mathrm{MIX}(p)\), a stochastic process combining a sinusoidal signal with random dynamics controlled by the parameter \(p\in[0,1]\). This process has been employed to assess the performance of various entropy metrics in time series [19; 20] and images [21]. Moreover, we analyse centrality measures, which assign ranking values to the graph's vertices based on their position or importance within the graph. Centrality measures play a crucial role in social network analysis for evaluating the importance of vertices in communication [22; 23].
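For reference, the classical univariate MIX(\(p\)) construction, from which the graph-signal version used in this letter is generalised (the exact graph-domain construction may differ), can be sketched as:

```python
import numpy as np

def mix_process(n, p, seed=0):
    """Classical MIX(p): with probability p, replace each sample of a sinusoid by uniform noise."""
    rng = np.random.default_rng(seed)
    j = np.arange(n)
    sinusoid = np.sqrt(2) * np.sin(2 * np.pi * j / 12)
    noise = rng.uniform(-np.sqrt(3), np.sqrt(3), size=n)
    z = rng.random(n) < p                     # Bernoulli(p) mask
    return np.where(z, noise, sinusoid)

signal = mix_process(200, p=0.3)              # p = 0: pure sinusoid, p = 1: pure noise
```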
_Contribution._ In this letter, we propose a method for defining Dispersion Entropy for Graph Signals, denoted as \(\mathrm{DE_{G}}\). Our approach generalises the classical univariate definition of DE by incorporating topological information through the adjacency matrix. We demonstrate the effectiveness of \(\mathrm{DE_{G}}\) on synthetic and real-world datasets, and characterise the relationship between graph topology and signal dynamics. Our results indicate that \(\mathrm{DE_{G}}\) is a promising technique for analysing graph data, holding potential for numerous applications in fields such as biomedicine and social sciences.
_Notation._ A _simple undirected graph_\(G\) is defined as a triple \(G=(\mathcal{V},\mathcal{E},\mathbf{A})\), where \(\mathcal{V}\) is a finite set of vertices (without isolated vertices), \(\mathcal{E}\) is the set of edges, and \(\mathbf{A}\) is the corresponding adjacency matrix. A _graph signal_ is a real function defined on the vertices \(\mathbf{X}\colon\mathcal{V}\to\mathbb{R}\), represented as an \(N\)-dimensional column vector, \(\mathbf{X}=[x_{1},x_{2},\ldots,x_{N}]^{T}\in\mathbb{R}^{N\times 1}\), with the same indexing as the vertices. The combinatorial Laplacian and normalised Laplacian are denoted by \(\Delta\) and \(L\), respectively.
A _\(d\)-dimensional Random Geometric Graph_ (RGG) is a graph in which each vertex \(v_{i}\in\mathcal{V}\) is assigned a random \(d\)-dimensional coordinate \(v_{i}\to\mathbf{x}_{i}=(x_{i}^{1},\ldots,x_{i}^{d})\in[0,1]^{d}\). Two vertices \(v_{i},v_{j}\in\mathcal{V}\) are connected by an edge if the
distance between their assigned coordinates is below a predefined threshold \(r>\mathbf{d}(v_{i},v_{j})\) (see [24]).
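Such graphs are readily generated, for instance with NetworkX; the number of vertices and the radius below are illustrative choices, not those used in our experiments:

```python
import networkx as nx

# 2-dimensional RGG: 200 vertices placed uniformly in [0, 1]^2, edge if distance < 0.15
G = nx.random_geometric_graph(n=200, radius=0.15, dim=2, seed=1)
A = nx.to_numpy_array(G)                       # adjacency matrix, used by the DE_G sketch below
coords = nx.get_node_attributes(G, "pos")      # the random coordinates of each vertex
```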
**Dispersion Entropy for Graph Signals (\(\mathrm{DE_{G}}\)).** Let \(\mathbf{X}\) be a graph signal defined on \(G\), \(2\leq m\in\mathbb{N}\) be the _embedding dimension_, \(L\in\mathbb{N}\) be the _delay time_ and \(c\in\mathbb{N}\) be the _class number_. The \(\mathrm{DE_{G}}\) is defined as follows:
1. The _embedding matrix_\(\mathbf{Y}\in\mathbb{R}^{N\times m}\) is given by \(\mathbf{Y}=[\mathbf{y}_{0},\mathbf{y}_{1},\cdots,\mathbf{y}_{m-1}]\), defined by
\[\mathbf{y}_{k}=D\mathbf{A}^{kL}\mathbf{X}\in\mathbb{R}^{N\times 1}\;,\quad k=0,1, \ldots,m-1\;,\]
where \(D\) is the diagonal matrix \(D_{ii}=1/\sum_{j=1}^{N}(\mathbf{A}^{kL})_{ij}\).
2. _Map function_. Each entry of the embedding matrix \(\mathbf{Y}\) is mapped to an integer number from \(1\) to \(c\), called a class. The function \(F\colon\mathbb{R}\to\mathbb{N}_{c}\) where \(\mathbb{N}_{c}=\{1,2,\ldots,c\}\) is applied element-wise on the matrix \(\mathbf{Y}\), i.e. \(F(\mathbf{Y})\in\mathbb{N}_{c}^{N\times m}\) where \(F(\mathbf{Y})_{ij}=F(y_{ij})\).
3. _Dispersion patterns._ Each row of the matrix \(F(\mathbf{Y})\), called an _embedding vector_, is mapped to a unique dispersion pattern. Formally, the _embedding vectors_ consist of \(m\) integer numbers (ranged from \(1\) to \(c\)) corresponding to each row of the matrix \(F(\mathbf{Y})\), i.e., \(\mathrm{row}_{i}(F(\mathbf{Y}))=\left(F(y_{ij})\right)_{j=1}^{m}\) for \(i=1,2,\ldots,N\). The set of dispersion patterns is \(\Pi=\{\left.\pi_{v_{1}v_{2}\ldots v_{m}}\right|v_{i}\in\mathbb{N}_{c}\,\}\). Each embedding vector is uniquely mapped to a dispersion pattern, i.e., \(\mathrm{row}_{i}(F(\mathbf{Y}))\to\pi_{v_{1}v_{2}\ldots v_{m}}\) where \(v_{1}=F(y_{i1}),v_{2}=F(y_{i2}),\ldots,v_{m}=F(y_{im})\).
4. _Relative frequencies._ For each dispersion pattern \(\pi\in\Pi\), its relative frequency is obtained as:
\[p\left(\pi\right)=\frac{\left|\left\{i\;\middle|\;i\in\mathcal{V},\mathrm{ row}_{i}(F(\mathbf{Y}))\;\text{has type $\pi$}\right\}\right|}{N}\;.\]
5. The _Dispersion Entropy for Graph Signals_\(\mathrm{DE_{G}}\) is computed as the normalised Shannon's entropy for the distinct dispersion patterns as follows:
\[\mathrm{DE_{G}}(\mathbf{X},m,L,c)=-\frac{1}{\ln(c^{m})}\sum_{\pi\in\Pi}p(\pi)\ln p(\pi)\;.\]
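The five steps above translate directly into a short numerical routine. The following is a minimal NumPy sketch (the function name `dispersion_entropy_graph` and its argument names are illustrative, not from a released implementation); the map function of Step 2 is passed as an argument, and a concrete choice is sketched after the NCDF definition below.

```python
import numpy as np

def dispersion_entropy_graph(A, X, map_fn, m=3, L=1, c=3):
    """DE_G of a length-N graph signal X given a dense (N, N) adjacency matrix A."""
    N = len(X)
    # Step 1: embedding matrix Y with columns y_k = D A^{kL} X (neighbourhood averages).
    Y = np.empty((N, m))
    for k in range(m):
        Ak = np.linalg.matrix_power(A, k * L).astype(float)
        Y[:, k] = (Ak @ X) / Ak.sum(axis=1)          # D is the row-sum normalisation
    # Step 2: map every entry to a class in {1, ..., c}.
    Z = map_fn(Y, c)
    # Steps 3-4: each row of Z is a dispersion pattern; count relative frequencies.
    _, counts = np.unique(Z, axis=0, return_counts=True)
    p = counts / N
    # Step 5: Shannon entropy of the observed patterns, normalised by ln(c^m).
    return -np.sum(p * np.log(p)) / np.log(c ** m)
```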
The \(\mathrm{DE_{G}}\) algorithm offers several unique features and properties. The _embedding matrix_ is a key component that captures the topological relationships between the graph and signal. With a chosen embedding dimension \(3\leq m\leq 7\), and delay time commonly set to \(L=1\) (values suggested [1]), the embedding matrix \(\mathbf{Y}\in\mathbb{R}^{N\times m}\) is constructed. Each column vector \(\mathbf{y}_{k}\) is calculated by averaging the signal values of neighbouring vertices, i.e. \(\mathbf{y}_{k}=D\mathbf{A}^{kL}\mathbf{X}\), where the power of the adjacency matrix \(\mathbf{A}^{kL}\) denotes the number of \(kL\)-walks between two vertices. Additionally, the diagonal matrix \(D\) serves as a normalisation factor. The first column of the matrix \(\mathbf{Y}\) corresponds to the original graph signal, i.e., \(\mathbf{y}_{0}=\mathbf{X}\), and the second column is related to the normalised Laplacian \(L\), specifically, \(\mathbf{y}_{1}=\mathbf{X}-L\mathbf{X}\).
_Map functions._ To address limitations in assigning the signal \(\mathbf{X}\) to only a limited number of classes, various map functions \(F\colon\mathbb{R}\to\mathbb{N}_{c}\) have been proposed [3]. The non-linear cumulative distribution function (NCDF) is commonly utilised. The map \(G\colon(0,1)\to\mathbb{N}_{c}\) is defined as \(G(x)=\mathrm{round}(cx+0.5)\), where rounding maps a number to the nearest integer. The map \(\mathrm{NCDF}\colon\mathbb{R}\to(0,1)\) is defined as:
\[\mathrm{NCDF}(x)=\frac{1}{\sigma\sqrt{2\pi}}\int_{-\infty}^{x}e^{-\frac{(t-\mu)^{2}}{2\sigma^{2}}}\,\mathrm{d}t\]
where \(\mu\) and \(\sigma\) represent the mean and standard deviation of \(\mathbf{X}\), respectively. Thus, \(F=G\circ\mathrm{NCDF}\colon\mathbb{R}\to\mathbb{N}_{c}\) is the map function used in our implementation of the \(\mathrm{DE_{G}}\) algorithm.
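A sketch of this map, assuming SciPy is available (the name `ncdf_map` is illustrative), which can then be plugged into the `dispersion_entropy_graph` sketch above:

```python
import numpy as np
from scipy.stats import norm

def ncdf_map(Y, c):
    """F = G o NCDF applied element-wise: map embedding-matrix entries to classes 1..c."""
    mu, sigma = Y[:, 0].mean(), Y[:, 0].std()     # mean and std of the first column y_0 = X
    sigma = sigma if sigma > 0 else 1.0           # guard against constant signals
    y = norm.cdf(Y, loc=mu, scale=sigma)          # NCDF: R -> (0, 1)
    return np.clip(np.round(c * y + 0.5), 1, c).astype(int)   # G(x) = round(c*x + 0.5)

# Example: dispersion_entropy_graph(A, X, ncdf_map, m=3, L=1, c=3)
```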
_Dispersion patterns._ The number of possible dispersion patterns that can be assigned to each embedding vector is \(c^{m}\). Moreover, the number of embedding vectors constructed in the \(\mathrm{DE_{G}}\) algorithm is \(N\), one for each vertex. In contrast, classical \(\mathrm{DE}\) has a number of embedding vectors dependent on the parameters \(m\) and \(L\), specifically, \(n-(m-1)L\).
_Shannon's entropy_ provides a measure of irregularity that can be used to compare signals defined on different graphs. The value of Shannon's entropy ranges from \(0\) (regular behaviour) to \(1\) (irregular behaviour).
**Dispersion entropy for directed graphs.** The algorithm \(\mathrm{DE_{G}}\) provides a tool for analysing undirected graph signals, and can be extended to _directed graphs_ with minor modifications. Additionally, the algorithm can be applied to any graph signal, but for time series, it produces the same values as the classical \(\mathrm{DE}\)[3]. This is established in Proposition 1.
**Proposition 1** (_Equivalence of \(\mathrm{DE}\) and \(\mathrm{DE_{G}}\) for time series_).: _Let \(\boldsymbol{X}=\left\{x_{i}\right\}_{i=1}^{N}\) be a time series and let \(\overrightarrow{G}=\overrightarrow{P}\) be the directed path on \(N\) vertices. Then, for all \(m,c\) and \(L\), the following equality holds:_
\[\mathrm{DE}(m,L,c)=\mathrm{DE}_{\overrightarrow{P}}(m,L,c)\;.\]
Proof.: Please refer to the supplemental material [25].
**MIX process on RGGs.** We introduce a graph-based stochastic process \(\mathrm{MIX_{G}}(p)\) defined on RGGs to assess the performance of \(\mathrm{DE_{G}}\) in capturing complex signal dynamics. Here, \(G\) is a \(d\)-dimensional \(\mathrm{RGG}\) with \(N\) vertices, and the graph signal \(\mathrm{MIX_{G}}(p)\) is defined by:
\[\mathrm{MIX_{G}}(p)_{i}=(1-R_{i})S_{i}+R_{i}W_{i}\quad\text{for}\quad 1\leq i \leq N\;, \tag{1}\]
where \(R_{i}\) is a random variable with a probability \(p\) of taking the value \(1\) and a probability \(1-p\) of taking the value \(0\), \(W_{i}\) is uniformly distributed white noise sampled from the interval \([-\sqrt{3},\sqrt{3}]\), and \(S_{i}=\sum_{j=1}^{d}\sin(fx_{i}^{j})\) represents a sinusoidal signal with frequency \(f\).
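A minimal sketch of this construction (the function name `mix_on_rgg` and the default parameter values are illustrative), building a \(d\)-dimensional RGG and the \(\mathrm{MIX_{G}}(p)\) signal of Eq. (1):

```python
import numpy as np

def mix_on_rgg(N=1500, d=2, r=0.06, p=0.1, f=2 * np.pi, seed=0):
    rng = np.random.default_rng(seed)
    coords = rng.random((N, d))                                   # vertex coordinates in [0, 1]^d
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = ((dist < r) & (dist > 0)).astype(int)                     # RGG adjacency matrix
    S = np.sin(f * coords).sum(axis=1)                            # sinusoidal component S_i
    W = rng.uniform(-np.sqrt(3), np.sqrt(3), N)                   # white noise W_i
    R = rng.random(N) < p                                         # Bernoulli(p) variable R_i
    return A, np.where(R, W, S)                                   # MIX_G(p) = (1 - R) S + R W

# Example: A, X = mix_on_rgg(p=0.2, f=4 * np.pi); dispersion_entropy_graph(A, X, ncdf_map)
```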
The construction of a \(d\)-dimensional \(\mathrm{RGG}\) requires selecting two parameters, \(r\) and \(N\), while the graph signal generated by the \(\mathrm{MIX_{G}}(p)\) process incorporates random
noise (determined by \(p\)) into some values of the sinusoidal signal (determined by \(f\)). Our algorithm, \(\mathrm{DE_{G}}\), detects changes in the frequency of the signal (increasing \(f\)), the presence of white noise (increasing \(p\)), and the graph connectivity (increasing \(r\)) by increasing the entropy values of \(\mathrm{DE_{G}}\). Fig. 1 illustrates the effectiveness of \(\mathrm{DE_{G}}\) in detecting the dynamics of the \(\mathrm{MIX_{G}}\) process.
_Fixing the graph, changing the signal._ We analyse the impact of different parameter values on the irregularity of the graph signal \(\mathrm{MIX_{G}}(p)\) by fixing the underlying RGG with constant \(N=1500\) and \(r=0.06\). We employ a fixed embedding dimension of \(m=3\), the number of classes set at \(c=3\), time delay \(L=1\), and NCDF as the non-linear map (similar results are obtained for other non-linear mappings and values of \(m\), \(c\), and \(L\)).
Increasing the frequency parameter \(f\) of the \(\mathrm{MIX_{G}}(p)\) process results in a more irregular graph signal. The frequencies \(f=2\pi\) and \(f=4\pi\) of the sine function in Eq. (1) are depicted in Fig. 1a)-b). This increase in frequency produces more variation in the graph signal values between neighbouring vertices. Our algorithm \(\mathrm{DE_{G}}\) detects these dynamics by increasing the entropy values. Similarly, an increase in the randomness parameter \(p\) results in a more random signal. The parameters \(p=0\) and \(p=0.2\) in Eq. (1) are depicted in Fig. 1a), c). The \(\mathrm{DE_{G}}\) algorithm detects the change in randomness by increasing the entropy values.
More generally, we compute the entropy values for a range of frequencies from \(3/2\pi\) to \(16\pi\), as well as for different levels of noise, with probabilities ranging from \(0\) to \(1\). The results of \(30\) realizations are depicted in Fig. 2a, showing the mean and standard deviation. The \(\mathrm{DE_{G}}\) algorithm effectively detects the increasing irregularity of the signal by increasing the entropy values. Moreover, the algorithm can distinguish between different levels of irregularity in the \(\mathrm{MIX_{G}}(p)\) signal based on the chosen value of \(p\).
_Fixing the signal, changing the graph._ By fixing the graph signal, we investigate the behaviour of the \(\mathrm{DE_{G}}\) algorithm as the underlying graph changes. Specifically, we examine the impact of increasing the distance parameter \(r\) from \(0.01\) to \(0.3\) used to construct the RGG with \(N=1,500\) vertices. Entropy values are computed for \(20\) realisations, and the mean and standard deviation are depicted in Fig. 2b for several values of \(m\) and \(c\). As \(r\) increases, the number of edges increases, connecting more distant vertices with different values. The resulting patterns are more irregular, with more changes and a wider distribution, leading to an increase in the entropy value.
**The spectrum of the Laplacian and \(\mathrm{DE_{G}}\).** Let \(\mathbf{X}\) be a graph signal; _the smoothness of \(\mathbf{X}\)_ is given by \(\mathbf{X}^{T}\Delta\mathbf{X}\) [7]. We examine the relationship between \(\mathrm{DE_{G}}\) and the spectrum of \(\Delta\) acting on RGGs (similar results are obtained for other random graphs).
Let \(G\) be a RGG with \(N=1,500\) vertices. The eigenvalues of \(\Delta\) and its corresponding eigenvectors are denoted by \(\sigma=\{\lambda_{1}\leq\lambda_{2}\leq\cdots\leq\lambda_{N}\}\) and \(\{f_{i}\}_{i=1}^{N}\), respectively. The smoothness of each eigenvector is evaluated and normalised based on the classical definition, i.e., \(\lambda_{i}^{-1}f_{i}^{T}\Delta f_{i}\), and the results are shown in Fig. 3. Each eigenvector \(f_{i}\) is considered as a graph signal and \(\mathrm{DE_{G}}\) is computed for \(c=2,3,4\) and \(m=2\). The results are depicted in Fig. 3. The smoothness definition is an increasing function, i.e., smaller eigenvalues correspond to smoother eigenvectors (also known as graph Fourier modes [26]). Such information is limited especially when eigenfunctions associated with equal eigenvalues (and equal smoothness) exhibit different levels of irregularity. By applying the \(\mathrm{DE_{G}}\) algorithm, we can better understand and analyse the dynamics of these eigenfunctions.
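A sketch of this experiment, reusing `dispersion_entropy_graph` and `ncdf_map` from the earlier sketches (the function name `entropy_of_laplacian_modes` is illustrative). Note that for a unit-norm eigenvector \(f_{i}\) the classical smoothness \(f_{i}^{T}\Delta f_{i}\) is simply \(\lambda_{i}\), so it cannot separate eigenvectors that share an eigenvalue, whereas \(\mathrm{DE_{G}}\) can.

```python
import numpy as np

def entropy_of_laplacian_modes(A, c=3, m=2):
    """DE_G of every eigenvector of the combinatorial Laplacian, treated as a graph signal."""
    Delta = np.diag(A.sum(axis=1)) - A                 # combinatorial Laplacian
    lam, F = np.linalg.eigh(Delta.astype(float))       # ascending eigenvalues, orthonormal eigenvectors
    de = np.array([dispersion_entropy_graph(A, F[:, i], ncdf_map, m=m, c=c)
                   for i in range(len(lam))])
    return lam, de                                     # e.g. compare de across a degenerate eigenvalue
```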
The dispersion entropy computed for different values of \(m\) and \(c\) enables us to capture abrupt changes in entropy values when the dynamics of the eigenfunctions change. Fig. 4 depicts six eigenvectors \(\{f_{j}\}_{j=527}^{532}\) corresponding to the eigenvalues \(\{\lambda_{j}\}_{j=527}^{532}\). The smoothness of \(f_{j}\) coincides with the value \(\lambda_{j}\); the eigenvalue \(\lambda_{528}=15\) has a multiplicity equal to four, and its eigenfunctions \(\{f_{j}\}_{j=528}^{531}\) exhibit a regular behaviour, while \(f_{527}\) and \(f_{532}\) are more irregular. Hence, classical definitions are not able to fully capture the difference in dynamics within the graph signals. In contrast, the \(\mathrm{DE_{G}}\) algorithm is capable of detecting them. In particular, the entropy value of an eigenfunction is close to \(0\) if the signal exhibits a more regular dynamic and close to \(1\) for the most irregular eigenfunctions. Thus, \(\mathrm{DE_{G}}\) detects eigenvalues with high multiplicity, which is useful for the construction of isospectral graphs [27].

Figure 1: Examples of RGGs with \(N=1,500\) and values \(r=0.06\) and \(r=0.10\). The graph signals are generated by the \(\mathrm{MIX_{G}}\) process with different parameter values.

Figure 2: Entropy values (a) for a fixed graph, increasing the noise and for several frequencies, and (b) for an increasingly connected underlying graph.

Figure 3: Entropy values of \(\mathrm{DE_{G}}\) and smoothness based on the Laplacian \(\Delta\) for the eigenvectors used as graph signals.
**Small-world networks and \(\mathrm{DE_{G}}\).** We evaluate the performance of \(\mathrm{DE_{G}}\) in detecting dynamics on signals defined on small-world networks, generated by the Watts-Strogatz model [17], and changing the mean degree \(k\) and rewiring probability \(p\). Let \(G\) be a small-world network with \(N=1,500\) and various graph signals, including a random signal, a recurrence relation (logistic map [1]), a stochastic process (Wiener process [28]), and a periodic signal (sine).
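A sketch of the signal generation for this experiment, assuming networkx is available (parameter values and names are illustrative; note that in networkx's Watts-Strogatz constructor the parameter `k` counts neighbours on both sides of the ring):

```python
import numpy as np
import networkx as nx

def small_world_signals(N=1500, k=2, p=0.05, seed=0):
    rng = np.random.default_rng(seed)
    A = nx.to_numpy_array(nx.watts_strogatz_graph(N, k, p, seed=seed))
    signals = {
        "random": rng.random(N),
        "sine": np.sin(2 * np.pi * np.arange(N) / N),           # periodic signal
        "wiener": np.cumsum(rng.normal(size=N)),                # discretised Wiener process
    }
    x = np.empty(N)
    x[0] = 0.5
    for i in range(1, N):                                       # logistic map x_{n+1} = r x_n (1 - x_n)
        x[i] = 3.7 * x[i - 1] * (1 - x[i - 1])
    signals["logistic (r=3.7)"] = x
    return A, signals

# Example: A, sigs = small_world_signals(p=0.3)
#          {name: dispersion_entropy_graph(A, X, ncdf_map) for name, X in sigs.items()}
```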
_Fixing \(k\), changing \(p\)._ By fixing \(k=1\), we analyse the effect of the parameter \(p\) (ranging from \(0\) to \(1\)) on the construction of the network \(G_{p}\) and on the entropy values. We compute \(\mathrm{DE_{G}}\) for each graph signal for \(20\) realizations, and the mean and standard deviation are depicted in Fig. 5a. For \(p=0\), the underlying graph \(G_{p}\) is a cycle of \(N\) vertices. A path graph is a geometric perturbation of a cycle [29; 30], and due to Prop. 1, we can consider the values at \(p=0\) to be the classical \(\mathrm{DE}\). The classical \(\mathrm{DE}\) is able to detect the dynamics of various signals, but its computation does not involve the topological structure, thus it only works for the path graph. In contrast, \(\mathrm{DE_{G}}\) takes into account not only the signal information but also the graph structure. In this setting, the dynamics of the random signal are almost constant, because they are not affected by \(G_{p}\). The Wiener process and sine signals exhibit lower entropy values for \(p=0\) (e.g., the cycle), as their dynamics stem from either periodicity (sine) or stochastic processes (Wiener). However, as \(p\) increases, the underlying graph becomes more random, and hence the entropy value also increases. In any case, \(\mathrm{DE_{G}}\) is still able to distinguish the random signal from the periodic signal and the Wiener process (for all \(p<0.8\)). Two logistic map signals are generated, one with oscillatory behaviour (\(r=3.3\)) and one with chaotic behaviour (\(r=3.7\)). These characteristics are well detected by \(\mathrm{DE_{G}}\) for all values of \(p\).
_Fixing \(p\), changing \(k\)._ By fixing \(p=0.05\), the underlying graph \(G_{k}\) where \(1\leq k\leq 6\) increases the connectivity. In Fig. 5b, we present the entropy values for each graph signal. The entropy values for the sine and Wiener signals almost remain constant, independent of \(G_{k}\), due to their periodicity and stochastic dynamics. However, the logistic map exhibits a higher degree of variability in its entropy values as \(k\) increases. This is because the logistic map is defined by a recurrence formula, where each value depends only on the previous sample, and if \(k\) increases, the underlying \(G_{k}\) has more connections between neighbourhoods, which may disrupt the recurrence relation, generating more irregular signals and resulting in higher entropy values. Conversely, the random signal shows a reduction in entropy values as \(k\) increases, as the creation of more connections leads to a more robust average value due to the law of large numbers.
**Graph Centrality Measures and \(\mathrm{DE_{G}}\).** Each centrality measure can be considered as a graph signal, allowing the application of the \(\mathrm{DE_{G}}\) algorithm to assess the irregularity of centrality measures on real and synthetic graphs (refer to Table I in the supplemental material [25]).
We used six centrality measures as graph signals, namely [22; 23]: _Eigenvector centrality_, _Betweenness_, _Closeness_, _Harmonic centrality_, _Degree_ and _Pagerank_. The \(\mathrm{DE_{G}}\) algorithm leverages the graph topology to effectively detect irregularities generated by each centrality measure, as demonstrated in Fig. 6. In particular, the _Eigenvector Centrality_ produces smooth signals [13] in most graphs, and this is reflected in low entropy values. Well-connected vertices tend to appear on the shortest paths between other vertices. When the graph has only a few such vertices, the entropy of the _Betweenness_ measure is lower. In cases where the graph has a more irregular distribution of vertices with this characteristic (e.g., in the sphere due to its symmetry), the entropy values are higher. A similar effect occurs when considering the average length of the shortest path between the vertex and all other vertices, as detected by the _Closeness_ measure. Finally, the _Degree_ and _PageRank_ measures produce more irregular graph signals because each signal's value defined on the graph depends only on local properties (the degree or the number and importance of the other vertices connected to it) rather than global properties (such as average paths between vertices in the previous measures).

Figure 4: Several eigenfunctions and their entropy values.

Figure 5: Entropy values for different signals defined on a small-world network generated by the Watts-Strogatz model.
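A sketch of this experiment using networkx's centrality routines (the function name `centrality_entropies` is illustrative; it reuses `dispersion_entropy_graph` and `ncdf_map` from the sketches above, reading each centrality off in node order as a graph signal):

```python
import numpy as np
import networkx as nx

def centrality_entropies(G, c=3, m=3):
    A = nx.to_numpy_array(G)
    measures = {
        "eigenvector": nx.eigenvector_centrality_numpy(G),
        "betweenness": nx.betweenness_centrality(G),
        "closeness": nx.closeness_centrality(G),
        "harmonic": nx.harmonic_centrality(G),
        "degree": nx.degree_centrality(G),
        "pagerank": nx.pagerank(G),
    }
    return {name: dispersion_entropy_graph(A, np.array([vals[v] for v in G.nodes()]),
                                           ncdf_map, m=m, c=c)
            for name, vals in measures.items()}
```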
**Comparing \(\mathrm{DE_{G}}\) and \(\mathrm{PE_{G}}\) Performance** The Permutation Entropy for Graph Signals, denoted by \(\mathrm{PE_{G}}\)[10], marked the first entropy metric specifically designed for graph-based data analysis. Both methods rely on the adjacency matrix, but \(\mathrm{PE_{G}}\) primarily focuses on the order of amplitude values (local properties), which might result in the loss of valuable information regarding the amplitudes (global properties). \(\mathrm{DE_{G}}\) addresses these limitations by providing a more comprehensive way to characterize the dynamics of graph signals. We conducted the same previous analysis with \(\mathrm{PE_{G}}\) (supplemental material [25]), and found that \(\mathrm{DE_{G}}\) consistently outperforms \(\mathrm{PE_{G}}\) in all cases, highlighting the potential of our novel method for effectively analysing graph signal irregularities.
**Conclusions.** We have introduced Dispersion Entropy for Graph Signals (\(\mathrm{DE_{G}}\)), a method that enhances the analysis of irregularities in graph signals. Our approach generalises classical dispersion entropy, enabling its application to a wide array of domains, including real-world graphs, directed and weighted graphs, and unveiling novel relationships between graph signals and graph-theoretic concepts (e.g., eigenvalues and centrality measures). By overcoming the limitations of the classical smoothness definition, \(\mathrm{DE_{G}}\) offers a more comprehensive approach to analysing graph signals and holds significant potential for further research and practical applications, as it effectively captures the complex dynamics of signals across diverse topology configurations.
**Acknowledgement** This work was supported by the Leverhulme Trust via a Research Project Grant (RPG-2020-158).
Figure 6: The dispersion entropy for various centrality measures.
**Supplemental Materials for "Dispersion entropy: A Measure of Irregularity for Graph Signals"**
## I Dispersion entropy for directed graphs.
In the letter, we have introduced the Dispersion Entropy for graph signals, denoted as \(\mathrm{DE}_{\mathrm{G}}\), in the context of _undirected_ graphs. To extend this concept to _directed graphs_ or _digraphs_, the approach remains analogous, with the primary distinction being the need to incorporate specific constraints on the rows of the embedding matrix. These constraints are imposed by the requirement that the vectors \(\mathbf{y}_{k}\) be well defined.
Let \(\overrightarrow{G}=(\mathcal{V},\mathcal{E},\mathbf{A})\) be a digraph with \(N\) vertices, where \(\mathbf{A}\) denotes the adjacency matrix of the directed graph, and \(\mathbf{X}=\{x_{i}\}_{i=1}^{N}\) is a signal defined on \(\overrightarrow{G}\). Given an _embedding dimension_\(m\) with \(2\leq m\in\mathbb{N}\), a _delay time_\(L\in\mathbb{N}\), and a _class number_\(c\in\mathbb{N}\), the Dispersion Entropy for Directed Graphs (\(\mathrm{DE}_{\overrightarrow{G}}\)) is defined as follows:
1. _Embedding matrix._ Let \(\mathcal{V}^{*}\subset\mathcal{V}\) be the set given by:
\[V^{*}=\{\,i\in\mathcal{V}\,|\,\sum_{j=1}^{N}(\mathbf{A}^{kL})_{ij}\neq 0\text{ for all }k=0,1,\ldots,m-1\,\}.\]
The _embedding matrix_\(\mathbf{Y}^{*}\in\mathbb{R}^{|V^{*}|\times m}\) is given by:
\[\mathbf{Y}^{*}=[\mathbf{y}_{0}^{*},\mathbf{y}_{1}^{*},\cdots,\mathbf{y}_{m-1 }^{*}]\] (S0)
where \(\mathbf{y}_{k}^{*}\in\mathbb{R}^{|V^{*}|\times 1}\) is given by the restriction of \(\mathbf{y}_{k}\) to the vertices in \(V^{*}\), i.e., \(\mathbf{y}_{k}^{*}=\left.\mathbf{y}_{k}\right|_{V^{*}}\).
2. _Map function._ Each element of the embedding matrix \(\mathbf{Y}^{*}\) is mapped to an integer number from \(1\) to \(c\), called a class, i.e., we define a function \(F\colon\mathbb{R}\to\mathbb{N}_{c}\) where \(\mathbb{N}_{c}=\{1,2,\ldots,c\}\) that applies element-wise on the matrix \(\mathbf{Y}^{*}\), i.e. \(F(\mathbf{Y}^{*})\in\mathbb{N}_{c}^{|V^{*}|\times m}\) where \(F(\mathbf{Y}^{*})_{ij}=F(y_{ij}^{*})\).
3. _Dispersion patterns._ Each row of the matrix \(F(\mathbf{Y}^{*})\), called an _embedding vector_, is mapped to a unique dispersion pattern. Formally, the _embedding vectors_ consist of \(m\) integer numbers (from \(1\) to \(c\)) corresponding to each row of the matrix \(F(\mathbf{Y}^{*})\), i.e., \(\mathrm{row}_{i}(F(\mathbf{Y}^{*}))=\left(F(y_{ij}^{*})\right)_{j=1}^{m}\) for \(i=1,2,\ldots,|V^{*}|\). The set of dispersion patterns is defined as \(\Pi=\{\,\pi_{v_{1}^{*}v_{2}^{*}\ldots v_{m}^{*}}\,|\,v_{i}^{*}\in\mathbb{N}_{c }\,\}\). Each embedding vector is uniquely mapped to a dispersion pattern, i.e., \(\mathrm{row}_{i}(F(\mathbf{Y}^{*}))\to\pi_{v_{1}^{*}v_{2}^{*}\ldots v_{m}^{*}}\) where \(v_{1}^{*}=F(y_{i1}^{*}),v_{2}^{*}=F(y_{i2}^{*}),\ldots,v_{m}^{*}=F(y_{im}^{*})\).
4. _Relative frequencies._ For each dispersion pattern \(\pi\in\Pi\), its relative frequency is obtained as:
\[p\left(\pi\right)=\frac{|\{i\mid i\in\mathcal{V}^{*},\mathrm{row}_{i}(F(\mathbf{Y}^{*}))\text{ has type }\pi\}|}{|\mathcal{V}^{*}|}\;.\]
5. _Shannon's entropy._ The _dispersion entropy for graph signals_\(\mathrm{DE}_{\overrightarrow{G}}\) is computed as the normalised Shannon's entropy for the distinct dispersion patterns as follows:
\[\mathrm{DE}_{\overrightarrow{G}}(\mathbf{X},m,L,c)=-\frac{1}{\ln(c^{m})}\sum_{\pi\in\Pi}p(\pi)\ln p(\pi)\;.\]
**Properties** The \(\mathrm{DE}_{\overrightarrow{G}}\) algorithm for directed graphs exhibits the following properties:
The directed graph version of \(\mathrm{DE}_{\overrightarrow{G}}\) serves as a generalization of its undirected counterpart. If \(G\) is an undirected connected (non-trivial) graph, then \(\mathcal{V}^{*}=\mathcal{V}\), and all the steps remain the same in both the directed and undirected versions of the algorithm.
The restriction process \(\mathbf{y}_{k}^{*}=\left.\mathbf{y}_{k}\right|_{V^{*}}\) is equivalent to the vertex virtualisation process presented in [31].
Similarly, the \(\mathrm{DE}_{\overrightarrow{G}}\) algorithm can be extended to weighted (directed or undirected) graphs by restricting the subset to
\[V^{*}=\{\,i\in\mathcal{V}\,|\,\sum_{j=1}^{N}(\mathbf{W}^{kL})_{ij}\neq 0\text{ for all }k=0,1,\ldots,m-1\,\}.\]
where \(\mathbf{W}\) represents the weighted adjacency matrix. This generalisation allows for a more comprehensive analysis of graph signals in various contexts.
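A sketch of this directed/weighted variant (the function name `dispersion_entropy_digraph` is illustrative): the only changes with respect to the undirected sketch above are the restriction to the vertex subset \(V^{*}\) and the normalisation of the pattern frequencies by \(|V^{*}|\). Passing a weighted adjacency matrix \(\mathbf{W}\) in place of \(\mathbf{A}\) gives the weighted generalisation.

```python
import numpy as np

def dispersion_entropy_digraph(A, X, map_fn, m=3, L=1, c=3):
    """DE for a signal X on a directed (or weighted) graph with adjacency/weight matrix A."""
    powers = [np.linalg.matrix_power(A.astype(float), k * L) for k in range(m)]
    keep = np.all([Ak.sum(axis=1) > 0 for Ak in powers], axis=0)        # the subset V*
    Y = np.column_stack([(Ak @ X)[keep] / Ak.sum(axis=1)[keep] for Ak in powers])
    Z = map_fn(Y, c)
    _, counts = np.unique(Z, axis=0, return_counts=True)
    p = counts / keep.sum()                                             # normalise by |V*|
    return -np.sum(p * np.log(p)) / np.log(c ** m)
```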
## II Proof of proposition 1.
The classical dispersion entropy for time series was established in the literature by [3]. In the following proposition, we demonstrate that when the \(\mathrm{DE}_{\mathrm{G}}\) is restricted to time series (considering the directed path as the underlying graph), the \(\mathrm{DE}_{\mathrm{G}}\) is equivalent to the classical DE.
A _directed path_ on \(k\) vertices is a directed graph that connects a sequence of distinct vertices with all edges oriented in the same direction, denoted as \(\overrightarrow{P}\). Its vertices are given by \(\mathcal{V}=1,2,\ldots,k\) and its arcs are \((i,i+1)\) for all \(1\leq i\leq k-1\).
**Proposition 1** (_Equivalence of_ DE _and_ DE\({}_{\mathrm{G}}\) _for time series_).: _Let \(\mathbf{X}=\{x_{i}\}_{i=1}^{N}\) be a time series and consider \(\overrightarrow{G}=\overrightarrow{P}\), the directed path on \(N\) vertices. Then, for all \(m,c\) and \(L\), the following equality holds:_
\[\mathrm{DE}(m,L,c)=\mathrm{DE}_{\overrightarrow{P}}(m,L,c)\;.\]
Proof.: The adjacency matrix for the directed path \(\mathbf{A}\) is given by
\[\mathbf{A}_{ij}=\begin{cases}1&\text{if}\quad i=1,2,\ldots,N-1\quad\text{and} \quad j=i+1\;,\\ 0&\text{otherwise}\;\;.\end{cases}\]
For any \(k\in\mathbb{N}\), the matrix \(\mathbf{A}^{k}\) is given by
\[\left(\mathbf{A}^{k}\right)_{ij}=\begin{cases}1&\text{if}\quad i=1,2,\ldots,N-k \quad\text{and}\quad j=i+k\;,\\ 0&\text{otherwise}\;\;,\end{cases}\]
in particular, for all \(k=0,1,\ldots,m-1\)
\[\sum_{j=1}^{N}(\mathbf{A}^{kL})_{ij}=\begin{cases}1&\text{if}\quad i=1,\ldots,N-( m-1)L\;,\\ 0&\text{otherwise}\;\;.\end{cases}\]
Thus, we have
\[\mathbf{y}_{k}^{*} =\left.\mathbf{y}_{k}\right|_{V^{*}}=\left.D\mathbf{A}^{kL}\mathbf{X}\right|_{V^{*}}\] \[=\left[x_{1+kL},x_{2+kL},\ldots,x_{i+kL},\ldots,x_{N-(m-1)L+kL}\right]^{T}\;.\]
The _embedding matrix_ is given by:
\[\mathbf{Y}^{*}=\begin{pmatrix}x_{1}&x_{1+L}&\ldots&x_{1+(m-1)L}\\ x_{2}&x_{2+L}&\ldots&x_{2+(m-1)L}\\ \vdots&\vdots&\ddots&\vdots\\ x_{N-(m-1)L}&x_{N-(m-2)L}&\ldots&x_{N}\end{pmatrix}\;,\]
and, given a map function \(F\colon\mathbb{R}\to\mathbb{N}_{c}\) defined by \(F=G\circ\text{NCDF}\colon\mathbb{R}\to\mathbb{N}_{c}\), the matrix \(F(\mathbf{Y}^{*})\) is given by:
\[F(\mathbf{Y}^{*})=\begin{pmatrix}z_{1}&z_{1+L}&\ldots&z_{1+(m-1)L}\\ z_{2}&z_{2+L}&\ldots&z_{2+(m-1)L}\\ \vdots&\vdots&\ddots&\vdots\\ z_{N-(m-1)L}&z_{N-(m-2)L}&\ldots&z_{N}\end{pmatrix}\;.\]
Subsequently, the embedding vectors are represented as \(\text{row}_{i}(F(\mathbf{Y}^{*}))=\left(z_{i},z_{i+L},\ldots,z_{i+(m-1)L}\right)\). Due to the fact that \(|\mathcal{V}^{*}|=N-(m-1)L\), the _relative frequencies_ and _Shannon's entropy_ associated with the graph-based dispersion entropy (\(\mathrm{DE}_{\overrightarrow{P}}\)) and the classical dispersion entropy (DE) are identical.
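A quick numerical illustration of the proposition, reusing `ncdf_map` and `dispersion_entropy_digraph` from the sketches above (the helper `classical_de` is an illustrative implementation of the standard sliding-window dispersion entropy): the classical DE of a time series coincides with the graph version evaluated on the directed path carrying the same samples.

```python
import numpy as np

def classical_de(x, map_fn, m=3, L=1, c=3):
    n = len(x)
    emb = np.column_stack([x[k * L: n - (m - 1 - k) * L] for k in range(m)])  # classical embedding
    Z = map_fn(emb, c)
    _, counts = np.unique(Z, axis=0, return_counts=True)
    p = counts / len(emb)
    return -np.sum(p * np.log(p)) / np.log(c ** m)

x = np.random.default_rng(1).random(500)
A_path = np.diag(np.ones(499), k=1)                  # adjacency matrix of the directed path
print(np.isclose(classical_de(x, ncdf_map),
                 dispersion_entropy_digraph(A_path, x, ncdf_map)))       # expected: True
```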
## III Graphs used for analysing centrality measures.
## IV Comparing \(\text{DE}_{\text{G}}\) and \(\text{PE}_{\text{G}}\) performance
In this section, we demonstrate the superior performance of the Dispersion Entropy for Graph Signals (\(\text{DE}_{\text{G}}\)) over the Permutation Entropy for Graph Signals, denoted by \(\text{PE}_{\text{G}}\)[10]. By applying both algorithms to all the examples in the manuscript, we consistently observe that \(\text{DE}_{\text{G}}\) outperforms \(\text{PE}_{\text{G}}\), highlighting the potential and efficacy of \(\text{DE}_{\text{G}}\) for analysing graph signal irregularities.
Following the same setting used to produce Figs. 2, 3, 5 and 6 in the manuscript, we substitute \(\text{PE}_{\text{G}}\) for \(\text{DE}_{\text{G}}\). The results are depicted in Fig. S1, S2, S3 and S4, respectively.
**Random Graphs and \(\text{PE}_{\text{G}}\).** The \(\text{PE}_{\text{G}}\) algorithm is not able to detect the increase in signal irregularity (due to frequency increments) and is unable to differentiate between distinct levels of irregularity in the \(\text{MIX}_{G}(p)\) signal based on the parameter \(p\) (Fig. S1a). Similarly, in Fig. S1b, as graph connectivity increases (by raising \(r\)), the algorithm saturates for an embedding dimension of \(m=2\). To achieve accurate characterisations, it is necessary to increase \(m>2\), and even then the behaviour is not monotonic, whereas \(\text{DE}_{\text{G}}\) performs well with smaller embedding dimensions.
**The spectrum of the Laplacian and \(\text{PE}_{\text{G}}\).** The entropy values of \(\text{PE}_{\text{G}}\) exhibit a highly consistent and regular behaviour, with minimal variations (Fig. S2). Despite the varying degrees of irregularity in the eigenfunctions (as shown in Fig. 4 of the manuscript), the \(\text{PE}_{\text{G}}\) algorithm fails to detect these differences.
**Small-world Networks and \(\text{PE}_{\text{G}}\).** The stochastic dynamics of the Wiener process are not adequately characterised by \(\text{PE}_{\text{G}}\) (Fig. S3a), as its entropy values are higher than those of random behaviour (random signal). Periodic dynamics are detected only with lower parameter values of \(p\), and the chaotic and oscillatory behaviours (logistic map) are identified by \(\text{PE}_{\text{G}}\), which is consistent with the results presented in [10]. However, as the |
2309.03370 | A few thoughts on $θ$ and the electric dipole moments | I highlight a few thoughts on the contribution to the dipole moments from the
so-called $\theta$ parameter. The dipole moments are known to be generated by
$\theta$. In fact, the renowned strong $\cal{CP}$ problem was formulated as a
result of non-observation of the dipole moments. What is less known is that
there is another parameter of the theory, the $\theta_{QED}$ which becomes also
a physical and observable parameter of the system when some conditions are met.
This claim should be contrasted with conventional (and very naive) viewpoint
that the $\theta_{\rm QED}$ is unphysical and unobservable. A specific
manifestation of this phenomenon is the so-called Witten effect when the
magnetic monopole becomes the dyon with induced electric charge $e'=-e
\frac{\theta_{QED}}{2\pi}$. We argued that the similar arguments suggest that
the magnetic dipole moment $\mu$ of any microscopical configuration in
the background of $\theta_{QED}$ generates the electric dipole moment $\langle
d_{\rm ind} \rangle $ proportional to $\theta_{QED}$, i.e. $\langle d_{\rm
ind}\rangle= - \frac{\theta_{\rm QED} \cdot \alpha}{\pi} \mu$. We also argue
that many $\cal{CP}$ correlations such as $ \langle \vec{B}_{\rm ext}
\cdot\vec{E}\rangle = -\frac{\alpha\theta_{\rm QED}}{\pi}\vec{B}^2_{\rm ext}$
will be generated in the background of an external magnetic field $\vec{B}_{\rm
ext} $ as a result of the same physics. | Ariel Zhitnitsky | 2023-09-06T21:39:14Z | http://arxiv.org/abs/2309.03370v1 | # A few thoughts on \(\theta\) and the electric dipole moments.
###### Abstract
I highlight a few thoughts on the contribution to the dipole moments from the so-called \(\theta\) parameter. The dipole moments are known to be generated by \(\theta\). In fact, the renowned strong \(\mathcal{CP}\) problem was formulated as a result of non-observation of the dipole moments. What is less known is that there is another parameter of the theory, the \(\theta_{QED}\) which becomes also a physical and observable parameter of the system when some conditions are met. This claim should be contrasted with conventional (and very naive) viewpoint that the \(\theta_{\rm QED}\) is unphysical and unobservable. A specific manifestation of this phenomenon is the so-called Witten effect when the magnetic monopole becomes the dyon with induced electric charge \(e^{\prime}=-e\frac{\theta_{QED}}{2\pi}\). We argued that the similar arguments suggest that the magnetic dipole moment \(\mu\) of any microscopical configuration in the background of \(\theta_{QED}\) generates the electric dipole moment \(\langle d_{\rm ind}\rangle\) proportional to \(\theta_{QED}\), i.e. \(\langle d_{\rm ind}\rangle=-\frac{\theta_{\rm QED}\cdot\alpha}{\pi}\mu\). We also argue that many \(\mathcal{CP}\) correlations such as \(\langle\vec{B}_{\rm ext}\cdot\vec{E}\rangle=-\frac{\alpha\theta_{\rm QED}}{\pi}\vec{B}_{\rm ext}^{2}\) will be generated in the background of an external magnetic field \(\vec{B}_{\rm ext}\) as a result of the same physics.
## I Introduction and motivation
The leitmotiv of the present work is related to the fundamental parameter \(\theta\) in Quantum Chromodynamics (QCD), as well as the axion field related to this parameter. The \(\theta\) parameter was originally introduced in the 70s. Although the \(\theta\) term can be represented as a total derivative and does not change the equations of motion, it is known that this parameter is a fundamental physical parameter of the system on the non-perturbative level. A non-vanishing \(\theta\) introduces \(\mathcal{P}\) and \(\mathcal{CP}\) violation in QCD, which is best captured by the renowned strong \(\mathcal{CP}\) problem.
In particular, the most important element for the present notes is that the \(\theta\) parameter generates the neutron (and proton) dipole moment, which is known to be very small, \(d_{n}\lesssim 10^{-26}{\rm e\cdot cm}\), see e.g. the review in Physics Today [1]. This can be translated into the upper limit \(\theta\lesssim 10^{-10}\). The strong \(\mathcal{CP}\) problem is formulated as follows: why is the parameter \(\theta\) so small in a strongly coupled gauge theory? The proton electric dipole moment \(d_{p}\), similar to the neutron dipole moment \(d_{n}\), will also be generated as a result of non-vanishing \(\theta\). In particular, a future measurement of \(d_{p}\) at the level \(d_{p}\lesssim 10^{-29}{\rm e\cdot cm}\) would translate into a much better upper limit of \(\theta\lesssim 10^{-13}\).
The strong \(\mathcal{CP}\) problem in QCD was resolved by promoting the fundamental parameter \(\theta\) to a dynamical axion \(\theta(x)\) field, see the original papers [2; 3; 4; 5; 6; 7; 8] and review articles [9; 10; 11; 12; 13; 14]. However, the axion has not yet been discovered 45 years after its initial formulation. Still, it remains the best resolution of the strong \(\mathcal{CP}\) problem to date, which has also led to numerous proposals for direct dark matter searches.
On the other hand, one may also discuss a similar theta term in QED. It is normally assumed that the \(\theta_{\rm QED}\) parameter in the abelian Maxwell Electrodynamics is unphysical and can always be removed from the system. The arguments are based on the observation that the \(\theta_{\rm QED}\) term does not change the equations of motion, which is also correct for non-abelian QCD. However, in contrast with QCD, where \(\pi_{3}[SU(3)]=\mathbb{Z}\), the topological mapping for the abelian gauge group \(\pi_{3}[U(1)]=0\) is trivial. This justifies the widely accepted view that \(\theta_{\rm QED}\) does not modify the equations of motion (which is correct) and does not affect any physical observables and can be safely removed from the theory (which is incorrect as we argue below). We emphasize here that the claim is not that \(\theta_{\rm QED}\) vanishes. Instead, the (naive) claim is that the physics cannot depend on \(\theta_{\rm QED}\) irrespective of its value.
While these arguments are indeed correct for a trivial vacuum background when the theory is defined on an infinitely large 3+1 dimensional Minkowski space-time, it has been known for quite some time that the \(\theta_{\rm QED}\) parameter is in fact a physical parameter of the system when the theory is formulated on a non-simply connected, compact manifold with non-trivial \(\pi_{1}[U(1)]=\mathbb{Z}\), when the gauge cannot be uniquely fixed, see the original references [15; 16] and review [17]. Such a construction can be achieved, for example, by putting a system into the background of a magnetic field or by defining a system on a compact manifold with non-trivial topology. In what follows, we treat \(\theta_{\rm QED}\) as a new fundamental (unknown) parameter of the theory.
Roughly speaking, the phenomena, in all respects, are very similar to the Aharonov-Bohm and Aharonov-Casher effects when the system is highly sensitive to pure gauge (but topologically nontrivial) configurations. In such circumstances the system cannot be fully described by a single ground state1. Instead, there are multiple degenerate states which are classified by a topological index. The physics related to pure gauge configurations describing the topological sectors is highly non-trivial. In particular, the gauge cannot be fixed and defined uniquely in such systems. This is precisely the deep reason why the \(\theta_{\rm QED}\) parameter enters the physical observables in the axion Maxwell electrodynamics, in full agreement with very generic arguments [15; 16; 17]. Precisely these contributions lead to the explicit \(\theta_{\rm QED}\)-dependent effects, which cannot be formulated in terms of conventional propagating degrees of freedom (propagating photons with two physical polarizations).
Footnote 1: We refer to [18] with physical explanation (in contrast with very mathematical papers mentioned above) of why the gauge cannot be uniquely fixed in such circumstances. In paper [18] the so-called “modular operator” has been introduced into the theory. The \(\exp(i\theta)\) parameter in QCD is the eigenvalue of the large gauge transformation operator, while \(\exp(i\theta_{\rm QED})\) is the eigenvalue of the modular operator from [18]. This analogy explicitly shows why \(\theta_{\rm QED}\) becomes a physically observable parameter in some circumstances.
The possible physical effects from \(\theta_{\rm QED}\) have also been discussed previously [19; 20] in the spirit of the present notes. We refer to our paper [21] with explicit and detail computations of different observable effects (such as induced dipole moment, induced current on a ring, generating the potential difference on the plates, etc) when the system is defined on a nontrivial manifold, or placed in the background of the magnetic field.
It is important to emphasize that some effects can be proportional to \(\theta_{\rm QED}\), as opposed to \(\dot{\theta}_{\rm QED}\) as commonly assumed or discussed for perturbative computations. Precisely this feature has the important applications when some observables are proportional to the static time-independent \(\theta_{\rm QED}\), and, in general, do not vanish even when \(\dot{\theta}_{\rm QED}\equiv 0\), see below.
## II Axion \(\theta\) field and variety of topological phenomena
Our starting point is the demonstration that the \(\theta_{\rm QED}\) indeed does not enter the equations of motion. As a direct consequence of this observation, the corresponding Feynman diagrams at any perturbation order will produce vanishing result for any physical observable at constant \(\theta_{\rm QED}\). Indeed,
\[\vec{j}_{a}=-\dot{\theta}_{\rm QED}\ \frac{\alpha}{2\pi}\ \vec{B},\ \ \alpha\equiv\frac{e^{2}}{4\pi}, \tag{1}\]
which shows that \(\dot{\theta}_{\rm QED}\) and not \(\theta_{\rm QED}\) itself enters the equations of motion. In our analysis we ignored spatial derivatives \(\partial_{i}\theta_{\rm QED}\) as they are small for non-relativistic axions. This anomalous current (1) points along magnetic field in contrast with ordinary \(E\&M\), where the current is always orthogonal to \(\vec{B}\). Most of the recent proposals [9; 10; 11; 12; 13; 14] to detect the dark matter axions are precisely based on this extra current (1) when \(\dot{\theta}\) is identified with propagating axion field oscillating with frequency \(m_{a}\).
We would like to make a few comments on the unusual features of this current. First of all, the generation of the very same non-dissipating current (1) in the presence of \(\theta\) has been a very active area of research in recent years. However, it occurs at a drastically different scale, of order \(\Lambda_{\rm QCD}\) instead of \(m_{a}\). The main driving force for this activity stems from the ongoing experimental results at RHIC (relativistic heavy ion collider) and the LHC (Large Hadron Collider), which can be interpreted as the observation of such an anomalous current (1).
The basic idea for such an interpretation can be explained as follows. It has been suggested by [22; 23] that the so-called \(\theta_{\rm ind}\)-domain can be formed in heavy ion collisions as a result of some non-equilibrium dynamics. This induced \(\theta_{\rm ind}\) plays the same role as the fundamental \(\theta\) and leads to a number of \(\mathcal{P}\) and \(\mathcal{CP}\) odd effects, such as the chiral magnetic effect, the chiral vortical effect, and the charge separation effect, to name just a few. This field of research, initiated in [24], became a hot topic in recent years as a result of many interesting theoretical and experimental advances, see the recent review papers [25; 26] on the subject.
In particular, the charge separation effect mentioned above can be viewed as a result of the generation of the induced electric field
\[\langle\vec{E}\rangle_{\rm ind}=-\frac{\alpha\theta_{\rm QED}}{\pi}\vec{B}_{\rm ext} \tag{2}\]
in the background of the external magnetic field \(\vec{B}_{\rm ext}\) and \(\theta_{\rm QED}\neq 0\). This induced electric field \(\langle\vec{E}\rangle_{\rm ind}\) separates the electric charges, which represents the charge separation effect. Then formula (2) essentially implies that the electric field locally emerges in every location where magnetic field is present in the background of the \(\theta_{\rm QED}\neq 0\).
The effect of separation of charges can be interpreted as the generation of an electric dipole moment in such an unusual background. Indeed, for table-top experiments it has been argued in [21] that in the presence of the \(\theta_{\rm QED}\) the electric and magnetic dipole moments of a topologically nontrivial configuration (such as a ring or torus) are intimately related:
\[\langle d_{\rm ind}\rangle=-\frac{\theta_{\rm QED}\cdot\alpha}{\pi}\langle m _{\rm ind}\rangle,\quad\alpha\equiv\frac{e^{2}}{4\pi} \tag{3}\]
which obviously resembles Witten's effect [27] when the magnetic monopole becomes the dyon with electric charge \(e^{\prime}=-(e\theta_{\rm QED}/2\pi)\).
To support this interpretation we represent the magnetic dipole moment \(\langle m_{\rm ind}\rangle\) as a superposition of two magnetic charges \(g\) and \(-g\) at distance \(L_{3}\) apart, where \(L_{3}\) can be viewed as the size of the compact manifold in construction [21] along the third direction2. As the magnetic charge \(g\) is quantized, \(g=\frac{2\pi}{e}\), formula (3) can be rewritten as
Footnote 2: This construction should be thought of as a purely mathematical one. The absence of real magnetic monopoles in Nature cannot prevent us from such a fictitious theoretical construction.
\[\langle d_{\rm ind}\rangle=-\frac{\theta_{\rm QED}e^{2}}{4\pi^{2}}\frac{2\pi L _{3}}{e}=-\left(\frac{e\theta_{\rm QED}}{2\pi}\right)L_{3}=e^{\prime}L_{3} \tag{4}\]
This configuration becomes an electric dipole moment \(\langle d_{\rm ind}\rangle\) with the electric charges \(e^{\prime}=-(e\theta_{\rm QED}/2\pi)\) which precisely coincides with the Witten's expression for \(e^{\prime}=-(e\theta_{\rm QED}/2\pi)\) in terms of the \(\theta_{\rm QED}\) according to [27]. This construction is justified as long as magnetic monopole size is much smaller than the size of the entire configuration \(L_{3}\) such that the topological sectors from monopole and anti-monopole do not overlap and cannot untwist themselves. The orientation of the axis \(L_{3}\) also plays a role as it defines the \(L_{1}L_{2}\) plane with non-trivial mapping determined by \(\pi_{1}[U(1)]=\mathbb{Z}\), see below. If our arguments on justification of this formula are correct it can be applied to all fundamental particles including electrons, neutrons, and protons because the typical scale \(L_{3}\sim m_{e}^{-1}\sim 10^{-11}\)cm, while magnetic monopole itself can be assumed to be much smaller in size. In this case the expression (3) derived in terms of the path integral in [21] assumes the form
\[\langle d_{\rm ind}\rangle=-\frac{\theta_{\rm QED}\cdot\alpha}{\pi}\mu, \tag{5}\]
where \(\mu\) is the magnetic moment of any configuration, including the elementary particles: \(\mu_{e},\mu_{p},\mu_{n}\). As emphasized in [21; 28] the corresponding expression can be represented in terms of the boundary terms, which normally emerge for all topological effects.
The observed upper limit for \(d_{e}<10^{-29}{\rm e}\cdot{\rm cm}\) implies that \(\theta_{\rm QED}<10^{-16}\). We do not have a good explanation of why this parameter is so small. This question is not addressed in the present work. It is very possible that a different axion field must be introduced into the theory which drives \(\theta_{\rm QED}\) to zero, similar to conventional axion resolution of the strong \(\mathcal{CP}\) problem [2; 3; 4; 5; 6; 7; 8].
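As a rough consistency check of the bound quoted above (treating \(\mu_{e}\) as being of the order of the Bohr magneton, \(\mu_{e}\approx 1.9\cdot 10^{-11}\,{\rm e\cdot cm}\); the numbers below are an illustrative estimate rather than part of the original analysis), the relation (5) gives
\[|\langle d_{\rm ind}\rangle|\simeq\frac{\alpha}{\pi}\,\theta_{\rm QED}\,\mu_{e}\approx 2.3\cdot 10^{-3}\times 1.9\cdot 10^{-11}\,\theta_{\rm QED}\,{\rm e\cdot cm}\approx 4\cdot 10^{-14}\,\theta_{\rm QED}\,{\rm e\cdot cm},\]
so that \(d_{e}<10^{-29}\,{\rm e\cdot cm}\) indeed corresponds to \(\theta_{\rm QED}\lesssim 2\cdot 10^{-16}\), consistent with the quoted value at the order-of-magnitude level.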
An equation similar to (5), relating the electric and magnetic dipole moments of the elementary particles, was also derived in [29; 30], where it has been argued that for a time-dependent axion background the electric dipole moment of the electron \(d_{e}\) will be generated3, and it must be proportional to the magnetic moment of the electron \(\mu_{e}\) and the axion field \(\theta(t)\). The absolute value for the axion field \(\theta_{0}\approx 3.7\cdot 10^{-19}\) was fixed by assuming the axions saturate the dark matter density today. While the relation (5) and the one derived in [29; 30] look identical (in the static limit \(m_{a}\to 0\) and with proper normalization), the starting points are dramatically different: we begin with a canonically defined fundamental unknown constant \(\theta_{\rm QED}\neq 0\), while the computations of [29; 30] are based on the assumption of a time-dependent fluctuating axion field saturating the DM density today, which obviously implies a different normalization for \(\theta\). Still, both expressions identically coincide in the static \(m_{a}\to 0\) limit.
Footnote 3: We also refer to paper [31] with criticism of this result and [32] responding to this criticism.
The identical expressions with precisely the same coefficients (for time dependent [29; 30] and time independent (5) formulae) in static limit \(m_{a}\to 0\) relating the electric dipole and magnetic dipole moments strongly suggest that the time dependent expression [29; 30] can be smoothly extrapolated to (5) with constant \(\theta_{\rm QED}\). This limiting procedure can be viewed as a slow adiabatic process when \(\dot{\theta}\propto m_{a}\to 0\) and the \(\theta\) becomes the time-independent parameter, \(\theta\to\theta_{\rm QED}\) when the same normalization is implemented4.
Footnote 4: A different approach on computation of the time dependent dipole moment due to the fluctuating \(\theta\) parameter was developed recently in [33]. The corresponding expression given in [33] approaches a finite non-vanishing constant value if one takes the consecutive limits \(t\to\infty\) and after that the static limit \(m_{a}\to 0\) by representing \(e/(2m)=\mu\) in terms of the magnetic moment of a fermion. In this form it strongly resembles the expression derived in [29; 30].
We want to present one more argument suggesting that the constant \(\theta_{\rm QED}\) may produce physical effects including the generating of the electric dipole moment. Indeed the \(S_{\theta}\) term in QED in the background of the uniform static magnetic field along \(z\) direction can be rewritten as follows
\[S_{\theta} \propto \theta_{\rm QED}e^{2}\int\,{\rm d}^{4}x\ \vec{E}\cdot\vec{B}=2\pi\kappa\ \theta_{\rm QED}\cdot\left[e\int dzdtE_{z}\right]. \tag{6}\] \[\mbox{where}\ \ 2\pi\kappa\equiv\left[e\int d^{2}x_{\perp}B_{z}\right]\]
The expression on the right hand side is still a total divergence, and does not change the equation of motion. In fact, the expression in the brackets is identically the same as the \(\theta\) term in 2d Schwinger model, where it is known to be a physical parameter of the system as a result of nontrivial mapping \(\pi_{1}[U(1)]=\mathbb{Z}\), see e.g. [34] for a short overview of the \(\theta\) term in 2d Schwinger model in the given context5.
Footnote 5: In this exactly solvable 2d Schwinger model one can explicitly see why the gauge cannot be uniquely fixed and, as a consequence of this ambiguity, \(\theta\) becomes an observable parameter of the system. The same 2d Schwinger model also teaches us how this physics can be formulated in terms of the so-called Kogut-Susskind ghost [35], which is the direct analog of the Veneziano ghost in 4d QCD.
The expression (6) shows once again that \(\theta_{\rm QED}\) parameter in 4d Maxwell theory becomes the physical parameter of the system in the background of the magnetic field6. In such circumstances the electric field will be induced along the magnetic field in the region of space where the magnetic field is present according to (2). This relation explains why the electric dipole moment of any configuration becomes related to the magnetic dipole moment of the same configuration as equation (5) states.
Footnote 6: The parameter \(\kappa\) which classifies our states is an arbitrary real number. It measures the physical magnetic flux, which does not necessarily assume integer values.
The topological arguments for special case (6) when the external magnetic field is present in the system suggest that the corresponding configurations cannot "unwind" as the uniform static magnetic field \(B_{z}\) enforces the system to become effectively two-dimensional, when the \(\theta_{\rm QED}\) parameter is obviously a physical parameter, similar to analogous analysis in the well-known 2d Schwinger model, see footnote 5.
The practical implication of this claim is that there are some \(\theta_{\rm QED}\)-dependent contributions to the dipole moments of the particles. While the \(\theta_{\rm QED}\) does not produce any physically measurable effects for QED with trivial topology, or in vacuum, we expect that in many cases as discussed in [21] and in present work the physics becomes sensitive to the \(\theta_{\rm QED}\) which is normally "undetectable" in a typical scattering experiment based on perturbative analysis of QED. We want to list below several \(\mathcal{CP}\) odd correlations which will be generated in the presence of \(\theta_{\rm QED}\), and which could be experimentally studied by a variety of instruments.
The generation of the induced electric field (2) unambiguously implies that the following \(\mathcal{CP}\) odd correlation will be generated
\[\langle\vec{B}_{\rm ext}\cdot\vec{E}\rangle=-\frac{\alpha\theta_{\rm QED}}{\pi }\vec{B}_{\rm ext}^{2}. \tag{7}\]
Another \(\mathcal{CP}\) odd correlation which can be also studied is as follows:
\[\langle\sum_{i}\vec{\mu}_{i}\cdot\vec{E}\rangle=-\frac{\alpha\theta_{\rm QED}} {\pi}\sum_{i}\vec{B}_{\rm ext}\cdot\vec{\mu}_{i}, \tag{8}\]
where one should average over entire ensemble of particles with magnetic moments \(\vec{\mu}_{i}\), which are present in the region of a non-vanishing magnetic field \(\vec{B}_{\rm ext}\). The induced electric field (2) will coherently accelerate the charged particles along \(\vec{B}_{\rm ext}\) direction such that particles will assume on average non-vanishing momentum \(\vec{p}_{i}\) along \(\vec{B}_{\rm ext}\). As a result of this coherent behaviour the following \(\mathcal{CP}\) odd correlation for entire ensemble of particles is expected to occur
\[\langle\sum_{i}\vec{\mu}_{i}\cdot\vec{p}_{i}\rangle\propto\frac{\alpha\theta_ {\rm QED}}{\pi}\sum_{i}\vec{B}_{\rm ext}\cdot\vec{\mu}_{i}. \tag{9}\]
One should add that the dual picture when the external magnetic field \(\vec{B}_{\rm ext}\) is replaced by external electric field \(\vec{E}_{\rm ext}\) also holds. For example, instead of (2) the magnetic field will be induced in the presence of the strong external electric field \(\vec{E}_{\rm ext}\), as e.g. in the proposal [36] to measure the proton EDM when the \(\vec{E}_{\rm ext}\) is directed along the radial component,
\[\langle\vec{B}\rangle_{\rm ind}=\frac{\alpha\theta_{\rm QED}}{\pi}\vec{E}_{ \rm ext}, \tag{10}\]
such that the correlation similar to (7) will be also generated
\[\langle\vec{B}\cdot\vec{E}_{\rm ext}\rangle=\frac{\alpha\theta_{\rm QED}}{\pi} \vec{E}_{\rm ext}^{2}. \tag{11}\]
## III Conclusion and future directions
The topic of the present notes on the dipole moments of the particles and antiparticles in the presence of the \(\theta_{\rm QED}\) is largely motivated by the recent experimental advances in the field, see [36; 37]. There are many other \(\mathcal{CP}\) odd phenomena which accompany the generation of the dipole moments. All the relations discussed in the present notes, including (5) and (7), are topological in nature and related to the impossibility of uniquely describing the gauge fields over the entire system, as overviewed in the Introduction.
Essentially the main claim is that the \(\theta_{\rm QED}\) should be treated as a new fundamental parameter of the theory when the system is formulated on a topologically non-trivial manifold, and in particular, in the background of a magnetic field which enforces a non-trivial topology, as argued in this work.
I believe that the very non-trivial relations such as (5) or (7), which apparently emerge in the system at non-vanishing \(\theta\) and \(\theta_{\rm QED}\), are just the tip of the iceberg of much deeper physics rooted in the topological features of the gauge theories.
In particular, the \(\theta\)-dependent portion of the vacuum energy could be the source of the Dark Energy today (at \(\theta=0\)) in the de Sitter expanding space as argued in [38; 39]. Furthermore, these highly non-trivial topological phenomena in strongly coupled gauge theories can be tested in QED tabletop experiments, where the very same gauge configurations which lead to relations similar to (5) or (7) may generate additional Casimir forces, as well as many other effects as discussed in [28; 34; 40]. What is even more important is that many of these effects in axion electrodynamics can, in principle, be measured, see [41; 42; 43; 44; 45] with specific suggestions and proposals. I finish on this optimistic note.
## Acknowledgements
These notes appeared as a result of discussions with Dima Budker and Yannis Semertzidis during the conference "Axions across boundaries between Particle Physics, Astrophysics, Cosmology and forefront Detection Technologies" which took place at the Galileo Galilei Institute in Florence, June 2023. I am thankful to them for their insisting to write some notes on the dipole moments of the particles and their relations to the fundamental parameters of the theory, the \(\theta\) and the \(\theta_{\rm QED}\). I am also thankful to participants of the Munich Institute for Astro-, Particle and BioPhysics (MIAPbP) workshop on "quantum sensors and new physics" for their questions during my presentation. Specifically, I am thankful to Yevgeny Stadnik for the long discussions on topics related to refs [29; 30; 31; 32; 33]. These notes had been largely completed during the MIAPbP workshop in August 2023. Therefore, I am thankful to the MIAPbP for the organization of this workshop. The MIAPbP is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy- EXC-2094 -390783311. This research was supported in part by the Natural Sciences and Engineering Research Council of Canada.
|
2306.17580 | Timely and Massive Communication in 6G: Pragmatics, Learning, and
Inference | 5G has expanded the traditional focus of wireless systems to embrace two new
connectivity types: ultra-reliable low latency and massive communication. The
technology context at the dawn of 6G is different from the past one for 5G,
primarily due to the growing intelligence at the communicating nodes. This has
driven the set of relevant communication problems beyond reliable transmission
towards semantic and pragmatic communication. This paper puts the evolution of
low-latency and massive communication towards 6G in the perspective of these
new developments. At first, semantic/pragmatic communication problems are
presented by drawing parallels to linguistics. We elaborate upon the relation
of semantic communication to the information-theoretic problems of
source/channel coding, while generalized real-time communication is put in the
context of cyber-physical systems and real-time inference. The evolution of
massive access towards massive closed-loop communication is elaborated upon,
enabling interactive communication, learning, and cooperation among wireless
sensors and actuators. | Deniz Gündüz, Federico Chiariotti, Kaibin Huang, Anders E. Kalør, Szymon Kobus, Petar Popovski | 2023-06-30T12:01:15Z | http://arxiv.org/abs/2306.17580v2 | # Timely and Massive Communication in 6G:
###### Abstract
5G has expanded the traditional focus of wireless systems to embrace two new connectivity types: ultra-reliable low latency and massive communication. The technology context at the dawn of 6G is different from the past one for 5G, primarily due to the growing intelligence at the communicating nodes. This has driven the set of relevant communication problems beyond reliable transmission towards semantic and pragmatic communication. This paper puts the evolution of low-latency and massive communication towards 6G in the perspective of these new developments. At first, semantic/pragmatic communication problems are presented by drawing parallels to linguistics. We elaborate upon the relation of semantic communication to the information-theoretic problems of source/channel coding, while generalized real-time communication is put in the context of cyber-physical systems and real-time inference. The evolution of massive access towards massive closed-loop communication is elaborated upon, enabling interactive communication, learning, and cooperation among wireless sensors and actuators.
6G, AI-based networking, massive access, machine-to-machine communications, pragmatic communication, semantic communication
## I Introduction
The focus of wireless cellular evolution before 5G was on high-fidelity delivery of voice and reliable data transmission at increasing rates. 4G can be seen as a reliable broadband data pipe, geared towards human-operated mobile devices. 5G expanded the wireless landscape by considering autonomous machine-type devices, robots, as well as a new class of tactile, human-operated devices. This led to two new connectivity types in 5G: low latency, coupled with high reliability, and massive Internet of Things (IoT) communication. Tracing the further evolution of timely and massive communication needs to account for the 6G features that came to prominence via various technology visions. The premises in this article are:
* With the advances in hardware technology and machine learning (ML) and artificial intelligence (AI) algorithms, the intelligence at the communicating nodes increases continuously, leading to synergy between communication, computation, and learning. The transmitted data can be used to contribute to the learning process and, vice versa, communication could be aided by the intelligence at the nodes and embedded in the protocols. As the vision of networked intelligence is emerging, the problem of communication as an accurate reproduction of data bits can be generalized to the problem of communicating the intended meaning, summarized in the phrase _semantic communication_[1, 2, 3].
* Recently, there has been a growing awareness that low latency is just one example of a variety of timing constraints in a communication system [4]. In practice, the timing requirements belong to a more general category in which the usefulness of communication is gauged with respect to the attainment of a goal. This is the basis for _pragmatic communication_, where cyber-physical systems interact with each other and the environment.
* Massive IoT communication has been seen as a supply of data used for training of ML algorithms, or as input to inference procedures. Thus, massive access has traditionally been defined as a problem for uplink transmission. Nevertheless, new applications call for revision of this assumption, as various real-time systems require closed-loop communication
with a massive number of sensors and actuators. Furthermore, data acquired by IoT devices can be used for distributed training of ML models, using both uplink and downlink communication.
Based on these observations, in this paper, we will describe the evolution of 6G systems towards a _network of intelligence_ augmenting low-latency and massive communications considered in 5G. We start with a discussion on semantic and pragmatic communication by drawing parallels to linguistics. Next, we describe the relation between semantic communication and data/knowledge representation seen through the lens of source coding. This is followed by generalized real-time requirements in 6G, including both cyber-physical systems that interact with the environment as well as real-time inference. Finally, we discuss closed-loop massive communication and its relation to distributed/federated learning.
## II Semantic and Pragmatic Communication
A standard introduction to semantic communication refers to the preface that W. Weaver wrote in [5] to the original work of Shannon from 1948. Weaver defined three types of communication problems: _(Level A)_ The technical problem: accurate transmission of bits from the transmitter to the receiver; _(Level B)_ The semantic problem: how precisely do the transmitted symbols convey the desired meaning; and _(Level C)_ The effectiveness problem: how the received meaning affects conduct in the desired way. Despite the initial intuitive appeal of this classification, the distinction between semantics and effectiveness and the corresponding communication models remains blurry. This is also reflected in the often-used phrase "semantic and goal-oriented communications," which treats the beyond-technical problems in bulk, avoiding a clear distinction; yet, the term "goal-oriented" presumably covers the effectiveness part.
As an attempt for a clearer distinction between these two problem types, we resort to the five linguistic domains [6], as shown in Fig. 1, and relate them to the domains in communication engineering, as well as to Weaver's classification. _Phonology_ deals with the speech sounds and phonemes as basic ingredients of a language; they can be related to the symbols or channel uses of an information-theoretic model or physical layer in communication engineering. _Morphology_ studies the minimal meaningful units of language called morphemes; such a meaningful unit in a communication system would be a codeword or a packet. _Syntax_ deals with the rules to form sentences and can thus be related to the correct protocol operation. However, there is no clear distinction with the notion of a packet: in networking, a packet is a data unit with an integrity check (error detection), and the error detection can be seen as a syntactic operation that is part of the protocol. Regardless of that, the first three linguistic domains can be mapped, as a group, to the three general topics of communication engineering (physical layer design, coding, and higher-layer protocols) and thereby to Weaver's technical problem. _Semantics_ refers to the study of how words are combined to convey meaning. Finally, _pragmatics_ refers to the rules in conversation and social contexts, including implicit communication through external cues and shared context. Fig. 1 shows that these two domains, corresponding to Weaver's semantic and effectiveness levels, leave a vacuum in traditional communication engineering, and therefore attract research activity. Against this background, we can attempt to define a distinction between semantic and pragmatic (effective) communication.
_Semantic communication_ refers to the unidirectional transmission of a message or a signal that should be interpreted within the given context of the sender and receiver, along with the utilized source/channel coding. As such, it does not need to be related to the physical time, but can work with an abstract communication model, consisting of channel uses. Time is not explicitly a part of Shannon's mathematical model of communication; that model provides a causal relation between inputs and outputs, but says nothing about the time duration between two inputs, two outputs, or between an input and its corresponding output. Semantic communication can be put in a temporal context, such as "convey a certain meaning within a given time interval \(T\)", by, e.g., establishing a symbol duration \(T_{s}\), a codeword duration \(T\), and a codeword length of \(n=\frac{T}{T_{s}}\) symbols.
_Pragmatic communication_ captures the interaction between communicating entities, machines or humans, as well as interaction with their physical environment, such as movement or actuated command. Dealing with the physical time is thus necessary in pragmatic communication, as the interaction between the entities via communication and/or action occurs under the constraints of a physical time. Note that pragmatic communication is not identical to multi-user communication, as there are multi-user channels in which the transmitting users are not interacting with each other, such as the multiple access or broadcast channels, which can also be seen as setups for unidirectional semantic communication.

Fig. 1: The five linguistic domains and their relation to protocol layers and the types of communication problems.
## III Semantic Communication
Semantic communication has recently become a catch-phrase to describe communication systems that cater for the content of the source signal rather than focusing solely on the reliable transmission of bits. Current communication protocols are designed to create the largest capacity reliable bit pipes with minimal resources, i.e., maximize the capacity in terms of bits per signal dimension, while achieving the highest reliability with the highest energy efficiency. In that sense, the codes and modulation schemes employed at the physical layer are independent of the delivered content and transfer the packets of bits to their destinations. Content is handled at the application layer, where different types of information sources, e.g., text, image, or video, are transformed into packets of bits, where the goal is to have the most efficient representation that allows the receiver to reconstruct the information with the desired fidelity. Thus, the semantic aspects of the source are currently handled at the application layer.
Semantic communication is commonly exemplified through communication of a text or an image. Semantic-aware communication can be used to convey the text meaning without sending the exact same text. For images, semantics typically refers to the objects in the image and their relative locations, without necessarily reconstructing them with a pixel-level fidelity. Extracting the meaning of a sentence or the semantic content of an image are highly challenging tasks, but data-driven ML algorithms have led to significant progress.
Despite the impression that this type of semantic communication departs from the basic information principles, these problems can still be treated within the framework of rate-distortion theory, dealing with an efficient representation of the source signal within the desired fidelity. However, unlike the rate-distortion problems that one encounters in a first course on Shannon's rate-distortion theory, many practical sources do not have independent and identically distributed (i.i.d.) samples, and we typically do not have a well-defined additive fidelity measure, such as mean square error (MSE). Indeed, a major challenge is to identify the fidelity measure itself, even before specifying a compression algorithm. Shannon was well-aware of the limitations of the assumptions in his theory, but these were necessary to obtain the succinct single-letter mathematical expressions that connect source compression with channel coding through the mutual information functional. The complexity of this setup can only be addressed through learning algorithms: this approach to semantic communications, which relies on generalized distortion measures, becomes relevant when the intelligence in the communicating nodes reaches a certain level.
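As a point of reference for the classical setting that these generalized, learned fidelity measures depart from, the short sketch below (our own illustrative example, not taken from the cited works) evaluates the textbook rate-distortion function \(R(D)=\max\{0,\tfrac{1}{2}\log_{2}(\sigma^{2}/D)\}\) of an i.i.d. Gaussian source under the MSE criterion.

```python
import numpy as np

def gaussian_rate_distortion(sigma2: float, D: float) -> float:
    """Bits per sample needed to describe an i.i.d. Gaussian source of
    variance sigma2 at mean-square distortion D (classical closed form)."""
    return max(0.0, 0.5 * np.log2(sigma2 / D))

sigma2 = 1.0
for D in [0.5, 0.25, 0.1, 0.01]:
    print(f"D = {D:5.2f}  ->  R(D) = {gaussian_rate_distortion(sigma2, D):.2f} bits/sample")
```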
In his seminal paper [5], after introducing the fidelity measure \(\rho(x,y)\) between the input signal \(x\) and its reconstruction \(y\), Shannon adds the following remark:
_"The structure of the ear and brain determine implicitly a number of evaluations, appropriate in the case of speech or music transmission. There is, for example, an 'intelligibility' criterion in which \(\rho(x,y)\) is equal to the relative frequency of incorrectly interpreted words when message \(x(t)\) is received as \(y(t)\). Although we cannot give an explicit representation of \(\rho(x,y)\) in these cases, it could, in principle, be determined by sufficient experimentation. Some of its properties follow from well-known experimental results in hearing, e.g., the ear is relatively insensitive to phase and the sensitivity to amplitude and frequency is roughly logarithmic."_
One can argue that Shannon even pointed towards a data-driven approach for measuring fidelity, as done in state-of-the-art approaches.
It is possible to extend this framework to include other types of fidelity measures. For example, rather than reconstruction, the receiver may want to only estimate certain statistics of the source signal; or infer another random variable correlated with the source signal, which can include classifying the input. Most of these problems can be considered in the context of remote source coding [7]. In the general remote source coding problem, the encoder has access to correlated observations \(z\) with the underlying source \(x\), which the receiver wants to reconstruct. For example, \(x\) may represent the class the input \(z\) belongs to1. In [7], \(x\) and \(z\) come from a known joint distribution \(p(x,z)\), and i.i.d. samples \(z^{n}=(z_{1},\ldots,z_{n})\) of \(z\) are available at the encoder, while the decoder wants to decode \(x^{n}\). This is a generalization of Shannon's rate-distortion problem, reducible to Shannon's original
formulation by an appropriate change of the distortion measure. This formulation is quite general, and can recover many statistical problems under communication constraints. For example, when the fidelity measure between the reconstruction \(y^{n}\) and the source sequence \(x^{n}\) is the _log-loss_, we can obtain a single-letter expression for the remote rate distortion function, equivalent to the well-known _information bottleneck_ function [8].
### _Joint Source-Channel Coding (JSCC)_
Separation of source and channel coding leads to an architecture that is modular and universal, where bits coming from different sources can all be delivered using the same codes. This also simplifies the code design for each subproblem, resulting in specialization of engineers and researchers; for example, code design for compressing different sources, or reliable communication of the compressed bits over different types of channels. The separate architecture also allows encryption straightforwardly: as the communication system is oblivious to the data source, the random bits obtained as a result of the compression can be encrypted with no impact on the way they can be transmitted. Finally, the theoretical basis for the separated architecture is Shannon's separation theorem [5], which shows that separation is asymptotically optimal for ergodic source and channel.
Nevertheless, there are many scenarios in which a joint design can provide performance gains and simplify the code design at the expense of modularity. A prominent example is the transmission of an i.i.d. source sequence of Gaussian samples, \(x^{n}\), over \(n\) uses of an additive white Gaussian channel under the MSE distortion measure. Here the optimal scheme consists of transmission of each source sample over a channel use, by simple scaling according to the transmitter power constraint, and estimating each sample with a minimum mean square estimator (MMSE) at the decoder. This simple uncoded scheme offers the lowest possible latency, while achieving the same average MSE performance for any \(n\) that can be achieved by the separation scheme as \(n\rightarrow\infty\). If we instead consider two receivers with different signal-to-noise ratios (SNRs), the uncoded scheme simultaneously achieves the optimal performance for both receivers as if each one is the sole receiver in the system; this is not possible with standard channel codes that operate at the capacity region of the underlying broadcast channel. Yet, until recently, no practical joint source-channel coding (JSCC) design provided significant gains compared to separation based codes. Most of the existing designs relied on separate source and channel codes, whose parameters are optimized jointly in a cross-layer fashion. While this approach retained some modularity, the gains are mainly through adapting the source/channel code parameters according to temporal variations of the underlying source and channel statistics. Despite certain performance gains offered for non-ergodic channel/source, they are not truly joint code designs.
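The uncoded scheme described above can be checked with a few lines of simulation (a sketch of ours, not taken from the literature): each Gaussian sample is scaled to meet the power constraint, sent over one use of the AWGN channel, and linearly MMSE-estimated at the receiver; the empirical MSE matches the optimum \(\sigma_{x}^{2}/(1+\mathrm{SNR})\) that a separation-based scheme only approaches as \(n\rightarrow\infty\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma2_x, P, sigma2_n = 100_000, 1.0, 1.0, 0.25   # samples, source var., power, noise var.
snr = P / sigma2_n

x = rng.normal(0.0, np.sqrt(sigma2_x), n)             # i.i.d. Gaussian source
s = np.sqrt(P / sigma2_x) * x                         # scaling to meet the power constraint
y = s + rng.normal(0.0, np.sqrt(sigma2_n), n)         # one AWGN channel use per sample
x_hat = (np.sqrt(P * sigma2_x) / (P + sigma2_n)) * y  # linear MMSE estimate of x from y

print("empirical MSE  :", np.mean((x - x_hat) ** 2))
print("theoretical MSE:", sigma2_x / (1.0 + snr))
```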
More recently, a significant development has been enabled by employing deep learning in JSCC problems. Specifically, promising gains have been obtained for JSCC of various signals with the adoption of deep neural network (DNN)-based autoencoder architectures. Pioneered in [9], these joint designs are known as DeepJSCC, where the encoder and decoder are parameterized as a pair of DNNs, trained jointly on a given source dataset and channel distribution. DeepJSCC provides several significant advantages: First of all, even for a given fixed channel SNR, for which the right compression and channel code rates can be chosen to achieve reliable communication with high probability, DeepJSCC can achieve better performance compared to the concatenation of state-of-the-art compression codecs (e.g., BPG or JPEG2000) and channel codes, e.g., low-density parity check (LDPC). The results show that the gains of DeepJSCC are larger for low bandwidth ratio (average channel uses per source sample) and low SNR regimes. The gains also become more pronounced when a perceptually aligned distortion measure, such as the multi-scale structural similarity index measure (MS-SSIM), is considered, or the model is trained and tested on data from a specific domain, such as satellite images. This is because DeepJSCC can be trained for a specific loss function or on a specific dataset, while conventional compression algorithms are not adaptive. Moreover, DeepJSCC simplifies the coding process, and can provide a significant reduction in complexity. The architecture in [9] is a simple 5-layer convolutional neural network (CNN), which can be parallelized and implemented efficiently on dedicated hardware, whereas conventional compression codecs and iterative decoding algorithms are computationally much more demanding. On the other hand, the original design required training a separate encoder-decoder network pair for each channel condition (SNR/bandwidth ratio), which limits the practicality of DeepJSCC as it would require storing many different network parameters to be used in different scenarios. This would impose significant memory complexity.
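The sketch below illustrates the idea in PyTorch with a deliberately small toy model (our own construction, not the architecture of [9]): a convolutional encoder maps an image to a fixed number of channel symbols, the symbols are power-normalized and passed through an AWGN layer, and a decoder reconstructs the image; training the whole chain end-to-end on MSE, or on a perceptual loss, yields a joint source-channel code.

```python
import torch
import torch.nn as nn

class AWGN(nn.Module):
    """Additive white Gaussian noise layer for unit-power channel inputs."""
    def __init__(self, snr_db: float):
        super().__init__()
        self.noise_std = 10 ** (-snr_db / 20)
    def forward(self, z):
        return z + self.noise_std * torch.randn_like(z)

class ToyDeepJSCC(nn.Module):
    def __init__(self, channel_symbols=256, snr_db=10.0):
        super().__init__()
        self.enc = nn.Sequential(                       # 32x32 RGB image -> channel symbols
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.PReLU(),
            nn.Conv2d(32, 32, 5, stride=2, padding=2), nn.PReLU(),
            nn.Flatten(), nn.Linear(32 * 8 * 8, channel_symbols))
        self.channel = AWGN(snr_db)
        self.dec = nn.Sequential(                       # channel symbols -> reconstructed image
            nn.Linear(channel_symbols, 32 * 8 * 8), nn.PReLU(),
            nn.Unflatten(1, (32, 8, 8)),
            nn.ConvTranspose2d(32, 32, 5, stride=2, padding=2, output_padding=1), nn.PReLU(),
            nn.ConvTranspose2d(32, 3, 5, stride=2, padding=2, output_padding=1), nn.Sigmoid())
    def forward(self, img):
        z = self.enc(img)
        z = z / z.norm(dim=1, keepdim=True) * z.shape[1] ** 0.5   # enforce unit average power
        return self.dec(self.channel(z))

model = ToyDeepJSCC()
x = torch.rand(4, 3, 32, 32)                 # stand-in image batch
loss = nn.functional.mse_loss(model(x), x)   # end-to-end training objective
loss.backward()
```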
The described utility of deep learning in JSCC problems reveals its potential for semantic communication: deep learning can handle more complex measures of distortion, interpretable as meaning in a given context.
## IV Generalized 6G Real-time Communication
This section treats the role of timing in communication systems, which is an essential component of our engineering formulation of pragmatic communications. We will explain how it generalizes the technical communication problem and highlight its connections to other timing-based performance measures.
### _Pragmatic Communication_
Formulating a communication problem of effectiveness/pragmatics (Level C) and favorably affecting an agent's conduct requires a quantifiable way to measure the effect of its actions. A general way to model series of actions and their outcomes is a Markov decision process (MDP), where an agent acts in a stochastic environment and aims to accumulate as much reward as possible. Specifically, the current state of the environment and the action of the agent affect the probability of the consequent state and the reward.
The effectiveness problem is formulated in [10] as an extension of the MDP framework to a multi-agent partially observable Markov decision process (MA-POMDP), in which two agents communicate to maximize their common reward. The agents represent different parties, such as transmitters, receivers, controllers, and robots, and partial observability means that some information is not known to all the parties in the system. The joint actions of all the agents influence the evolution of the environment. A simple setup that captures the fundamental aspects is a _remote MDP_, as formulated in [10] and shown in Fig. 2. Here a controller, say a guide, observes the environment and transmits messages through a noisy channel to instruct and navigate an agent towards a target location. The actions of the agent can be possible movements in the environment, while the actions of the guide are the messages it transmits. A large positive reward is received when the agent reaches the target, while a unit of negative reward is accrued at each time step. This incentivizes reaching the target as soon as possible, as the goal is to maximize the total cumulative reward. Note that the timing here is in reference to the interactions with the environment, while the channel can still be used multiple times for the transmission of each message. Differently from Level A or B problems, here the communication performance cannot be measured by the reliability of bits transmitted at each round, or the reconstruction quality of the receiver. What matters is the set of actions taken over the horizon until the target is reached, while maximizing the cumulative reward.
As illustrated in [10], this formulation includes Level A and B problems as special cases. If the state of the environment is a randomly generated bit sequence, and the goal of the agent is to match this bit sequence, then we recover the classical channel coding problem, where maximizing the average reward over a fixed time horizon is equivalent to minimizing the error probability of a fixed-length code. If the state of the environment is a randomly generated sequence of i.i.d. symbols from a fixed distribution \(p(x)\) and the goal of the agent is to match this sequence with the minimum distortion, then we recover the lossy source coding, or the semantic communication problem as formulated in Section III.
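A minimal numerical instance of the remote MDP (a toy sketch of ours, not the exact setup of [10]) makes the shift in the performance measure concrete: the guide sends a one-bit left/right instruction over a binary symmetric channel, and the quality of communication is judged only through the reward accumulated until the agent reaches the target.

```python
import random

def remote_mdp_episode(flip_prob=0.1, target=5, start=0, max_steps=50):
    """One episode: guide -> noisy one-bit channel -> agent moving on a line."""
    pos, total_reward = start, 0
    for _ in range(max_steps):
        bit = 1 if target > pos else 0        # guide's message: move right?
        if random.random() < flip_prob:       # binary symmetric channel
            bit ^= 1
        pos += 1 if bit else -1               # the agent acts on what it received
        total_reward -= 1                     # unit negative reward at every step
        if pos == target:
            return total_reward + 100         # large terminal reward
    return total_reward

returns = [remote_mdp_episode() for _ in range(1000)]
print("average return:", sum(returns) / len(returns))
```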
The notion of pragmatic communication can be extended to source coding [11] where the standard objective is to minimize the expected length of the description of a random symbol drawn from an alphabet. To pose it as an effectiveness problem, let the alphabet represent the set of possible states: the aim is to communicate a randomly drawn one with the minimum average time. As in a remote MDP, states cannot change arbitrarily but there is a constrained set of possible transitions. This setting can be interpreted as guiding an agent through a graph of possible states from initial to the goal state. At each time step, the agent receives a message and decides a state transition. The simplest strategy to communicate such plan is to code each state transition sequentially; this has the benefit of immediate relevance to the controlled agent. The opposite approach is to only send the goal, which is more efficient in terms of total communicated information, but introduces a greater delay before actionable information. In general, a right balance needs to be achieved between prioritizing information useful in short term and over a long term.
This formulation can be further generalized by assigning a cost to each state transition and minimizing the total expected cost. The cost of each transition can be, e.g., the total energy or fuel used by a robot-agent. Minimization of the expected time corresponds to uniform cost of each transition.

Fig. 2: Illustration of the remote MDP problem exemplifying level C pragmatic communications: an intelligent controller controls an agent over a wireless channel to maximize its reward through interactions with the environment.
### _Timing Constraints and Pragmatic Communication_
The underlying assumption behind defining timing-related QoS is that the delay between the generation of a piece of information and its reception and processing by the receiver affects control performance [4]. If we move to the Level C problem, the nature of this assumption becomes clear: timing is useful because information about a process becomes less accurate over time, as the real environment drifts from the observed state.
Latency and age of information (AoI) are textbook Level A metrics [4], as they do not consider the content of updates, only how quickly they are delivered (latency) or how stale the information available at the receiver gets (AoI). If we consider a tracked signal \(x(t)\), which is observed by the transmitter, the latency of packet \(i\), generated at time \(g_{i}\) and received at time \(r_{i}\), is simply \(\tau_{i}=r_{i}-g_{i}\), while the AoI at time \(t>r_{i}\) (but before any other, more recent, packet is received) is \(\Delta(t)=t-g_{i}\). We can see that AoI \(\Delta(t)\) is tied more closely to the tracked process than the latency: as the last update from the transmitter has an age \(\Delta(t)\), the receiver must estimate the current value of the process, \(\hat{x}(t)\), using only the historic data of the signal up until time \(g_{i}\).
The value of information (VoI) considers the _content_ of messages and not just their timing [12]. If the receiver has a known estimator, the value \(\nu(t)\) of an update generated at time \(t\) is given by the difference its transmission would make in the error. The receiver gets a series of observations \(h(t)=(y_{1},g_{1}),\ldots,(y_{i},g_{i})\), which correspond to the packets transmitted up until time \(t\) by the sensor observing the process and their generation instants. The observations may be noisy, requiring further processing. The receiver keeps an estimate \(\hat{x}(t|h(t))\) and the error function can be defined as \(e(x(t),\hat{x}(t|h(t)))\). The VoI of an update is defined as:
\[\nu(t)= e(x(t),\hat{x}(t|h(t)))-e(x(t),\hat{x}(t|h(t),y(t))). \tag{1}\]
If the receiver has a good estimator, and the error function is well-defined, additional information will never increase the error, and VoI is always non-negative; a common definition for \(e\) is the \(\ell_{2}\) norm.
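As a small numerical reading of (1) (our own sketch, under a deliberately simple estimator), consider a receiver that holds the last delivered sample and measures error as the squared difference: the value of transmitting a fresh, noiseless sample is then exactly the squared gap between the true state and the stale estimate.

```python
import numpy as np

def voi_squared_error(x_t, y_history):
    """VoI of delivering the current sample x_t, following Eq. (1),
    for a receiver that holds the last received observation."""
    x_hat_without = y_history[-1] if y_history else 0.0  # estimate without the update
    x_hat_with = x_t                                     # estimate after receiving x_t
    err_without = (x_t - x_hat_without) ** 2
    err_with = (x_t - x_hat_with) ** 2                   # zero for a noiseless sample
    return err_without - err_with

rng = np.random.default_rng(1)
history = [0.3]                                          # last delivered observation
x_now = 0.3 + rng.normal(0, 0.5)                         # process has drifted since then
print("VoI of an update now:", voi_squared_error(x_now, history))
```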
We can further distinguish between _push-based_ communication systems, where the sender decides autonomously when to send data, and _pull-based_ systems, where the sender only sends updates in response to requests from the receiver. In the pull-based case, the VoI definition in (1) cannot use the value of \(y(g_{i})\), as the receiver does not know it before it is transmitted, but must estimate the VoI as well, based on its current knowledge [13]:
\[\nu^{\prime}(t)=\mathbb{E}\left[e(x(t),\hat{x}(t|h(t)))\,|\,h(t)\right]-\mathbb{E}\left[e(x(t),\hat{x}(t|h(t),y(t)))\,|\,h(t)\right]. \tag{2}\]
If the process is _memoryless_, i.e., the residual error of the estimator is a martingale (e.g., a Wiener process), the Level B problem of VoI maximization reduces to a simpler Level A AoI minimization. In this case, the estimate of the current state can be performed knowing only \(\Delta(t)\) with the same precision as the estimate knowing the full history \(h(t)\), and minimizing a (potentially non-linear) function of \(\Delta(t)\) is equivalent to maximizing the receiver-computed VoI. We can contrast this with _stateful_ processes, which include most Markov processes, where the value of \(x(t)\) affects future changes in the process, and AoI is not sufficient to compute the expected VoI.
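The memoryless case is easy to verify numerically (a sketch under our own simplifying assumptions): for a Wiener process tracked by a hold-last-value estimator, the expected squared error grows linearly with the age \(\Delta(t)\), so scheduling by a function of AoI and scheduling by expected VoI coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, sigma2, n_runs = 0.01, 1.0, 20_000

for delta in [0.5, 1.0, 2.0]:                 # candidate AoI values Delta(t)
    steps = int(delta / dt)
    # The estimator still holds the value sampled at time g_i, so the error
    # equals the accumulated drift of the Wiener process over the age interval.
    drift = rng.normal(0, np.sqrt(sigma2 * dt), size=(n_runs, steps)).sum(axis=1)
    print(f"AoI = {delta:3.1f}  empirical MSE = {np.mean(drift ** 2):.3f}"
          f"  (theory: {sigma2 * delta:.3f})")
```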
Naturally, we can extend this to multiple dimensions, and different error functions: we can easily imagine a system in which a multidimensional signal \(\mathbf{x}(t)\) is reconstructed from multiple noisy measurements from a set of \(N\) different sensors, none of which can directly communicate with the others. In this case, computing VoI is more complex: the receiver has a full picture of the environment, i.e., can receive data from all the sensors, but all its knowledge is statistical and partially outdated. On the other hand, each sensor has relatively precise information about part of the signal, as its measurements are still noisy, but a much narrower view of the system. Medium access is also a significant issue, and it is generally easier to avoid collisions in pull-based systems, as the receiver can act as a central coordinator and limit the pool of potential transmitters by requesting specific information, or even address nodes directly. A further complication comes from epistemic errors, i.e., events and perturbations not included in the receiver's model of the tracked signal; a purely pull-based system will often take a long time to detect these errors, and operate poorly on incorrect assumptions.
We can then move to the Level C problem by introducing the possibility of control, i.e., of changing the
environment to achieve a specific goal [10]. Note that the environment is subject to natural evolution according to the physical time, such that the timing requirements depend on the speed of the process relative to the AoI, as already stated in the description of pragmatic communication. In this case, the definition of VoI can be adapted to consider the performance of the controller instead of the accuracy of the estimator. Unless the control task is simply to track the input signal, the _pragmatic_ VoI is always different from the _semantic_ VoI, as it considers both the content of a sensory observation and its effect on control. There are three possible cases: _(1)_ The update has a low semantic value and does not significantly affect the receiver's estimate of the environment. It can be subject to a lossy compression with a limited loss in terms of performance, or even discarded entirely; _(2)_ The update has a high semantic value, but it has a low pragmatic value: the change in the receiver's estimate leads to taking the same action, or another action with the same control performance. Even though a semantic scheduler and compressor would consider this type of update highly important, its relevance to the Level C problem is limited; _(3)_ The update significantly affects the receiver's choices, and receiving it would lead to greatly improved control performance; hence, the update has a high pragmatic VoI, and should be transmitted with a high priority.
The equivalences between AoI, semantic VoI, and pragmatic VoI, which are Level A, B, and C metrics, respectively, are listed in Table I. However, timing metrics such as AoI can still be extremely useful as a first approximation of higher-level metrics: in most scenarios, sharp variations are relatively rare, and can be treated as anomalies in a relatively simple weighted AoI minimization problem.
### _Real-time inference over communication links_
Inference at the network edge will empower a broad spectrum of applications, such as industrial automation, extended reality, autonomous vehicles, and robots [14]. _Edge inference_ refers to the process of offloading inference tasks, executed through large-scale AI models, from edge devices to edge servers, which are co-located and connected to base stations. By leveraging the powerful computational resources of these servers, edge devices can experience enhanced capabilities (e.g., vision, decision making, and natural language processing) and extended battery life. A real-time edge inference can support tactile applications, such as extended reality and robotic control. The major challenge to this paradigm is the communication bottleneck resulting from a massive number of mobile devices uploading high-dimensional data features to servers for inference.
A specific edge inference architecture is _split inference_[15], which divides a trained model into a low-complexity sub-model on a device and a deep sub-model on a server. The low-complexity sub-model extracts feature maps from raw data, while the server sub-model performs inference on the uploaded features. This architecture enables resource-constrained edge devices to access large-scale AI models on servers (e.g., image recognition with tens-to-hundreds of object classes) while preserving data ownership. In split inference, the communication bottleneck is addressed by designing task-oriented techniques, aimed to optimize end-to-end (E2E) inference throughput, accuracy, or latency. To overcome communication constraints, the model's splitting point can be adapted according to available bandwidth and latency requirements [16]. For optimizing E2E system performance, JSCC has emerged as a popular approach for split inference. This method follows the same approach as DeepJSCC [9], with the objective of inference at the receiver rather than signal reconstruction [17].
Next, we introduce several new approaches that directly accelerate communication and computation of edge inference.
#### IV-C1 Over-the-Air Inference for AIoT Sensing
6G is likely to give rise to a widespread deployment of edge AI to enhance IoT applications [14] and large-scale distributed sensing enabled by cross-network collaboration among edge devices [15]. The natural integration of edge AI and network sensing, known as AI of Things (AIoT) sensing, leverages the advantages of multi-view observations by sensors and the powerful prediction capabilities of DNN models to enable accurate and intelligent sensing. Split inference for distributed AIoT sensing can be achieved by incorporating an air interface between multiple sensors and a server (fusion center) into the popular multi-view CNN (MVCNN) architecture [18]. Unlike point-to-point split inference, the distinctive feature of MVCNNs is multi-view pooling, which refers to the fusion of features extracted from different sensors' views into a global feature map that is fed into the server's inference model.
Over-the-air computing (AirComp) is a promising simultaneous-access technique that enables ultra-fast multi-view pooling over-the-air by leveraging the waveform-superposition property of a multiple access channel [19]. The primary motivation behind AirComp is to overcome channel distortion and noise, allowing for accurate implementation of data averaging or other computation functions over-the-air. Recently, AirComp
has gained popularity for its application in supporting efficient model/gradient aggregation in federated edge learning (FEEL), known as over-the-air FEEL [20, 21].
AirComp-based multi-view pooling is referred to as AirPooling. Implementing average AirPooling using AirComp is relatively straightforward; however, realizing max-pooling with AirComp is considerably more challenging. This is because the class of air-computable functions, known as nomographic functions, is characterized by a summation form with different pre-processing of summation terms and post-processing of the summation [22]. Examples include averaging and the geometric mean. The max function in Max AirPooling is not a nomographic one and does not have a direct AirComp implementation. In [23], maximization is approximated by using the properties of the generalized p-norm:
\[\|x\|_{p}=\left(\sum_{n=1}^{N}|x_{n}|^{p}\right)^{\frac{1}{p}}\begin{cases}= \sum_{n=1}^{N}|x_{n}|,&p=1,\\ \rightarrow\max_{n}|x_{n}|,&p\rightarrow\infty.\end{cases} \tag{3}\]
One can see that when the configuration parameter, \(p\), is set to one, the function implements averaging; as the parameter grows, the norm approaches maximization. Then a dual-mode AirPooling can be realized by decomposing the air-interface function into pre-processing at devices and post-processing at the server on top of the conventional AirComp and switching the pooling function by controlling \(p\).
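The dual-mode mechanism can be illustrated with a small numerical sketch (ours, ignoring fading and power control): each sensor pre-processes its feature as \(|x_{n}|^{p}\), the channel superposes these terms together with noise, and the server post-processes the sum with \((\cdot)^{1/p}\); \(p=1\) yields sum/average pooling, while a large \(p\) approximates max pooling at the cost of amplified noise sensitivity.

```python
import numpy as np

rng = np.random.default_rng(2)

def air_pooling(features, p, noise_std=0.05):
    """Dual-mode AirPooling: pre-process, superpose over the air, post-process."""
    pre = np.abs(features) ** p                          # per-sensor pre-processing
    superposed = pre.sum() + rng.normal(0, noise_std)    # waveform superposition + noise
    return max(superposed, 0.0) ** (1.0 / p)             # server-side post-processing

x = np.array([0.2, 0.9, 0.4, 0.7])                       # one feature entry from 4 sensors
print("p = 1  (sum-like):", air_pooling(x, p=1))
print("p = 10 (~max)    :", air_pooling(x, p=10), " true max:", x.max())
```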
Consider the popular E2E sensing task of classification for object and pattern recognition. The configuration parameter of generalized AirPooling should be optimised as it regulates a trade-off between functional approximation and channel-noise suppression. On one hand, the max-function approximation error monotonically decreases as \(p\) grows. On the other hand, a larger \(p\) tends to amplify channel noise by making the transmitted features, \(x_{n}\), highly skewed. Specifically, features with small magnitudes are suppressed and their transmission is prone to channel distortion. The main difficulty in the parametric optimisation is the lack of a closed-form expression for the E2E performance metric, e.g., sensing accuracy. One sub-optimal method, as adopted in [23], is to use a surrogate metric such as the AirComp error.
#### IV-C2 Accelerating Edge Inference by Batching and Early Exiting
In the field of computing architecture, the well-known von Neumann bottleneck refers to the frequent shuttling of data between memory and processors, which can account for up to 90% of total computation energy consumption and latency. One technique to address this issue is called batching, which consolidates tasks offloaded by multiple users into a single batch for parallel execution. This process increases the number of tasks executed per unit time by amortizing memory access time and enhancing the utilization of computational resources. Consequently, the latency per task is reduced, facilitating real-time edge inference. However, an excessively large batch size can result in increased queuing time for individual users, leading to reduced throughput.
Another technique that can reduce inference latency is called early exiting, which allows a task to exit a DNN once it meets the task-specific accuracy requirement. By avoiding the traversal of all network layers uniformly for all tasks, execution speeds are accelerated. Early exiting is characterized by a trade-off between accuracy and computation latency, which is based on a customized neural network architecture known as a backbone network [24]. This architecture consists of a conventional deep neural network augmented with multiple low-complexity intermediate classifiers, serving as candidate exit points. A task that requires lower accuracy traverses fewer network layers before exiting, essentially being diverted to an intermediate classifier for immediate inference. Recently, early exiting has been applied to edge inference, specifically in the local/server model partitioning for split inference. Exit points can be jointly optimized to maximize inference accuracy under a latency constraint, while layer skipping and early exiting are combined to facilitate edge inference with stringent resource constraints.
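A schematic early-exit backbone is sketched below in PyTorch (a toy construction of ours, not a specific architecture from [24]): intermediate classifiers are attached after each block, and a sample leaves the network as soon as its softmax confidence crosses a threshold, so easy inputs traverse fewer layers.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    def __init__(self, dim=64, n_classes=10, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(n_blocks)])
        self.exits = nn.ModuleList(                      # low-complexity intermediate classifiers
            [nn.Linear(dim, n_classes) for _ in range(n_blocks)])

    @torch.no_grad()
    def forward(self, x, conf_threshold=0.9):
        # Inference for a single sample (batch size 1) for simplicity.
        for depth, (block, exit_head) in enumerate(zip(self.blocks, self.exits), start=1):
            x = block(x)
            probs = exit_head(x).softmax(dim=-1)
            if probs.max() >= conf_threshold or depth == len(self.blocks):
                return probs.argmax(dim=-1), depth       # prediction and exit depth

net = EarlyExitNet()
pred, depth = net(torch.randn(1, 64))
print(f"predicted class {pred.item()} after {depth} block(s)")
```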
Batching and early exiting are two computational techniques beneficial for real-time multi-user edge inference. Their joint design with radio resource management can further enhance E2E latency performance, which combines multiple access and inference latency. Specifically, the advantages of end-to-end design are twofold. First, the early-feedback effect, i.e., the instantaneous downloading of inference results to the intended users upon early exit, which shortens their task latency, is further complemented by batched parallel processing. Second, early exits release computational resources to accelerate other ongoing tasks in the same batch, as reflected by a progressively reducing batch size over blocks of model layers, which in turn accelerates computation over these blocks. This is known as the effect of shrinking batch size. However, the complex edge-inference process renders joint communication-and-computing (C2) resource management challenging. One challenge arises from the interweaving of batching and early exiting, where the latter results in a shrinking batch size over sequential layer blocks, causing heterogeneous computation time for different tasks. Second, besides communication coupling due to spectrum sharing, the scheduled users' task executions are also interdependent in a complex manner. Specifically, users' accuracy requirements are translated
into different numbers of layer blocks to traverse, and the computation time of each block depends on the random number of tasks passing through it. Third, due to constraints on end-to-end latency, communication and computation are coupled, necessitating the joint consideration of channel states and users' quality of service (QoS) requirements in batching/scheduling and bandwidth allocation. Mathematically, the optimization of C2 resource allocation involves an integer programming problem that is NP-complete.
These challenges present numerous research opportunities in real-time edge inference, such as formulation of the joint optimal control of batching, early exiting, and radio resource management as a combinatorial optimization problem.
## V Massive Closed-Loop Communication in 6G
### _The Role of Massive Downlink Communication_
Massive communication, in which a very large number of devices communicate with a single base station, is typically associated to the _massive random access_ scenario, in which a small fraction of a large pool of potentially active devices transmit messages to the receiver. The Level A problem of designing communication systems to support such transmissions has received much attention. However, when we start to consider semantic and pragmatic communication with massive devices, motivated by applications such as robotic control, distributed learning and remote inference, downlink communication appears as an equally important feature of massive communication systems. This necessitates a generalization or extension of the massive random access problem to include the downlink as well. As an attempt to do so, we can note that the fundamental characteristic of massive communication is the fact that the users are _uncoordinated_. Specifically, instead of considering only massive random access, we can think of _massive uncoordinated communication_ as a generalized communication scenario involving both uplink and downlink in which there is high uncertainty about who is active (or relevant) at any given time.
To illustrate the idea, we can consider the task of extending the massive random access scenario to include a simple automatic repeat request (ARQ) feedback scheme, which clearly requires the massive access scenario to include downlink communication. However, while the base station knows which devices it successfully received a message from, it does not know the subset of users that it failed to decode, e.g., due to outage. This is in sharp contrast to traditional ARQ in the coordinated (scheduled) communication, where the base station knows if it failed to decode a message from a specific user. A consequence of this uncertainty is that the simple task of communicating what used to be a single bit of information, indicating whether the packet was received or not, becomes a highly non-trivial problem. As it turns out, simply concatenating the information for each user with their identifiers is sub-optimal, as illustrated in Fig. 3[25]. This sub-optimality of concatenation in the massive downlink scenario holds not only for the ARQ problem, but also in the case of transmitting general messages to a small subset of a large population of users [26].
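The gap behind Fig. 3 can be reproduced with a back-of-the-envelope calculation (our own sketch, using the exact subset-coding bound rather than the schemes of [25]): concatenating 32-bit identifiers for the \(K\) acknowledged users costs \(32K\) bits, whereas an error-free description of the acknowledged subset only requires about \(\log_{2}\binom{2^{32}}{K}\) bits.

```python
from math import comb, log2

N = 2 ** 32                                   # size of the user population
for K in [1, 10, 100, 1000]:
    concat_bits = 32 * K                      # list of 32-bit user identifiers
    subset_bits = log2(comb(N, K))            # information content of the acknowledged subset
    print(f"K = {K:4d}   concatenation: {concat_bits:6d} bits   subset code: {subset_bits:9.1f} bits")
```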
The previous example illustrates the challenge of massive uncoordinated communication at Level A, where the aim is to increase the reliability of the individual messages. As we move towards semantic and pragmatic communication, we can start to think of alternative ways to make use of massive downlink. To stay with the concept of ARQ, we can try to imagine how an ARQ scheme could look at the semantic level. In that case, since our goal is to convey a meaning, the purpose would be to keep requesting messages until the intended meaning has been communicated. This requires an acknowledgment message that does not inform whether a given message is received or not, but instead tells what is known/not known by the destination. The prototypical example could be a camera scenario, in which the aim of the destination is to receive all objects observed by the cameras, e.g., as part of an inference task. By acknowledging the detection of specific objects, as opposed to the individual captured images, a single message acknowledges all cameras that capture a specific object. Finally, an ARQ at the pragmatic level would request additional transmissions until a desired task can be performed effectively or with satisfactory quality. This could, for instance, be employed in the AIoT multi-view split inference scenario mentioned earlier, where the task of the ARQ protocol would be to request features from the subset of devices that are most relevant and likely to contribute to the specific inference task.

Fig. 3: Message length, \(B\), required to encode acknowledgment feedback to \(K\) out of \(2^{32}\) users, either error-free or with false alarm probability \(\epsilon_{\mathrm{FA}}\).
So far, we have ignored the initiator of the communication, which in the massive access case is usually assumed to be the devices, triggered by some physical event, i.e., push-based communication. However, as previously argued, communication can also be pull-based, which implies two-way communication and requires the use of the downlink, irrespective of who initiates the communication.
### _Distributed Learning in Massive Access Scenarios_
The massive access problem was originally conceived for a massively large number of sensory nodes offloading their occasional bursty measurements in an efficient manner. These measurements are often employed at an edge or cloud server for certain downstream tasks involving inference or learning. However, as the number of sensors and the amount of data collected by them increases, there is a trend towards distributing the learning and inference tasks due to the increasing communication load and privacy concerns. In particular, federated learning (FL) [20] has emerged as a natural framework for collaborative learning, where an iterative learning algorithm is carried out in a distributed fashion without offloading the data to a central server (see Fig. 4 for an illustration). The devices participating in FL run the necessary iterations locally on their private data, and share only the computed model updates with the parameter server (PS) at each iteration. The PS, which is the orchestrator of the learning process, aggregates the model updates from the participating devices, and updates the global model, which is then shared with the devices for the next round. This iterative learning process continues for a fixed number of rounds, or until a certain convergence condition is met.
FL can be implemented across hundreds or even thousands of devices. In certain scenarios, those devices may be in the same physical environment, orchestrated by a single PS, which may be a macro base station or a WiFi access point; this setting is called FEEL. This can be the case for sensors deployed in the same environment learning a task based on their measurements, or mobile devices in the same environment collaboratively learning a communication protocol or a communication-related task, which can be learned or fine-tuned for the particular characteristics of the physical environment, e.g., millimeter wave beam alignment based on LIDAR measurements.
#### V-B1 FEEL with AirComp
While FEEL with a large number of sensors is a perfect example of a massive access communication problem, there is an important distinction. In the classical massive access problem, the goal of the receiver is to decode all the transmitted messages individually, even though the receiver may not be interested in the source of these messages. On the other hand, in the case of FL, the PS does not need individual model updates, as it only needs to recover their averages.
The computation problem is often treated separately from communication, although this is known to be sub-optimal information-theoretically. This was first shown by Korner and Marton in [27] on a simple example, where the receiver wants to compute the parity of two correlated symmetric binary random variables. They were able to characterize the optimal rate region for this toy example, which was sufficient to illustrate the suboptimality of first sending the sources to the receiver and then carrying out the computation task. Later, Orlitsky showed that the function computation problem is inherently different from conventional source coding by focusing on a point-to-point scenario in [28], where the receiver wants to compute a function of the source available at the transmitter and correlated side information available at the receiver. He showed that the optimal rate required for computing any function in a lossless fashion is given by the conditional \(G\)-entropy, where \(G\) is the characteristic graph of the two sources. As one would expect, this is more efficient than first sending the source to the receiver at a rate equal to its conditional entropy given the side information, and then computing the function.

Fig. 4: Illustration of FEEL, where the participating devices upload their local updates to the PS through a multiple access channel, while the goal of the PS is to compute their average.
In the case of FEEL, the goal is to compute the average rather than an arbitrary function, and this is quite convenient in the case of wireless multiple access, as the wireless channel already adds the transmitted signals thanks to its superposition property. As discussed in Section IV-C, averaging is a nomographic function, amenable to AirComp. Therefore, in the context of FEEL, wireless interference, which we typically try to mitigate in multiple access systems, can be used in our favour [21] to improve both the speed and the final accuracy of learning tasks under a bandwidth constraint.
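A stripped-down view of over-the-air aggregation for FEEL is sketched below (our own illustration, ignoring fading, power control, and quantization): every device transmits its model update as analog symbols in the same resource block, the multiple access channel sums them, and the PS divides the noisy sum by the number of participants to obtain the average update.

```python
import numpy as np

rng = np.random.default_rng(3)
n_devices, model_dim, noise_std = 20, 1000, 0.1

updates = rng.normal(0, 1, size=(n_devices, model_dim))   # local model updates at the devices

exact_avg = updates.mean(axis=0)                            # ideal (noise-free) aggregation

# AirComp: all devices transmit simultaneously; the channel adds the signals.
received = updates.sum(axis=0) + rng.normal(0, noise_std, model_dim)
aircomp_avg = received / n_devices

print("aggregation MSE:", np.mean((aircomp_avg - exact_avg) ** 2))  # ~ (noise_std / n_devices)^2
```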
#### V-B2 FEEL with Unsourced Massive Access
The massive access problem is a generalization of the traditional multiple access problem to a massively large number of transmitters. This formulation adopts the common assumption of the multiple access problem, that the messages are chosen independently at the transmitters, and hence, two simultaneously active users are highly unlikely to transmit the same message. As a consequence, the goal at the receiver at each point in time reduces to estimating the number of active users, and decoding that many messages without necessarily associating them with particular devices. However, when devices are collaborating for a common task, such as distributed learning, their messages can be highly correlated. In this case, the assumption of each transmitter sending a different message is no longer valid, and the receiver must not only decide which messages were transmitted by the active transmitters, but also by how many transmitters.
Consider, for example, the FEEL problem mentioned above. In the AirComp solution, the model updates are directly modulated to the amplitude of the transmitted symbols; however, this creates several problems: First, this requires strict synchronisation among the devices, and some of the gains will be lost in the case of misalignments. Second, AirComp assumes continuous amplitude modulation, which is not possible in most practical communication systems, and may lead to problems such as power amplifier saturation or high peak-to-average power ratio. Last, but not least, the AirComp requires sending one channel symbol for each model dimension, which requires a very large bandwidth when training large models.
In [29], an alternative digital scheme is proposed for efficient FEEL which employs the idea of unsourced massive access [30]. In this approach, each device first vector quantizes its model update, which is then mapped to a channel codeword. Only a small subset of the devices may be active at each round, but this is not a problem since the PS is not interested in the identity of the active devices, as long as each device is active sufficiently often to contribute to the learning process. Therefore, unsourced massive access techniques can be used here to detect which updates are transmitted by the devices at each round, to be aggregated by the PS. However, since the model updates at the devices are correlated, several active devices can transmit the same codeword. Hence, in addition to detecting the active messages, the PS should also count by how many active transmitters each message is transmitted.
In Fig. 5, we plot the convergence behaviour of the test accuracy achieved on the CIFAR-10 image classification task by perfect accumulation (PA), which assumes a perfect noise-free channel, by the one-bit digital accumulation (OBDA) scheme introduced in [20], and by the unsourced massive access (UMA)-based generalized digital AirComp (GD-OAC) scheme described above. Here, \(Q\) denotes the size of the compression blocks, where \(Q=1\) corresponds to scalar quantization, while \(Q=50,100\) represent vector quantization over a block of symbols. For simplicity, we also set the size of the channel codewords transmitted for each block to \(Q\). The size of the quantization codebook is given by \(2^{J}\). We can observe that GD-OAC not only significantly improves the test accuracy compared to the one-bit baseline, but also matches the PA baseline.
## VI Conclusions and Outlook
We have presented a view on wireless 6G systems and their associated technologies by extrapolating the evolution of massive and low-latency connectivity, which were two of the focal points in 5G. This evolution is put in the context of semantic and pragmatic communication in contrast to the technical problem of communication, which was the sole subject of the previous wireless generations. The paper argues that semantic and pragmatic communications have become relevant due to the increasing role that ML/AI play in communication systems. We presented how the low-latency requirement is expected to evolve towards generalized real-time requirements, coupled with cyber-physical systems and real-time inference. Finally, massive access is considered to evolve towards massive closed-loop communication used in interactive communication with wireless sensors and actuators, as well as in distributed learning. There are other aspects of 6G that were not covered in this article; nevertheless, the principled discussion of semantic and pragmatic communication can readily be expanded to cover those other 6G aspects, such as integrated communication and sensing or non-terrestrial networks (NTN).

Fig. 5: Comparison of UMA-based digital AirComp (GD-OAC) with PA and OBDA.
|
2309.15274 | Memory-Efficient Continual Learning Object Segmentation for Long Video | Recent state-of-the-art semi-supervised Video Object Segmentation (VOS)
methods have shown significant improvements in target object segmentation
accuracy when information from preceding frames is used in segmenting the
current frame. In particular, such memory-based approaches can help a model to
more effectively handle appearance changes (representation drift) or
occlusions. Ideally, for maximum performance, Online VOS methods would need all
or most of the preceding frames (or their extracted information) to be stored
in memory and be used for online learning in later frames. Such a solution is
not feasible for long videos, as the required memory size grows without bound,
and such methods can fail when memory is limited and a target object
experiences repeated representation drifts throughout a video. We propose two
novel techniques to reduce the memory requirement of Online VOS methods while
improving modeling accuracy and generalization on long videos. Motivated by the
success of continual learning techniques in preserving previously-learned
knowledge, here we propose Gated-Regularizer Continual Learning (GRCL), which
improves the performance of any Online VOS subject to limited memory, and a
Reconstruction-based Memory Selection Continual Learning (RMSCL), which
empowers Online VOS methods to efficiently benefit from stored information in
memory. We also analyze the performance of a hybrid combination of the two
proposed methods. Experimental results show that the proposed methods are able
to improve the performance of Online VOS models by more than 8%, with improved
robustness on long-video datasets while maintaining comparable performance on
short-video datasets such as DAVIS16, DAVIS17, and YouTube-VOS18. | Amir Nazemi, Mohammad Javad Shafiee, Zahra Gharaee, Paul Fieguth | 2023-09-26T21:22:03Z | http://arxiv.org/abs/2309.15274v2 | # Memory-Efficient Continual Learning Object Segmentation for Long Videos
###### Abstract
Recent state-of-the-art semi-supervised Video Object Segmentation (VOS) methods have shown significant improvements in target object segmentation accuracy when information from preceding frames is used in undertaking segmentation on the current frame. In particular, such memory-based approaches can help a model to more effectively handle appearance changes (representation drift) or occlusions. Ideally, for maximum performance, online VOS methods would need all or most of the preceding frames (or their extracted information) to be stored in memory and be used for online learning in consecutive frames. Such a solution is not feasible for long videos, as the required memory size would grow without bound. On the other hand, these methods can fail when memory is limited and a target object experiences repeated representation drifts throughout a video. We propose two novel techniques to reduce the memory requirement of online VOS methods while improving modeling accuracy and generalization on long videos. Motivated by the success of continual learning techniques in preserving previously-learned knowledge, here we propose Gated-Regularizer Continual Learning (GRCL), which improves the performance of any online VOS subject to limited memory, and a Reconstruction-based Memory Selection Continual Learning (RMSCL) which empowers online VOS methods to efficiently benefit from stored information in memory. Experimental results show that the proposed methods improve the performance of online VOS models up to **10%**, and boosts their robustness on long-video datasets while maintaining comparable performance on short-video datasets DAVIS16 and DAVIS17.
Video Object Segmentation, Continual Learning, Robustness, Memory Selection, Regularization.
## 1 Introduction
Video object segmentation (VOS) aims to extract an accurate pixel-wise object mask in each frame of a given video. Broadly, proposed VOS algorithms can be divided into two different streams: i) semi-supervised or one-shot VOS, when the ground truth masks of the target objects are provided in at least one frame at inference time, and ii) unsupervised VOS, when no information about the objects is provided. The focus of this paper is on the former context, that of semi-supervised VOS.
The intuition behind semi-supervised VOS is to perform fine-tuned learning on a VOS model,
separately for each test video, based on the given target information (i.e., the given object mask). This ideal is not feasible, due to the limited training samples, the VOS model size, and the time-consuming training process. In practice, online learning-based VOS approaches [1; 2; 3; 4] address these challenges by introducing efficient training mechanisms and keeping some amount of information in memory to augment the training set for model fine-tuning.
These approaches proceed on the assumption that sufficient memory is available at inference time, and that there are no limitations in storing and exploiting information. It is also assumed that an object representation is not undergoing significant shifts between frames, such that the information stored in the memory is somehow representative of the target object in question. In practice, these assumptions hold poorly, at best, and particularly in long videos it is common to experience significant representation drift of the target object. Such a drift can lead to drastic drops in performance, particularly when there is a limitation on the amount of memory available to store past object representations. A second bottleneck of online VOS is its limited ability to learn useful information from memory. As more training data (more frames of video) become available in the memory, online VOS methods have difficulty extracting and learning discriminative information [5], due to their limited online model size and training process, since online VOS prefers training small models on limited memory over a few epochs. Clearly, these issues become increasingly problematic on long video sequences, the focus of this paper.
We reformulate semi-supervised VOS as online continual learning [6] and propose two distinct, complementary solutions that operate with a small, fixed working memory to process long video sequences:
* In Section 3.2 a Gated-Regularizer Continual Learning (GRCL) is proposed to improve the performance of online VOS by preserving and consolidating the acquired knowledge from the target objects in preceding frames while limiting the required memory.
* A very different approach is developed in Section 3.3, where we propose a Reconstruction-based Memory Selection Continual Learning (RMSCL) method which is able to augment any online VOS framework and improves its performance, particularly on long videos.
GRCL is inspired by prior-based continual learning [7; 8], whereas RMSCL is motivated by rehearsal methods in continual learning [9; 10; 11; 12]. We apply the proposed methods to two state-of-the-art online VOS algorithms, LWL [4] and JOINT [13], both subject to a fixed memory. Our experimental results show an improvement of both LWL and JOINT, particularly on long video sequences. To the best of our knowledge, this is the first time that online VOS is formulated and addressed as a continual learning problem.
## 2 Related Work
The primary objective of our work is to address online video object segmentation, specifically for long video sequences. Our focus is on the samples that are preserved in a memory for later selection and use as the learning process continues. To frame the problem, we first review the baselines and state-of-the-art memory-based approaches, as well as relevant methods from continual learning.
Next, we present feature selection methods, widely applied in machine learning, data mining, and computer vision, which can potentially be used for memory selection in VOS. Finally, we introduce solutions available in the literature that address the challenges of learning from long video sequences.
### Memory-based Approaches
Memory-based approaches [1; 2; 3; 4; 5; 14] address semi-supervised VOS by storing deep representations and predicted output masks of preceding frames in a memory and using them when evaluating the current frame.
Different strategies have been proposed to retrieve information from this dynamic memory. Online learning methods [2; 4; 15; 16; 17] update (fine-tune) a small model on the memory. Recurrent methods propagate information from the most recent frames, either through the predicted mask [18] or through a hidden representation [19; 20]. Query-based methods [1; 21; 22; 23; 24; 25; 26; 27; 28] match the representations of previous frames stored in the memory against the corresponding features extracted from the current frame.
The approach proposed in this article stems from the online learning methods, and is compared to the state-of-the-art query-based methods.
#### 2.1.1 Query-based Methods
Among the query-based methods is the STM [1], which uses a similarity matching algorithm to retrieve encoded information from the memory and pass it through a decoder to produce an output.
In VOS, the target object in the query frame usually appears in a local neighborhood of its appearance in the memory frames, yet STM relies on non-local matching between the query and memory frames. KMN [27] therefore proposed a kernelized memory network that applies a Gaussian kernel to address this non-local matching in STM.
HMMN [28] also proposed kernel-based memory matching, achieving temporal smoothness by restricting possible correspondences between two adjacent frames to a local window and applying a kernel guidance to the non-local memory matching. For matching distant frames, HMMN tracks the most probable correspondence from a memory pixel to a query pixel. Instead of building a separate memory bank, and therefore a separate affinity, for every object in the video as in STM, STCN [23] builds a single affinity matrix based on RGB relations, learning object relations beyond just the labeled ones; for querying, a target object passes through the same affinity matrix for feature transfer. To deal with appearance changes and deformation, LCM [21] proposed applying a memory mechanism to retrieve pixels globally, and learning position consistency for more reliable segmentation.
#### 2.1.2 Online Learning-based Methods
On the other hand, online learning-based methods [3; 4; 29] learn the new object appearance at inference time. In this scenario, instead of applying a query-based (matching-based) algorithm on each frame, a small latent network, the so-called target model, is updated every \(s\) frames and is then used to produce updated information for each video frame.
The target model proposed by FRTM [2], LWL [4], and the induction branch of JOINT [3] is a small convolutional neural network that performs online learning on the training data available in the memory. As such, these methods provide an efficient yet effective dynamic update process for VOS frameworks.
While target-model-based approaches improve the performance of VOS, the effectiveness of online learning algorithms depends heavily on their memory capacity and usage. In other words, to obtain the best performance, these models need to store all preceding output masks and encoded features in their memory, and they also need a mechanism to increase the generalization of the updated model. Memory limitations therefore lead to challenges similar to those already known in the domain of continual learning.
In this paper, we hypothesize that these issues can be mitigated, motivated by the success of continual learning algorithms in preserving learned knowledge while limiting the required memory.
### Continual Learning
Continual learning [30; 31; 32; 33] is a process of sequential learning, where the sequence of data may stem from different domains and tasks; that is, a model is learning from data in which an abrupt or gradual concept drift [34] can happen.
Similarly, in online VOS methods with limited memory, concept drift can easily occur in the appearance of the target objects, and in such situations the distribution of the data available in the memory changes significantly at every update step. The primary challenge in this situation is known as _catastrophic forgetting_, a term first defined in the context of neural networks [35; 36], although it is a common problem in other machine learning methods [37].
#### 2.2.1 Catastrophic Forgetting
Catastrophic forgetting [38] commonly takes place in a variety of machine learning problems, such as few-shot learning [39; 40], graph neural networks [41; 42], knowledge distillation [43], and Bayesian inference frameworks [8].
Catastrophic forgetting occurs when a machine learning model is trained on a sequence of tasks but, at any moment in time, has access to the training data of the most recent task only. Consequently, the model tends to update the parameters dominated by data from the current task, which results in a degree of forgetting of previously learned tasks.
A long video containing sections where the object appears with different viewpoints and appearances, together with challenges such as occlusion and object disappearance, naturally forms a continual learning problem.
For an online VOS approach, each section of a long video held in memory can be considered a task; forgetting previously learned tasks therefore becomes problematic when processing long video sequences, as the number of tasks increases with the length of the video.
There are three families of solutions to the catastrophic forgetting problem: prior-focused (regularization-based) solutions [7; 8], likelihood-focused (rehearsal-based) solutions [9; 10; 11; 12], and hybrid (ensemble) approaches [44; 45].
In this paper, a regularization-based solution (GRCL) and a rehearsal-based solution (RMSCL) are proposed to extend the usefulness of online VOS methods to long video sequences. We also investigate the combination of the two proposed methods as a hybrid method on long video sequences.
### Feature Selection
Memory reading is an important step in query-based VOS methods [1; 23; 24]. For instance, STCN [23] uses L2 similarity for memory reading, whereas STM [1] uses the dot product. Here, we look for memory selection approaches, a problem that has been addressed in the feature selection literature.
High-dimensional data demands significantly larger memory storage and more computational resources for data analytics. Furthermore, the existence of irrelevant, redundant, and noisy features increases the probability of overfitting in learning algorithms, resulting in lower efficiency and worse performance.
Feature selection methods that deal with high-dimensional data are categorized into supervised [46; 47; 48] and unsupervised [49; 50; 51] approaches.
Supervised methods have access to the discriminative information encoded in the class labels, while real-world data is usually unlabeled and data annotation is expensive. Unsupervised feature selection methods utilize different criteria to define the relevance of features, such as data similarity, local discriminative information, and data reconstruction error.
Reconstruction-based methods approximate the original data by applying a reconstruction function to a set of selected features [52; 53; 54; 55]. In this article, we likewise propose a Reconstruction-based Memory Selection Continual Learning (RMSCL) method to improve online VOS on long video sequences.
Figure 1: A typical online VOS pipeline: The target model C\({}^{t}\) is initialized based on the given ground truth mask \(Y_{g}\) and its associated feature \(X_{g}\). The dotted arrow shows how the target model C\({}^{t}\) is updated based on memory \(\mathcal{M}\) every \(s\) frames. The methods proposed in this paper are mainly engaged with the target model component of the pipeline.
### Long Video Sequences
Long video sequences containing several concepts are more challenging to learn, since the model requires a large-capacity memory to store the representations of previously seen frames.
To address the limitations in memory and training time, AFB-URR [56] uses an exponential moving average to merge a new memory component with earlier ones if they are similar, or stores it as a new component otherwise. The model removes unused features from the memory when its capacity reaches a predefined limit.
Using a global context module [57] is another way to deal with the limitations caused by long video sequences; the model computes a mean over the entire memory and applies it as a single representation.
However, both methods rely on a compact representation of the memory, which sacrifices segmentation accuracy. In contrast, XMem [5] uses a multi-store feature memory to avoid compression and achieves much higher accuracy in both short-term and long-term predictions. In this article, we focus on improving online VOS by providing an efficient memory usage method (RMSCL) and a regularization-based continual learning approach (GRCL).
## 3 Proposed Approach
In this section we develop the two proposed methods (GRCL and RMSCL) in depth. It is important to note that these methods are not limited to one specific framework; rather, they can be extended to any regular online VOS architecture. The significance of this generality is that online VOS frameworks are preferred over query-based methods in practical applications, since query-based architectures (such as XMem [5]) have memory requirements that grow with video length, whereas online VOS methods assume a fixed memory size. Although online learning does not share the memory challenges of query-based methods, online learning-based approaches do have problems of their own, which are addressed in this section.
We begin with the general structure of online VOS in Section 3.1, followed by the formulation of the proposed gated-regularizer (GRCL) in Section 3.2, and the reconstruction-based memory selection continual learning (RMSCL) in Section 3.3. We conclude this section by proposing the hybrid method of GRCL and RMSCL.
### Online VOS
Online VOS [2; 4; 13], as overviewed in Figure 1, typically comprises the following pieces:
1. A pretrained encoder, extracting features from each frame;
2. A memory \(\mathcal{M}\), storing features and their associated labels / mask;
3. A target model \(\mathrm{C}^{t}\), which is trained on the memory at updating time \(t\), and provides information to the decoder;
4. Pretrained decoder and label encoder \(\mathrm{E}\) [4] networks, which combine temporal information from the target model with the encoder's output to generate a fine-grained output mask \(Y_{i}\) for the current frame \(F_{i}\) (a sketch of how these pieces interact is given after this list).
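As a rough illustration of how these pieces interact at inference time, consider the following Python sketch. It is not the implementation of any particular framework; `memory.store` and `target_model.train_on` are hypothetical helpers standing in for the memory update and the online training step described below.

```python
def run_online_vos(frames, Y_g, encoder, decoder, target_model, memory, s=4):
    """Sketch of the Figure-1 pipeline: encode each frame, let the target model
    produce coarse target information for the decoder, store (feature, mask)
    pairs in the bounded memory, and re-train the target model every s frames."""
    X_g = encoder(frames[0])
    memory.store(X_g, Y_g)                 # initialize memory with the ground-truth frame
    target_model.train_on(memory)          # initial online training
    masks = [Y_g]
    for i, frame in enumerate(frames[1:], start=1):
        X_i = encoder(frame)
        scores = target_model(X_i)         # coarse score maps for the target object
        Y_i = decoder(X_i, scores)         # fine-grained output mask
        masks.append(Y_i)
        memory.store(X_i, Y_i)             # bounded memory: oldest entries are evicted
        if i % s == 0:
            target_model.train_on(memory)  # periodic target-model update
    return masks
```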
The target model \(\mathrm{C}^{t}\) is usually a small convolutional neural network, for reasons of efficiency. The target model is updated every \(s\) frames throughout the video, repeatedly trained on the complete set of features \(X\in\mathds{X}\) and the encoded labels \(\mathrm{E}(Y)\) of stored decoder outputs \(Y\in\mathds{Y}\) from preceding frames. Both \(X\) and \(Y\) are stored in memory \(\mathcal{M}\), where the memory is constrained to some size \(N\), as shown in Figure 1. It is worth noting that \(\mathrm{E}\) is a label encoder, generating sub-mask labels from each \(Y\)[4]. For online training of \(\mathrm{C}^{t}\), \(Y\) is fed to \(\mathrm{E}\) and we seek a trained model \(\mathrm{C}^{t}\) to learn what \(\mathrm{E}\) specifies from \(Y\). That is, the target model acts like a dynamic attention model to generate a set of score maps \(\mathrm{E}\big{(}\mathrm{C}(X_{i})\big{)}\) in order for the segmentation network (decoder) to produce the segmented output mask \(Y_{i}\) associated with the current frame \(F_{i}\). The loss function \(L\) which is used for the online training of target model \(\mathrm{C}^{t}\) is
\[L(\Theta,\mathcal{M})= \tag{1}\] \[\sum_{n=1}^{N}\Big{\|}d_{n}W_{n}\Big{(}\mathrm{E}(Y_{n})-\mathrm{ E}\big{(}\mathrm{C}^{t}(X_{n})\big{)}\Big{)}\Big{\|}_{2}^{2}+\sum_{k=1}^{K}\lambda \theta_{k}^{2},\]
where \(\theta_{k}\in\Theta\) is a parameter of \(\mathrm{C}^{t}\). Depending on the overall architecture, \(\mathrm{E}\) could be an offline / pre-trained label encoder network, as in [4], or just a pass-through identity function, as in [2]. \(W_{n}\) is the spatial pixel weight, deduced from \(Y_{n}\)
and \(d_{n}\) is the associated temporal weight decay coefficient. \(W_{n}\) balances the importance of the target and the background pixels in each frame, whereas \(d_{n}\) defines the temporal importance of sample \(n\) in memory, typically emphasizing more recent frames.
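For illustration, the loss of Eq. (1) can be written as in the following PyTorch-style sketch. The target model `C`, the label encoder `E`, and the per-sample weights \(d_{n}\) and \(W_{n}\) are assumed to be supplied by the surrounding framework; this is a simplified sketch rather than the exact implementation of LWL or JOINT.

```python
import torch

def online_loss(C, E, memory, lam):
    """Eq. (1): weighted residual between the label-encoded stored mask E(Y_n)
    and the label-encoded target-model output E(C(X_n)), summed over the
    memory, plus an L2 penalty on the target-model parameters."""
    data_term = torch.zeros(())
    for X_n, Y_n, d_n, W_n in memory:      # each memory sample: (feature, mask, d_n, W_n)
        residual = E(Y_n) - E(C(X_n))
        data_term = data_term + (d_n * W_n * residual).pow(2).sum()
    reg_term = lam * sum(p.pow(2).sum() for p in C.parameters())
    return data_term + reg_term
```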
Online VOS methods suffer from three main limitations which deteriorate their performance, particularly on long videos:
1. **Memory Size:** To maximize performance, online VOS would need to store in the memory all or most of the extracted information of all preceding frames. However, for videos of arbitrary length this requires an unlimited memory size, which is infeasible.
2. **Target Model Updating:** Even with an unlimited memory size, updating the target model C on an arbitrarily large memory would be computationally problematic.
3. **Hyperparameter Sensitivity:** The sensitivity of online VOS approaches to the target model's configuration and memory updating step size affects both speed and accuracy.
The proposed GRCL and RMSCL aim to mitigate these limitations by incorporating simple yet effective methods applied to the target model C\({}^{t}\) and memory \(\mathcal{M}\).
Since video frame information is provided to the online VOS framework frame by frame, there is a high possibility of drift in the object's appearance, especially in long video sequences. As such, the conventional approach of passing all of the information, as a whole, to the model and letting it decide what to use is not effective, and can lead to ineffective learning or even divergence of the target model. In the experimental results, we further focus on this specific issue.
Instead, inspired by continual learning [30], we seek to regularize the parameters, \(\Theta\), of the target model C\({}^{t}\) in each online learning step \(t\), with a goal of preserving the model knowledge, acquired from those earlier samples (frames) which are no longer present in the memory \(\mathcal{M}\). That is, we have the two fundamental questions of
1. _How do we constrain or regularize the model parameters_, to be explored in the gated-regularizer continual learning (GRCL) method of Section 3.2.
The proposed GRCL is inspired by Memory Aware Synapses (MAS) continual learning [58]. The proposed GRCL allows the memory size to be reduced while maintaining model performance, also increasing the robustness of the target model against the updating step size \(s\), which otherwise typically affects model performance.
2. _How do we decide what to keep in memory, or which subset of memory to use in learning,_ to be explored in the context of reconstruction-based memory selection continual learning (RMSCL) of Section 3.3. The proposed RMSCL is inspired by reconstruction-based feature selection methods and makes it possible for the update of \(\mathrm{C}^{t}\) to efficiently benefit from the information stored in the memory \(\mathcal{M}\).
### Parameter Regularization
Parameter regularization seeks to preserve important parameters of the target model, \(\Theta\), specifically those parameters which were learned or significantly modified in preceding update steps. The MAS algorithm [58] is formulated such that at update step \(t\) the importance of each parameter \(\theta_{k}^{t}\) is associated with its gradient magnitudes \(\{u_{k}^{l}\}_{l=1}^{t-1}\) accumulated during the preceding update steps. Therefore, during each online learning step, we update the parameter importance weights \(\omega_{k}^{t}\) based on the gradient magnitudes,
\[\omega_{k}^{t}=\omega_{k}^{t-1}+u_{k}^{t} \tag{2}\]
As such, for the set of features \(\mathbb{X}\) and their related output masks \(\mathbb{Y}\) in a memory \(\mathcal{M}\) having size \(N\), and given a target model C\({}^{t}\) with \(K\) parameters \(\Theta\), the regularized loss function \(L_{R}\) is defined as
\[L_{R}(\theta,\mathcal{M})=L(\theta,\mathcal{M})+\gamma\sum_{k=1}^{K}\omega_{k} ^{t-1}\big{\|}\theta_{k}^{t}-\theta_{k}^{t-1}\big{\|}_{2}^{2}, \tag{3}\]
where \(L(\theta,\mathcal{M})\) is as described in (1). The latter term is the regularization, controlled by \(\gamma\), and \(t\) counts the model update steps.
The goal is for the loss \(L_{R}\) to allow the target model to be updated while preserving its previously learned knowledge. However, the effectiveness of \(L_{R}\) deteriorates over time (frames), since \(\Omega=\left\{\omega_{k}\right\}_{k=1}^{K}\) gradually loses its regularization power: most parameters become important as the number of update steps \(t\) increases. Our proposed GRCL aims to address this limitation.
### Gated-Regularizer Continual Learning
We formulate GRCL such that, instead of accumulating the importance parameters in \(\Omega^{t}\), it stores a limited number (\(P\)) of binarized importance maps \(\left\{G^{j}\right\}_{j=1}^{P}\) in a gated-regularizer memory \(\mathcal{M}_{G}\) whose size is limited \(\left(\left|\mathcal{M}_{G}\right|\leq P\right)\).
Thus, at each update step \(t\), the overall gated-regularized map \(\mathbf{G}^{t}\) is defined as
\[\mathbf{G}^{t}=\bigvee_{j=1}^{J}G^{j}\ \,\ \ \ J=\left|\mathcal{M}_{G}\right| \tag{4}\]
Here \(\left|\mathcal{M}_{G}\right|\) is the number of occupied memory cells in \(\mathcal{M}_{G}\). Given the current overall gated-regularizer maps \(\mathbf{G}^{t}\), the gated-regularized loss function \(L_{G}\) can be formulated as
\[L_{G}(\theta,\mathcal{M})=L(\theta,\mathcal{M})+\gamma\sum_{k=1}^{K}\mathbf{g }_{k}^{t}\left\|\theta_{k}^{t}-\theta_{k}^{t-1}\right\|_{2}^{2} \tag{5}\]
where \(\mathbf{g}_{k}^{t}\in\mathbf{G}^{t}\), such that with a large coefficient \(\gamma\cong\infty\), it acts as a gating function that allows some parameters to be updated and others to be frozen. After updating the target model \(\mathrm{C}^{t}\), a new gated-map (\(G^{J+1}\)) should be defined and memory \(\mathcal{M}_{G}\) is updated.
To this end, after accumulating the magnitude of the gradient in \(U^{t}=\left\{u_{k}^{t}\right\}_{k=1}^{K}\), a binary gated-regularizer \(g_{k}^{j+1}\in G^{j+1}\) will be defined as
\[g_{k}^{j+1}=\begin{cases}1&\text{if }\frac{u_{k}^{t}}{\max_{k}(U^{t})}>h\\ 0&\text{else}\end{cases} \tag{6}\]
where \(0<h<1\) is a threshold, which is determined based on the distribution of the gradients in \(U^{t}\). The bigger the value of \(h\), the more sparse the resulting gated-regularized map \(G^{j+1}\).
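For illustration, the gate construction of Eqs. (4)–(6) can be sketched as follows (PyTorch-style). Representing gradient magnitudes and gate maps as lists of tensors, one per target-model parameter, is an assumption of this sketch.

```python
import torch

@torch.no_grad()
def build_gate_map(grad_mags, h):
    """Eq. (6): binarize the accumulated gradient magnitudes U^t, normalized
    by their global maximum, using threshold h."""
    u_max = max(u.max() for u in grad_mags)
    return [(u / u_max > h).float() for u in grad_mags]

def overall_gate(gate_memory):
    """Eq. (4): element-wise logical OR over all gate maps stored in M_G."""
    G = [torch.zeros_like(g) for g in gate_memory[0]]
    for maps in gate_memory:
        G = [torch.maximum(acc, g) for acc, g in zip(G, maps)]
    return G

def gated_penalty(params, prev_params, G, gamma):
    """Regularization term of Eq. (5); with a very large gamma this effectively
    freezes every parameter whose gate entry is 1."""
    return gamma * sum((g * (p - q).pow(2)).sum()
                       for p, q, g in zip(params, prev_params, G))
```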
Figure 2 shows the flow-diagram of an online VOS framework at time \(t\) when the target model \(\mathrm{C}^{t}\) is regularized by the proposed GRCL.
One of the main advantages of formulating the loss function of the online VOS framework as \(L_{G}\) is that only an efficient set of binary maps \(\left\{G^{j}\right\}_{j=1}^{P}\) needs to be stored in \(\mathcal{M}_{G}\); these maps are much smaller than the sets of features \(\mathds{X}\) and masks \(\mathds{Y}\) stored in \(\mathcal{M}\).
It is worth noting that the encoder, decoder and network E in the proposed architecture are trained offline, and we use the same trained models in all experiments. Additionally, the memory is initialized by the encoded features of the given frame \(F_{g}\) with the provided ground-truth mask \(Y_{g}\), as defined in semi-supervised VOS frameworks.
Figure 2: The proposed online VOS framework, with adopted Gated-Regularized Continual Learning (GRCL): At time \(T\), the overall gated-regularizer map \(\mathbf{G}^{t}\) is calculated using the stored gated maps in the gated-regularizer memory \(\mathcal{M}_{G}\) and regularizes the process of updating \(\mathrm{C}^{t}\). Finally, \(\mathcal{M}_{G}\) is updated using the calculated \(G^{t+1}\).
### Reconstruction-based Memory Selection Continual Learning
Given the forgetting behaviour of online VOS under appearance drift of the objects, a trivial solution for mitigating this problem is simply to have an unlimited memory size. However, it is difficult for a limited-size target model to extract generalized, discriminative information from a considerably larger memory \(\mathcal{M}\). As such, the effectiveness of updating the target model \(\mathrm{C}^{t}\) deteriorates dramatically on long videos as the memory grows.
To solve this limitation, we propose a dynamic working memory \(\mathcal{M}_{W}\), a _subset_ of \(\mathcal{M}\), and update the target model using this new (smaller) memory instead of on (larger) \(\mathcal{M}\). This new approach can address two problems:
1. Allowing a limited size target model to benefit from a large memory, and
2. The update step becomes significantly more efficient, since it is training on a smaller working memory \(\mathcal{M}_{W}\).
The proposed RMSCL approach adopts a methodology similar to that of likelihood-based (rehearsal) approaches in continual learning, where a set of selected observations from preceding tasks is preserved in memory to mitigate catastrophic forgetting of the target model on subsequent tasks.
As such, \(\mathcal{M}_{W}\) needs to be a small, diverse memory which contains the required features \(X\) and masks \(Y\) of previously evaluated frames. Thus, the goal of the proposed RMSCL is to select \(q\) samples from memory \(\mathcal{M}\) and to place them in \(\mathcal{M}_{W}\) for target model updating. This memory selection is performed on \(\mathcal{M}\) at every update step \(t\). The selection of samples from memory is formulated as a LASSO [59] optimization problem: to update the target model \(\mathrm{C}^{t}\), the optimal linear reconstruction of the stored features \(\mathbbm{X}\) in memory \(\mathcal{M}\) for the current feature \(X_{i}\) is identified via an \(L_{1}\) constraint on the coefficients \(\Psi\) by minimizing
\[\operatorname*{arg\,min}_{\Psi}\left(\frac{1}{2}\big{\|}X_{i}-\Psi\mathbbm{X} \big{\|}_{2}^{2}+\lambda\|\Psi\|_{1}\right). \tag{7}\]
\(\mathbbm{X}\) contains the vectorized features \(\{X_{n}\}\), and \(\Psi\) consists of the \(N\) coefficients \(\Psi=\{\psi_{n}\}\) weighting each feature \(X_{n}\) in the reconstruction of \(X_{i}\). In other words, we seek the best sparse linear reconstruction of the current frame's feature \(X_{i}\) from the stored features \(\mathbbm{X}\) in memory \(\mathcal{M}\).
Figure 3: The proposed online VOS framework with augmented Reconstruction-based Memory Selection Continual Learning (RMSCL). At the current time \(T\), three samples associated with three positive \(\psi\) are selected using a reconstruction-based (Lasso) optimization.

The \(L_{1}\) constraint leads to a sparse set of coefficients, meaning that only a small number of coefficients are non-zero after the optimization process, and it is those coefficients \(\psi\) and their associated features \(X\) which are selected for updating the target model \(\text{C}^{t}\). It is important to mention that the pixel weight \(W_{n}\) and the deterministic temporal weight \(d_{n}\) are not involved in the loss function of (7); thus we replace \(d_{n}\) and \(W_{n}\) in (1) with \(\psi_{m}\) as follows:
\[L(\Theta,\mathcal{M}_{W})= \tag{8}\] \[\sum_{m=1}^{q}\left\|\psi_{m}\Big{(}\text{E}(Y_{m})-\text{E}\big{(} \text{C}^{t}(X_{m})\big{)}\Big{)}\right\|_{2}^{2}+\sum_{k=1}^{K}\lambda\theta_ {k}^{2},\]
Here \(q\) is the size of the dynamic working memory \(\mathcal{M}_{W}\), equal to the number of non-zero positive coefficients \(\psi\). The only problem with the Lasso minimization of (7) is that its computational complexity depends on the size of the features \(X\); a very large feature size can make (7) the bottleneck of online VOS. To handle this problem, we apply a channel-wise max pooling function \(\text{pool}(\cdot)\) to each feature \(X\), such that (7) becomes
\[\operatorname*{arg\,min}_{\Psi}\left(\frac{1}{2}\big{\|}\text{ pool}(X_{i})-\Psi\ \text{pool}(\mathbb{X})\big{\|}_{2}^{2}+\lambda\|\Psi\|_{1}\right). \tag{9}\]
It is worth noting that the pooling function is only performed for estimating the coefficient set \(\Psi\); it is still the actual feature \(X\) which is used for creating the working memory \(\mathcal{M}_{W}\) and updating the target model. Figure 3 shows an online VOS pipeline resulting from the proposed RMSCL.
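A minimal NumPy/scikit-learn sketch of the pooled reconstruction in Eq. (9) is given below. The (C, H, W) feature layout, the fixed regularization strength, and the use of scikit-learn's Lasso solver are assumptions of this illustration, not the exact configuration used in the experiments.

```python
import numpy as np
from sklearn.linear_model import Lasso

def select_working_memory(query_feat, memory_feats, lam=0.01):
    """Eq. (9): reconstruct the channel-wise max-pooled query feature from the
    pooled memory features and keep the samples with positive Lasso weights."""
    pool = lambda x: x.max(axis=(1, 2))                      # (C, H, W) -> (C,)
    A = np.stack([pool(x) for x in memory_feats], axis=1)    # (C, N)
    b = pool(query_feat)                                     # (C,)
    lasso = Lasso(alpha=lam, positive=True, fit_intercept=False)
    lasso.fit(A, b)
    psi = lasso.coef_                                        # (N,) sparse weights
    selected = np.flatnonzero(psi > 0)                       # indices placed in M_W
    return selected, psi[selected]
```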
## 4 Results
The ability of the proposed methods to improve the performance of online VOS frameworks is evaluated by augmenting state-of-the-art online VOS algorithms. Both the proposed gated-regularizer continual learning (GRCL) and the reconstruction-based memory selection continual learning (RMSCL) can augment a given online VOS framework, and they can even be combined and used together\({}^{1}\).
Footnote 1: We refer to the combination of the two modules as the “Hybrid” approach.
Here we adopt two well-known, state-of-the-art online VOS frameworks: LWL [4] and JOINT [13]. LWL is an extension of the well-known FRTM [2] framework, benefiting from a label encoder network E which tells the target model what to learn [4]. JOINT approaches the VOS problem by using an online learning induction branch jointly with a transduction branch, which benefits from a lightweight transformer that provides temporal and spatial attention to its decoder. JOINT has reported state-of-the-art accuracy for online VOS. All experiments were performed on a machine with a single NVIDIA V100 GPU.
### Datasets
We evaluated the proposed methods on two different types of video sequences: long and short. The long video dataset [56] contains objects with long trajectories and multiple distribution drifts; the short videos are from the standard DAVIS16 [60] and DAVIS17 [60] datasets, where the target objects are tracked over a short period of time and usually without significant changes in appearance. Evaluating the competing methods on both long and short video datasets demonstrates the robustness of the different algorithms to different environments.
The **Long Video Dataset**[56] contains three videos with a single object which is recorded for more than 7000 frames. The target objects have long trajectories of movement and sudden appearance changes which lead to significant representation drifts of the video objects.
With regard to **Short Video Datasets**, the DAVIS16 [60] validation set has 20 videos, each with a single object to segment, while the validation set of DAVIS17 [60] contains 30 video sequences with multiple objects to be segmented in each frame. The target objects in these datasets mostly have short trajectories, with modest changes in appearance.
### Experimental Setup
We use a fixed parameter setup for the baselines, with maximum memory sizes of \(N=32\) for LWL and \(N=20\) for JOINT, as is suggested in their setups. For all experiments, the target model \(\text{C}^{t}\) is updated for three epochs in each updating step to have a fair comparison with the baselines. The target model is updated every time the memory
is updated, following the proposed setup in [5]. The memory \(\mathcal{M}\) is initialized based on the given (ground truth) frame \(F_{g}\).
In all experiments, as suggested in the semi-supervised online VOS baselines (LWL and JOINT), the information in \(F_{g}\) is preserved and is used throughout the whole video sample. For GRCL, we keep the gated-regularizer map \(G\) related to the training of \(F_{g}\) in \(\mathcal{M}_{G}\). For RMSCL, the feature \(X_{g}\) and mask \(Y_{g}\) are always placed in working memory with a minimum weight \(\psi_{g}\) as shown in Figure 3. We use the same available pre-trained decoder and encoder models for all experiments of LWL and JOINT. To measure the effectiveness of the competing methods, consistent with the standard DAVIS protocol [60] the mean Jaccard \(\mathcal{J}\) index, mean boundary \(\mathcal{F}\) scores, and the average of \(\mathcal{J}\&\mathcal{F}\) are reported for all methods. The speed of each method is reported on the DAVIS16 dataset [2] in units of Frames Per Second (FPS).
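For reference, the per-frame region similarity \(\mathcal{J}\) is the intersection-over-union of the predicted and ground-truth masks, as in the short sketch below; the boundary measure \(\mathcal{F}\) additionally requires boundary extraction and matching and is omitted from this illustration.

```python
import numpy as np

def jaccard(pred_mask, gt_mask):
    """Region similarity J for one frame: IoU of two binary masks."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(inter) / float(union) if union > 0 else 1.0
```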
### Results
#### 4.3.1 Long Video Evaluation
Figure 4 shows the GPU memory usage of LWL, JOINT and XMem on the "blueboy" video sequence from the long video dataset. The online VOS methods (LWL and JOINT) require only a fixed GPU memory size, which enables them to be used on smaller devices with more modest GPUs. This section will show that the proposed methods do not further increase the GPU memory requirement.
The effectiveness of the proposed GRCL and RMSCL is evaluated by augmenting two state-of-the-art online VOS frameworks, LWL and JOINT; however, our proposed methods can be extended to any online VOS method with a periodically updated target model network, as in Figure 1.
Table 1 shows the results of the selected baselines (LWL and JOINT), each augmented by the proposed GRCL and RMSCL, evaluated on the long video dataset. For LWL-GRCL and JOINT-GRCL, the threshold \(h\) is dynamically set to the \(99^{th}\) percentile of the distribution of the normalized \(U^{t}\) in (6) for LWL and to the \(99.5^{th}\) percentile for JOINT. We also limit \(h\) to between 0.1 and 0.55. The hyper-parameters related to \(h\) were selected by cross-validation and are tuned for a gated-regularizer memory size of \(P=20\). In Section 4.4.1 we investigate the effect of using different \(P\) in GRCL. For the frameworks augmented with RMSCL, the parameter \(\lambda\) defines the sparsity of \(\Psi\) in (9). To select the best \(\lambda\), we used the Akaike information criterion (AIC) [61] for model selection, automatically selecting \(\lambda\) and the number of positive non-zero coefficients in \(\Psi\), which defines the size of the working memory \(\mathcal{M}_{W}\). Thus, for each update step, \(\mathcal{M}_{W}\) could in principle have a different size, depending upon the selected \(\lambda\).
We conduct six experiments with six different memory and target model update steps \(s\in\{1,2,4,6,8,10\}\), where the target model C\({}^{t}\) is updated after each memory update.

Table 1: Comparison results on the long video dataset [56], based on the online VOS baseline methods (LWL and JOINT), their augmented versions with GRCL and/or RMSCL, and four query-based VOS methods.

| **Method** | \(\mathcal{J}\&\mathcal{F}\) | \(\mathcal{J}\) | \(\mathcal{F}\) |
| --- | --- | --- | --- |
| LWL [4] | 84.1\(\pm\)2.9 | 81.7\(\pm\)2.5 | 86.5\(\pm\)3.3 |
| LWL-GRCL (ours) | 85.7\(\pm\)6.6 | 83.5\(\pm\)0.7 | 88.0\(\pm\)0.5 |
| LWL-RMSCL (ours) | 85.5\(\pm\)1.8 | 83.8\(\pm\)1.8 | 87.6\(\pm\)1.8 |
| LWL-Hybrid (ours) | 86.0\(\pm\)1.6 | 84.0\(\pm\)1.6 | 88.1\(\pm\)1.5 |
| JOINT [13] | 69.1\(\pm\)3.8 | 67.0\(\pm\)3.5 | 71.2\(\pm\)4.1 |
| JOINT-GRCL (ours) | 72.7\(\pm\)8.0 | 70.5\(\pm\)7.4 | 74.8\(\pm\)8.5 |
| JOINT-RMSCL (ours) | 80.0\(\pm\)4.5 | 77.5\(\pm\)4.2 | 82.5\(\pm\)4.8 |
| JOINT-Hybrid (ours) | 80.8\(\pm\)3.5 | 78.1\(\pm\)3.4 | 82.9\(\pm\)3.6 |
| RMNet [22] | 59.8\(\pm\)3.9 | 59.7\(\pm\)8.3 | 60.0\(\pm\)7.5 |
| STM [1] | 80.6\(\pm\)1.3 | 79.9\(\pm\)0.9 | 81.3\(\pm\)1.0 |
| STCN [23] | 87.3\(\pm\)0.7 | 85.4\(\pm\)1.1 | 89.2\(\pm\)1.1 |
| XMem [5] | 89.8\(\pm\)0.2 | 88.0\(\pm\)0.2 | 91.6\(\pm\)0.2 |

Figure 4: GPU memory usage of XMem, LWL and JOINT when processing 2416 frames of the \(\mathbf{blueboy}\) video in the long video dataset [56]. As shown, the GPU memory usage of XMem increases significantly over time, whereas LWL and JOINT have a fixed GPU memory usage.

The performance of RMSCL fluctuates with step size \(s\) because of the differing distributions which are formed in the memory as a function of the sampling frequency. For reference, the means and standard deviations of all competing methods are reported in Table 1.
In [5], the authors also compare the performance of different methods by averaging over five runs; however, they did not report the five update steps they used. Comparing the standard deviations of JOINT in Table 1 with those reported in [5], we see that our six selected memory update steps are close to theirs.
As seen in Table 1, the proposed methods improve the performance of both online VOS models on long videos when the objects in the video have a long trajectory with sudden representation drifts.
Furthermore, as illustrated in Table 1, the proposed GRCL improves the robustness of the LWL model against different memory \(\mathcal{M}\) and target model \(\mathrm{C}\) update step sizes, as evidenced by its lower standard deviation in Table 1. Note that JOINT has a parallel transduction branch in its structure, which benefits from a transformer model and acts like a query-based method. This is an important reason why the proposed GRCL is not as effective in reducing the standard deviation of JOINT-GRCL: the transduction branch of JOINT can amplify the positive or even negative effects of the proposed methods. Nevertheless, the average performance \(\mathcal{J}\&\mathcal{F}\) improves significantly, by more than 3%. For both baselines, RMSCL improves the robustness against different memory update steps by decreasing the standard deviation of the baselines in LWL-RMSCL and JOINT-RMSCL. Notably, JOINT-RMSCL outperforms JOINT by almost 10% on the long video dataset. We also apply the combination of both methods (GRCL and RMSCL) to the baselines (LWL and JOINT), denoted LWL-Hybrid and JOINT-Hybrid. As shown in Table 1, the Hybrid method improves the robustness (smaller standard deviation) of the baselines with better average performance. For a fair comparison, the proposed methods and the baseline online VOS frameworks are compared with four query-based methods: RMNet [22], STM [1], STCN [23], and the current VOS state-of-the-art approach XMem [5]. The reported results of the query-based methods on short video datasets are taken from [5]. STM is a query-based VOS baseline that was the state-of-the-art method in VOS for a long period of time; RMNet, STCN, and XMem are its follow-up methods. RMNet and STCN improve the memory functionality of STM with better memory encoding and memory reading, and XMem can be considered an extension of STM specifically designed to work on long video sequences.
Figure 5: Performance comparison of competing methods as a function of memory and target model update step sizes, \(s\), on the long video dataset [56]. Each curve encodes the average \(\mathcal{J}\&\mathcal{F}\) over 6 runs.

Figure 5 compares the average performance \(\mathcal{J}\&\mathcal{F}\) of the first eight methods of Table 1 as a function of the memory and target model update step size \(s\). On LWL, GRCL outperforms RMSCL when the memory and target model step size is small (\(s\in\{1,2,4\}\)), whereas for larger step sizes (\(s\in\{6,8,10\}\)) RMSCL is better; a larger memory step size \(s\) yields a more diverse memory \(\mathcal{M}\), which makes RMSCL more effective.
#### 4.3.2 Short Video Evaluation
Table 2 reports the performance of the online VOS frameworks augmented with the proposed approaches, and of the competing algorithms, on the short video datasets (DAVIS16 and DAVIS17). We use the same hyper-parameters for short and long videos, i.e., the models have no prior knowledge of the length of the video sequences. As mentioned before, objects in these datasets have short trajectories and their representations mostly remain intact across frames. As seen in Table 2, the frameworks augmented with the proposed GRCL perform the same as the baseline methods: the proposed regularizer does not affect performance when there is no representation drift of the objects in the videos, and JOINT-GRCL even performs slightly better than JOINT on DAVIS2017. In Table 2, we follow the baseline models' suggested parameters for reporting \(\mathcal{J}\), \(\mathcal{F}\), and FPS. For JOINT, \(\mathcal{M}\) is updated every 3 frames, and for LWL \(\mathcal{M}\) is updated every frame, whereas XMem updates its so-called working memory every 5 frames. The proposed RMSCL improves the performance of JOINT on DAVIS16 but slightly degrades the performance of JOINT on DAVIS17. In JOINT-RMSCL, both the online learning part and the transformer part use \(\mathcal{M}_{W}\), which is why JOINT-RMSCL reports a higher FPS than JOINT. Table 2 also shows that the baselines perform slightly better in terms of FPS, since GRCL needs to calculate \(G^{J+1}\) after every update step \(t\); however, for a small target model C\({}^{t}\) this FPS degradation is not considerable.
Figure 6 shows the qualitative results of the proposed methods and the baselines (LWL and JOINT) on 6 selected frames of the "dressage" video sequence of the long video dataset. The results show that the proposed methods improve the segmentation of the last frames of the video sequence, where the baselines are more vulnerable to the distribution drift of the target object.
We also compare the qualitative results of the proposed methods and the baselines on a short video dataset (DAVIS16 [60]). Figure 7 shows the qualitative results of the evaluated models on the "soapbox" video sequence of DAVIS16. As illustrated, the proposed continual learning methods improve the JOINT results while changing the LWL results only marginally, which is in agreement with the results reported in Table 2. The "soapbox" video is one of the longest video sequences of DAVIS16, with 99 frames.
On long video sequences, it is not feasible to store the information of all previously evaluated frames in memory \(\mathcal{M}\); it is therefore important to limit the memory size \(N\). Here we evaluate how different memory sizes affect the baselines and the proposed methods. For this experiment, we compare the performance of LWL, LWL-RMSCL and LWL-GRCL on the long video dataset with \(N\in\{8,16,32,64,128,256\}\) and a target model and memory update step of \(s=4\). As seen in Figure 8, increasing the memory size \(N\) improves the performance of all methods; however, the improvement is largest for LWL-RMSCL, since RMSCL addresses the problem of learning a small target model \(\mathrm{C}^{t}\) on a large dataset within a few training epochs by providing a working memory \(\mathcal{M}_{W}\). Additionally, Figure 9 illustrates how increasing the memory size \(N\) affects the FPS of the evaluated methods (LWL, LWL-GRCL, LWL-RMSCL). The memory and target model update step is set to \(s=1\) for the experiments in Figure 9. As shown, the FPS of LWL-RMSCL degrades less than that of LWL and LWL-GRCL, while LWL-GRCL and LWL show almost the same degradation as the memory size \(N\) increases. For LWL-RMSCL, the minimization of (9) is affected by increasing the memory size \(N\), which consequently affects its FPS.
To assess the benefit of the proposed gating mechanism, the LWL framework is also augmented with a standard MAS continual learning module [58] as a regularizer for updating the target model. The evaluation is conducted on the long video dataset, with results shown in Figure 10. As shown, LWL-GRCL achieves a higher average performance \(\mathcal{J}\&\mathcal{F}\) than LWL augmented with MAS. Two main reasons explain this performance gap: i) the gated-regularizer map \(\mathbf{G}^{t}\) preserves the efficiency of the proposed method compared to the MAS approach; MAS relies heavily on \(\Omega^{t}\), but the effectiveness of \(\Omega^{t}\) degrades as more and more target model gradients are processed and accumulated over time. ii) For the small number of training epochs in each update step of \(\mathrm{C}^{t}\), the binarized (hard) regularizer is more effective than the soft regularizer \(\Omega^{t}\) of MAS.
#### 4.3.4 Memory Efficiency
To compare the memory efficiency of the proposed GRCL against the baseline, we compare one unit of the memory \(\mathcal{M}\) of LWL with one unit of the gated-regularizer memory \(\mathcal{M}_{G}\) of LWL-GRCL. In LWL, each sample in the memory \(\mathcal{M}\) consists of a preceding estimated object mask from \(\mathds{Y}\) and the extracted features of its input frame from \(\mathds{X}\). Each feature \(X\in\mathds{X}\) has dimension \(512\times 30\times 52\) floats (64 bits each). In contrast, each binary gated-regularizer map (over the target model parameters) has dimension \(512\times 16\times 3\times 3\) bits. As a result, each unit of \(\mathcal{M}_{G}\) is almost 693 times smaller than each unit of \(\mathcal{M}\).
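A back-of-the-envelope check of this ratio:

```python
feature_bits = 512 * 30 * 52 * 64   # one stored feature X, 64-bit floats
gate_bits    = 512 * 16 * 3 * 3     # one binary gated-regularizer map
print(feature_bits / gate_bits)     # ~693.3
```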
### Ablation Study
In this section, we evaluate and analyze the effect of some key parameters of the proposed methods on the performance of both LWL and JOINT when augmented with GRCL and RMSCL. For the experimental results in Section 4.3, the gated-regularizer memory size \(P\) was set to 20; this value was selected by cross-validation and fixed for both LWL-GRCL and JOINT-GRCL. Here, we evaluate the effect of different gated-regularizer memory sizes \(P\) for LWL-GRCL on the long video dataset.
#### 4.4.1 Gated-memory Size
Figure 11 shows the performance of LWL-GRCL with different gated-memory sizes \(P\in\{4,20,32,64,80,128\}\). As demonstrated in Figure 11, increasing \(P\) improves the performance of LWL-GRCL up to the point where the number of regularized parameters begins to degrade the target model's learning. While the main experimental results use \(P=20\) with a memory size of \(N=32\), in Figure 11 the memory size is \(N=8\), which requires a larger \(P\); this is why \(P=32\) gives the best performance here.
#### 4.4.2 Regularized Parameters
The number of regularized parameters in \(\mathrm{C}\) is also analyzed. As seen in Figure 12, the number of regularized parameters of the target model \(\mathrm{C}^{t}\) increases while the gated-regularizer memory \(\mathcal{M}_{G}\) is growing; once \(\mathcal{M}_{G}\) reaches its maximum capacity, the number of regularized parameters of \(\mathrm{C}^{t}\) remains under a certain threshold, since the oldest gated-regularizer map in \(\mathcal{M}_{G}\) is replaced with the new one. For \(P=128\), almost all parameters of \(\mathrm{C}\) are regularized; in this case \(\mathrm{C}^{t}\) cannot be updated, and even removing one gated-regularizer map \(G^{j}\) from \(\mathcal{M}_{G}\) would not solve the problem. In other words, \(\mathrm{C}^{t}\) would not have enough free parameters to be updated on the newly updated memory.
#### 4.4.3 Update Step Size
To assess the effect of the target model update step size on the proposed methods, another ablation study compares the performance of LWL-GRCL, LWL-RMSCL and LWL on the long video dataset. Here, we fix the memory update step size to 1 and vary the target model update step size \(s\in\{1,2,4,6,8,10,12,14,16\}\). Note that in all results in Section 4.3, the memory and \(\mathrm{C}^{t}\) were updated at the same time. For this experiment, we set the memory size to \(N=8\) and update the memory every frame. As seen in Figure 13, LWL's performance fluctuates with the update step size, but the proposed methods reduce the amount of this fluctuation. As evident from Figure 13, the improvement is more considerable when the target model \(\mathrm{C}^{t}\) is updated more frequently (smaller step sizes \(s\)).
## 5 Conclusion
In this paper, we proposed two novel modules, Gated-Regularizer Continual Learning (GRCL) and Reconstruction-based Memory Selection Continual Learning (RMSCL), which can be integrated with any online VOS algorithm to relax its memory limitations while preserving segmentation accuracy. The proposed modules augment the capability of online VOS frameworks, making them more memory efficient and more accurate. Additionally, we showed that the combination of the two proposed methods (the Hybrid method) increases the robustness of the augmented baselines. Our results showed that the proposed methods improve the accuracy of the baseline online VOS approaches in different scenarios, and that they do not degrade the performance of the baselines on the short video datasets (DAVIS16, DAVIS17).
Acknowledgments. We thank NSERC Alliance and the Microsoft Office Media Group for their generous support of this research project.
## Declarations
The datasets and pre-trained models used and/or analysed during the current study are publicly available:
* DAVIS16 [https://davischallenge.org/davis2016/code.html](https://davischallenge.org/davis2016/code.html)
* DAVIS17 [https://davischallenge.org/davis2017/code.html](https://davischallenge.org/davis2017/code.html)
* Long Video Dataset [https://www.kaggle.com/datasets/gvclsu/long-videos](https://www.kaggle.com/datasets/gvclsu/long-videos)
* LWL [https://github.com/visionml/pytracking](https://github.com/visionml/pytracking)
* JOINT [https://github.com/maoyunyao/JOINT](https://github.com/maoyunyao/JOINT)
* XMem [https://github.com/hkchengrex/XMem](https://github.com/hkchengrex/XMem)
|
2309.05067 | Mutation-based Fault Localization of Deep Neural Networks | Deep neural networks (DNNs) are susceptible to bugs, just like other types of software systems. A significant uptick in using DNN, and its applications in wide-ranging areas, including safety-critical systems, warrant extensive research on software engineering tools for improving the reliability of DNN-based systems. One such tool that has gained significant attention in the recent years is DNN fault localization. This paper revisits mutation-based fault localization in the context of DNN models and proposes a novel technique, named deepmufl, applicable to a wide range of DNN models. We have implemented deepmufl and have evaluated its effectiveness using 109 bugs obtained from StackOverflow. Our results show that deepmufl detects 53/109 of the bugs by ranking the buggy layer in top-1 position, outperforming state-of-the-art static and dynamic DNN fault localization systems that are also designed to target the class of bugs supported by deepmufl. Moreover, we observed that we can halve the fault localization time for a pre-trained model using mutation selection, yet losing only 7.55% of the bugs localized in top-1 position. | Ali Ghanbari, Deepak-George Thomas, Muhammad Arbab Arshad, Hridesh Rajan | 2023-09-10T16:18:49Z | http://arxiv.org/abs/2309.05067v1 |

# Mutation-based Fault Localization of Deep Neural Networks
###### Abstract
Deep neural networks (DNNs) are susceptible to bugs, just like other types of software systems. A significant uptick in using DNN, and its applications in wide-ranging areas, including safety-critical systems, warrant extensive research on software engineering tools for improving the reliability of DNN-based systems. One such tool that has gained significant attention in the recent years is _DNN fault localization_. This paper revisits _mutation-based fault localization_ in the context of DNN models and proposes a novel technique, named deepmufl, applicable to a wide range of DNN models. We have implemented deepmufl and have evaluated its effectiveness using 109 bugs obtained from StackOverflow. Our results show that deepmufl detects 53/109 of the bugs by ranking the buggy layer in top-1 position, outperforming state-of-the-art static and dynamic DNN fault localization systems that are also designed to target the class of bugs supported by deepmufl. Moreover, we observed that we can halve the fault localization time for a pre-trained model using _mutation selection_, yet losing only 7.55% of the bugs localized in top-1 position.
Deep Neural Network, Mutation, Fault Localization
## I Introduction
Software bugs [1] are a common and costly problem in modern software systems, costing the global economy billions of dollars annually [2]. Recently, data-driven solutions have gained significant attention for their ability to efficiently and cost-effectively solve complex problems. With the advent of powerful computing hardware and an abundance of data, the use of deep learning [3], which is based on deep neural networks (DNNs), has become practical. Despite their increasing popularity and success stories, DNN models, like any other software, may contain bugs [4, 5, 6, 7], which can undermine their safety and reliability in various applications. Detecting DNN bugs is _not_ easier than detecting bugs in traditional programs, _i.e._, programs without any data-driven component in them, as DNNs depend on the properties of the training data and numerous hyperparameters [8]. Mitigating DNN bugs has been the subject of fervent research in recent years, and various techniques have been proposed for testing [9, 10], fault localization [11, 12], and repair [13, 14] of DNN models.
Fault localization in the context of traditional programs has been extensively studied [15], with one well-known approach being _mutation-based fault localization_ (MBFL) [16, 17]. This approach is based on mutation analysis [18], which is mainly used to assess the quality of a test suite by measuring the ratio of artificially introduced bugs that it can detect. MBFL improves upon the more traditional, lightweight _spectrum-based fault localization_[19, 20, 21, 22, 23, 24] by uniquely capturing the relationship between individual statements in the program and the observed failures. While both spectrum-based fault localization [25, 26] and mutation analysis [27, 28, 29] have been studied in the context of DNNs, to the best of our knowledge, MBFL for DNNs has not been explored by the research community, yet the existing MBFL approaches are not directly applicable to DNN models.
This paper revisits the idea of MBFL in the context of DNNs. Specifically, we design, implement, and evaluate a technique, named deepmufl, to conduct MBFL in pre-trained DNN models. The basic idea behind deepmufl is derived from its traditional MBFL counterparts, namely Metallaxis [30] and MUSE [17], that are based on measuring the impact of mutations on passing and failing test cases (see §II for more details). In summary, given a pre-trained model and a set of data points, deepmufl separates the data points into two sets of "passing" and "failing" data points (test cases), depending on whether the output of the model matches the ground-truth. deepmufl then localizes the bug in two phases, namely a _mutation generation phase_ and a _mutation testing/execution phase_. In the mutation generation phase, it uses 79 mutators, a.k.a. mutation operators, to systematically mutate the model, _e.g._, by replacing the activation function of a layer, so as to generate a pool of mutants, _i.e._, model variants with seeded bugs. In the mutation testing phase, deepmufl feeds each of the mutants with passing and failing data points and compares the output to the output of the original model to record the number of passing and failing test cases that are impacted by the injected bugs. In this paper, we study two types of impacts: _type 1_ impact, _a la_ MUSE, which tracks only fail to pass and pass to fail, and _type 2_ impact, like Metallaxis, which tracks changes in the actual output values. deepmufl uses these numbers to calculate _suspiciousness values_ for each layer according to MUSE, as well as two variants of Metallaxis formulas. The layers are then sorted in descending order of their suspiciousness values for the developer to inspect.
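The following minimal sketch (not deepmufl's actual implementation) illustrates the two measurements underlying this workflow; `model_predict` and `mutant_predict` are assumed to be functions mapping a batch of inputs to predicted labels. Per-layer suspiciousness is then computed from these counts using the MUSE and Metallaxis formulas reviewed in §II.

```python
import numpy as np

def split_tests(model_predict, X, y_true):
    """Split data points into passing/failing 'test cases' by comparing the
    original model's predictions with the ground truth."""
    preds = model_predict(X)
    passing = np.flatnonzero(preds == y_true)
    failing = np.flatnonzero(preds != y_true)
    return passing, failing

def type1_impact(mutant_predict, X, y_true, passing, failing):
    """Type-1 impact of one mutant: number of fail-to-pass and pass-to-fail
    flips it causes relative to the original model."""
    m_preds = mutant_predict(X)
    f2p = int(np.sum(m_preds[failing] == y_true[failing]))
    p2f = int(np.sum(m_preds[passing] != y_true[passing]))
    return f2p, p2f
```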
We have implemented deepmuff on top of Keras [31], and it
supports three types of DNN models for regression, as well as classification tasks, that must be written using the Sequential API of Keras: fully-connected DNN, convolutional neural network (CNN), and recurrent neural network (RNN). Extending deepmufl to other libraries, _e.g._, TensorFlow [32] and PyTorch [33], as well as potentially to other model architectures, _e.g._, the functional model architecture in Keras, is a matter of investing engineering effort in the development of new mutators tailored to such libraries and models. Since the current implementation of deepmufl operates on pre-trained models, its scope is limited to _model bugs_ [7], _i.e._, bugs related to activation function, layer properties, model properties, and bugs due to missing/redundant/wrong layers (see §VI).
We have evaluated deepmufl using a diverse set of 109 Keras bugs obtained from StackOverflow. These bugs are representatives of the above-mentioned model bugs, in that our dataset contains examples of each bug sub-category at different layers of the models suited for different tasks. For example, concerning the sub-category _wrong activation function_ model bug, we have bugs in regression and classification fully-connected DNN, CNN, and RNN models that have wrong activation functions of different types (_e.g._, ReLU, softmax, _etc._) at different layers. For 53 of the bugs, deepmufl, using its MUSE configuration, pinpoints the buggy layer by ranking it in top-1 position. We have compared deepmufl's effectiveness to that of state-of-the-art static and dynamic DNN fault localization systems Neuralint [12], DeepLocalize [11], DeepDiagnosis [8], and UMLAUT [34] that are also designed to detect model bugs. Our results show that, in our bug dataset, deepmufl, in its MUSE configuration, is 77% more effective than DeepDiagnosis, which detects 30 of the bugs.
Despite this advantage of deepmufl in terms of effectiveness, since it operates on a pre-trained model, it is slower than state-of-the-art DNN fault localization tools from an end-user's perspective. However, this is mitigated, to some extent, by the fact that, similar to traditional programs, one can perform _mutation selection_ [35] to curtail the mutation testing time: we observed that by randomly selecting 50% of the mutants for testing, we can still find 49 of the bugs in top-1 position, yet we halve the fault localization time after training the model.
In summary, this paper makes the following contributions.
* **Technique**: We develop MBFL for DNNs and implement it in a novel tool, named deepmufl, that can be uniformly applied to a wide range of DNN model types.
* **Study**: We compare deepmufl to state-of-the-art static and dynamic fault localization approaches and observed:
* In four configurations, deepmufl outperforms other approaches in terms of the number of bugs that appear in top-1 position, and it detects 21 bugs that none of the studied techniques were able to detect.
* We can halve the fault localization time for a pre-trained model by random mutation selection without significant loss of effectiveness.
* **Bug Dataset**: We have created the largest curated dataset of _model bugs_, comprising 109 Keras models ranging from regression to classification and from fully-connected DNNs to CNNs and RNNs.
**Paper organization.** In the next section, we review concepts of DNNs, mutation analysis, and MBFL. In §III, we present a motivating example and discuss how deepmufl works under the hood. In §IV, we present the technical details of the proposed approach, before discussing the scope of deepmufl in §V. In §VI, we present the results of our experiments with deepmufl and state-of-the-art DNN fault localization tools from different aspects. We discuss threats to validity in §VII, review related work in §VIII, and conclude the paper in §IX.
**Data availability.** The source code of deepmufl and the data associated with our experiments are publicly available [36].
## II Background
### _Mutation Analysis_
Mutation analysis [18] is a program analysis method for assessing the quality of a test suite. It involves generating a pool of program variants, _i.e._, _mutants_, by systematically mutating program elements, _e.g._, replacing an arithmetic operator with another, and running the test suite against the mutants to check if the output of the mutated program is different from that of the original one; if different, the mutant is marked as _killed_, otherwise as _survived_. A mutant might survive because it is semantically equivalent to the original program, hence the name _equivalent_ mutant. Test suite quality is assessed by computing a _mutation score_ for the test suite, which is the ratio of killed mutants over the total number of non-equivalent mutants. The mutation score indicates how good a test suite is at detecting real bugs [37]. In addition to its original use, mutation analysis has been used for many other purposes [38], such as fault localization [16, 17], automated program repair [39, 40], test generation [41, 42] and prioritization [43], program verification [44, 45], _etc._
### _Mutation-based Fault Localization_
Mutation-based fault localization (MBFL) uses mutation analysis to find bugs. In this section, we review two major approaches to MBFL, namely Metallaxis [30] and MUSE [17]. Both of these approaches are implemented in deepmufl. The reader is referred to the original papers [30, 17] for examples explicating the rationale behind each approach.
#### Ii-B1 Metallaxis
Metallaxis [30] posits that mutants generated by mutating the same program element are likely to exhibit similar behaviors, while mutants generated by mutating different program elements are likely to exhibit different behaviors. Since a fault itself can also be viewed as a mutant, it is expected to behave similarly to other mutants generated by mutating that same buggy program element and can be located by examining the mutants based on this heuristic. Metallaxis regards mutants that change the test outputs, or their error messages, _e.g._, stack traces, as _impacting the tests_. Thus, mutants impacting failing test cases might indicate that their corresponding code elements are the root cause of the test failures, while mutants impacting passing test cases might indicate that their corresponding code elements are correct.
Once the number of impacted passing and failing test cases are calculated, Metallaxis uses a fault localization formula to calculate suspiciousness values for each element.
The Metallaxis fault localization formula can be viewed as an extension of spectrum-based fault localization, treating all mutants impacting the tests as covered elements and the others as uncovered elements. Specifically, the maximum suspiciousness value over the mutants of a code element is returned as the suspiciousness value of that code element. More concretely, assuming we are using the SBI formula [46], the suspiciousness value for a program element \(e\), denoted \(s(e)\), is calculated as follows.
\[s(e)=\max_{m\in M(e)}\left(\frac{|T_{f}(m,e)|}{|T_{f}(m,e)|+|T_{p}(m,e)|}\right), \tag{1}\]
where \(M(e)\) denotes the set of all mutants targeting program element \(e\), \(T_{f}(m,e)\) is the set of failing test cases that are impacted by the mutant \(m\), while \(T_{p}(m,e)\) denotes the set of passing test cases that are impacted by \(m\). In this definition, and in the rest of the paper, the notation \(|\cdot|\) represents the size of a set. Alternatively, had we used Ochiai [47], Metallaxis suspiciousness formula would be modified as follows.
\[s(e)=\max_{m\in M(e)}\left(\frac{|T_{f}(m,e)|}{\sqrt{(|T_{f}(m,e)|+|T_{p}(m,e) |)|T_{f}|}}\right), \tag{2}\]
where \(T_{f}\) denotes the set of all failing test cases.
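To make the two Metallaxis variants concrete, the following Python sketch computes a layer's suspiciousness from per-mutant counts of impacted failing and passing test cases; the function and variable names are illustrative and do not reflect deepmufl's internals.

```python
import math

def metallaxis_suspiciousness(mutant_counts, num_failing, formula="sbi"):
    """mutant_counts: list of (impacted_failing, impacted_passing) pairs,
    one per mutant of the same model element; num_failing = |T_f|."""
    scores = []
    for n_f, n_p in mutant_counts:
        if formula == "sbi":
            denom = n_f + n_p                               # Eq. 1 (SBI)
        else:
            denom = math.sqrt((n_f + n_p) * num_failing)    # Eq. 2 (Ochiai)
        scores.append(n_f / denom if denom else 0.0)
    # The element's suspiciousness is the maximum over its mutants.
    return max(scores, default=0.0)
```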
#### Ii-B2 Muse
MUSE [17] is based on the assumption that mutating a faulty program element is likely to impact more failing test cases than passing test cases by "fixing" it, while mutating a correct program element is likely to impact more passing test cases than failing test cases by breaking it. The notion of "impacting test cases" in MUSE, unlike Metallaxis, is more rigid, in that it refers to making passing test cases fail and vice versa. Once the numbers of impacted failing and passing test cases are identified, _suspiciousness values_ can be calculated using the following formula.
\[s(e)=\frac{1}{|M(e)|}\Sigma_{m\in M(e)}\left(\frac{|T_{f}(m,e)|}{|T_{f}|}- \alpha\frac{|T_{p}(m,e)|}{|T_{p}|}\right), \tag{3}\]
where \(T_{p}\) denotes the set of all passing test cases and \(\alpha\) is a constant used to balance the two ratios, defined as \(\frac{|F\leadsto P|}{|T_{f}|}\cdot\frac{|T_{p}|}{|P\leadsto F|}\). In the latter definition, \(F\leadsto P\) denotes the set of failing test cases that pass due to some mutation, while \(P\leadsto F\) denotes the set of passing test cases that fail as a result of some mutation.
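For comparison, a minimal sketch of the MUSE formula (Eq. 3) is shown below; again, the data layout is an illustrative assumption rather than deepmufl's actual implementation, and it assumes at least one passing and one failing test case exist.

```python
def muse_suspiciousness(mutant_counts, num_failing, num_passing,
                        fail_to_pass, pass_to_fail):
    """mutant_counts: list of (impacted_failing, impacted_passing) type-1
    counts for the mutants of one element; fail_to_pass = |F ~> P| and
    pass_to_fail = |P ~> F| over all mutants."""
    if not mutant_counts:
        return 0.0
    # Balancing factor alpha = (|F~>P| / |T_f|) * (|T_p| / |P~>F|).
    alpha = ((fail_to_pass / num_failing) * (num_passing / pass_to_fail)
             if pass_to_fail else 0.0)
    score = sum(n_f / num_failing - alpha * n_p / num_passing
                for n_f, n_p in mutant_counts)
    return score / len(mutant_counts)
```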
### _Deep Neural Networks_
A neural network is intended to compute a function of the form \(\mathbb{R}^{m}\longrightarrow\mathbb{R}^{n}\), where \(m,n\) are positive integers. A neural network is often represented as a weighted directed acyclic graph arranged in layers of three types, _i.e._, an _input layer_, one or more _hidden layers_, and an _output layer_. Input and output layers output a linear combination of their inputs, while hidden layers can be viewed as more complex computational units, _e.g._, a _non-linear unit_, a _convolutional unit_, or a _batch normalization unit_. A non-linear unit is composed of _neurons_, functions applying a non-linear _activation function_, _e.g._, rectified linear unit (ReLU), tanh, or sigmoid, to the weighted sum of their inputs. A convolutional layer calculates the convolution between the vector of values obtained from the previous layer and a learned kernel matrix. Lastly, a batch normalization layer normalizes the vector of values obtained from the previous layer _via_ centering or re-scaling. A neural network with two or more hidden layers is referred to as a _deep neural network_ (DNN).
## III Motivating Example
In this section, we first describe how deepmufl helps programmers detect and fix bugs by presenting a hypothetical use case scenario, and then motivate the idea behind deepmufl by describing the details of how it works, under the hood, on the example developed in the use case story.
Courtney is a recent college graduate working as a junior software engineer at an oil company, which frequently makes triangular structures, made of epoxy resin, of varying sizes to be used under the water. The company needs to predict with at least 60% confidence that a mold of a specific size will result in an epoxy triangle after it has been dried, and potentially shrunk, and it does not need to spend time on cutting and/or sanding the edges. Over time, through trial and error, the company has collected 1,000 data points of triangle edge lengths and whether or not a mold of that size resulted in a perfect triangle. Courtney's first task is to write a program that given three positive real numbers \(a\), \(b\), and \(c\), representing the edge lengths of the triangle mold, determines if the mold will result in epoxy edges that form a perfect triangle. As a first attempt, she writes the program shown in Listing 1.
```
1: # load 994 of the 1,000 data points as X_train and y_train
2: # ...
3: model = Sequential()
4: model.add(Dense(2, activation='relu', input_dim=3))
5: model.add(Dense(2, activation='relu'))
6:
7: model.compile(loss='sparse_categorical_crossentropy',
8:               optimizer='adam', metrics=['accuracy'])  # optimizer illegible in source; 'adam' assumed
9: model.fit(X_train, y_train, epochs=100, validation_split=0.1)
```
Listing 1: Courtney's first attempt
The program uses 994 out of the 1,000 data points for training a model. After testing the model on the remaining 6 data points, she realizes that the model achieves no more than 33% accuracy. Fortunately, Courtney uses an IDE equipped with a modern DNN fault localization tool, named deepmufl, which is known to be effective at localizing bugs that manifest as stuck accuracy/loss. She uses deepmufl, with its default settings, _i.e._, Metallaxis with SBI, to find the faulty part of her program. The tool receives the fitted model in .h5 format [48] together with a set of testing data points \(T\) and returns a ranked list of model elements (layers, in this case). After Courtney provides deepmufl with the model saved in .h5 format and the 6 testing data points that she had, within a few seconds the tool returns a list with two items, namely Layer 2 and Layer 1, corresponding to lines 5 and 4, respectively, in Listing 1. Once she navigates to the details about Layer 2, she receives a ranked list with 5 elements, _i.e._, Mutant 12: replaced activation function 'relu' with 'softmax', ..., Mutant 10: divided weights by 2, Mutant 11: divided bias by 2. Upon seeing the description of Mutant 12, Courtney immediately recalls her machine learning class wherein they were advised that in classification tasks one should use _softmax_ as the activation function of the last layer. She then changes the activation function of the last layer at line 5 of Listing 1 from relu to softmax. By doing so, the model achieves an accuracy of 67% on the test dataset, and similarly in a cross-validation, exceeding the expectations of the company.
We now describe how deepmufl worked, under the hood, to detect the bug _via_ Metallaxis' default formula. Figure 1 depicts the structure of the model constructed and fitted in Listing 1. Each edge is annotated with its corresponding weight and the nodes are annotated with their bias values. The nodes use ReLU as the activation function. In this model, the output \(T\) is intended to be greater than the other output if \(a\), \(b\), and \(c\) form a triangle, and \(\sim T\) should be greater than or equal to the other output, otherwise.
Table 1 shows an example of how deepmufl localizes the bug in the model depicted in Figure 1. In the first two columns, the table lists the two layers, and within each layer, the neurons. For each neuron three mutators are applied, _i.e._, halving the weight values, halving the bias value, and replacing the activation function. More mutators are implemented in deepmufl, but here, for the sake of simplicity, we only focus on 3 of them and also restrict ourselves to only one activation function replacement, _i.e._, ReLU vs. softmax.
As we saw in Courtney's example, she had a test dataset \(T\) with 6 data points which initially resulted in 33% accuracy. These six data points are denoted T1, ..., T6 in Table 1, where correctly classified ones are colored green, whereas misclassified data points are colored rose. deepmufl generates 12 mutants for the model of Figure 1, namely M1, ..., M12. Each mutant is a variant of the original model. For example, M1 is the same as the model depicted in Figure 1, except that the weights of the incoming edges to neuron N1 are halved, _i.e._, 0.51, -0.38, and -0.52 from left to right, while M9 is the same as the model depicted in Figure 1, except that the activation functions for N3 and N4 are softmax instead of relu. After generating the mutants, deepmufl applies each mutant to the inputs T1, ..., T6 and compares the results to those of the original model. For each data point T1, ..., T6, if the result obtained from a mutant M1, ..., M12 is different from that of the original model, we put a bullet point in the corresponding cell. For example, the two bullet points in the row for M3 indicate that the mutant misclassifies the two data points that used to be correctly classified, while the other data points, _i.e._, T1, ..., T4, are misclassified as before. Next, deepmufl uses the SBI formula [46] to calculate suspiciousness values for each mutant \(m\in\) {M1, ..., M12}, individually. These values are reported in the penultimate column of Table 1. Lastly, deepmufl takes the maximum of the suspiciousness values of the mutants corresponding to a layer as the suspiciousness value of that layer (cf. Eq. 1 in §II). In this particular example, layer L1 gets a suspiciousness value of 0, while L2 gets a suspiciousness value of 1. Thus, deepmufl ranks L2 before L1 for user inspection, and for each layer it sorts the mutants in descending order of their suspiciousness values, so that the user can understand which change impacted most of the originally correctly classified data points. In this case, M12 and M9 wind up at the top of the list, and as we saw in Courtney's story, the information associated with the mutations helped fix the bug.
## IV Proposed Approach
Our technique deepmufl comprises four components: (1) mutation generator, (2) test case splitter, (3) mutation executor/tester, and (4) suspiciousness calculator. Figure 2 depicts these components as processes, numbered accordingly, taking
inputs and producing outputs. The mutation generator (marked 1 in Figure 2) applies 79 mutators to all the layers of the input buggy DNN, so as to generate a pool of mutants, _i.e._, variants of the original, buggy DNN model with small perturbations, _e.g._, a replaced activation function in a layer. The test case splitter (marked 2 in the figure) applies the original buggy DNN to a given set of test data points, _i.e._, test cases (or input values) paired with their expected output values, so as to partition the set into two subsets, namely _passing test cases_ and _failing test cases_. Passing test cases are those input values for which the expected output matches the output produced by the original model, whereas failing test cases are those input values for which the expected output does not match the output produced by the model. This component also stores the output of the original model on each of the test cases. Next, the mutation executor (also called the mutation tester, marked 3 in the figure) applies the generated mutants to each of the passing and failing test cases, and the results are compared to those of the original model recorded in the previous step. This yields a mutation execution matrix that is used to calculate suspiciousness values for each layer in the model (marked 4 in the figure). The user may instruct deepmufl to use a specific fault localization formula, _e.g._, MUSE or Metallaxis with SBI or Ochiai, for calculating suspiciousness values. The layers are then ranked based on the calculated suspiciousness values for user inspection. The ranked list is accompanied by information about the mutations conducted on each layer to facilitate debugging.
### _Mutation Generator_
The mutation generator component receives the original, buggy DNN model and generates as many variants of the model, _i.e._, mutants, as possible, by systematically mutating every element of the input model. This component implements 79 mutators. Mutators can be viewed as transformation operators that, when applied to a given element, _e.g._, a neuron or a layer, in the model, return a new variant of the model with that particular element mutated. Table 2 lists all the mutators implemented in deepmufl, the types of model elements on which they can operate, and the way each mutator affects the target element. These mutators are inspired by the ones implemented in existing mutation analysis systems, _e.g._, [27, 28, 29, 50, 51, 52, 53], to name a few. Ma _et al._ [29] and Hu _et al._ [28] define so-called _model-level_ mutators that also operate on pre-trained models. Direct reuse of all of their mutators was not possible, as those mutators depend on random values which would introduce a source of non-determinism in deepmufl: mutating the same model element with random values, _e.g._, Gaussian fuzzing, as in [29], would yield a different mutant each time, making deepmufl produce different outputs on each run for the same model. In general, as far as MBFL is concerned, using variable values (whether deterministic or not), instead of the current hard-coded ones, for the mutation of weights would not bring about any benefit, as the goal here is to _break_ the model in some way and observe the impact on originally failing and passing test cases.
We argue that not all model bugs can be emulated using mutators at the level of pre-trained models, _e.g._, missing batch normalization, but the mutators listed in Table 2 are sufficient for emulating a subset of such bugs, _e.g._, wrong activation function or missing/redundant layer. Please see §V for a more detailed discussion of the supported bugs.
Mutation generation in deepmufl is done directly on the trained model and there is no need to retrain the model. This makes deepmufl quite fast and perhaps more attractive from a practical point of view. However, this comes with a risk of reduced traceability, _i.e._, a mutation on a pre-trained .h5 model does not directly correspond to a line of source code for the user to inspect. In the Keras programs that we studied, this limitation was mitigated by the fact that the models with Sequential architecture were implemented using a well-understood structure, and mapping layer numbers/identifiers in deepmufl's reports to source code was trivial. In future work, with the help of simple auto-generated annotations, _e.g._, for lexical scoping of the code snippet for model declaration, we will extend deepmufl to automatically map layer numbers/identifiers in its reports to actual lines of source code.
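To illustrate how a model-level mutator can act directly on a pre-trained Sequential model without retraining, the sketch below swaps the activation function of one layer via Keras' config round-trip while reusing the trained weights. The helper name and the exact mechanism are our own illustrative choices and are not necessarily how deepmufl generates its mutants.

```python
from tensorflow import keras

def mutate_activation(model_path, layer_index, new_activation):
    # Load the pre-trained .h5 model and copy its architecture as a config.
    original = keras.models.load_model(model_path)
    config = original.get_config()
    # For Sequential models, config["layers"][i]["config"] holds the i-th
    # layer's settings (the index may be offset by an InputLayer entry).
    config["layers"][layer_index]["config"]["activation"] = new_activation
    mutant = keras.Sequential.from_config(config)
    mutant.build(original.input_shape)
    mutant.set_weights(original.get_weights())  # keep the trained weights
    return mutant
```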
Humbatova _et al._ [27] argue for the importance of mutators emulating real DNN faults. We acknowledge that mutators emulating real faults would help generate more informative reports that would also give hints on how to fix the program. However, unlike mutation analysis, the main objective of an MBFL technique is to assign suspiciousness values to the program elements, which can, in theory, be done using any kind of mutator, whether or not it makes sense from the standpoint of a developer. It is worth noting that the alternative design decision of using DeepCrime [27] as a mutation generation engine for deepmufl would result in finding more bugs than the current version of deepmufl, _e.g._, bugs related to training hyper-parameters or the training dataset, but such a design is expected to be impacted by the non-determinism inherent in the training process and, given the fact that we do not employ any training data selection, would be significantly slower due to numerous re-trainings. Nevertheless, finding more bugs would be an incentive for exploring this alternative in future work.
### _Test Case Splitter_
Before we introduce this component, we need to clarify certain terminology.
**Definition 1**.: _A data point in a testing dataset for a DNN model is defined to be a pair of the form \((I,O)\), where \(I\in\mathbb{R}^{m}\) and \(O\in\mathbb{R}^{n}\), with \(m\) and \(n\) being positive integers. In this paper \(I\) is called test case, test input, or input, while \(O\) is called expected output or ground-truth value._
Given a test dataset, the test case splitter component applies the original model to each of the test cases for the data points and checks if the model output matches the expected output. If the two outputs match, then the corresponding test case is marked as _passing_, otherwise it is marked as _failing_. This component also records the output produced by the original model to be used during impact analysis, described below.
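A minimal sketch of this splitting step for a classifier is given below; it assumes the data points are NumPy arrays and the expected outputs are integer class labels, which is an assumption made for illustration.

```python
import numpy as np

def split_test_cases(model, X, y):
    """Partition test cases into passing/failing and record the
    original model's outputs for the later impact analysis."""
    raw_outputs = model.predict(X, verbose=0)
    predicted = np.argmax(raw_outputs, axis=1)   # classification case
    passing = np.where(predicted == y)[0]        # indices of passing tests
    failing = np.where(predicted != y)[0]        # indices of failing tests
    return passing, failing, raw_outputs
```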
### _Mutation Executor (Mutation Tester)_
We start describing this component with a definition.
**Definition 2**.: _A mutation execution matrix \(\mathcal{E}\) is a \(k\times l\) matrix, where \(k\) is the number of generated mutants, while \(l\) is the number of test cases. Each element \(\mathcal{E}_{i,j}\) in the matrix is a member of the set \(\{\checkmark,\times,\circ\}\), wherein \(\checkmark\) indicates that the \(i^{\mathrm{th}}\) mutant impacts the \(j^{\mathrm{th}}\) test case, whereas \(\times\) indicates that the mutant does not affect the test case. \(\circ\) denotes a nonviable mutant, i.e., a mutant that fails when loading or applying it on a test case. Such mutants might be generated, e.g., due to a shape error [32] introduced by the mutation._
The mutation executor component constructs the mutation execution matrix by applying each of the generated mutants (see §IV-A) to the failing and passing test cases to determine which of the test cases are impacted by which mutants. The impact of a mutation on test cases is measured using two types of impacts, _i.e._, _type 1_ impact and _type 2_ impact, defined below.
**Definition 3**.: _Given a DNN model \(\mathcal{M}\), its mutated version \(\mathcal{M}^{\prime}\), and a data point \((I,O)\), we define the two types of impacts:_
* _Type 1: Mutant_ \(\mathcal{M}^{\prime}\) impacts the test case \(I\) if \(\mathcal{M}(I)=O\) but \(\mathcal{M}^{\prime}(I)\neq O\), or \(\mathcal{M}(I)\neq O\) but \(\mathcal{M}^{\prime}(I)=O\). In other words, type 1 impact tracks pass to fail and fail to pass test cases, a la MUSE [17]._
* _Type 2: Mutant_ \(\mathcal{M}^{\prime}\) impacts the test case \(I\) if \(\mathcal{M}(I)\neq\mathcal{M}^{\prime}(I)\)._
_In this definition, \(\mathcal{M}(I)\) or \(\mathcal{M}^{\prime}(I)\) denotes the operation of applying model \(\mathcal{M}\) or \(\mathcal{M}^{\prime}\) on the test case \(I\)._
It is worth noting that checking for equality of two values can be tricky for regression models, as those models approximate the expected values. To work around this problem, deepmufl compares values obtained from regression models using a user-defined delta threshold, _i.e._, the values are deemed equal if their absolute difference is no more than the threshold. By default, deepmufl uses a threshold of 0.001. This is the approach adopted by well-known testing frameworks for comparing floating-point values [54, 55]. Also, whether deepmufl uses type 1 or type 2 impact is a user preference and is specified alongside the threshold.
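The following sketch shows one way the two impact types and the delta threshold could be checked for a single test case; the function signature and the 0.001 default mirror the description above but are otherwise illustrative.

```python
import numpy as np

DELTA = 0.001  # default threshold for comparing regression outputs

def outputs_equal(a, b, regression=False):
    return np.allclose(a, b, atol=DELTA) if regression else np.array_equal(a, b)

def is_impacted(original_out, mutant_out, expected, impact_type=1,
                regression=False):
    if impact_type == 1:
        # Type 1: only pass-to-fail and fail-to-pass flips count (a la MUSE).
        original_passes = outputs_equal(original_out, expected, regression)
        mutant_passes = outputs_equal(mutant_out, expected, regression)
        return original_passes != mutant_passes
    # Type 2: any change in the produced output counts (a la Metallaxis).
    return not outputs_equal(original_out, mutant_out, regression)
```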
### _Suspiciousness Value Calculator_
Armed with the above definitions, we can now give concrete definitions for the terms used in Eq. 1, 2, and 3, specialized to DNNs.
* Given a model element, _i.e._, neuron or a layer, \(e\), \(M(e)\) is defined to be the set of all mutants generated by mutating \(e\). These sets are produced by mutation generator process.
* Assuming that \(m\) is a mutant on the element \(e\), \(T_{f}(m,e)\) (or \(T_{p}(m,e)\)) is defined as the set of failing (or passing) test cases that are impacted in a type 1 or type 2 fashion by \(m\). More concretely, \(T_{p}(m,e)=\{t\mid\mathcal{E}_{m,t}=\checkmark\wedge t\text{ is passing}\}\), and similarly \(T_{f}(m,e)=\{t\mid\mathcal{E}_{m,t}=\checkmark\wedge t\text{ is failing}\}\). These two sets are defined using quite similar notation; the reader is encouraged to review Definitions 2 and 3 to avoid confusion.
* \(T_{f}\) (or \(T_{p}\)) is the set of originally failing (or passing) test cases. These sets are constructed by test case splitter.
* \(F\rightsquigarrow P\) (or \(P\rightsquigarrow F\)), for a model element \(e\), is defined as the set of originally failing (or passing) test cases that turned into passing (or failing) as a result of some mutation on \(e\). More concretely, assuming the execution matrix \(\mathcal{E}\) is constructed using type 1 impact, \(F\rightsquigarrow P\) is defined as \(\{t\mid t\text{ is failing}\wedge\exists m\in M(e)\cdot\mathcal{E}_{m,t}=\checkmark\}\). Similarly, \(P\rightsquigarrow F\) is defined as \(\{t\mid t\text{ is passing}\wedge\exists m\in M(e)\cdot\mathcal{E}_{m,t}=\checkmark\}\). In other words, these sets track all the failing/passing test cases that are _type 1 impacted_ by some mutant of a given element. These two sets are defined using quite similar notation; the reader is encouraged to review Definitions 2 and 3 to avoid confusion.
Having specialized definitions for the terms used in the fault localization formulas described earlier, we are now able to calculate suspiciousness values for the elements in a DNN model. Guided by the user preferences, deepmufl calculates all the values for \(|T_{f}(m,e)|\), _etc._, and plugs them into the specified formula to calculate suspiciousness values for the elements. It is worth noting that if all the mutants generated for a given element are nonviable, the MUSE formula (Eq. 3) and all the variants of Metallaxis (_e.g._, Eq. 1), by definition, will return 0 as the suspiciousness value for the element. Nonviable mutants do not contribute toward localizing the bug, and are therefore considered _overhead_ to the fault localization process. Fortunately, based on our observations in our dataset of buggy DNNs, nonviable mutants are rare.
[Table 2: Mutation classes and descriptions of the mutators implemented in deepmufl, together with the model elements each mutator targets; the table body is not legible in the source.]
Equivalent mutants are another source of overhead for deepmufl. Currently, we do not have any means of detecting equivalent mutants, but we argue that these mutants do not impact MBFL results, as they are equivalent to the original model and do not impact any passing or failing test cases.
## V Supported DNN Bugs
Due to the complex nature of DNN bugs, and of MBFL itself, we do not attempt to give a formal account of which types of DNN bugs deepmufl is capable of localizing. Instead, we attempt to provide as accurate a description of the supported bugs as possible and discuss the way such bugs manifest in DNN programs. The discussion given in this section leverages the characterization of DNN bugs provided by previous research [7, 4, 6].
As we mentioned earlier, the current version of deepmufl operates on pre-trained Keras Sequential models. This means that much of the information, such as training hyper-parameters and whether or not the input data is normalized, has already been stripped away from the input to deepmufl, and the current version of the technique is not capable of detecting any bug related to the training process, _e.g._, training data and hyper-parameters. Moreover, a pre-trained model does not contain bugs related to tensor shapes (as otherwise, the training would fail with shape errors), and since deepmufl does not receive the source code of the buggy model as input, bugs related to GPU usage and API misuse are also out of reach of the technique, by definition. This leaves us with the so-called _model bugs_ [7]; the extent to which deepmufl is capable of localizing them is explicated below. The four model bug sub-categories are represented with identifiers SC1, ..., SC4 in the rest of this paper for ease of reference.
* **SC1: Activation function**. These bugs are related to the use of a wrong activation function in a layer. We observed that deepmufl detects this type of bug and also gives actionable, direct fixes.
* **SC2: Model type or properties**. These bugs include wrong weight initialization, wrong network architecture, wrong model for the task, _etc._ Through altering the weights and biases in layers, deepmufl detects weight/bias initialization bugs and pinpoints the location of the bug, but the bug report produced by the tool does not provide helpful information for fixing.
* **SC3: Layer properties**. These bugs include wrong filter/kernel/stride size, sub-optimal number of neurons in a layer, wrong input sample size, _etc._ deepmufl detects and pinpoints the bugs related to filter/kernel/stride size and sub-optimal number of neurons. We observed that the former case sometimes produces nonviable mutants. In the cases where deepmufl produced viable mutants, effective MBFL takes place and it has been able to pinpoint the bug location and provide an explanation of how to fix it. In the latter case, deepmufl was able to pinpoint the bug location, but the bug report does not give helpful information on how to fix the bugs in this sub-category.
* **SC4: Missing/redundant/wrong layer**. These bugs include a missing/extra dense layer, missing dropout layer, missing normalization layer, _etc._ By mutating the layers adjacent to the missing layer, or deleting the redundant layer, deepmufl detects and pinpoints the location of the missing/culprit layer, and in most of the cases, it provides useful information on how to fix such bugs.
By manually examining the bug descriptions provided by the programmers in our dataset of bugs, and also referring to the previous work on DNN bugs and root cause characterization [4], these bugs might manifest as low test accuracy/MSE, constant validation accuracy/MSE/loss during training, NaN validation accuracy/MSE/loss during training, dead nodes, vanishing/exploding gradient, and saturated activation.
At this point, we would like to emphasize that deepmufl is not intended to repair a model, so if a mutation happens to be the fix for the buggy model, the model has to be retrained from scratch so that correct weights and biases will be calculated.
## VI Evaluation
We evaluate deepmufl and compare it to state-of-the-art static and dynamic DNN fault localization techniques, by investigating the following research questions (RQs).
* **RQ1 (Effectiveness)**: 1. How does deepmufl compare to state-of-the-art tools in terms of the number of bugs detected? 2. How many bugs does deepmufl detect from each sub-category of model bugs in our dataset and how does that compare to state-of-the-art tools? 3. What is the overlap of detected bugs among deepmufl and other fault localization techniques?
* **RQ2 (Efficiency)**: 1. What is the impact of mutation selection on the effectiveness and efficiency of deepmufl? 2. How does deepmufl compare to state-of-the-art tools in terms of end-to-end fault localization time?
### _Dataset of DNN Bugs_
To evaluate deepmufl and compare it to state-of-the-art DNN fault localization techniques, we queried the StackOverflow Q&A website for posts about Keras that had at least one accepted answer. Details about the SQL query used to obtain the initial list of posts are available online [36]. The query resulted in 8,412 posts that we manually sieved through to find the programs with model bugs. Specifically, we kept the bugs that satisfied the following conditions.
* Implemented using Sequential API of Keras,
* The bug in the program was a _model bug_ supported by deepmufl as described in §V, and
* The bug either had a training dataset available in the post in some form (_e.g._, hard-coded, clearly described in the body of the post, or a link to the actual data was provided) or we could see the described error using synthetic data obtained from scikit-learn's dataset generation API.
This resulted in 102 bugs and we paired each bug with a fix obtained from the accepted answer to the question. We further added 7 bugs from the DeepLocalize dataset [11] that also come from StackOverflow, and we paired these bugs with their fixes obtained from the most up-voted answers. Thus, we ended up with 109 bugs in total. To the best of our knowledge, this is the largest dataset of model bugs obtained from StackOverflow, and it overlaps with existing DNN bug datasets from previous research [12, 56]. Our bug dataset contains 85 classifiers (45 fully-connected DNNs, 29 CNNs, and 11 RNNs) and 24 regression models (19 fully-connected DNNs, 3 CNNs, and 2 RNNs), and each category has at least one example of model bugs. Therefore, we believe that our dataset is highly representative of model bugs, _i.e._, the bugs supported by deepmufl (and other tools that support this type of bugs), as we have examples of each bug sub-category in various locations of the models for various regression and classification tasks.
After loading the training dataset for the bugs, we fitted the buggy models three times and stored them separately in .h5 file format. The repetition was conducted to take randomness in training into account. Randomness in data generation was mitigated by using deterministic random seeds. For fault localization purposes, we used the test dataset, and if it was not available, we used the training dataset itself. When we had to use synthesized data points, we deterministically split the generated set of data into training and testing datasets.
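As an illustration of the deterministic data synthesis and splitting described above, the snippet below uses scikit-learn with fixed random seeds; the concrete parameter values are assumptions made for the sake of the example.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Fixed seeds make repeated runs see exactly the same synthesized data.
X, y = make_classification(n_samples=1000, n_features=3, n_informative=3,
                           n_redundant=0, n_classes=2, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```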
### _Baseline Approaches and Measures of Effectiveness_
In RQ1 and RQ2, we compare five different configurations of deepmufl to recent static and dynamic DNN fault localization tools. The five configurations of deepmufl are as follows.
* **Metallaxis**[30]: In this setting, we use the Metallaxis formula to calculate suspiciousness values of model elements. Metallaxis, by default, uses SBI [46] to calculate suspiciousness values for individual mutants. A recent study [57] provides empirical evidence on the superiority of Ochiai [47] over SBI when used within Metallaxis formula. Thus, we considered the following four combinations: _type 1 impact_: (1) SBI formula (_i.e._, Eq. 1); (2) Ochiai formula (_i.e._, Eq. 2), and _type 2 impact_: (3) SBI formula (_i.e._, Eq. 1); (4) Ochiai formula (_i.e._, Eq. 2).
* **MUSE**[17]: We used the default formula of MUSE to calculate the suspiciousness of model elements. For this, only type 1 impact is considered, as the heuristics behind MUSE are defined based on type 1 impact.
Our technique follows a more traditional way of reporting root causes for the bugs [30, 17, 57, 19, 22, 21, 15], in that it reports a list of potential root causes ranked based on the likelihood of being responsible for the bug. This allows the users to find the bugs faster and spend less time reading through the fault localization report, which in turn increases the practicality of the technique [58]. We have used the top-\(N\), with \(N=1\), metric to measure the effectiveness of deepmufl in RQ1 and RQ2. Specifically, if the number of any of the buggy layers of the bug appeared in the first position in the output of deepmufl, we reported the bug as _detected_, otherwise we marked it as _not-detected_. We emphasize that the top-1 metric gives strong evidence of the effectiveness of deepmufl, as developers usually only inspect top-ranked elements, _e.g._, over 70% of developers only check the top-5 ranked elements [59].
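The top-1 criterion used above amounts to the following check, where the argument names are illustrative:

```python
def detected_at_top1(ranked_layers, buggy_layers):
    """ranked_layers: layer ids sorted by decreasing suspiciousness;
    buggy_layers: ground-truth layer ids of the bug."""
    return bool(ranked_layers) and ranked_layers[0] in set(buggy_layers)
```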
Our selection criteria for the studied fault localization techniques are: (1) availability; (2) reproducibility of the results reported in the original papers, so as to have a level of confidence in the correctness of the results reported here; and (3) support for the _model bugs_ in our dataset, so that we can make a meaningful comparison to deepmufl. Below we give a brief description of each of the selected tools, why we believe they support model bugs, and how we have interpreted their outputs in our experiments, _i.e._, when we regard a bug as detected by the tool.
#### Iii-B1 Neuralint
A static fault localization tool that uses 23 rules to detect faults and design inefficiencies in the model. Each rule is associated with a set of rules of thumb to fix the bug, which are shown to the user in case the precondition for any of the rules is satisfied. The five rules described in Section 4.2.1 of the paper target model bugs. Neuralint produces outputs of the form \([\texttt{Layer}\ L==>MSG]^{*}\), where \(L\) is the suspicious layer number, and \(MSG\) is a description of the detected issue and/or a suggestion on how to fix the problem. A bug is deemed _detected_ by this tool if it is located in the layer mentioned in the output message or the messages describe any of the root causes of the bug.
#### Iii-B2 DeepLocalize
A dynamic fault localization technique that detects numerical errors during model training. One of three rules described in Section III.D of the paper checks model bugs related to wrong activation function. DeepLocalize produces a single message of the form \(\texttt{Batch}\ B\ \texttt{Layer}\ L:\ MSG\), where \(B\) is the batch number wherein the symptom is detected and \(L\) and \(MSG\) are the same as we described for Neuralint. A bug is deemed _detected_ if it is located in the layer mentioned in the output message or the message describes any of the root causes of the bug.
#### Iii-B3 DeepDiagnosis
A tool similar to DeepLocalize, but with more bug pattern rules and a decision procedure to give actionable fix suggestions to the users based on the observations. All 8 rules in Table 2 of the paper monitor the symptoms of model bugs. Similar to DeepLocalize, DeepDiagnosis produces a single message of the form \(\texttt{Batch}\ B\ \texttt{Layer}\ L:\ MSG_{1}\ [\texttt{OR}\ MSG_{2}]\), where \(B\) and \(L\) are the same as described for DeepLocalize, and \(MSG_{1}\) and \(MSG_{2}\) are two alternative solutions that the tool might suggest to fix the detected problem. A bug is deemed _detected_ if it is located in the layer mentioned in the output message or the message describes any of the root causes of the bug.

| deepmufl configuration / tool | SC1 | SC2 | SC3 | SC4 | Total (detected) |
| --- | --- | --- | --- | --- | --- |
| Metallaxis SBI + Type 1 | 31 | 2 | 6 | 3 | 42 |
| Metallaxis Ochiai + Type 1 | 36 | 2 | 7 | 2 | 47 |
| Metallaxis SBI + Type 2 | 18 | 2 | 4 | 2 | 26 |
| Metallaxis Ochiai + Type 2 | 29 | 2 | 4 | 2 | 37 |
| MUSE | 41 | 3 | 6 | 3 | 53 |
| Neuralint | 15 | 1 | 4 | 1 | 21 |
| DeepLocalize | 21 | 0 | 4 | 1 | 26 |
| DeepDiagnosis | 22 | 2 | 5 | 1 | 30 |
| UMLAUT | 18 | 1 | 6 | 0 | 25 |
| Total (entire dataset) | 80 | 4 | 17 | 8 | |

Table 3: Effectiveness of different deepmufl configurations and four other tools in detecting bugs from the four sub-categories of model bugs
#### V-B4 Umlaut
A hybrid, _i.e._, a combination of static and dynamic, technique that works by applying heuristic static checks on, and injecting dynamic checks in, the program, parameters, model structure, and model behavior. Violated checks raise error flags which are propagated to a web-based interface that uses visualizations, tutorial explanations, and code snippets to help users find and fix detected errors in their code. All three rules described in Section V-B2 of the paper target model bugs. The tool generates outputs of the form \([<MSG_{1}>\cdots<MSG_{m}>]^{*}\), where \(m>0\) and \(MSG_{i}\) is a description of the problem detected by the tool. A bug is deemed _detected_ if any of the messages match the fix prescribed by the ground-truth.
### _Results_
To answer RQ1, we ran deepmufl (using its five configurations) and the four other tools on the 109 bugs in our benchmark. We refer the reader to the repository [36] for the raw data about which bug is detected by which tool; here we describe the summaries and provide insights.
At top-1, deepmufl detects 42, 47, 26, 37, and 53 bugs using its Metallaxis SBI + Type 1, Metallaxis Ochiai + Type 1, Metallaxis SBI + Type 2, Metallaxis Ochiai + Type 2, and MUSE configurations, respectively. Meanwhile, Neuralint, DeepLocalize, DeepDiagnosis, and UMLAUT detect 21, 26, 30, and 25 bugs, respectively. Therefore, as far as the number of bugs detected by each technique is concerned, MUSE is the most effective configuration of deepmufl, significantly outperforming the studied techniques, while Metallaxis SBI + Type 2 is the least effective one, outperformed by DeepDiagnosis. An empirical study [57], which uses a specific dataset of traditional buggy programs, concludes that Metallaxis Ochiai + Type 2 is the most effective configuration for MBFL. Meanwhile, our results for DNNs corroborate the theoretical results by Shin and Bae [60], _i.e._, we provide empirical evidence that in the context of DNNs MUSE is the most effective MBFL approach.
Table 3 reports more details and insights on the numbers discussed above. Specifically, it reports the number of bugs detected by each configuration of deepmufl and the four other studied tools from each sub-category of model bugs present in our dataset of bugs. As we can see from the upper half of the table, MUSE is most effective in detecting bugs related to activation functions (SC1), bugs related to model type/properties (SC2), and wrong/redundant/missing layers (SC4), while the Metallaxis Ochiai + Type 1 configuration outperforms the other configurations in detecting bugs related to layer properties (SC3). Similarly, from the bottom half of the table, we can see that the other tools are also quite effective in detecting bugs related to activation functions, with DeepDiagnosis being the most effective one among them. We can also observe that UMLAUT has been the most effective tool in detecting bugs related to layer properties. As we can see, the MUSE configuration of deepmufl is consistently more effective than the other tools across all bug sub-categories.
Table 4 provides further insights on the overlap of bugs detected by each variant of deepmufl and those detected by the other four tools. Each value in row \(r\) and column \(c\) of this table, where \(2\leq r\leq 5\) and \(2\leq c\leq 6\), denotes the percentage of bugs detected by the deepmufl variant corresponding to row \(r\) and the tool corresponding to column \(c\). The values inside the parentheses are the actual numbers of bugs. For example, 8 out of 42, _i.e._, 19.05%, of the bugs detected by the Metallaxis SBI + Type 1 configuration of deepmufl are _also_ detected by DeepLocalize. The last column of the table reports the same statistics, except for all four of the studied tools combined. As we can see from the table, 60.38% of the bugs detected by the MUSE configuration of deepmufl are already detected by one of the four tools, yet it detects 21 (=53-32) bugs that are not detected by any other tool. This is because deepmufl approaches the fault localization problem from a fundamentally different angle, giving it more flexibility. Specifically, instead of looking for conditions that trigger a set of hard-coded rules indicating bug patterns, deepmufl breaks the model using a set of mutators to observe how different mutations impact the model behavior. Then, by leveraging the heuristics underlying traditional MBFL techniques, it performs fault localization using the observed impacts on the model behavior. Listing 2 shows an example of a model bug that only deepmufl can detect.
[Listing 2: the buggy Keras model referenced above; the code is not legible in the source.]
To answer RQ2, we ran deepmufl and the other four tools on a Dell workstation with an Intel(R) Xeon(R) Gold 6138 CPU at 2.00 GHz, 330 GB RAM, a 128 GB RAM disk, and Ubuntu 18.04.1 LTS, and measured the time needed for model training as well as for the MBFL process to complete. We repeated this process four times, and in each round of deepmufl's execution, we randomly selected 100% (_i.e._, no selection), 75%, 50%, and 25% of the generated mutants for testing. Random mutation selection is a common method for reducing the overhead of mutation analysis [61, 35]. During random selection, we made sure that each layer receives at least one mutant, so that we do not mask any bug. The last row in Table 5 reports the average timing (over 3 runs) of MBFL in each round of mutation selection. The table also reports the impact of mutation selection on the number of bugs detected by each configuration of deepmufl. As we can see, in the MUSE configuration of deepmufl, by using 50% of the mutants, one can halve the execution time and still detect 92.45% of the previously detected bugs. Therefore, mutation selection can be used as an effective way of curtailing MBFL time in DNNs.
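A sketch of the random mutation selection used in this experiment is shown below; the data structure is assumed for illustration, and the key point is that every layer keeps at least one mutant.

```python
import random

def select_mutants(mutants_by_layer, fraction, seed=0):
    """Randomly keep roughly `fraction` of the mutants, but at least one
    per layer so that no potential bug location is masked."""
    rng = random.Random(seed)
    selected = []
    for layer_id, mutants in mutants_by_layer.items():
        k = max(1, round(fraction * len(mutants)))
        selected.extend(rng.sample(mutants, k))
    return selected
```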
For a fair comparison of deepmufl to state-of-the-art fault localization tools in terms of efficiency, we need to take into account the fact that deepmufl requires a pre-trained model as its input. Thus, as far as the end-to-end fault localization time from an end-user's perspective is concerned, we take into consideration the time needed to train the input model in addition to the time needed to run deepmufl. With training time taken into account, deepmufl takes, on average, 1492.48, 1714.63, 1958.35, and 2192.4 seconds when we select 25%, 50%, 75%, and 100% of the generated mutants, respectively. We also emphasize that the time for DeepLocalize and DeepDiagnosis varied based on whether or not they found the bug. Given the fact that a user could terminate the fault localization process after a few epochs when they lose hope of finding bugs with these two tools, we report two average measurements for DeepLocalize and DeepDiagnosis: (1) the average time irrespective of whether the tools succeed in finding the bug; (2) the average time when the tools successfully find the bug. Unlike these two tools, the time for Neuralint and UMLAUT does not change based on whether they detect a bug or not. DeepLocalize takes on average 1244.09 seconds, and it takes on average 57.29 seconds when the tool successfully finds the bug. These numbers for DeepDiagnosis are 1510.71 and 11.05 seconds, respectively. Meanwhile, Neuralint and UMLAUT take on average 2.87 seconds and 1302.61 seconds, respectively, to perform fault localization.
### _Discussion_
It is important to note that while deepmufl outperforms state-of-the-art techniques in terms of the number of bugs detected in our dataset, it is not meant to replace them. Our dataset only covers a specific type of bugs, _i.e._, model bugs, while the other studied techniques push the envelope by detecting bugs related to factors like learning rate and training data normalization, which are currently outside of deepmufl's reach. We observed that combining all the techniques results in detecting 87 of the bugs in our dataset; exploring ways to combine various fault localization approaches by picking the right tool based on the characteristics of the bug is an interesting topic for future research. Moreover, depending on the application and resource constraints, a user might prefer one tool over another. For example, although Neuralint might be limited by its static nature, _e.g._, it might not be able to analyze models that use complex computed values and objects in their construction, it takes only a few seconds for the tool to conduct fault localization. Thus, in some applications, _e.g._, online integration with IDEs, approaches like that of Neuralint might be the best choice.
A major source of overhead in an MBFL technique is related to the sheer number of mutants that the technique generates and tests [62, 61]. Sufficient mutator selection [63] refers to the process of selecting a subset of mutators that achieves the same (or a similar) effect, _i.e._, the same or a similar mutation score and the same or a similar number of detected bugs, with a smaller number of mutants generated and tested. For the mutators of Table 2, so far, we have not conducted any analysis on which mutators might be redundant, as a reliable mutator selection requires a larger dataset that we currently lack. We postpone this study to future work.
Combining fault localization tools can be conducted with the goal of improving efficiency. We see the opportunity in building faster, yet more effective, fault localization tools by predicting the likely right tool upfront for a given model or running tools one by one and moving on to the next tool if we have a level of confidence that the tool will not find the bug. We postpone this study for a future work.
Lastly, we would like to emphasize that comparing to the above-mentioned techniques on a dataset of bugs that deepmufl supports is fair, as the other tools are also designed to detect bugs in the category of model bugs. However, making these tools perform better than this would require augmenting their current rule bases with numerous new rules, yet adding new rules comes with the obligation of justifying the generality and rationale behind them, which might be a quite difficult undertaking. deepmufl, on the other hand, approaches the fault localization problem differently, allowing for more flexibility without the need for hard-coded rules.
## VII Threats to Validity
As with most empirical evaluations, we do not have a working definition of a representative sample of DNN bugs, but we made efforts to ensure that the bugs used in the evaluation are as representative as possible by making sure that our dataset has diverse examples of bugs from each sub-category of model bugs.

| deepmufl configuration | 25% | 50% | 75% | 100% |
| --- | --- | --- | --- | --- |
| Metallaxis SBI + Type 1 | 37 | 41 | 42 | 42 |
| Metallaxis Ochiai + Type 1 | 40 | 46 | 47 | 47 |
| Metallaxis SBI + Type 2 | 25 | 26 | 26 | 26 |
| Metallaxis Ochiai + Type 2 | 34 | 37 | 37 | 37 |
| MUSE | 42 | 49 | 51 | 53 |
| Time (s) | 340.58 | 562.72 | 806.45 | 1,040.49 |

Table 5: The impact of mutation selection (percentage of selected mutants) on the effectiveness and execution time of deepmufl
Many of the bugs obtained from StackOverflow did not come with accompanying training datasets. To address this issue, we utilized the dataset generation API provided by scikit-learn [64] to generate synthetic datasets for regression or classification tasks. We ensured that the errors described in each StackOverflow post would manifest when using the synthesized data points and that applying the fix suggested in the accepted response post would eliminate the bug. However, it is possible that this change to the training process may introduce new unknown bugs. To mitigate this risk, we have made our bug benchmark publicly available [36].
Another potential threat to the validity of our results is the possibility of bugs in the construction of deepmufl itself, which could lead to incorrect bug localization. To mitigate this, we make the source code of deepmufl publicly available for other researchers to review and validate the tool.
Another threat to the validity of our results is the potential impact of external factors, such as the stochastic nature of the training process and the synthesized training/testing datasets, as well as system load, on our measurements. To address this, besides using deterministic seeds for dataset generation and splitting, we repeated our experiments with deepmufl three times. Similarly, we also ran the other dynamic tools three times to ensure that their results were not affected by randomness during training. We did not observe any differences in effectiveness between the rounds for either deepmufl or the other studied techniques. Additionally, we repeated the time measurements for each round, and reported the average timing, to ensure that our time measurements were not affected by system load. Furthermore, judging whether or not any of the tools detect a bug requires manual analysis of the textual description of the bugs and matching it to the tools' output messages, which might be subject to bias. To mitigate this bias, we have made the output messages produced by the tools available for other researchers [36].
Lastly, deepmufl uses a threshold parameter to compare floating-point values (see §IV-C). In our experiments, we used the default value of 0.001 and ensured that smaller threshold values yield the same results.
## VIII Related Work
Neuralint [12] uses _graph transformations_ [65] to abstract away unnecessary details in the model and check bug patterns directly on the graph. While Neuralint is orders of magnitude faster than deepmufl, it proved to be less effective than deepmufl on our dataset.
DeepLocalize [11] and DeepDiagnosis [8] intercept the training process looking for known bug patterns such as numerical errors. DeepDiagnosis pushes the envelope by implementing a decision tree that gives actionable fix suggestions based on the detected symptoms. A closely related technique, UMLAUT [34], works by applying heuristic static checks on, and injecting dynamic checks in, various parts of the DNN program. deepmufl outperforms DeepLocalize, DeepDiagnosis, and UMLAUT in terms of the number of bugs detected.
DeepFD [66] is a recent learning-based fault localization technique which frames fault localization as a learning problem. MODE [25] and DeepFault [26] implement white-box DNN testing techniques which utilize suspiciousness values obtained _via_ an implementation of spectrum-based fault localization to increase the hit spectrum of neurons and identify suspicious neurons whose weights have not been calibrated correctly and thus are considered responsible for inadequate DNN performance. MODE was not publicly available; DeepFault was, but unfortunately it was hard-coded to the examples shipped with its replication package, so we could not make the tool work without making substantial modifications to it, not to mention that these techniques work best on ReLU-based networks and applying them to most of the bugs in our dataset would not make much sense.
Other related works are as follows. PAFL [67] operates on RNN models by converting such models into probabilistic finite automata (PFAs) and localize faulty sequences of state transitions on PFAs. Sun _et al._[68] propose DeepCover, which uses a variant of spectrum-based fault localization for DNN explainability.
## IX Conclusion
This paper revisits mutation-based fault localization in the context of DNNs and presents a novel DNN fault localization technique, named deepmufl. The technique is based on the idea of mutating a pre-trained DNN model and calculating suspiciousness values according to the Metallaxis and MUSE approaches, the Ochiai and SBI formulas, and two types of impacts of mutations on the results of test data points. deepmufl is compared to state-of-the-art static and dynamic fault localization systems [11, 8, 34, 12] on a benchmark of 109 model bugs. On this benchmark, while deepmufl is slower than the other tools, it proved to be almost two times more effective than them in terms of the total number of bugs detected, and it detects 21 bugs that none of the studied tools were able to detect. We further studied the impact of mutation selection on fault localization time. We observed that we can halve the time taken to perform fault localization by deepmufl, while losing only 7.55% of the previously detected bugs.
## Acknowledgments
The authors thank Anonymous ASE 2023 Reviewers for their valuable feedback. We also thank Mohammad Wardat for his instructions on querying StackOverflow. This material is based upon work supported by the National Science Foundation (NSF) under the grant #2127309 to the Computing Research Association for the CIFellows Project. This work is also partially supported by the NSF grants #2223812, #2120448, and #1934884. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the NSF. |
2309.03416 | Topological Mixed Valence Model in Magic-Angle Twisted Bilayer Graphene | We develop a model to describe the mixed valence regime in magic-angle
twisted bilayer graphene (MATBG) using the recently developed heavy-fermion
framework. By employing the large-$N$ slave-boson approach, we derive the
self-consistent mean field equations and solve them numerically. We find that
the SU(8) symmetry constraint moir\'e system exhibits novel mixed-valence
properties which are different from conventional heavy-fermions systems. We
find the solutions describing the physics at the filling near the Mott
insulator regime in the limit of strong Coulomb interactions between the
flat-band fermions. Our model can provide additional insight into the possible
microscopic origin of unconventional superconductivity in MATBG. | Yantao Li, Benjamin M. Fregoso, Maxim Dzero | 2023-09-07T00:27:31Z | http://arxiv.org/abs/2309.03416v1 | # Topological Mixed Valence Model in Magic-Angle Twisted Bilayer Graphene
###### Abstract
We develop a model to describe the mixed valence regime in magic-angle twisted bilayer graphene (MATBG) using the recently developed heavy-fermion framework. By employing the large-\(N\) slave-boson approach, we derive the self-consistent mean field equations and solve them numerically. We find that the SU(8) symmetry constraint moire system exhibits novel mixed-valence properties which are different from conventional heavy-fermions systems. We find the solutions describing the physics at the filling near the Mott insulator regime in the limit of strong Coulomb interactions between the flat-band fermions. Our model can provide additional insight into the possible microscopic origin of unconventional superconductivity in MATBG.
_Introduction._--The discovery of correlated electronic phases including superconductivity in magic-angle twisted bilayer graphene (MATBG) [1; 2] has stimulated research efforts to explore various electronic properties in graphene-based multilayer structures [3; 4; 5; 6; 7; 8; 9; 10; 11] as well as in van der Waals heterostructures and other platforms [12; 13; 14; 15; 16; 17; 18; 19]. As a result, a new field "twistronic physics" has emerged [20] which focuses on theoretical aspects of these systems and covers both static [21; 22; 23; 24; 25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38] and non-equilibrium properties [39; 40; 41; 42; 43].
The MATBG system consists of two single graphene sheets which are twisted relative to each other at certain angles called magic angles [44; 45; 46; 47; 48; 49; 50]. It is believed that the flat bands that appear at such magic angles are the main driver of the exotic physical phenomena experimentally observed in these systems. The origin of the magic angles is related to the case when the electron tunneling in the AA-region of MATBG is neglected, corresponding to the chiral limit [51; 52]. It can be shown theoretically that one does not need to invoke the chiral limit for other twisted graphene stacks to exhibit such flat bands associated with Dirac cones [53]. Furthermore, the discovery of superconductivity in MATBG provides yet another example of superconductivity emerging from the 'strange metal' phase [54] and, as such, is reminiscent of the physics of the high-Tc copper-based and heavy-fermion superconductors. In passing, we note that several works have recently attempted to explain the origin of superconductivity in MATBG from various perspectives [55; 56; 57; 58; 59].
Most recently an alternative viewpoint has emerged. Specifically, focusing on a first magic angle, Song and Bernevig showed that a model for twisted bilayer graphene can be mapped to the heavy-fermion model [60]. This elegant theory is based on the experimental fact that AA-region in MATBG exhibits a quantum dot-like behavior [61] and so one can describe the physics of this region using a model with the flat band electrons (\(f\)-electrons). The electrons in the AB/BA regions play the role of conduction electrons (\(c\)-electrons). By mapping the Bistritzer-MacDonald model [44] to the periodic Anderson model at the first magic angle, Song and Bernevig created a way to bridge the MATBG with the more conventional heavy-fermion systems. However, in striking contrast with the conventional heavy-fermion system, the MATBG hosts SU(8) symmetry due to spin and valley degrees of freedom and two central flat bands. Shortly after several important works appeared, which addressed various aspects of this unique moire heavy-fermion-like system [62; 63; 64; 65; 66; 67; 68; 69; 70].
In conventional heavy-fermion systems, one usually distinguishes between the so-called local moment (or Kondo) regime and the mixed-valent one. In the local moment regime, the energy of the flat (\(f\)-orbital) band lies well below the Fermi energy of the conduction electrons, while in the mixed-valent regime, it lies close to the Fermi energy. In order to describe both of these regimes on a technical level, one considers the limit when the local Coulomb repulsion is taken to infinity. Then one introduces the projection operators along with the Lagrange multiplier to enforce the constraint of the single occupancy on the \(f\)-levels. In the mean-field approximation, one replaces the projection operators and constraint fields with the \(c\)-numbers which are computed self-consistently and describe the renormalization of the \(f\)-energy level, hybridization between \(c\) and \(f\)-electrons as well as renormalization of the chemical potential.
Early works on the application of the heavy-fermion model to MATBG have typically focused on the Kondo regime and have not solved the problem self-consistently. In this paper, we attempt to resolve this issue. In what follows, we present a new heavy-fermion model for the MATBG and discuss the physics both in the Kondo regime and the mixed valence regime self-consistently. Our self-consistent solutions pave the way to go beyond the mean-field approximation and consider the effects of fluctuations on the competing phases of MATBG.
An important aspect of our model is that we consider the limit of strong Coulomb interactions; this could provide another way to explain the microscopic origin of unconventional (\(d\)-wave) superconductivity, which can be shown to arise from the quantum mechanical fluctuations in the number of the \(f\)-electrons and is purely electronic in origin. Indeed, as a very recent experiment suggests [71], MATBG does indicate \(d\)-wave superconductivity, similar to the Cooper pair symmetry of the superconductivity in conventional heavy-fermion systems [72].
_Setup and Topological Heavy Fermion Model._--The MATBG setup is illustrated in Fig. 1(a). Two single graphene layers are stacked together and they are twisted relative to each other with angle \(\theta\). The AA regions are designated by white spots (with red arrows depicting flat-band electrons) and AB/BA regions are dark green (with blue spheres depicting conduction electrons).
Next, we consider the first magic angle \(\theta_{m}=1.05^{\circ}\). At this magic angle, the MATBG can be mapped from the Bistritzer-MacDonald model to a special SU(8) periodic Anderson model, i.e. the Song-Bernevig model (SB) [60]. We choose the SB model as the starting point. The Hamiltonian is
\[\hat{H}=\hat{H}_{0,c}+\hat{H}_{0,\text{f}}+\hat{H}_{0,\text{cf}}+\hat{H}_{\text {U}}, \tag{1}\]
where
\[\hat{H}_{0\text{c}}=\sum_{a,a^{\prime},\eta,s}\sum_{\mathbf{p}}h^{(c,\eta s)} _{aa^{\prime}}(\mathbf{p})c^{\dagger}_{\mathbf{p},a,\eta,s}c_{\mathbf{p},a^{ \prime},\eta,s}, \tag{2}\]
is the Hamiltonian of the conduction (\(c\)-) electrons in AB/BA moire lattice sites,
\[\hat{H}_{0\text{f}}=\sum_{\alpha,\alpha^{\prime},\eta,s}\sum_{\mathbf{k}}h^{( f,\eta s)}_{\alpha\alpha^{\prime}}(\mathbf{k})f^{\dagger}_{\mathbf{k},\alpha, \eta,s}f_{\mathbf{k},\alpha^{\prime},\eta,s}, \tag{3}\]
is the Hamiltonian of flat band (\(f\)-) electrons in AA moire lattice sites,
\[\begin{split}\hat{H}_{0\text{cf}}=&\sum_{\alpha,a,\eta,s}\sum_{\mathbf{G}}\sum_{\mathbf{k}\in\text{BZ}}[V^{(\eta s)}_{\alpha a} (\mathbf{k}+\mathbf{G})f^{\dagger}_{\mathbf{k},\alpha,\eta,s}c_{\mathbf{k}+ \mathbf{G},a,\eta,s}\\ &+\text{H.c.}],\end{split} \tag{4}\]
accounts for the hybridization between the \(c\) and \(f\)-electrons and
\[\hat{H}_{\text{U}}=\frac{U}{2}\sum_{\mathbf{R}}:\hat{n}^{f}_{\mathbf{R}}:: \hat{n}^{f}_{\mathbf{R}}:, \tag{5}\]
is the Hamiltonian describing the local Coulomb repulsion between the \(f\)- electrons. In the expressions above \(\alpha=1,2\), \(a=1,2,3,4\), \(\eta=\pm\), and \(s=\uparrow,\downarrow\) are flat band, conduction band, valley, and spin indexes correspondingly, \(\mathbf{p}=\mathbf{k}+\mathbf{G}\), \(\mathbf{G}\) is the reciprocal lattice vectors in the moire momentum space, \(\hat{n}^{f}_{\mathbf{R}}\) is the on-site density operator of \(f\)-electrons, \(U\) is the strength of the Coulomb repulsion. Lastly, the matrices which appear in the expressions above are defined according to
\[\begin{split}\hat{h}^{(c,\eta s)}(\mathbf{p})&= \begin{bmatrix}-\mu_{c}\hat{\sigma}_{0}&\nu_{\star}(\eta p_{x}\hat{\sigma}_{0 }+ip_{y}\hat{\sigma}_{z})\\ \nu_{\star}(\eta p_{x}\hat{\sigma}_{0}-ip_{y}\hat{\sigma}_{z})&M\hat{ \sigma}_{x}-\mu_{c}\hat{\sigma}_{0}\end{bmatrix},\\ V^{(\eta s)}(\mathbf{p})&=e^{\frac{-|\mathbf{p}|^{2}\lambda^{2}}{2}} \begin{bmatrix}\gamma\sigma_{0}+\nu^{\prime}_{\star}(\eta p_{x}\sigma_{x}+p_{ y}\sigma_{y})\\ 0_{2\times 2}\end{bmatrix},\end{split} \tag{6}\]
and \(h^{(f,\eta s)}=(\epsilon_{f_{0}}-\mu_{f})\hat{\sigma}_{0}\), where \(\mu_{c}\) and \(\mu_{f}\) are the chemical potentials for the \(c\)- and \(f\)-electrons, respectively, \(\hat{\sigma}_{0}=I_{2\times 2}\) is the unit matrix, and \(\hat{\sigma}_{j}\) (\(j=x,y,z\)) are the Pauli matrices. Note that the Hamiltonian is expressed in the moire momentum space using the plane wave approximation. The size of the moire momentum space (per valley per spin) is \(2+4N_{G}\), corresponding to 2 flat bands and 4 conduction bands for each of the \(N_{G}\) moire Brillouin zones (mBZ). The values of the parameters are \(\nu_{\star}=-4.303\) eVA, \(M=3.697\) meV, \(\gamma=-24.75\) meV, \(\nu^{\prime}_{\star}=1.622\) eVA, and the damping factor \(\lambda=0.3375\,a_{M}\), where \(a_{M}\) is the moire lattice constant. All these parameters correspond to the 'magic angle' \(\theta_{m}=1.05^{\circ}\) and \(U_{0}=W_{AA}/W_{AB}=0.8\), where \(W_{AA}\) and \(W_{AB}\) denote the interlayer hopping amplitudes in the AA and AB/BA regions, respectively, and the velocity of the electron in single-layer graphene is \(v_{F}=5.94\) eVA. We note that when \(M=0\) the system reaches the flat-band limit; we do _not_ consider this case in this paper.
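As an illustration of how the single-particle blocks in Eq. (6) can be assembled numerically, the following minimal NumPy sketch builds \(\hat{h}^{(c,\eta s)}(\mathbf{p})\) and \(V^{(\eta s)}(\mathbf{p})\) from the parameter values quoted above. The chosen value of the moire lattice constant \(a_{M}\) and the \(4\times 2\) layout used for \(V\) are assumptions of this sketch only.

```python
# Minimal NumPy sketch of the single-particle blocks in Eq. (6). Energies are
# in meV and momenta in 1/Angstrom, so the velocities nu_* and nu'_* are
# converted from eV*Angstrom to meV*Angstrom. The value of a_M below and the
# 4x2 layout chosen for V are illustrative assumptions of this sketch.
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

nu_star, M, gamma, nu_star_p = -4303.0, 3.697, -24.75, 1622.0   # meV*A, meV
a_M = 134.0            # moire lattice constant in Angstrom (assumed value)
lam = 0.3375 * a_M     # damping length of the form factor

def h_c(px, py, eta=+1, mu_c=0.0):
    """4x4 conduction-electron block h^(c,eta s)(p)."""
    off = nu_star * (eta * px * s0 + 1j * py * sz)
    return np.block([[-mu_c * s0, off],
                     [off.conj().T, M * sx - mu_c * s0]])

def V(px, py, eta=+1):
    """Hybridisation block V^(eta s)(p), stacked as in Eq. (6)."""
    w = gamma * s0 + nu_star_p * (eta * px * sx + py * sy)
    form = np.exp(-(px**2 + py**2) * lam**2 / 2.0)
    return form * np.vstack([w, np.zeros((2, 2), dtype=complex)])

print(np.linalg.eigvalsh(h_c(0.01, 0.0)))   # conduction energies near Gamma_M
```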
_Slave-Boson approach and Mean Field Equations._--To handle the SB model in the mixed valence region, we use the slave-boson approach. We extend the number of orbital, spin, and valley for both \(c\)- and \(f\)-electrons to \(N\) flavors and set the interaction
\[U=\infty \tag{7}\]
to exclude the double occupancy. Introducing the slave-boson operators \(b^{\dagger}_{\mathbf{R}}\) and \(b_{\mathbf{R}}\) at each AA site in real space
Figure 1: (a) Sketch of the setup. Two single graphene layers are twisted relative to each other by the angle \(\theta=\theta_{m}\). The AA regions behave like quantum dots (red arrows) and the AB/BA regions host conduction states (blue balls). (b) Moiré momentum space with 7 mBZs. The blue dashed lines indicate the path along which the spectrum is plotted. (c) and (d) show the spectrum for \(\rho=0\) (before hybridization) and \(\rho=0.5\) (full hybridization), respectively. Both are obtained with \(\mu_{f}=\lambda=0\); the other parameters are given in the main text.
the constraint becomes
\[Q=\sum_{l=1}^{N}\sum_{\alpha}(f_{\mathbf{R},\alpha,l}^{\dagger}f_{\mathbf{R}, \alpha,l}+b_{\mathbf{R}}^{\dagger}b_{\mathbf{R}}). \tag{8}\]
Note that \(N=4\) accounts for spin and valley for the two flat-band orbitals. Here, we assume that the two valleys have the same band structure and choose the valley \(\eta=+\) in the above Hamiltonian. We stress that there are two index spaces in the Hamiltonian. One is the extended \(N\) space of the \(1/N\) expansion, corresponding to the index \(l\). The other is the moire momentum space, which has size \(2+4N_{G}\) and corresponds to the indices \(\alpha\), \(\alpha^{\prime}\), \(a\), and \(a^{\prime}\).
We introduce Lagrange multipliers \(\lambda_{\mathbf{R}}\) to ensure that the number of \(f\) electrons is \(Q=1\). We rewrite \(Q\to q_{0}N\), \(b_{\mathbf{R}}\to b_{\mathbf{R}}\sqrt{N}\), and \(V_{\alpha a}\to V_{\alpha a}/\sqrt{N}\). The local gauge transformation is \(b_{\mathbf{R}}=\rho_{\mathbf{R}}\exp(i\theta_{\mathbf{R}})\), \(f_{\mathbf{R}}=f_{\mathbf{R}}^{\prime}\exp(i\theta_{\mathbf{R}})\) and \(\lambda_{\mathbf{R}}=\lambda_{\mathbf{R}}^{\prime}-\theta_{\mathbf{R}}\). We then relabel \(f_{\mathbf{R}}^{\prime}\) and \(\lambda_{\mathbf{R}}^{\prime}\) as \(f_{\mathbf{R}}\) and \(\lambda_{\mathbf{R}}\). The partition function is
\[Z=\int\mathfrak{D}(cc^{\dagger}ff^{\dagger}\rho\lambda)\exp(-S), \tag{9}\]
where the action is \(S=\int_{0}^{\beta}L(\tau)d\tau\), and
\[L=\sum_{\begin{subarray}{c}a,a^{\prime},l=1\\ \mathbf{p}\end{subarray}}^{N}(\partial_{\tau}+h_{aa^{\prime}}^{(c)}(\mathbf{p }))c_{\mathbf{p},a,l}^{\dagger}c_{\mathbf{p},a^{\prime},l}(\tau)+\sum_{ \begin{subarray}{c}\alpha,\alpha^{\prime}\\ \mathbf{k},\mathbf{k}^{\prime}\in\mathrm{mBZ}\end{subarray}}(\partial_{\tau}+ h_{\alpha\alpha^{\prime}}^{(f)}(\mathbf{k})\delta_{\mathbf{k},\mathbf{k}^{\prime}}+ \frac{i\lambda(\mathbf{k}-\mathbf{k}^{\prime};\tau)}{\sqrt{N_{L}}})f_{ \mathbf{k},\alpha,l}^{\dagger}f_{\mathbf{k}^{\prime},\alpha^{\prime},l}(\tau)+ \frac{1}{\sqrt{N_{L}}}\sum_{\begin{subarray}{c}\mathbf{G},\alpha,a;l=1\\ \mathbf{k},\mathbf{k}^{\prime}\in\mathrm{mBZ}\end{subarray}}^{N}\] \[\left[V_{\alpha a}(\mathbf{k}+\mathbf{G})c_{\mathbf{k}+\mathbf{ G},a,l}^{\dagger}(\tau)f_{\mathbf{k}^{\prime},a,l}(\tau)\rho^{\dagger}( \mathbf{k}-\mathbf{k}^{\prime};\tau)+\mathrm{H.c.}\right]+\frac{iN}{\sqrt{N_{L }}}\sum_{\mathbf{k},\mathbf{k}^{\prime}\in\mathrm{mBZ}}\rho(\mathbf{k};\tau) \lambda(\mathbf{k}^{\prime}-\mathbf{k};\tau)\rho(-\mathbf{k}^{\prime};\tau) \right.-iq_{0}N\sqrt{N_{L}}\lambda(0;\tau)\left., \tag{10}\]
where \(\beta=\frac{1}{T}\), \(T\) is the temperature, and \(N_{L}\) is the number of lattice site in moire real space. We set \(\lambda(\mathbf{k};\tau)=\frac{1}{T}\bar{\lambda}\delta_{\mathbf{k},0}\), \(\rho(\mathbf{k};\tau)=\frac{1}{T}\bar{\rho}\delta_{\mathbf{k},0}\) to get the mean field action. We rewrite \(\frac{\bar{\lambda}}{\sqrt{N_{L}}}\rightarrow\bar{\lambda}\) and \(\frac{\bar{\rho}}{\sqrt{N_{L}}}\rightarrow\bar{\rho}\). After applying \(\partial S_{0}/\partial\bar{\rho}=0\), \(\partial S_{0}/\partial\bar{\lambda}=0\), and summing over the Matsubara frequency \(\omega_{n}\), we end up with three mean field equations as follows
\[\bar{\rho}\bar{\lambda}=\frac{i}{2N_{L}}\sum_{\mathbf{k}\in\mathrm{mBZ}}\sum_{ j}^{2+4N_{G}}(P^{\dagger}A_{0}^{\bar{\rho}}P)_{jj}\cdot n_{F}(\mathscr{E}_{j}), \tag{11}\]
\[q_{0}-\bar{\rho}^{2}=-\frac{i}{N_{L}}\sum_{\mathbf{k}\in\mathrm{mBZ}}\sum_{j}^{2 +4N_{G}}(P^{\dagger}A_{0}^{\bar{\lambda}}P)_{jj}\cdot n_{F}(\mathscr{E}_{j}), \tag{12}\]
\[n_{t}=q_{0}-\bar{\rho}^{2}+\frac{1}{N_{L}}\sum_{\mathbf{k}\in\mathrm{mBZ}}\sum_{ j}^{2+4N_{G}}(P^{\dagger}A_{0}^{c}P)_{jj}\cdot n_{F}(\mathscr{E}_{j})-2N_{G}, \tag{13}\]
where \(n_{t}\) is the total filling, \(n_{F}(\epsilon)=(\exp(\epsilon/T)+1)^{-1}\) is the Fermi-Dirac distribution, \(\mathscr{E}_{j}\) are the eigenvalues of the matrix \(A_{0}=\{\{\hat{h}^{(c)},\hat{V}\},\{\hat{V}^{\dagger},\hat{h}^{(f)}+i\bar{\lambda}\sigma_{0}\}\}\), \(A_{0}^{\bar{\rho}}=\partial(-i\omega_{n}+A_{0})/\partial\bar{\rho}\), \(A_{0}^{\bar{\lambda}}=\partial(-i\omega_{n}+A_{0})/\partial\bar{\lambda}\), \(A_{0}^{c}=-\partial(-i\omega_{n}+A_{0})/\partial\mu_{c}\), and \(A_{0}\) expands in the moire momentum space. Note that \(P\) can be constructed from the eigenvectors of \(A_{0}\), \(P=(c_{1},c_{2},\cdots,c_{(2+4N_{G})})_{2+4N_{G}\times 2+4N_{G}}\), where \(c_{j}\) are the eigenvectors of \(A_{0}\). (See Supplemental Material [2] for the detailed derivation). The above three self-consistent mean field equations are one of our main results.
_Numerics_.--We now solve Eqs. (11)-(13) numerically. To solve them self-consistently, we define the residual \(errs=\sum_{n=1}^{3}(l_{n}-r_{n})^{2}\), where \(l_{n}\) and \(r_{n}\) denote the left- and right-hand sides of the \(n\)th mean field equation, respectively. We set \(\mu_{c}=\mu_{f}\) and scan the parameter ranges \(\rho\in[0,0.5]\), \(\mu_{f}\in[-100,100]\) meV, and \(i\lambda\in[-100,100]\) meV. Since we set the interaction \(U=\infty\) and \(Q=1\), we have \(q_{0}=1/4\). To reach the mixed valence region, we also set the total filling \(n_{t}=0.83\times q_{0}\). We then locate the parameter sets for which \(errs\approx 0\). There exist two solutions: one with positive \(\mu_{f}\) and another with negative \(\mu_{f}\). We note that \(\mathscr{E}_{j}(\bar{\lambda},\bar{\rho},\mu_{f},\mathbf{k})\) is computed numerically and depends on the chemical potential \(\mu_{f}\) and the momentum \(\mathbf{k}\in\mathrm{mBZ}\).
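As a schematic illustration of this search, the following Python sketch scans the same parameter ranges and keeps the parameter sets with the smallest residual \(errs\). The three equations used here are toy stand-ins; in the actual calculation the right-hand sides are Fermi-function-weighted momentum sums over the mBZ built from the eigen-decomposition of \(A_{0}\).

```python
# Schematic sketch of the parameter scan described above. The three residual
# equations below are toy stand-ins for Eqs. (11)-(13); in the actual
# calculation the right-hand sides are Fermi-function-weighted sums over the
# eigenvalues and eigenvectors of A_0 for all k in the mBZ.
import itertools
import numpy as np

Q0 = 0.25          # q_0 = 1/4 for Q = 1 and N = 4
N_T = 0.83 * Q0    # total filling used in the text

def toy_errs(rho, mu_f, ilam):
    eq1 = rho * ilam - 0.1 * mu_f                 # stand-in for Eq. (11)
    eq2 = (Q0 - rho**2) - 0.05 * (ilam + 10.0)    # stand-in for Eq. (12)
    eq3 = N_T - (Q0 - rho**2) - 0.002 * mu_f      # stand-in for Eq. (13)
    return eq1**2 + eq2**2 + eq3**2

def grid_search(errs, keep=5):
    rhos = np.linspace(0.0, 0.5, 26)
    mus = np.linspace(-100.0, 100.0, 81)    # mu_f in meV
    lams = np.linspace(-100.0, 100.0, 81)   # i*lambda in meV
    scored = ((errs(r, m, l), r, m, l)
              for r, m, l in itertools.product(rhos, mus, lams))
    return sorted(scored)[:keep]            # smallest residuals first

for err, rho, mu_f, ilam in grid_search(toy_errs):
    print(f"errs={err:.3e}  rho={rho:.2f}  mu_f={mu_f:+.1f}  i*lam={ilam:+.1f}")
```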
We substitute the solutions into \(A_{0}\) and obtain the spectra, see Fig. 2. We also plot the variation of the parameters in the mixed valence region as a function of the temperature \(T\), see Fig. 2.
_Discussion_.--In contrast to conventional heavy-fermion systems, the topological mixed valence model in MATBG has SU(8) symmetry. The two central flat bands for each valley and spin are particle-hole symmetric in the chiral limit. Although the particle-hole symmetry is broken at the experimental lattice relaxation range \(U_{0}=0.8\), the valence bands host a band structure similar to that of the conduction bands before the hybridization. This makes it difficult to reach the Kondo regime simply by pushing the flat bands down and away from the conduction bands, as one does in conventional heavy-fermion systems. One needs to consider many valence bands together. We also note that it might be interesting to consider different flavors of slave bosons [73] since we have a total of
8 flat bands; they are not degenerate at the experimental range. We leave this task for future work.
_Summary._--We have introduced a new model to describe the mixed valence region of magic-angle twisted bilayer graphene with infinite Coulomb interaction. Starting from the SB model, we use the slave-boson method in the large-\(N\) expansion and derive a new set of mean field equations describing the mixed valence regions of twisted bilayer graphene. The solutions capture the physics at fillings close to the strongly correlated regime, at the edge of the Mott insulator, where unconventional superconductivity may emerge. Our topological mixed valence model paves the way to study the possible origin of superconductivity in twisted bilayer graphene. We hope it will stimulate further research on the mixed valence regime in related van der Waals heterostructures and other platforms.
_Note added._--A related paper appeared recently [74]; it deals with the mixed valence model in twisted bilayer graphene with finite Coulomb interaction.
_Acknowledgments._ We would like to acknowledge the very useful discussions with Yang-Zhi Chou. This work was financially supported by the National Science Foundation Grants No. NSF-DMR-2002795 (Y. L. and M. D.) and NSF-DMR-2015639 (B.M.F.). Parts of this paper were written during the Aspen Center of Physics 2023 summer program on "New Directions on Strange Metals in Correlated Systems" (M. D.), which was supported by the National Science Foundation Grant No. PHY-2210452.
|
2309.09675 | Gradient estimates of the heat kernel for random walks among
time-dependent random conductances | In this paper we consider a time-continuous random walk in $\mathbb{Z}^d$ in
a dynamical random environment with symmetric jump rates to nearest neighbours.
We assume that these random conductances are stationary and ergodic and,
moreover, that they are bounded from below but unbounded from above with finite
first moment. We derive sharp on-diagonal estimates for the annealed first and
second discrete space derivative of the heat kernel which then yield local
limit theorems for the corresponding kernels. Assuming weak algebraic
off-diagonal estimates, we then extend these results to the annealed Green
function and its first and second derivative. Our proof which extends the
result of Delmotte and Deuschel (2005) to unbounded conductances with first
moment only, is an adaptation of the recent entropy method of Benjamini et. al.
(2015). | Jean-Dominique Deuschel, Takashi Kumagai, Martin Slowik | 2023-09-18T11:25:36Z | http://arxiv.org/abs/2309.09675v1 | # Gradient estimates of the heat kernel for random walks among time-dependent random conductances
###### Abstract.
In this paper we consider a time-continuous random walk in \(\mathbb{Z}^{d}\) in a dynamical random environment with symmetric jump rates to nearest neighbours. We assume that these random conductances are stationary and ergodic and, moreover, that they are bounded from below but unbounded from above with finite first moment. We derive sharp on-diagonal estimates for the annealed first and second discrete space derivative of the heat kernel which then yield local limit theorems for the corresponding kernels. Assuming weak algebraic off-diagonal estimates, we then extend these results to the annealed Green function and its first and second derivative. Our proof which extends the result of [17] to unbounded conductances with first moment only, is an adaptation of the recent entropy method of [9].
Key words and phrases:Random conductance model, time-dependent random environment, entropy method, heat kernel 2020 Mathematics Subject Classification: 60K37, 60F17, 82C41, 82B43
###### Contents
* 1 Introduction
* 1.1 The model
* 1.2 Main result
* 1.3 Structure of the paper
* 2 Estimates for the space-derivative of the heat kernel
* 2.1 Entropy estimates
* 2.2 Discrete space-derivatives of the heat kernel
* 2.3 Off-diagonal upper bound and Green kernel estimates
* 3 Annealed CLT and annealed local limit theorems
* A Forward and backward equation for the semigroup
* B Technical estimates
## 1. Introduction
It is well known that the heat kernel associated with a uniform elliptic operator in divergence form has nice regularity properties, in particular Holder continuity in both time and space variables - the so-called De Giorgi-Nash-Moser theory. These properties, however, are no longer guaranteed once uniform ellipticity fails, as is the case for the unbounded, time-dependent random conductances considered in this paper. Instead of regularity theory, our approach is based on entropy estimates in the spirit of [9].

### The model

We consider the Euclidean lattice \(\mathbb{Z}^{d}\) and denote by \(E^{d}\) the set of nearest-neighbour bonds. The environment is a family of time-dependent random conductances \(\omega=\{\omega_{t}(x,y):(x,y)\in E^{d},\,t\in\mathbb{R}\}\) with values in \([0,\infty)\), defined on a probability space \((\Omega,\mathcal{F},\mathbb{P})\) and such that \(t\mapsto\omega_{t}(x,y)\) is measurable for every \((x,y)\in E^{d}\). For any \((s,z)\in\mathbb{R}\times\mathbb{Z}^{d}\), the time-space shift \(\tau_{s,z}\colon\Omega\to\Omega\) is defined by

\[\big{(}\tau_{s,z}\omega\big{)}_{t}(x,y)\ :=\ \omega_{t+s}(x+z,y+z),\qquad t\in\mathbb{R},\ (x,y)\in E^{d}. \tag{1.1}\]

**Assumption 1.1**.: _Suppose that the following conditions hold:_

1. _The law_ \(\mathbb{P}\) _is stationary and ergodic with respect to the group of time-space shifts_ \(\{\tau_{s,z}:(s,z)\in\mathbb{R}\times\mathbb{Z}^{d}\}\)_._
2. _The time dependent random conductances are_ \(\mathbb{P}\)_-a.s. symmetric, that is,_ \[\mathbb{P}\big{[}\omega_{t}(x,y)=\omega_{t}(y,x)\quad\forall\,t\in\mathbb{R} \big{]}\ =\ 1.\]
3. \(\mathbb{E}\big{[}\omega_{t}(e)\big{]}<\infty\) _for all_ \(e\in E^{d}\) _and_ \(t\in\mathbb{R}\)_._
_Remark 1.2_.: In fact the time-space ergodicity assumption is not necessary for most of our results on the annealed estimates, cf. [17]. However, it is important when we are dealing with limit theorems in Section 3. Assumption 1.1-(ii) is essential, because it allows us to identify the dual process, and it is used in several crucial steps. Assumption 1.1-(iii) is quite natural under general ergodic assumption and it also guarantees non-explosion of the corresponding process, cf. [1, Lemma 4.1], otherwise explosion could occur in finite time, cf. [6, Remark 6.6]. In particular, by means of the stationarity in time and Fubini's theorem, this assumption implies that, \(\mathbb{P}\)-a.s., the conductances are locally integrable in time that is
\[\mathbb{P}\bigg{[}\int_{I}\omega_{s}(x,y)\,\mathrm{d}s<\infty\bigg{]}\ =\ 1,\qquad\text{for all finite interval $I\subset\mathbb{R}$}. \tag{1.2}\]
For any fixed realization \(\omega\in\Omega\), we consider the time-inhomogeneous Markov process, \(X\equiv\big{(}(X_{t}:t\geq s),\mathrm{P}^{\omega}_{s,x}:(s,x)\in\mathbb{R} \times\mathbb{Z}^{d}\big{)}\) in the random environment \(\omega\), where \(\mathrm{P}^{\omega}_{s,x}\) denotes the law of \(X\) on \(\mathcal{D}(\mathbb{R},\mathbb{Z}^{d})\), the space of \(\mathbb{Z}^{d}\)-valued cadlag functions on \(\mathbb{R}\), starting at time \(s\) in the vertex \(x\), i.e.
\[\mathrm{P}^{\omega}_{s,x}\big{[}X_{s}=x\big{]}\ =\ 1\qquad\mathbb{P}\text{-a.s.}\]
The time-dependent generator, \(\mathcal{L}^{\omega}_{t}\), acts on bounded functions \(f\colon\mathbb{Z}^{d}\to\mathbb{R}\) as
\[(\mathcal{L}^{\omega}_{t}f)(x)\ :=\ \sum_{y:(x,y)\in E^{d}}\omega_{t}(x,y)\, \big{(}f(y)\,-\,f(x)\big{)}. \tag{1.3}\]
Clearly, under \(\mathrm{P}^{\omega}_{s,x}\), the distribution of the Markov process \(X\) depends explicitly on \(s\in\mathbb{R}\), \(x\in\mathbb{Z}^{d}\) and \(\omega\in\Omega\). Whenever we want to emphasize this dependence, we also write \(X^{s,x,\omega}=(X^{s,x,\omega}_{t}:t\geq s)\). In particular, by the time-space stationarity of the time-inhomogeneous transition rates it follows that, for any \(\omega\in\Omega\), \((s,x)\in\mathbb{R}\times\mathbb{Z}^{d}\), \(h\in\mathbb{R}\) and \(z\in\mathbb{Z}^{d}\),
\[X^{s,x,\tau_{h,z}\omega}_{t}\ =\ X^{s+h,x+z,\omega}_{t+h}-z\qquad\mathrm{P}^{\tau_{h,z}\omega}_{s,x}\text{-a.s.}\quad\forall\,t\geq s. \tag{1.4}\]
For any \(s\in\mathbb{R}\), we denote by \((P^{\omega}_{s,t}:t\geq s)\) the Markov semi-group associated to the Markov process \(X\), i.e. \((P^{\omega}_{s,t}f)(x)=\mathrm{E}^{\omega}_{s,x}[f(X_{t})]\) for any bounded function \(f\colon\mathbb{Z}^{d}\to\mathbb{R}\), \(s\leq t\) and \(x\in\mathbb{Z}^{d}\). Moreover, for any \(x,y\in\mathbb{Z}^{d}\) and \(s\leq t\), we denote by
\[p^{\omega}_{s,t}(x,y)\ :=\ \mathrm{P}^{\omega}_{s,x}[X_{t}=y]\qquad\text{and} \qquad\bar{p}_{s,t}(x,y)\ :=\ \mathbb{E}\big{[}p^{\omega}_{s,t}(x,y)\big{]}.\]
the (quenched) transition density with respect to the counting measure (heat kernel of the so-called VSRW) and the annealed (averaged) transition density, respectively. As a consequence of (1.1) we have
\[p^{\tau_{h,z}\omega}_{s,t}(x,y)\ =\ p^{\omega}_{s+h,t+h}(x+z,y+z)\qquad\forall\,h \in\mathbb{R},\,z\in\mathbb{Z}^{d}. \tag{1.5}\]
In particular, in view of Assumption 1.1-(i), it holds
\[\bar{p}_{s,t}(x,y)\;=\;\bar{p}_{0,t-s}(0,y-x)\;=\;\bar{p}_{s-t,0}(x-y,0).\]
Further, define \(\tilde{\omega}_{t}(e):=\omega_{-t}(e)\) for any \(t\in\mathbb{R}\) and \(e\in E^{d}\). Then, in view of Lemma A.2, \(p^{\tilde{\omega}}_{s,t}(x,y)=p^{\omega}_{-t,-s}(y,x)\) for all \(x,y\in\mathbb{Z}^{d}\) and \(s,t\in\mathbb{R}\) with \(s\leq t\). It is also easily checked that, for any \(s\leq t\leq u\),
\[p^{\tilde{\omega}}_{s,u}(x,y)\;=\;\sum_{z\in\mathbb{Z}^{d}}p^{\tilde{\omega}}_{ s,t}(x,z)\,p^{\tilde{\omega}}_{t,u}(z,y)\qquad\forall\,x,y\in\mathbb{Z}^{d}.\]
Thus, by setting \((P^{\tilde{\omega}}_{s,t}f)(x):=\sum_{y\in\mathbb{Z}^{d}}p^{\tilde{\omega}}_{s,t}(x,y)f(y)\) for any bounded function \(f:\mathbb{Z}^{d}\to\mathbb{R}\), \(s\leq t\) and \(x\in\mathbb{Z}^{d}\), \((P^{\tilde{\omega}}_{s,t}:t\geq s)\) is a Markov semi-group. For any fixed \(\omega\in\Omega\), we write \(\tilde{X}\equiv\big{(}(\tilde{X}_{t}:t\geq s),\mathrm{P}^{\tilde{\omega}}_{s,x}:(s,x)\in\mathbb{R}\times\mathbb{Z}^{d}\big{)}\) to denote the associated time-inhomogeneous Markov process in the random environment \(\tilde{\omega}\). Again, as a consequence of (1.1),
\[p^{\tau_{h,z}\tilde{\omega}}_{s,t}(x,y)\;=\;p^{\tilde{\omega}}_{s+h,t+h}(x+z,y+z)\qquad\forall\,h\in\mathbb{R},\,z\in\mathbb{Z}^{d}.\]
Finally, let \((P^{\omega}_{s,t})^{*}\) be the adjoint of \(P^{\omega}_{s,t}\) in \(\ell^{2}(\mathbb{Z}^{d})\). Then, \(P^{\tilde{\omega}}_{s,t}=(P^{\omega}_{-t,-s})^{*}\).
**Lemma 1.3**.: _Suppose that Assumption 1.1 is satisfied. Then,_
1. _for_ \(\mathbb{P}\)_-a.e._ \(\omega\)_,_ \(X\) _and_ \(\tilde{X}\) _are conservative. In particular, for any_ \(s\leq t\)__ \[\sum_{y\in\mathbb{Z}^{d}}p^{\omega}_{s,t}(x,y)\;=\;1\;=\;\sum_{y\in\mathbb{Z} ^{d}}p^{\omega}_{s,t}(y,x)\qquad\forall\,x\in\mathbb{Z}^{d}\qquad\mathbb{P} \text{-a.s.}\] (1.6)
2. _the measure_ \(\mathbb{P}\) _is stationary and ergodic for both the environment process_ \((\tau_{t,X_{t}}\omega\,:\,t\geq 0)\) _and_ \((\tau_{-t,\tilde{X}_{t}}\omega\,:\,t\geq 0)\)_._
Proof.: This follows from [1, Lemma 4.1 and Lemma 4.3] and Lemma A.2.
### Main result
As a further consequence of Assumption 1.1 the averaged mean displacement of the stochastic process, \((X_{t}:t\geq 0)\), behaves diffusively. For this purpose, consider the diffusively rescaled process \(X^{(n)}=(X^{(n)}_{t}:t\geq 0)\), that is,
\[X^{(n)}_{t}\;:=\;\frac{1}{n}X_{tn^{2}},\qquad t\geq 0,n\in\mathbb{N},\]
and write \(\mathbb{P}^{*}_{0,0}[G]:=\mathbb{E}\big{[}\mathrm{P}^{\omega}_{0,0}[G]\big{]}\) for any \(G\in\mathcal{B}(\mathcal{D}([0,\infty),\mathbb{R}^{d}))\) to denote the _annealed probability measure_.
**Proposition 1.4** (Mean displacement).: _Suppose that Assumption 1.1 holds. Then, there exists \(C_{1}<\infty\) such that the following holds for all \(T>0\),_
\[\mathbb{E}\bigg{[}\mathrm{E}^{\omega}_{0,0}\bigg{[}\sup_{0\leq t\leq T}|X_{t}| \bigg{]}\bigg{]}\;\leq\;C_{1}\sqrt{T}. \tag{1.7}\]
_In particular, for any \(t>0\), the family of scaled displacements \(\big{\{}X^{(n)}_{t}:n\in\mathbb{N}\big{\}}\) is tight under \(\mathbb{P}^{*}_{0,0}\)._
Our further results rely on the following additional assumption.
**Assumption 1.5**.: _There exists a non-random constant \(C_{2}>0\) such that_
\[\mathbb{P}[\omega_{t}(x,y)\geq C_{2}]\ =\ 1\qquad\text{for any $(x,y)\in E^{d}$}. \tag{1.8}\]
Our main objective is to prove a spatial derivative estimate of the annealed heat kernel. For a given function \(f\colon\mathbb{Z}^{d}\to\mathbb{R}\) the discrete spatial derivative in direction \((0,x)\in E^{d}\) is defined by
\[\nabla_{x}f(y)\ :=\ f(x+y)-f(y),\qquad\forall\,y\in\mathbb{Z}^{d}.\]
Notice that \(\nabla_{x}\) is a bounded linear operator on \(\ell^{p}(\mathbb{Z}^{d})\) for any \(1\leq p\leq\infty\). For functions \(f\colon\mathbb{Z}^{d}\times\mathbb{Z}^{d}\to\mathbb{R}\) we write \(\nabla^{1}_{x}\) and \(\nabla^{2}_{x}\), respectively, to denote that the discrete spatial derivative acts either on the first or the second variable of \(f\). For any \((0,x),(0,x^{\prime})\in E^{d}\) and \(y,y^{\prime}\in\mathbb{Z}^{d}\), higher spatial derivatives are obtained iteratively via the formula
\[\nabla^{1}_{x}\nabla^{1}_{x^{\prime}}f(y,y^{\prime}) \ :=\ \nabla^{1}_{x}f(x^{\prime}+y,y^{\prime})-\nabla^{1}_{x}f(y,y^{ \prime})\] \[\ =\ f(x+x^{\prime}+y,y^{\prime})-f(x+y,y^{\prime})-f(x^{\prime}+y,y^{\prime})+f(y,y^{\prime}) \tag{1.9}\]
(an analog expression holds for \(\nabla^{2}_{x}\nabla^{2}_{x^{\prime}}f\)), whereas the mixed derivative is given by
\[\nabla^{1}_{x}\nabla^{2}_{x^{\prime}}f(y,y^{\prime})\ :=\ \nabla^{1}_{x}f(y,x^{\prime}+y^{\prime})-\nabla^{1}_{x}f(y,y^{\prime})\] \[\ =\ f(x+y,x^{\prime}+y^{\prime})-f(y,x^{\prime}+y^{\prime})-f(x+y,y^{\prime})+f(y,y^{\prime}). \tag{1.10}\]
Note that \(\nabla^{1}_{x}\nabla^{2}_{x^{\prime}}f=\nabla^{2}_{x^{\prime}}\nabla^{1}_{x}f\).
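As a quick sanity check of these definitions, the following small Python sketch evaluates the discrete derivatives for a test function on \(\mathbb{Z}^{2}\times\mathbb{Z}^{2}\) and verifies the expansion (1.10) together with the commutation \(\nabla^{1}_{x}\nabla^{2}_{x^{\prime}}f=\nabla^{2}_{x^{\prime}}\nabla^{1}_{x}f\) on a few sample points; the helper names are illustrative only.

```python
# Small numerical check of the discrete derivatives defined above, for a test
# function on Z^2 x Z^2; the identity (1.10) and the commutation of the mixed
# derivatives are verified on a few sample points. Helper names are
# illustrative only.
import itertools

def grad1(f, x):
    """Discrete derivative in direction x acting on the first variable."""
    return lambda y, yp: f((y[0] + x[0], y[1] + x[1]), yp) - f(y, yp)

def grad2(f, x):
    """Discrete derivative in direction x acting on the second variable."""
    return lambda y, yp: f(y, (yp[0] + x[0], yp[1] + x[1])) - f(y, yp)

def f(y, yp):
    return (y[0] - 2 * yp[1]) ** 3 + y[1] * yp[0]

e1, e2 = (1, 0), (0, 1)
for y, yp in itertools.product([(0, 0), (2, -1)], repeat=2):
    a = grad1(grad2(f, e2), e1)(y, yp)
    b = grad2(grad1(f, e1), e2)(y, yp)
    c = (f((y[0] + 1, y[1]), (yp[0], yp[1] + 1)) - f(y, (yp[0], yp[1] + 1))
         - f((y[0] + 1, y[1]), yp) + f(y, yp))    # expansion in Eq. (1.10)
    assert a == b == c
print("mixed second derivatives agree with the expansion (1.10)")
```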
**Theorem 1.6** (Gradient estimates).: _Suppose that Assumption 1.1 and 1.5 hold true._
1. _There exists_ \(C_{3}<\infty\) _such that for any_ \((0,x)\in E^{d}\)_,_ \(y,y^{\prime}\in\mathbb{Z}^{d}\) _and_ \(t>0\)_,_ \[\mathbb{E}\Big{[}\big{|}\nabla^{1}_{x}\,p^{\omega}_{0,t}(y,y^{\prime})\big{|} ^{2}\Big{]}^{1/2}\ \leq\ C_{3}\,(1\lor t)^{-(d+1)/2}.\] (1.11) _Moreover, the same estimate holds if_ \(\nabla^{1}_{x}\,p^{\omega}_{0,t}(y,y^{\prime})\) _is replaced by_ \(\nabla^{2}_{x}\,p^{\omega}_{0,t}(y^{\prime},y)\)_._
2. _There exists_ \(C_{4}<\infty\) _such that, for any_ \(p\in[1,2]\)_,_ \((0,x)\in E^{d}\)_,_ \(y,y^{\prime}\in\mathbb{Z}^{d}\) _and_ \(t>0\)__ \[\bigg{(}\sum_{y^{\prime}\in\mathbb{Z}^{d}}\mathbb{E}\Big{[}\big{|}\nabla^{1}_{x} \,p^{\omega}_{0,t}(y,y^{\prime})\big{|}^{p}\Big{]}\bigg{)}^{1/p}\ \leq\ C_{4}\,(1\lor t)^{-(1+d(1-1/p))/2}.\] (1.12) _Moreover, the same estimate holds if_ \(\nabla^{1}_{x}\,p^{\omega}_{0,t}(y,y^{\prime})\) _is replaced by_ \(\nabla^{2}_{x}\,p^{\omega}_{0,t}(y^{\prime},y)\)_._
3. _For any_ \((0,x),(0,x^{\prime})\in E^{d}\)_,_ \(y,y^{\prime}\in\mathbb{Z}^{d}\)_, and_ \(t>0\)_,_ \[\big{|}\nabla^{2}_{x}\nabla^{2}_{x^{\prime}}\,\bar{p}_{0,t}(y,y^{ \prime})\big{|} \ \leq\ C_{4}\,(1\lor t)^{-(d+2)/2},\] (1.13) \[\mathbb{E}\Big{[}\big{|}\nabla^{1}_{-x}\nabla^{2}_{x^{\prime}}\,p^{ \omega}_{0,t}(y,y^{\prime})\big{|}\Big{]} \ \leq\ C_{4}\,(1\lor t)^{-(d+2)/2}.\] (1.14)
As a next result we are interested in an _averaged_ or (_annealed_) CLT (ACLT) as well as in the convergences of the corresponding annealed heat kernel and its first derivative.
**Proposition 1.7** (Annealed CLT).: _Suppose that Assumptions 1.1 and 1.5 hold. Then, for any \(t>0\), the law of \(X_{t}^{(n)}\) under \(\mathbb{P}_{0,0}^{*}\) converges weakly, as \(n\to\infty\), to the centered Gaussian distribution, \(\mathcal{N}(0,t\Sigma^{2})\), on \(\big{(}\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d})\big{)}\), where the covariance matrix, \(\Sigma^{2}\), is deterministic and non-degenerate._
_Remark 1.8_.: We note that a quenched FCLT has been obtained under Assumption 1.1 and some additional moment bounds on both \(\omega_{t}(e)\) and \(\omega_{t}(e)^{-1}\) in [1, Theorem 1.7] when \(d\geq 2\), and in [19, Theorem 1.4] when \(d=1\). The latter result on the quenched invariance principle in \(d=1\) has been improved if Assumption 1.1 holds true and \(\mathbb{E}\big{[}\omega_{t}(e)^{-1}\big{]}<\infty\), see [10, Theorem 1.2]. In case the dynamical conductances are bounded from above the additional assumption on \(\omega_{t}(e)^{-1}\) has been recently relaxed, cf. [12] and [11]. Clearly, quenched FCLT implies annealed FCLT.
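As a purely numerical illustration of the diffusive scaling in Proposition 1.7, the following Python sketch samples a variable-speed random walk on \(\mathbb{Z}\) in a static i.i.d. conductance field, drawing a fresh environment for every trajectory (annealed sampling), and inspects the first two moments of the rescaled endpoint. The static, i.i.d. environment and all numerical choices are simplifying assumptions of the sketch and are not part of the setting of the proposition.

```python
# Toy Monte Carlo illustration of the diffusive scaling: a variable-speed
# random walk on Z in a *static* i.i.d. conductance field (bounded below by 1,
# heavy upper tail), with a fresh environment drawn for every trajectory
# (annealed sampling). This simplified setting is an assumption of the sketch,
# not the time-dependent, merely ergodic environment of the theorem.
import numpy as np

rng = np.random.default_rng(1)

def vsrw_endpoint(time_horizon, n_sites=4001):
    cond = 1.0 + rng.pareto(3.0, size=n_sites - 1)   # edge conductances >= 1
    origin = n_sites // 2
    x, s = origin, 0.0
    while True:
        left = cond[x - 1] if x > 0 else 0.0
        right = cond[x] if x < n_sites - 1 else 0.0
        rate = left + right
        s += rng.exponential(1.0 / rate)              # VSRW waiting time
        if s > time_horizon:
            return x - origin
        x += 1 if rng.random() < right / rate else -1

n, t, samples = 20, 1.0, 400
scaled = np.array([vsrw_endpoint(t * n**2) / n for _ in range(samples)])
print("sample mean:", scaled.mean(), " sample variance:", scaled.var())
```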
**Theorem 1.9** (Annealed local CLT).: _Suppose that Assumptions 1.1 and 1.5 hold. Further, for any \(t>0\), let \(k_{t}\) be the density of the distribution \(\mathcal{N}(0,t\Sigma^{2})\). Then,_
1. _for any_ \(T_{0}>0\) _and_ \(K\subset\mathbb{R}^{d}\) _compact,_ \[\lim_{n\to\infty}\sup_{y\in K}\sup_{t\geq T_{0}}\bigl{|}n^{d}\bar{p}_{0,tn^{2} }(0,[yn])-k_{t}^{\Sigma}(y)\bigr{|}\ =\ 0.\] (1.15)
2. _for any_ \(T_{0}>0\)_,_ \(K\subset\mathbb{R}^{d}\) _compact and each unit vector_ \(e_{i}\sim 0\)_,_ \(i\in\{1,\ldots,d\}\)_,_ \[\lim_{n\to\infty}\sup_{y\in K}\sup_{t\geq T_{0}}\bigl{|}n^{d+1}\nabla_{e_{i}}^ {2}\bar{p}_{0,tn^{2}}(0,[yn])-\big{(}\partial_{i}k_{t}^{\Sigma}\big{)}(y) \bigr{|}\ =\ 0.\] (1.16)
Finally, as a further consequence of Theorem 1.6-(iii), the properly scaled second discrete derivatives of the heat kernel satisfy the following uniform upper bounds. For any \(n\in\mathbb{N}\), \((0,x),(0,x^{\prime})\in E^{d}\), \(y,y^{\prime}\in\mathbb{R}^{d}\), and \(t>0\) we have that
\[\bigl{|}n^{d+2}\nabla_{x}^{2}\nabla_{x^{\prime}}^{2}\,\bar{p}_{0,tn^{2}}([yn],[y^{\prime}n])\bigr{|}\ \leq\ C_{4}\,(1\lor t)^{-(d+2)/2}, \tag{1.17}\]
\[\mathbb{E}\Bigl{[}\Bigl{|}n^{d+2}\nabla_{-x}^{1}\nabla_{x^{\prime}}^{2}\,p_{0, tn^{2}}^{\omega}([yn],[y^{\prime}n])\Bigr{|}\Bigr{]}\ \leq\ C_{4}\,(1\lor t)^{-(d+2)/2}. \tag{1.18}\]
In particular, we have that the families
\[\big{\{}n^{d+2}\nabla_{-x}^{1}\nabla_{x^{\prime}}^{2}p_{0,tn^{2}}^{\omega}(0, [yn]):n\in\mathbb{N}\big{\}}\quad\text{and}\quad\big{\{}n^{d+2}\nabla_{x}^{2} \nabla_{x^{\prime}}^{2}p_{0,tn^{2}}^{\omega}(0,[yn]):n\in\mathbb{N}\big{\}}\]
are both tight under \(\mathbb{P}_{0,0}^{*}\).
It will be interesting to consider continuous versions of the results in this section. Note that a priori it is not known whether \(\nabla_{x}\) exists or not.
_Remark 1.10_.: Although under our assumptions, local CLT holds for the annealed heat kernel, the corresponding quenched result could fail. Indeed, in [18, Proposition 1.5], a random walk in layered conductances on \(\mathbb{Z}\times\mathbb{Z}^{d-1}\) is considered with _time-homogeneous_ conductances
\[\omega(x,x\pm e_{i})\ =\ \begin{cases}Z(x_{2}),&i=1,\\ 1,&i\in\{2,\ldots,d\},\end{cases}\qquad\forall\,x=(x_{1},x_{2})\in\mathbb{Z}\times\mathbb{Z}^{d-1},\]
where \(\{Z(x_{2}):x_{2}\in\mathbb{Z}^{d-1}\}\) are i.i.d. random variables on some probability space \((\Omega,\mathcal{F},\mathbb{P})\) taking values in \([0,\infty)\) such that
\[\mathbb{P}\big{[}Z(x_{2})>L\big{]}\ =\ L^{-\alpha},\qquad L\geq 1.\]
If \(d\geq 4\) and \(\alpha\in(1,(d-1)/2)\) then it is shown that, for any \(R>0\) and \(t>0\),
\[\lim_{n\to\infty}\inf_{|y|\leq R\sqrt{t}}\frac{n^{d}p_{0,tn^{2}}^{\omega}(0,[yn ])}{k_{t}^{\Sigma}(y)}\ =\ 0\qquad\mathbb{P}\text{\,-a.s.}\]
Further, notice that in this case \(\mathbb{E}\big{[}\omega(x,x+e_{1})\big{]}<\infty\) but \(\mathbb{E}\big{[}\omega(x,x+e_{1})^{1/\alpha}\big{]}=\infty\), thus \(\omega(x,x+e_{1})\notin L^{p}(\mathbb{P})\) for \(p>2/(d-1)\), which is the sharp bound for quenched local CLT to hold, cf. [8].
### Structure of the paper
In Section 2.1 we first establish estimates on the entropy which serves as the main building block to derive in Section 2.2 and 2.3 space-derivatives on both the heat kernel and the Green kernel. In Section 3 we then prove Proposition 1.7 and Theorem 1.9. In Appendix A we verify properties of the transition semi-group of the time-inhomogeneous Markov process \(X\), whereas Appendix B just contains a technical lemma needed in the proofs.
## 2. Estimates for the space-derivative of the heat kernel
### Entropy estimates
For any (sub-) probability measure \(\mu\) on \(\mathbb{Z}^{d}\) we define the entropy, \(H(\mu)\), of \(\mu\) by
\[H(\mu)\ :=\ \sum_{x\in\mathbb{Z}^{d}}\phi(\mu(x)), \tag{2.1}\]
where \(\phi(0)=0\) and \(\phi(t):=-t\ln t\) for any \(t>0\). Clearly, \(H(\mu)\) is well-defined with values in \([0,\infty]\). In particular, for any \(t\geq 0\), we set
\[H_{t}^{\omega}\ :=\ H\big{(}\,p_{0,t}^{\omega}(0,\cdot)\,\big{)}\qquad\text{and}\qquad H_{t}^{\tilde{\omega}}\ :=\ H\big{(}\,p_{0,t}^{\tilde{\omega}}(0,\cdot)\,\big{)}. \tag{2.2}\]
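For concreteness, the following small Python sketch evaluates the entropy (2.1) on the kernel of a discrete-time simple random walk on \(\mathbb{Z}\), used here as a convenient stand-in for \(p^{\omega}_{0,t}(0,\cdot)\); it illustrates the \(\tfrac{d}{2}\ln t\) growth that appears in the estimates below.

```python
# Small sketch of the entropy (2.1) for a (sub-)probability measure on Z,
# evaluated on the kernel of a discrete-time simple random walk as a stand-in
# for p^omega_{0,t}(0,.); the printed values illustrate the ~ (d/2) ln t
# growth (here d = 1).
import math

def entropy(mu):
    """H(mu) = sum_x phi(mu(x)) with phi(u) = -u ln u and phi(0) = 0."""
    return sum(-p * math.log(p) for p in mu.values() if p > 0.0)

def srw_kernel(t):
    """Distribution of a simple random walk on Z after t steps."""
    return {2 * k - t: math.comb(t, k) * 0.5**t for k in range(t + 1)}

for t in (10, 100, 1000):
    print(t, round(entropy(srw_kernel(t)), 3), round(0.5 * math.log(t), 3))
```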
In order to address the question of \(\mathbb{P}\)-a.s. finiteness of both \(H_{t}^{\omega}\) and \(H_{t}^{\tilde{\omega}}\) for any \(t\geq 0\), in the next lemma we first prove a quenched result that holds true without Assumption 1.1.
**Lemma 2.1**.: _There exists \(C_{5}\equiv C_{5}(d)\in(0,\infty)\) (independent of the choice of \(\omega\)) such that for every \(t>0\) we have_
\[H_{t}^{\omega}\ \leq\ C_{5}+d\ln\bigl{(}1+\mathrm{E}_{0,0}^{\omega}\big{[}|X_{t}| \big{]}\bigr{)}. \tag{2.3}\]
Proof.: The estimate is based on a method that was originally introduced by [27, p. 938] and later improved by [7]. Note that a discrete version of the Bass-Nash argument is discussed in [5, Lemma 6.14], which immediately implies (2.3). In particular, the fact that \(C_{5}\) is independent of the choice of \(\omega\) comes from a careful estimate of an infinite sum that appears in the argument, see [5, Lemma A.36]. Strictly speaking, for this argument one needs to know a priori that \(H_{t}^{\omega}<\infty\). However, by introducing the sequence of stopping times \(T_{n}:=\inf\{t\geq 0:X_{t}\not\in B(0,n)\}\), \(n\in\mathbb{N}\), one can first obtain the result for the stopped process \((X_{t\wedge T_{n}}:t\geq 0)\), with a corresponding constant \(C_{5}\) that is independent of \(n\). The final assertion follows by monotone convergence.
Thus, together with Proposition 1.4 which relies on Assumption 1.1, it follows that
\[\bar{H}_{t}\;:=\;\mathbb{E}\big{[}H_{t}^{\omega}\big{]}\;<\;C_{5}+d\ln\!\big{(}1 +C_{1}\sqrt{t}\big{)}\qquad\Longrightarrow\qquad H_{t}^{\omega}\;<\;\infty \quad\mathbb{P}\,\mbox{-a.s.} \tag{2.4}\]
In particular, as a further consequence of (1.1), we have that \(\mathbb{E}[H_{t}^{\tilde{\omega}}]=\mathbb{E}[H_{t}^{\omega}]\) and, hence, \(H_{t}^{\tilde{\omega}}<\infty\)\(\mathbb{P}\)-a.s.
We now give fundamental properties of \(H_{t}^{\omega}\), \(H_{t}^{\tilde{\omega}}\) and \(\bar{H}_{t}\).
**Lemma 2.2**.: _Suppose Assumption 1.1 holds true. Then,_
1. _for each_ \(s,t\geq 0\) _and_ \(\mathbb{P}\)_-a.e._ \(\omega\)_, it holds that_ \(H_{t}^{\omega}\leq H_{t+s}^{\omega}\) _and_ \(H_{t}^{\tilde{\omega}}\leq H_{t+s}^{\tilde{\omega}}\)_._
2. _for each fixed_ \(s>0\)_,_ \(t\mapsto\bar{H}_{t+s}-\bar{H}_{t}\) _is decreasing for_ \(t\geq 0\)_. In other words, the function_ \(t\mapsto\bar{H}_{t}\) _is concave._
Proof.: (i) By definition of \(H_{t}^{\omega}\) and the Chapman-Kolmogorov equation, we have
\[H_{t+s}^{\omega}\;=\;\sum_{x\in\mathbb{Z}^{d}}\phi\big{(}p_{0,t+s}^{\omega}(0,x)\big{)}\;=\;\sum_{x\in\mathbb{Z}^{d}}\phi\bigg{(}\sum_{y\in\mathbb{Z}^{d}} p_{0,t}^{\omega}(0,y)\,p_{t,s+t}^{\omega}(y,x)\bigg{)}.\]
Since \(\sum_{y\in\mathbb{Z}^{d}}p_{t,s+t}^{\omega}(y,x)=1\) by Lemma 1.3-(i), an application of Jensen's inequality yields
\[H_{t+s}^{\omega}\;\geq\;\sum_{x\in\mathbb{Z}^{d}}\sum_{y\in\mathbb{Z}^{d}} \phi\big{(}p_{0,t}^{\omega}(0,y)\big{)}\,p_{t,s+t}^{\omega}(y,x)\;=\;\sum_{y \in\mathbb{Z}^{d}}\phi\big{(}p_{0,t}^{\omega}(0,y)\big{)}\;=\;H_{t}^{\omega}.\]
By the same argument the assertion also follows for \(H_{t}^{\tilde{\omega}}\).
(ii) The proof relies on the argument given in [9, Corollary 10]. For that purpose, consider the conditional entropy defined, for any \(t\geq 0\) and \(s>0\), by
\[H^{\omega}\big{(}X_{s}\,|\,X_{s+t}\big{)} \;:=\;\sum_{y\in\mathbb{Z}^{d}}\mathrm{P}_{0,0}^{\omega}\big{[}X _{s+t}=y\big{]}\sum_{x\in\mathbb{Z}^{d}}\phi\big{(}\mathrm{P}_{0,0}^{\omega} \big{[}X_{s}=x\;|\;X_{s+t}=y\big{]}\big{)}\] \[\;=\;\sum_{x,y\in\mathbb{Z}^{d}}\phi\big{(}\mathrm{P}_{0,0}^{ \omega}\big{[}X_{s}=x,X_{s+t}=y\big{]}\big{)}\,-\,\sum_{y\in\mathbb{Z}^{d}} \phi\big{(}\mathrm{P}_{0,0}^{\omega}\big{[}X_{s+t}=y\big{]}\big{)}.\]
By means of the Markov property, the first term can be computed further as
\[\sum_{x,y\in\mathbb{Z}^{d}}\phi\big{(}\mathrm{P}_{0,0}^{\omega} \big{[}X_{s}=x\big{]}\,\mathrm{P}_{s,x}^{\omega}\big{[}X_{s+t}=y\big{]}\big{)}\] \[\qquad=\;H_{s}^{\omega}\,+\,\mathrm{E}_{0,0}^{\omega}\big{[}H \big{(}p_{s,s+t}^{\omega}(X_{s},\,\cdot\,)\big{)}\big{]}.\]
By taking the expectation with respect to \(\mathbb{P}\) and using the fact that
\[\mathbb{E}\big{[}\mathrm{E}_{0,0}\big{[}H\big{(}p_{s,s+t}^{\omega}(X_{s},\cdot)\big{)}\big{]}\big{]}\;=\;\mathbb{E}\big{[}\mathrm{E}_{0,0}\big{[}H_{t}^{\omega}\circ\tau_{s,X_{s}}\big{]}\big{]}\;=\;\bar{H}_{t}, \tag{2.5}\]
we obtain
\[\mathbb{E}\big{[}H^{\omega}(X_{s}\,|\,X_{s+t})\big{]}\ =\ \bar{H}_{s}\,+\,\bar{H}_{t}\,-\, \bar{H}_{s+t}.\]
Further, we claim that
\[H^{\omega}\big{(}X_{s}\,|\,X_{s+t}\big{)}\ =\ H^{\omega}\big{(}X_{s}\,|\,X_{s+t},X_{s+t^{ \prime}}\big{)}\ \leq\ H^{\omega}\big{(}X_{s}\,|\,X_{s+t^{\prime}}\big{)}\qquad\text{for }t\leq t^{\prime}. \tag{2.6}\]
While the equality is trivial, the inequality relies on the general fact, cf. [15, Equation (2.60) and (2.92)], that for any discrete random variables \(X\), \(Y\) and \(Z\) (defined on the same probability space) \(H(X\,|\,Y,Z)\leq H(X\,|\,Z)\). Indeed, by Jensen's inequality, we have that
\[H(X\,|\,Y,Z) \ =\ \sum_{x,y,z}\mathrm{P}[X=x,Y=y,Z=z]\log\biggl{(}\frac{1}{ \mathrm{P}[X=x\ |\ Y=y,Z=z]}\biggr{)}\] \[\ \leq\ \sum_{x,z}\mathrm{P}[X=x,Z=z]\log\biggl{(}\sum_{y}\frac{ \mathrm{P}[Y=y\ |\ X=x,Z=z]}{\mathrm{P}[X=x\ |\ Y=y,Z=z]}\biggr{)}\] \[\ =\ \sum_{x,z}\mathrm{P}[X=x,Z=z]\log\biggl{(}\sum_{y}\frac{ \mathrm{P}[Y=y,Z=z]}{\mathrm{P}[X=x,Z=z]}\biggr{)}\ =\ H(X\,|\,Z),\]
where the summation is only over all those elements that are contained in the distribution of \(\mathrm{P}\circ(X,Y,Z)^{-1}\) and \(\mathrm{P}\circ(X,Z)^{-1}\), respectively. Thus, by combining (2.5) with (2.6), we obtain, for any \(t\leq t^{\prime}\),
\[\bar{H}_{s}\,+\,\bar{H}_{t}\,-\,\bar{H}_{s+t}\ \leq\ \bar{H}_{s}\,+\,\bar{H}_{t^{ \prime}}\,-\,\bar{H}_{s+t^{\prime}},\]
which completes the proof.
**Lemma 2.3**.: _Suppose Assumption 1.1 and 1.5 hold true. Then, there exists a non-random constant \(C_{6}\in(0,\infty)\) such that, for all \(s<t\),_
\[\max_{x,y\in\mathbb{Z}^{d}}\,p_{s,t}^{\omega}(x,y)\ \leq\ C_{6}\big{(}1\vee(t-s) \big{)}^{-d/2}. \tag{2.7}\]
_Moreover, the same estimate holds if \(p_{s,t}^{\omega}(x,y)\) is replaced by \(p_{s,t}^{\tilde{\omega}}(x,y)\)._
Proof.: Let \(B_{n}:=B(0,n)\) and set \(\ell_{B_{n}}(\mathbb{Z}^{d}):=\{f:\mathbb{Z}^{d}\to\mathbb{R}\,|\,f(x)=0\ \text{for }x\notin B_{n}\}\). Further, define
\[\big{(}\mathcal{L}_{t}^{\omega,B_{n}}f\big{)}(x)\ :=\ \sum_{ \begin{subarray}{c}y\in B_{n+1}\\ (x,y)\in E^{d}\end{subarray}}\omega_{t}(x,y)\,\big{(}f(y)-f(x)\big{)},\qquad \forall\ f\in\ell_{B_{n}}(\mathbb{Z}^{d}),\,x\in B_{n}.\]
Then, it holds that, for any \(f\in\ell_{B_{n}}(\mathbb{Z}^{d})\),
\[\mathcal{E}_{t}^{\omega}(f,f)\ =\ \frac{1}{2}\sum_{(x,y)\in E^{d}} \omega_{t}(x,y)\,\big{(}f(y)-f(x)\big{)}^{2}\ =\ \bigl{\langle}f,-\mathcal{L}_{t}^{\omega,B_{n}}f\bigr{\rangle}_{\ell^{2}( \mathbb{Z}^{d})},\]
where the second equality holds true due to the fact that the sums are each over finite sets. The corresponding process is a time-inhomogeneous random walk killed
upon exiting the ball \(B_{n}\) that appeared in the proof of Lemma 2.1, and its heat kernel can be written as
\[p_{s,t}^{\omega,B_{n}}(x,y)\;:=\;P_{s,x}^{\omega}\big{[}X_{t}=y,T_{n}>t\big{]},\]
where we recall that \(T_{n}:=\inf\{t\geq 0:X_{t}\not\in B(0,n)\}\) denotes the exit time of the ball \(B(0,n)\). In view of Assumption 1.5, we have that \(\mathbb{P}\)-a.s. there exists \(C_{\mathrm{Nash}}\in(0,\infty)\) (independent of \(n\)) such that
\[\|f\|_{\ell^{2}(\mathbb{Z}^{d})}^{2+4/d}\;\leq\;C_{\mathrm{Nash}}\,\inf_{t \geq 0}\mathcal{E}_{t}^{\omega}(f,f)\,\|f\|_{\ell^{1}(\mathbb{Z}^{d})}^{4/d}, \qquad\forall\,f\in\ell_{B_{n}}(\mathbb{Z}^{d}). \tag{2.8}\]
Using (A.3) for \(P_{s,t}^{\omega,B_{n}}f\), which has finite support, we have for \(\mathbb{P}\)-a.e. \(\omega\), a.e. \(s\in(0,t)\) and \(f\in\ell_{B_{n}}(\mathbb{Z}^{d})\),
\[\frac{\mathrm{d}}{\mathrm{d}s}\,\|P_{s,t}^{\omega,B_{n}}f\|_{\ell^{2}( \mathbb{Z}^{d})}^{2}\;=\;2\langle-\mathcal{L}_{s}^{\omega,B_{n}}P_{s,t}^{ \omega,B_{n}}f,P_{s,t}^{\omega,B_{n}}f\rangle_{\ell^{2}(\mathbb{Z}^{d})}\;=\;2 \,\mathcal{E}_{s}^{\omega}(P_{s,t}^{\omega,B_{n}}f,P_{s,t}^{\omega,B_{n}}f).\]
We note that the last equality (the Gauss-Green formula) uses exchanges of the order of sums; in this case the sums are finite, so the equality holds without any problem. For any \(f\in\ell_{B_{n}}(\mathbb{Z}^{d})\) with \(\|f\|_{\ell^{1}(\mathbb{Z}^{d})}=1\) and \(t>0\) we set \(f_{s}:=P_{s,t}^{\omega,B_{n}}f\). Then, we obtain, for \(\mathbb{P}\)-a.e. \(\omega\) and a.e. \(s\in(0,t)\),
\[u^{\prime}(s)\;:=\;\frac{\mathrm{d}}{\mathrm{d}s}\|f_{s}\|_{\ell^{2}(\mathbb{Z }^{d})}^{2}\;=\;2\,\mathcal{E}_{s}^{\omega}(f_{s},f_{s})\;\geq\;\frac{2}{C_{ \mathrm{Nash}}}\|f_{s}\|_{\ell^{2}(\mathbb{Z}^{d})}^{2+4/d}\;=:\;c_{1}\,u(s)^{1 +2/d}.\]
Set \(v(s):=u(s)^{-2/d}\); since \(v^{\prime}(s)=-\tfrac{2}{d}\,u(s)^{-1-2/d}\,u^{\prime}(s)\), we obtain \(v^{\prime}(s)\leq-2c_{1}/d\). Since \(\lim_{s\uparrow t}v(s)=\|f\|_{\ell^{2}(\mathbb{Z}^{d})}^{-4/d}>0\), it follows
\[-v(s)\;\leq\;\int_{s}^{t}v^{\prime}(r)\,\mathrm{d}r\;\leq\;-\frac{2c_{1}}{d} \,(t-s)\qquad\Longleftrightarrow\qquad u(s)\;\leq\;c_{2}\,(t-s)^{-d/2}.\]
Hence, \(\|P_{s,t}^{\omega,B_{n}}\|_{1\to 2}\leq c_{3}(t-s)^{-d/4}\). On the other hand, by the differential forward equation in weak sense, we find that, for \(\mathbb{P}\)-a.e. \(\omega\) and a.e. \(t\in(s,\infty)\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\,\big{\langle}f,\big{(}P_{s,t}^{\omega,B_{n}}\big{)}^{*}g\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}\;=\;\frac{\mathrm{d}}{\mathrm{d}t}\,\big{\langle}P_{s,t}^{\omega,B_{n}}f,g\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}\;=\;\big{\langle}P_{s,t}^{\omega,B_{n}}(\mathcal{L}_{t}^{\omega,B_{n}}f),g\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}\;=\;\big{\langle}\mathcal{L}_{t}^{\omega,B_{n}}f,\big{(}P_{s,t}^{\omega,B_{n}}\big{)}^{*}g\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}\]
for any \(f,g\in\ell_{B_{n}}(\mathbb{Z}^{d})\). In particular, for \(\mathbb{P}\)-a.e. \(\omega\) and a.e. \(t\in(s,\infty)\), it holds that
\[\frac{\mathrm{d}}{\mathrm{d}t}\,\|\big{(}P_{s,t}^{\omega,B_{n}} \big{)}^{*}\!f\|_{\ell^{2}(\mathbb{Z}^{d})}^{2}\;=\;2\,\big{\langle}\mathcal{ L}_{t}^{\omega,B_{n}}\big{(}P_{s,t}^{\omega,B_{n}}\big{)}^{*}\!f,\big{(}P_{s,t}^{ \omega,B_{n}}\big{)}^{*}\!f\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}\] \[\;=\;-2\,\mathcal{E}_{t}^{\omega}\big{(}\big{(}P_{s,t}^{\omega,B_ {n}}\big{)}^{*}\!f,\big{(}P_{s,t}^{\omega,B_{n}}\big{)}^{*}\!f\big{)},\qquad \forall\,f\in\ell_{B_{n}}(\mathbb{Z}^{d}).\]
Thus, by a computation similar to the one that we did above, we obtain that \(\|\big{(}P_{s,t}^{\omega,B_{n}}\big{)}^{*}\|_{1\to 2}\leq c_{3}(t-s)^{-d/4}\). Recall that, by the Chapman-Kolmogorov equation it holds that \(P_{s,t}^{\omega,B_{n}}=P_{s,(s+t)/2}^{\omega,B_{n}}\circ P_{(s+t)/2,t}^{\omega,B_{n}}\) and \(\big{(}P_{s,t}^{\omega,B_{n}}\big{)}^{*}=\big{(}P_{(s+t)/2,t}^{\omega,B_{n}}\big{)}^{*}\circ\big{(}P_{s,(s+t)/2}^{\omega,B_{n}}\big{)}^{*}\). Thus, by using the Cauchy-Schwarz inequality together with the duality \(\|A\|_{2\to\infty}=\|A^{*}\|_{1\to 2}\), we find that

\[\|P_{s,t}^{\omega,B_{n}}\|_{1\to\infty}\;\leq\;\|\big{(}P_{s,(s+t)/2}^{\omega,B_{n}}\big{)}^{*}\|_{1\to 2}\,\|P_{(s+t)/2,t}^{\omega,B_{n}}\|_{1\to 2}\;\leq\;\frac{2^{d/2}c_{3}^{2}}{(t-s)^{d/2}},\] \[\|\big{(}P_{s,t}^{\omega,B_{n}}\big{)}^{*}\|_{1\to\infty}\;\leq\;\|P_{(s+t)/2,t}^{\omega,B_{n}}\|_{1\to 2}\,\|\big{(}P_{s,(s+t)/2}^{\omega,B_{n}}\big{)}^{*}\|_{1\to 2}\;\leq\;\frac{2^{d/2}c_{3}^{2}}{(t-s)^{d/2}}.\]
Moreover, by Proposition 1.4,
\[\mathrm{P}^{\omega}_{s,x}\Big{[}\lim_{n\to\infty}T_{n}=\infty\Big{]}\;=\;1.\]
Indeed, if not we have \(\mathrm{P}^{\omega}_{s,x}\big{[}\lim_{n\to\infty}T_{n}\leq M\big{]}>\varepsilon\) for some large \(M>0\) and small \(\varepsilon>0\), which contradicts Proposition 1.4. Hence, by the monotone convergence theorem, we have, for each \(x,y\in\mathbb{Z}^{d}\),
\[p^{\omega}_{s,t}(x,y)\;=\;\mathrm{P}^{\omega}_{s,x}\big{[}X_{t}=y\big{]}\;=\; \lim_{n\to\infty}\mathrm{P}^{\omega}_{s,x}\big{[}X_{t}=y,T_{n}\geq t\big{]}\;= \;\lim_{n\to\infty}p^{\omega,B_{n}}_{s,t}(x,y).\]
In particular, by the dominated convergence theorem,
\[\big{|}\big{\langle}f,P^{\omega}_{s,t}g\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d}) }\big{|}\;=\;\lim_{n\to\infty}\bigl{|}\big{\langle}f,P^{\omega,B_{n}}_{s,t}g \big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}\big{|}\qquad\forall\,f,g\in\ell^{1}( \mathbb{Z}^{d}).\]
Thus, we conclude
\[\|\big{(}P^{\omega}_{s,t}\big{)}^{*}\|_{1\to\infty}\;=\;\|P^{\omega}_{s,t}\|_{ 1\to\infty}\;\leq\;\limsup_{n\to\infty}\|P^{\omega,B_{n}}_{s,t}\|_{1\to\infty }\;\leq\;\frac{2^{d/2}\,c_{3}^{2}}{(t-s)^{d/2}},\]
and the assertion follows.
**Proposition 2.4**.: _Suppose that Assumptions 1.1 and 1.5 hold. Then, for any \(t_{0}>0\), there exists \(C_{7}\equiv C_{7}(t_{0})>0\) such that for all \(s\geq 0\) and \(t\geq t_{0}\), it holds that_
\[\bar{H}_{t+s}-\bar{H}_{t}\;\leq\;C_{7}\,\frac{s}{t}. \tag{2.9}\]
Proof.: First, by Lemma 2.3 it holds that \(\bar{H}_{t}\geq\frac{d}{2}\ln t-c\). Also, by Lemma 2.1 and Proposition 1.4, using Jensen's inequality we have \(\bar{H}_{t}\leq\frac{d}{2}\ln(1+t)+c\) for some \(c<\infty\). These two estimates imply that, for any \(s,t>0\),
\[\bar{H}_{t+s}-\bar{H}_{t}\;\leq\;\frac{d}{2}\,\ln\Bigl{(}1+\frac{1+s}{t} \Bigr{)}+c. \tag{2.10}\]
Let us now fix some \(t_{0}>0\). Then, for any \(t_{0}\leq t\leq s\), we immediately deduce from (2.10) that
\[\bar{H}_{t+s}-\bar{H}_{t}\;\leq\;\frac{s}{t}\,\biggl{(}\frac{t}{s}\cdot\frac{ d}{2}\ln\Bigl{(}1+\frac{1}{t_{0}}+\frac{s}{t}\Bigr{)}+c\biggr{)}\;\leq\;\frac{s}{t} \,\biggl{(}\frac{d}{2}\Bigl{(}1+\frac{1}{t_{0}}\Bigr{)}+c\biggr{)}.\]
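Here the second inequality uses the elementary bound \(\ln(1+x)\leq x\) for \(x\geq 0\), together with \(t\leq s\); more explicitly,
\[\frac{t}{s}\cdot\frac{d}{2}\ln\Bigl{(}1+\frac{1}{t_{0}}+\frac{s}{t}\Bigr{)}\;\leq\;\frac{d}{2}\,\Bigl{(}\frac{t}{s\,t_{0}}+1\Bigr{)}\;\leq\;\frac{d}{2}\,\Bigl{(}1+\frac{1}{t_{0}}\Bigr{)}.\]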
So it remains to prove the estimate in case \(0\leq s<t\) and \(t\geq t_{0}\). For this purpose, choose \(k\in\mathbb{N}\) such that \(s(k-1)\leq t/2<ks\). Then, in view of Lemma 2.2-(ii),
\[\bar{H}_{t+s}-\bar{H}_{t}\;\leq\;\frac{1}{k}\,\sum_{i=1}^{k}\bigl{(}\bar{H}_{ t/2+is}-\bar{H}_{t/2+(i-1)s}\bigr{)}\;=\;\frac{1}{k}\,\bigl{(}\bar{H}_{t/2+ks}- \bar{H}_{t/2}\bigr{)}.\]
But,
\[\frac{1}{k}\,\bigl{(}\bar{H}_{t/2+ks}-\bar{H}_{t/2}\bigr{)}\stackrel{{ \eqref{eq:H_t+s}}}{{\leq}}\frac{2s}{t}\,\biggl{(}\frac{d}{2}\ln\Bigl{(}1+ \frac{2(1+ks)}{t}\Bigr{)}+c\biggr{)}\;\leq\;\frac{2s}{t}\,\biggl{(}\frac{d}{2} \Bigl{(}3+\frac{2}{t_{0}}\Bigr{)}+c\biggr{)}.\]
By choosing \(C_{7}(t_{0}):=d(3+2/t_{0})+2c\), the assertion follows.
For any probability measures \(\mu,\nu\) on \(\mathbb{Z}^{d}\) we further define the \(\Delta\)-distance, \(\Delta(\mu,\nu)\), between \(\mu\) and \(\nu\) by
\[\Delta(\mu,\nu)^{2}\ :=\ \sum_{\begin{subarray}{c}x\in\mathbb{Z}^{d}\\ \mu(x)+\nu(x)>0\end{subarray}}\frac{(\mu(x)-\nu(x))^{2}}{\mu(x)+\nu(x)}. \tag{2.11}\]
We also write
\[\Delta^{\omega}_{s,t}(0,x)\ :=\ \Delta\big{(}\,p^{\omega}_{0,s+t}(0,\cdot),\,p^{ \omega}_{s,s+t}(x,\cdot)\,\big{)}\qquad\forall\,x\in\mathbb{Z}^{d},\,s,t\geq 0.\]
Likewise, we define \(\Delta^{\tilde{\omega}}_{s,t}\).
**Proposition 2.5**.: _Suppose that Assumption 1.1 holds true._
1. _Then, for every_ \(t>0\)_,_ \[\mathbb{E}\big{[}\mathrm{E}^{\omega}_{0,0}\big{[}\Delta^{\omega}_{s,t}(0,X_{s })^{2}\big{]}\big{]}\ \leq\ 2\,\big{(}\bar{H}_{s+t}\,-\,\bar{H}_{t}\big{)}.\] (2.12)
2. _If Assumption_ 1.5 _is additionally satisfied, then there exists_ \(C_{8}\equiv C_{8}(t_{0})>0\) _such that the following holds for all_ \(t>t_{0}\)_,_ \[\mathbb{E}\big{[}\mathrm{E}_{0,0}\big{[}\Delta^{\omega}_{s,t}(0,X_{s})^{2} \big{]}\big{]}\ \leq\ 2\big{(}\bar{H}_{s+t/2}\,-\,\bar{H}_{t/2}\big{)}\ \leq\ C_{8}\,\frac{s}{t}.\] (2.13)
_Further, the same estimates hold if \(\mathrm{E}^{\omega}_{0,0}\big{[}\Delta^{\omega}_{s,t}(0,X_{s})^{2}\big{]}\) is replaced by \(\mathrm{E}^{\tilde{\omega}}_{0,0}\big{[}\Delta^{\tilde{\omega}}_{s,t}(0,\tilde {X}_{s})^{2}\big{]}\)._
Proof.: (i) The proof is based on arguments originally given in [9, Lemma 7]. First, notice that for any \(a>0\)
\[a\ln a\ \geq\ \frac{1}{2}\,\frac{(a-1)^{2}}{a+1}+a-1.\]
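This elementary inequality can be verified, for instance, by a short calculus argument; the function \(h\) below is introduced only for this verification. Setting \(h(a):=a\ln a-\frac{1}{2}\frac{(a-1)^{2}}{a+1}-a+1\), one checks that
\[h(1)\;=\;h^{\prime}(1)\;=\;0\qquad\text{and}\qquad h^{\prime\prime}(a)\;=\;\frac{1}{a}-\frac{4}{(a+1)^{3}}\;\geq\;0\qquad\forall\,a>0,\]
where the last inequality follows since \((a+1)^{2}\geq 4a\) implies \((a+1)^{3}\geq 4a(a+1)\geq 4a\). Hence \(h\) is convex with minimal value \(h(1)=0\), so \(h\geq 0\) on \((0,\infty)\).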
By choosing \(a=p^{\omega}_{s,s+t}(X_{s},y)/p^{\omega}_{0,s+t}(0,y)\), we obtain
\[\mathrm{E}^{\omega}_{0,0}\big{[}\Delta^{\omega}_{s,t}(0,X_{s})^{2} \big{]}\;\leq\;2\,\sum_{y\in\mathbb{Z}^{d}}\mathrm{E}^{\omega}_{0,0}\bigg{[}p^{\omega}_{s,s+t}(X_{s},y)\,\ln\frac{p^{\omega}_{s,s+t}(X_{s},y)}{p^{\omega}_{0,s+t}(0,y)}\bigg{]}\] \[\qquad\qquad-\ 2\,\sum_{y\in\mathbb{Z}^{d}}\Big{(}\mathrm{E}^{ \omega}_{0,0}\big{[}p^{\omega}_{s,s+t}(X_{s},y)\big{]}\,-\,\mathrm{E}^{\omega}_{0,0}\big{[}p^{\omega}_{0,s+t}(0,y)\big{]}\Big{)}\] \[\qquad\qquad=\ 2\,\sum_{y\in\mathbb{Z}^{d}}\Big{(}\phi\big{(}p^{ \omega}_{0,s+t}(0,y)\big{)}\,-\,\mathrm{E}^{\omega}_{0,0}\Big{[}\phi\big{(}p^ {\omega}_{s,s+t}(X_{s},y)\big{)}\Big{]}\Big{)},\]
where we used that the Chapman-Kolmogorov equation implies
\[\mathrm{E}^{\omega}_{0,0}\Big{[}p^{\omega}_{s,s+t}(X_{s},y)\, \ln p^{\omega}_{0,s+t}(0,y)\Big{]}\] \[\qquad=\ \sum_{x^{\prime}\in\mathbb{Z}^{d}}p^{\omega}_{0,s}(0,x^{ \prime})\,p^{\omega}_{s,s+t}(x^{\prime},y)\,\ln p^{\omega}_{0,s+t}(0,y)\ =\ -\phi\big{(}p^{\omega}_{0,s+t}(0,y)\big{)}.\]
By taking expectation with respect to \(\mathbb{P}\) and using (2.5), the assertion follows.
(ii) This follows from part (i) combined with (2.9).
### Discrete space-derivatives of the heat kernel
The main objective in this subsection is to prove Theorem 1.6.
Proof of Theorem 1.6.: (i) In view of (1.5) and Assumption 1.1-(i), it suffices to prove (1.11) for \(y=0\). The proof comprises two steps.
_Step 1._ We first show that, for any \(x,y\in\mathbb{Z}^{d}\), \(t>0\) and \(s>0\),
\[\mathbb{E}\Big{[}p_{0,s}^{\omega}(0,x)\,\big{|}p_{s,s+t}^{\omega}(x,y)-p_{0,s+ t}^{\omega}(0,y)\big{|}^{2}\Big{]}\ \leq\ c\,\frac{s}{(1\lor t)^{d+1}}. \tag{2.14}\]
By using the Chapman-Kolmogorov equation and the Cauchy-Schwarz inequality we find that
\[\big{|}p_{s,s+t}^{\omega}(x,y)\,-\,p_{0,s+t}^{\omega}(0,y)\big{|}\;\leq\;\sum_{z\in\mathbb{Z}^{d}}\big{|}p_{s,s+t/2}^{\omega}(x,z )\,-\,p_{0,s+t/2}^{\omega}(0,z)\big{|}\,p_{s+t/2,s+t}^{\omega}(z,y)\] (2.15) \[\qquad\leq\;\Delta^{\omega}_{s,t/2}(0,x)\,\bigg{(}\sum_{z\in\mathbb{Z}^{d}}\big{(}p_{s,s+t/2}^{\omega}(x,z)+p_{0,s+t/2}^{\omega}(0,z)\big{)}\,p_{s+t/2,s+t}^{\omega}(z,y)^{2}\bigg{)}^{\!1/2}.\]
Note that, due to Assumption 1.5, \(\lim_{s\downarrow 0}s^{-1}p^{\omega}_{0,s}(0,x)=\omega_{0}(0,x)\geq C_{2}>K\) for \(\mathbb{P}\)-a.e. \(\omega\). Since \(|s^{-1}p^{\omega}_{0,s}(0,x)\lor K-s^{-1}p^{\omega}_{0,s}(0,x)|\leq K\) for all \(s\geq 0\), Lebesgue's dominated convergence theorem implies that
\[\lim_{s\downarrow 0}\mathbb{E}\Big{[}\big{|}s^{-1}p^{\omega}_{0,s}(0,x)\lor K-s^ {-1}p^{\omega}_{0,s}(0,x)\big{|}\Big{]}\ =\ 0. \tag{2.19}\]
Likewise, we obtain that
\[\limsup_{s\searrow 0}\mathbb{E}\Big{[}\big{|}p^{\omega}_{s,s+t}(0,y ^{\prime})-p^{\omega}_{0,s+t}(0,y^{\prime})\big{|}^{2}\Big{]}\\ =\ \limsup_{s\searrow 0}\mathbb{E}\bigg{[}\bigg{(}\frac{p^{ \omega}_{0,s}(0,0)\lor K}{p^{\omega}_{0,s}(0,0)\lor K}\bigg{)}\,\big{|}p^{ \omega}_{s,s+t}(0,y^{\prime})-p^{\omega}_{0,s+t}(0,y^{\prime})\big{|}^{2} \bigg{]}\\ \stackrel{{\eqref{eq:1.1}}}{{\leq}}\limsup_{s \searrow 0}\frac{1}{K}\left(\frac{cs}{(1\lor t)^{d+1}}+\mathbb{E}\Big{[} \big{|}p^{\omega}_{0,s}(0,0)\lor K-p^{\omega}_{0,s}(0,0)\big{|}\Big{]}\right) \ =\ 0, \tag{2.20}\]
where we used in the last step again Lebesgue's dominated convergence theorem, since \(|p^{\omega}_{0,s}(0,0)\lor K-p^{\omega}_{0,s}(0,0)|\leq K\) for all \(s\geq 0\), and \(\lim_{s\downarrow 0}p^{\omega}_{0,s}(0,0)=1>K\). Thus, (1.11) follows from (2.18), (2.19) and (2.20).
In order to show the last claim note that
\[\mathbb{E}\big{[}|\nabla^{1}_{x}p^{\omega}_{0,t}(y^{\prime},y)|^{2}\big{]}\ =\ \mathbb{E}\big{[}|\nabla^{1}_{x}p^{\omega}_{-t,0}(y^{\prime},y)|^{2}\big{]}\ =\ \mathbb{E}\big{[}|\nabla^{2}_{x}p^{\tilde{\omega}}_{0,t}(y,y^{\prime})|^{2} \big{]}.\]
Thus, the assertion follows by the same arguments as in Steps 1 and 2.
(ii) Again, in view of (1.5) and Assumption 1.1-(i), it suffices to prove (1.12) for \(y=0\). By interpolation, the assertion (1.12) follows once we have shown that
\[\bigg{(}\sum_{y^{\prime}\in\mathbb{Z}^{d}}\mathbb{E}\Big{[}\big{|}\nabla^{1}_ {x}\,p^{\omega}_{0,t}(y,y^{\prime})\big{|}^{p}\Big{]}\bigg{)}^{1/p}\ \leq\ \begin{cases}C_{4}\,(1\lor t)^{-1/2},&p=1\\ C_{4}\,(1\lor t)^{-(d+2)/4},&p=2.\end{cases}\]
For \(p=2\) the proof of (1.12) goes literally along the lines of the proof of Theorem 1.6-(i), except for the fact that (2.14) has to be replaced by
\[\sum_{y\in\mathbb{Z}^{d}}\mathbb{E}\Big{[}p^{\omega}_{0,s}(0,x)\,\big{|}p^{ \omega}_{s,s+t}(x,y)-p^{\omega}_{0,s+t}(0,y)\big{|}^{2}\Big{]}\ \leq\ c\,\frac{s}{(1\lor t)^{d/2+1}},\ \ \ \forall\,x\in\mathbb{Z}^{d},\ s>0,\]
which follows immediately by summing (2.17) over all \(y\in\mathbb{Z}^{d}\) combined with (2.13). For \(p=1\) we obtain instead of (2.14) the following estimate
\[\sum_{y\in\mathbb{Z}^{d}}\mathbb{E}\Big{[}p^{\omega}_{0,s}(0,x)\,\big{|}p^{ \omega}_{s,s+t}(x,y)-p^{\omega}_{0,s+t}(0,y)\big{|}\Big{]}\ \leq\ c\,\sqrt{\frac{s}{1\lor t}},\ \ \ \ \ \forall\,x\in\mathbb{Z}^{d},\ s>0,\]
by first summing (2.15) over all \(y\in\mathbb{Z}^{d}\), and then applying the Cauchy-Schwarz inequality and (2.13). By a minor modification of the second step in the proof of Theorem 1.6-(i), the desired claim (1.12) for \(p=1\) follows. Finally, we use that \(\mathbb{E}[|\nabla^{1}_{x}p^{\omega}_{0,t}(y^{\prime},y)|^{p}]=\mathbb{E}[| \nabla^{2}_{x}p^{\tilde{\omega}}_{0,t}(y,y^{\prime})|^{p}]\) to prove the last result.
(iii) The proof relies on the same argument as in [17, Proof of (1.5b)]. First, by using the stationarity of \(\mathbb{P}\) with respect to space-shifts and (1.5) we find that
\[\mathbb{E}\Big{[}\nabla_{x}^{2}\nabla_{x^{\prime}}^{2}\,p_{0,t}^{\omega}(y,y^{ \prime})\Big{]}\ =\ \mathbb{E}\Big{[}\nabla_{-x}^{1}\nabla_{x^{\prime}}^{2}\,p_{0,t}^{\omega}(y,y ^{\prime})\Big{]}. \tag{2.21}\]
Hence, it is enough to prove (1.14). By using the Chapman-Kolmogorov equation, we get that
\[\nabla_{-x}^{1}\nabla_{x^{\prime}}^{2}\,p_{0,t}^{\omega}(y,y^{\prime})\ =\ \sum_{z\in\mathbb{Z}^{d}}\Big{(}\nabla_{-x}^{1}\,p_{0,t/2}^{\omega}(y,z)\Big{)} \Big{(}\nabla_{x^{\prime}}^{2}\,p_{t/2,t}^{\omega}(z,y^{\prime})\Big{)}. \tag{2.22}\]
By applying the Cauchy-Schwarz inequality, we obtain that
\[\mathbb{E}\Big{[}\big{|}\nabla_{-x}^{1}\nabla_{x^{\prime}}^{2}\,p _{0,t}^{\omega}(y,y^{\prime})\big{|}\Big{]}\] \[\qquad\leq\ \bigg{(}\sum_{z}\mathbb{E}\Big{[}\big{|}\nabla_{-x}^{1} \,p_{0,t/2}^{\omega}(y,z)\big{|}^{2}\Big{]}\bigg{)}^{\!\!1/2}\bigg{(}\sum_{z} \mathbb{E}\Big{[}\big{|}\nabla_{x^{\prime}}^{2}\,p_{t/2,t}^{\omega}(z,y^{ \prime})\big{|}^{2}\Big{]}\bigg{)}^{\!\!1/2},\]
and (1.14) follows from (1.12).
_Remark 2.6_.: In [17, Equation (1.6)], the following time-derivative estimate is given when the time-dependent conductances are uniformly elliptic,
\[\Big{|}\frac{\partial}{\partial t}\bar{p}_{0,t}(0,x)\Big{|}\ \leq\ c_{1}\,t^{-d/2-1}\exp \bigl{(}-c_{2}d(0,x)^{2}/t\bigr{)}.\]
However, in the case where the conductances are unbounded from above, their proof does not work, and we do not know how to obtain even the on-diagonal estimate.
### Off-diagonal upper bound and Green kernel estimates
In this subsection, we consider annealed off-diagonal upper bounds. For this purpose, we introduce the following deterministic kernel.
**Definition 2.7**.: For some \(\alpha\in(0,\infty)\) define, for any \(t>0\) and \(y\in\mathbb{Z}^{d}\),
\[k_{\alpha}(t,y)\ :=\ \begin{cases}|y|^{-d/2-\alpha/2},&\text{ if }t\leq|y|\\ (1\lor t)^{-d/2}\left(1\lor\frac{|y|}{\sqrt{t}}\right)^{\!\!-\alpha},&\text{ if }t\geq|y|.\end{cases} \tag{2.23}\]
Notice that the above defined kernel is stable under convolution in the following sense. Since the proof is rather technical we postpone it to Appendix B.
**Lemma 2.8**.: _Suppose that \(\alpha>d\). Then, there exists \(C_{9}\equiv C_{9}(d,\alpha)<\infty\) such that, for any \(y\in\mathbb{Z}^{d}\) and \(t>0\),_
\[\sum_{z\in\mathbb{Z}^{d}}k_{\alpha}(t/2,z)\,k_{\alpha}(t/2,y-z)\ \leq\ C_{9}\,k_{\alpha}(t,y). \tag{2.24}\]
Now, let us impose the following quenched assumption on the off-diagonal behavior of the heat kernel.
**Assumption 2.9**.: _For some \(\alpha,\beta\in(0,\infty)\) assume that there exist non-random constants \(c_{1},c_{2}\in(0,\infty)\) and a positive random variable \(N(\omega)\) such that the following holds true: for \(\mathbb{P}\)-a.e. \(\omega\) and for all \(s\geq 0\), \(t>0\) and \(y,y^{\prime}\in\mathbb{Z}^{d}\) such that \(|y-y^{\prime}|\vee\sqrt{t}\geq N_{s,y}(\omega):=N(\omega)\circ\tau_{s,y}\),_
\[p_{s,s+t}^{\omega}(y,y^{\prime})\;\leq\;c_{1}\,k_{\alpha}(t,y^{ \prime}-y). \tag{2.25}\]
_Moreover,_
\[\mathbb{P}\big{[}N_{s,y}(\omega)\geq L\big{]}\;\leq\;c_{2}\,L^{- \beta},\quad\forall\,L>0. \tag{2.26}\]
_Remark 2.10_.:
1. In the so-called Poissonian regime, namely for \(t\leq d(y,y^{\prime})\), usually the following stronger estimate holds (see for instance [4, Lemma 1.1, Theorem 1] and [6, Theorem 1.2]). \[p_{s,s+t}^{\omega}(y,y^{\prime})\;\leq\;c_{1}\exp\big{(}-c_{2}d (y,y^{\prime})(1\vee\log(d(y,y^{\prime})/t))\big{)}.\] However, in the following estimates we only need the weaker estimate in (2.25).
2. For time-independent cases, this assumption (in fact, a stronger Gaussian bound) holds for supercritical percolation (see [4, Lemma 1.1, Theorem 1]) and for the VSRW for i.i.d. random conductances bounded from below (see [6, Theorem 1.2]). For the time-dependent case, the assumption holds if the conductances are uniformly elliptic.
**Example 2.11**.: Consider \(\omega\equiv\big{\{}\omega_{t}(e):e\in E^{d},\,t\in\mathbb{R}\big{\}}\) satisfying Assumptions 1.1 and 1.5 and such that there exist i.i.d. \(\bar{\omega}\equiv\big{\{}\bar{\omega}(e):e\in E^{d}\big{\}}\) and \(0<c<C<\infty\) with
\[\mathbb{P}\big{[}c\,\bar{\omega}(e)\leq\omega_{t}(e)\leq C\,\bar {\omega}(e),\quad\forall\,e\in E^{d},\,t\in\mathbb{R}\big{]}\;=\;1. \tag{2.27}\]
Then Assumption 2.9 holds, so the results in this section are applicable. Indeed, if we define
\[\tilde{d}(x,y)\;=\;\inf\!\left\{\sum\nolimits_{i=1}^{n}\bar{ \omega}(e_{i})^{-1/2}\right\}\]
for distinct points \(x,y\in\mathbb{Z}^{d}\), where the infimum is taken over paths \((e_{1},\cdots,e_{n})\) from \(x\) to \(y\), then we can prove (4.6)-(4.8) of [6] (which are stronger than Assumption 2.9) by the same proof as in [6, Sections 2 and 4] with suitable changes of the constants in the inequalities.
**Lemma 2.12**.: _Suppose that Assumptions 1.1 and 1.5 hold and let \(p\in[1,\infty)\). Further, let Assumption 2.9 be satisfied for some \(\alpha>0\) and \(\beta\geq p\alpha/2\). Then, there exists \(C_{10}<\infty\) such that for any \(y,y^{\prime}\in\mathbb{Z}^{d}\) and \(t>0\)_
\[\mathbb{E}\big{[}p_{0,t}^{\omega}(y,y^{\prime})^{p}\big{]}^{1/p} \;\leq\;C_{10}\,k_{\alpha}(t,y^{\prime}-y). \tag{2.28}\]
Proof.: In view of (1.5) and Assumption 1.1-(i) it suffices to prove (2.28) for \(y=0\) and \(y^{\prime}\in\mathbb{Z}^{d}\) such that \(|y^{\prime}|\geq 1\) and \(0<t\leq|y^{\prime}|^{2}\); in the remaining cases \(k_{\alpha}(t,y^{\prime})=(1\lor t)^{-d/2}\), so that (2.28) follows directly from the on-diagonal bound of Lemma 2.3. For this purpose, consider the event
\(A_{0}(\omega):=\{N_{0}(\omega)\leq|y^{\prime}|\vee\sqrt{t}\}\). Then, by Minkowski's inequality, we get that
\[\mathbb{E}\big{[}p_{0,t}^{\omega}(0,y^{\prime})^{p}\big{]}^{1/p} \stackrel{{\eqref{eq:2.1}}}{{\leq}}\mathbb{E}\big{[}p_{0,t}^{ \omega}(0,y^{\prime})^{p}\mathds{1}\!\!1_{A_{0}(\omega)}\big{]}^{1/p}+c\,(1 \lor t)^{-d/2}\,\mathbb{P}\big{[}A_{0}^{c}(\omega)\big{]}^{1/p}\] \[\stackrel{{\eqref{eq:2.2}}}{{\leq}}c\,k_{\alpha}(t, y^{\prime})+c\,(1\lor t)^{-d/2}|y^{\prime}|^{-\beta/p}.\]
Since \(\beta\geq p\alpha/2\), the assertion follows.
**Proposition 2.13**.: _Suppose that Assumptions 1.1 and 1.5 hold. Additionally, let Assumption 2.9 be satisfied for some \(\alpha\geq 2\) and \(\beta>d+\alpha/2\). Then, there exists \(C_{11}<\infty\) such that for any \((0,x)\in E^{d}\), \(y,y^{\prime}\in\mathbb{Z}^{d}\) and \(t>0\),_
\[\mathbb{E}\Big{[}\big{|}\nabla_{x}^{1}\,p_{0,t}^{\omega}(y,y^{\prime})\big{|}^ {2}\Big{]}^{1/2}\;\leq\;C_{11}(1\lor t)^{-1/2}k_{\alpha/2}(t,y^{\prime}-y). \tag{2.29}\]
Proof.: Again, in view of (1.5) and Assumption 1.1-(i) as well as Theorem 1.6-(i), it suffices to prove (2.29) for \(y=0\) and \(y^{\prime}\in\mathbb{Z}^{d}\) such that \(|y^{\prime}|\geq 1\) and \(0<t\leq|y^{\prime}|^{2}\). Set \(A_{z}(\omega):=\{N_{z}(\omega)\leq|z-y^{\prime}|\vee\sqrt{t}\}\), \(z\in\{0,x\}\). We proceed by considering the cases \(0<t\leq|y^{\prime}|\) and \(|y^{\prime}|\leq t\leq|y^{\prime}|^{2}\) separately.
First, consider the regime \(0<t<|y^{\prime}|\). Clearly, for \(|y^{\prime}|=1\) the assertion follows by bounding the heat kernel from above by one. Therefore, it remains to consider the case that \(|y^{\prime}|>1\). By Minkowski's inequality, we get for any \((0,x)\in E^{d}\)
\[\mathbb{E}\Big{[}\big{|}\nabla_{x}^{1}\,p_{0,t}^{\omega}(0,y^{ \prime})\big{|}^{2}\Big{]}^{1/2}\] \[\stackrel{{\eqref{eq:2.1}}}{{\leq}}\mathbb{E}\Big{[} \big{|}\nabla_{x}p_{0,t}^{\omega}(0,y^{\prime})\big{|}^{2}\,\mathds{1}\!\!1_{A_ {0}(\omega)\cap A_{x}(\omega)}\Big{]}^{1/2}+\,c\,(1\lor t)^{-d/2}\,\,\mathbb{P} \big{[}A_{0}^{c}(\omega)\cup A_{x}^{c}(\omega)\big{]}^{1/2}\] \[\stackrel{{\eqref{eq:2.1}}}{{\leq}}c\,|y^{\prime}|^{ -d/2-\alpha/2}+\,c\,(1\lor t)^{-d/2}\,|y^{\prime}|^{-\beta/2},\]
where we used in the last step that \(\sqrt{t}\vee|y^{\prime}-x|\geq(\sqrt{t}\vee|y^{\prime}|)/2\). Indeed, since \(y^{\prime}\in\mathbb{Z}^{d}\) with \(|y^{\prime}|>1\) and \((0,x)\in E^{d}\) we have that \(|y^{\prime}-x|\geq 1=|x|\) which implies that
\[\sqrt{t}\vee|y^{\prime}|\;\leq\;\sqrt{t}\vee\big{(}|y^{\prime}-x|+|x|\big{)}\; \leq\;2\big{(}\sqrt{t}\vee|y^{\prime}-x|\big{)}.\]
Since \(\alpha\geq 2\) and \(\beta\geq d+\alpha/2\) we therefore obtain the desired bound, namely
\[\mathbb{E}\Big{[}\big{|}\nabla_{x}^{1}\,p_{0,t}^{\omega}(0,y^{\prime})\big{|}^ {2}\Big{]}^{1/2}\;\leq\;c(1\lor t)^{-1/2}\,k_{\alpha/2}(t,y^{\prime}).\]
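To see this explicitly: in the present regime one has \(k_{\alpha/2}(t,y^{\prime})=|y^{\prime}|^{-d/2-\alpha/4}\), and one checks that
\[|y^{\prime}|^{-d/2-\alpha/2}\;\leq\;(1\lor t)^{-1/2}\,|y^{\prime}|^{-d/2-\alpha/4}\qquad\text{and}\qquad(1\lor t)^{-d/2}\,|y^{\prime}|^{-\beta/2}\;\leq\;(1\lor t)^{-1/2}\,|y^{\prime}|^{-d/2-\alpha/4},\]
using \(1\lor t\leq|y^{\prime}|\), \(\alpha\geq 2\) and \(\beta\geq d+\alpha/2\).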
Next, consider the case that \(|y^{\prime}|\leq t\leq|y^{\prime}|^{2}\). By using similar arguments as in the proof of Theorem 1.6-(i) (_Step 2_), we first obtain that
\[\mathbb{E}\Big{[}\big{|}\nabla_{x}^{1}\,p_{0,t}^{\omega}(0,y^{\prime})\big{|}^ {2}\Big{]}^{1/2}\stackrel{{\eqref{eq:2.2}}}{{\leq}}\limsup_{s \downarrow 0}\mathbb{E}\Big{[}\big{|}p_{s,s+t}^{\omega}(x,y^{\prime})-p_{0,s+t}^{ \omega}(0,y^{\prime})\big{|}^{2}\,\Big{]}^{1/2} \tag{2.30}\]
Thus, by using again Minkowski's inequality, (2.7) and (2.25), we further get that, for any \(s\geq 0\),
\[\limsup_{s\downarrow 0}\mathbb{E}\Big{[}\big{|}p^{\omega}_{s,s+t}(x,y^{\prime})-p^{\omega}_{0,s+t}(0,y^{\prime})\big{|}^{2}\,\Big{]}^{1/2}\] \[\qquad\leq\;\limsup_{s\downarrow 0}\mathbb{E}\Big{[}\big{|}p^{\omega}_{s,s+t}(x,y^{\prime})-p^{\omega}_{0,s+t}(0,y^{\prime})\big{|}^{2}\,\mbox{\rm 1\kern-3.5pt1}_{A_{0}(\omega)\cap A_{x}(\omega)}\Big{]}^{1/2}+\,c\,t^{-d/2}\,|y^{\prime}|^{-\beta/2}\] \[\qquad\leq\;\limsup_{s\downarrow 0}\biggl{(}\frac{1}{Ks}\,\mathbb{E}\Big{[}p^{\omega}_{0,s}(0,x)\,\big{|}p^{\omega}_{s,s+t}(x,y^{\prime})-p^{\omega}_{0,s+t}(0,y^{\prime})\big{|}^{2}\,\mbox{\rm 1\kern-3.5pt1}_{A_{0}(\omega)\cap A_{x}(\omega)}\Big{]}\biggr{)}^{\!1/2}+\,c\,t^{-d/2}\,|y^{\prime}|^{-\beta/2}.\]
3. _If_ \(d\geq 1\) _and Assumption_ 2.9 _is satisfied for_ \(\alpha>2d\) _and_ \(\beta>d+\alpha/2\)_, then there exists_ \(C_{14}<\infty\) _such that, for any_ \((0,x),(0,x^{\prime})\in E^{d}\) _and_ \(y,y^{\prime}\in\mathbb{Z}^{d}\) _with_ \(y\neq y^{\prime}\)_,_ \[\mathbb{E}\Bigg{[}\Big{|}\int_{0}^{\infty}\nabla_{x}^{1}\,\nabla_{x^{\prime}}^ {2}\,p_{0,t}^{\omega}(y,y^{\prime})\,\mathrm{d}t\Bigg{|}\Bigg{]}\ \leq\ C_{14}\,|y-y^{\prime}|^{-d}.\] (2.35)
Proof.: First of all notice that by Lemma 2.3 and Theorem 1.6 the integrals appearing on the left-hand side of (2.33), (2.34) and (2.35), respectively, are well defined. Moreover, in view of (1.5) and Assumption 1.1-(i) it suffices to show the assertion for \(y=0\). Further, set \(r:=|y^{\prime}|\) and assume w.l.o.g. that \(r\geq 4\).
(i) By applying Minkowski's integral inequality, cf. [22, Theorem 202], together with the estimate (2.28), we obtain that, for any \(p\in[1,\infty)\), \(\alpha>0\) and \(\beta\geq p\alpha/2\)
\[\mathbb{E}\Bigg{[}\Big{|}\int_{0}^{\infty}p_{0,t}^{\omega}(0,y^{\prime})\, \mathrm{d}t\Big{|}^{p}\Bigg{]}^{1/p}\ \leq\ \int_{0}^{\infty}\mathbb{E}\Big{[}\big{|}p_{0,t}^{\omega}(0,y^{\prime})\big{|} ^{p}\Big{]}^{1/p}\mathrm{d}t\overset{\eqref{eq:2.28}}{\leq}C_{10}\,\int_{0}^{ \infty}k_{\alpha}(t,y^{\prime})\,\mathrm{d}t.\]
Thus, by estimating the resulting integral on the intervals \([0,r)\), \([r,r^{2})\) and \([r^{2},\infty)\) separately, we get, for any \(\alpha>d-2\) and \(\beta\geq p\alpha/2\)
\[\int_{0}^{\infty}k_{\alpha}(t,y^{\prime})\,\mathrm{d}t\] \[\qquad\leq\ \int_{0}^{r}r^{-d/2-\alpha/2}\,\mathrm{d}t+\,\int_{r }^{r^{2}}t^{-d/2}\,\bigg{(}\frac{r}{\sqrt{t}}\bigg{)}^{-\alpha}\mathrm{d}t+\, \int_{r^{2}}^{\infty}t^{-d/2}\,\mathrm{d}t\ \leq\ c\,r^{-(d-2)},\]
which concludes the proof of (2.33).
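For the reader's convenience, we record the elementary estimates behind the last display; here we use that \(r\geq 4\), \(\alpha>d-2\) and \(d\geq 3\), the latter being needed for the convergence of the last integral:
\[\int_{0}^{r}r^{-d/2-\alpha/2}\,\mathrm{d}t\;=\;r^{1-d/2-\alpha/2}\;\leq\;r^{-(d-2)},\qquad\int_{r^{2}}^{\infty}t^{-d/2}\,\mathrm{d}t\;=\;\frac{2}{d-2}\,r^{-(d-2)},\]
\[\int_{r}^{r^{2}}t^{-d/2}\Bigl{(}\frac{r}{\sqrt{t}}\Bigr{)}^{-\alpha}\mathrm{d}t\;=\;r^{-\alpha}\int_{r}^{r^{2}}t^{(\alpha-d)/2}\,\mathrm{d}t\;\leq\;\frac{2}{\alpha-d+2}\,r^{-\alpha}\,\bigl{(}r^{2}\bigr{)}^{(\alpha-d)/2+1}\;=\;\frac{2}{\alpha-d+2}\,r^{-(d-2)}.\]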
(ii) By applying again Minkowski's integral inequality, we first get that
\[\mathbb{E}\Bigg{[}\Big{|}\int_{0}^{\infty}\nabla_{x}^{1}\,p_{0,t }^{\omega}(0,y^{\prime})\Big{|}^{2}\Bigg{]}^{1/2}\\ \leq\ \int_{0}^{r^{2}}\mathbb{E}\Big{[}\big{|}\nabla_{x}^{1}\,p_{0,t }^{\omega}(0,y^{\prime})\big{|}^{2}\Big{]}^{1/2}\,\mathrm{d}t\,+\int_{r^{2}}^ {\infty}\mathbb{E}\Big{[}\big{|}\nabla_{x}^{1}\,p_{0,t}^{\omega}(0,y^{\prime} )\big{|}^{2}\Big{]}^{1/2}\,\mathrm{d}t. \tag{2.36}\]
Thus, by Proposition 2.13, we find that
\[\int_{r^{2}}^{\infty}\mathbb{E}\Big{[}\big{|}\nabla_{x}^{1}\,p_{0,t}^{\omega} (0,y^{\prime})\big{|}^{2}\Big{]}^{1/2}\,\mathrm{d}t\ \leq\ \int_{r^{2}}^{\infty}C_{11}\,t^{-(d+1)/2}\,\mathrm{d}t\ \leq\ c\,r^{-(d-1)}.\]
Likewise, by splitting the domain of integration of the first integral on the right-hand side of (2.36) into the contributions coming from the intervals \([0,r)\) and \([r,r^{2}]\) and using that \(\alpha>2(d-1)\), we obtain
\[\int_{0}^{r^{2}}\mathbb{E}\Big{[}\big{|}\nabla_{x}^{1}\,p_{0,t}^{ \omega}(0,y^{\prime})\big{|}^{2}\Big{]}^{1/2}\,\mathrm{d}t\\ \leq\ c\left(\int_{0}^{r}(1\lor t)^{-1/2}\,r^{-d/2-\alpha/4}\, \mathrm{d}t\,+\int_{r}^{r^{2}}t^{-(d+1)/2}\bigg{(}\frac{r}{\sqrt{t}}\bigg{)}^ {-\alpha/2}\,\mathrm{d}t\right)\ \leq\ c\,r^{-(d-1)}.\]
Hence, by combining the estimates above, the assertion (2.34) follows.
(iii) First, an application of (2.22) and the Cauchy-Schwarz inequality yields that
\[\mathbb{E}\Bigg{[}\bigg{|}\int_{0}^{\infty}\nabla^{1}_{-x}\,\nabla^ {2}_{x^{\prime}}\,p^{\omega}_{0,t}(0,y^{\prime})\,\mathrm{d}t\bigg{|}\Bigg{]}\] \[\qquad\leq\ \int_{0}^{\infty}\sum_{z\in\mathbb{Z}^{d}}\mathbb{E} \Big{[}\big{|}\nabla^{1}_{x}\,p^{\omega}_{0,t/2}(0,z)\big{|}^{2}\Big{]}^{1/2} \,\mathbb{E}\Big{[}\big{|}\nabla^{2}_{x^{\prime}}\,p^{\omega}_{t/2,t}(z,y^{ \prime})\big{|}^{2}\Big{]}^{1/2}\,\mathrm{d}t\] \[\overset{\eqref{eq:2.24}}{\leq}C^{2}_{11}\,\int_{0}^{\infty}(1 \lor t)^{-1}\sum_{z\in\mathbb{Z}^{d}}k_{\alpha/2}(t/2,z)\,k_{\alpha/2}(t/2,y^{ \prime}-z)\,\mathrm{d}t\] \[\overset{\eqref{eq:2.24}}{\leq}C_{9}C^{2}_{11}\,\int_{0}^{\infty }(1\lor t)^{-1}\,k_{\alpha/2}(t,y^{\prime})\,\mathrm{d}t.\]
In order to estimate the remaining integral, we decompose the domain of integration into the contributions coming from the intervals \([0,r)\), \([r,r^{2})\) and \([r^{2},\infty)\). Since \(\alpha>2d\), we get that
\[\int_{0}^{\infty} (1\lor t)^{-1}\,k_{\alpha/2}(t,y^{\prime})\,\mathrm{d}t\] \[\leq\ \int_{0}^{r}(1\lor t)^{-1}r^{-d/2-\alpha/4}\,\mathrm{d}t\,+ \int_{r}^{r^{2}}t^{-d/2-1}\left(\frac{r}{\sqrt{t}}\right)^{-\alpha/2}\mathrm{ d}t\,+\int_{r^{2}}^{\infty}t^{-d/2-1}\,\mathrm{d}t\] \[\leq\ c\,r^{-d},\]
which concludes the proof of (2.35).
## 3. Annealed CLT and annealed local limit theorems
In this section, we discuss the annealed CLT and establish annealed local limit theorems under mild conditions.
We first give a proof of Proposition 1.4. For any \(t\in\mathbb{R}\), let \(T_{t}\colon L^{2}(\Omega,\mathbb{P})\to L^{2}(\Omega,\mathbb{P})\) be the map defined by \(T_{t}\varphi:=\varphi\circ\tau_{t,0}\). By Assumption 1.1-(ii), \(\{T_{t}:t\in\mathbb{R}\}\) is a strongly continuous contraction semi-group on \(L^{2}(\Omega,\mathbb{P})\), cf. [23, Section 7.1]. Its infinitesimal generator \(D_{0}\colon\mathrm{dom}(D_{0})\subset L^{2}(\Omega,\mathbb{P})\to L^{2}( \Omega,\mathbb{P})\) is defined by
\[D_{0}\varphi\ :=\ \lim_{t\to 0}\frac{1}{t}\,\big{(}T_{t}\varphi-\varphi \big{)}, \tag{3.1}\]
where \(\mathrm{dom}(D_{0})\) denotes the set of all \(\varphi\in L^{2}(\Omega,\mathbb{P})\) such that the limit above exists. Notice that \(\mathrm{dom}(D_{0})\) is dense in \(L^{2}(\Omega,\mathbb{P})\). Moreover, by Assumption 1.1-(i) it holds that \(\langle D_{0}\varphi,\psi\rangle_{L^{2}(\mathbb{P})}=-\langle\varphi,D_{0} \psi\rangle_{L^{2}(\mathbb{P})}\) and \(\langle D_{0}\varphi,\varphi\rangle_{L^{2}(\mathbb{P})}=0\) for all \(\varphi,\psi\in\mathrm{dom}(D_{0})\).
Consider the operator \(\mathcal{L}\colon\mathcal{D}(D_{0})\to L^{2}(\mathbb{P})\),
\[(\mathcal{L}\varphi)(\omega)\ =\ (D_{0}\varphi)(\omega)\,+\,\sum_{y\sim 0} \omega_{0}(0,y)\,\big{(}\varphi(\tau_{0,y}\omega)-\varphi(\omega)\big{)}, \tag{3.2}\]
and denote by \(\mathcal{L}^{*}\) the \(L^{2}(\mathbb{P})\)-adjoint operator of \(\mathcal{L}\). Let \(\mathcal{C}\) be a common core of \(\mathcal{L}\) and \(\mathcal{L}^{*}\). We define a semi-norm, \(\|\cdot\|_{H^{1}(\mathbb{P})}\), on \(\mathcal{C}\) by \(\|\varphi\|_{H^{1}(\mathbb{P})}^{2}:=\langle\varphi,-\mathcal{L}\varphi \rangle_{L^{2}(\mathbb{P})}\). Again,
Assumption 1.1-(i) ensures that
\[\|\varphi\|^{2}_{H^{1}(\mathbb{P})}\ =\ \frac{1}{2}\sum_{x\sim 0}\mathbb{E}\Big{[} \omega_{0}(0,x)\left(\varphi(\tau_{x}\omega)-\varphi(\omega)\right)^{2}\Big{]}, \qquad\varphi\in\mathcal{C}.\]
Let \(H^{1}(\mathbb{P})\) be the completion of \(\mathcal{C}\) with respect to \(\|\cdot\|_{H^{1}(\mathbb{P})}\) taken modulo suitable equivalence classes of elements of \(\mathcal{C}\). We write \(H^{-1}(\mathbb{P})\) to denote the dual of \(H^{1}(\mathbb{P})\). For \(\psi\in L^{2}(\mathbb{P})\), let
\[\|\psi\|^{2}_{H^{-1}(\mathbb{P})}\ :=\ \sup_{\varphi\in\mathcal{C}}\Bigl{(}2 \langle\varphi,\psi\rangle_{L^{2}(\mathbb{P})}-\|\varphi\|^{2}_{H^{1}(\mathbb{ P})}\Bigr{)}\ \in\ [0,\infty].\]
One may identify the Hilbert space \(H^{-1}(\mathbb{P})\) with the completion of equivalence classes of elements of \(\{\varphi\in H^{1}(\mathbb{P}):\|\varphi\|_{H^{-1}(\mathbb{P})}<\infty\}\) with respect to \(\|\cdot\|_{H^{-1}(\mathbb{P})}\). For more details we refer to [24, Section 2.2].
Given all these notions, we first prove Proposition 1.4, which states that the maximal mean displacement of the process up to time \(T>0\) is of order \(\sqrt{T}\).
Proof of Proposition 1.4.: For any \(i\in\{1,\ldots,d\}\), set \(\mathbb{R}\times\mathbb{Z}^{d}\ni(t,x)\mapsto f_{i}(t,x)=x^{i}\), where \(x^{i}\) is the \(i\)th component of \(x\). Since \(X\) is conservative, it follows that, for almost every \(\omega\),
\[M^{i}_{t}\ :=\ f_{i}(t,X_{t})-f_{i}(0,X_{0})-\int_{0}^{t}(\partial_{s}f_{i})(s,X_{s} )+\bigl{(}\mathcal{L}^{\omega}_{s}f_{i}(s,\cdot)\bigr{)}(X_{s})\ \mathrm{d}s \tag{3.3}\]
is a \((\mathrm{P}^{\omega}_{0,0},\{\mathcal{F}_{t}:t\geq 0\})\)-martingale with respect to the natural augmented filtration generated by the process \(X\). Note that the additive functional appearing on the right-hand side of (3.3) can be expressed as
\[\int_{0}^{t}(\partial_{s}f_{i})(s,X_{s})+\bigl{(}\mathcal{L}^{\omega}_{s}f_{i} (s,\cdot)\bigr{)}(X_{s})\ \mathrm{d}s\ =\ \int_{0}^{t}\delta^{i}(\omega)\circ\tau_{s,X_{s}}\ \mathrm{d}s,\]
where \(\delta^{i}(\omega)=\sum_{x\sim 0}\omega_{0}(0,x)x^{i}\) is the so-called local drift. Hence, by using Doob's \(L^{p}\)-inequality, we obtain
\[\mathbb{E}\biggl{[} \mathrm{E}^{\omega}_{0,0}\biggl{[}\sup_{0\leq t\leq T}\bigl{|}X^{ i}_{t}\bigr{|}\biggr{]}\biggr{]}\] \[\ \leq\ \mathbb{E}\bigl{[}\mathrm{E}^{\omega}_{0,0}\bigl{[} \langle M^{i}\rangle_{T}\bigr{]}\bigr{]}^{1/2}\,+\,\mathbb{E}\biggl{[}\mathrm{E }^{\omega}_{0,0}\biggl{[}\sup_{0\leq t\leq T}\biggl{|}\int_{0}^{t}\delta^{i}( \omega)\circ\tau_{s,X_{s}}\ \mathrm{d}s\biggr{|}\biggr{]}\biggr{]}.\]
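Let us briefly indicate the reasoning behind the first estimate above (the absolute constants arising here are absorbed into the constant \(c\) appearing below): by the triangle inequality, \(|X_{t}^{i}|\leq|M_{t}^{i}|+\big{|}\int_{0}^{t}\delta^{i}(\omega)\circ\tau_{s,X_{s}}\,\mathrm{d}s\big{|}\), while Doob's \(L^{2}\)-inequality and Jensen's inequality give
\[\mathbb{E}\Big{[}\mathrm{E}^{\omega}_{0,0}\Big{[}\sup_{0\leq t\leq T}|M^{i}_{t}|\Big{]}\Big{]}\;\leq\;\mathbb{E}\Big{[}\mathrm{E}^{\omega}_{0,0}\Big{[}\sup_{0\leq t\leq T}|M^{i}_{t}|^{2}\Big{]}^{1/2}\Big{]}\;\leq\;2\,\mathbb{E}\big{[}\mathrm{E}^{\omega}_{0,0}\big{[}\langle M^{i}\rangle_{T}\big{]}\big{]}^{1/2}.\]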
By Fubini's theorem and the stationarity of the process as seen from the particle, it follows that \(\mathbb{E}\bigl{[}\mathrm{E}^{\omega}_{0,0}\bigl{[}\langle M^{i}\rangle_{T} \bigr{]}\bigr{]}=T\sum_{x\sim 0}\mathbb{E}\left[\omega_{0}(0,x)(x^{i})^{2}\right]\). In order to derive an upper bound on the additive functional, set \(\delta^{i}_{K}(\omega):=\sum_{x\sim 0}(\omega_{0}(0,x)\wedge K)\,x^{i}\). Obviously, \(\delta^{i}_{K}\in L^{2}(\mathbb{P})\) for any \(K<\infty\). Moreover, in view of Assumption 1.1(i), we get, for
any \(\varphi\in H^{1}(\mathbb{P})\),
\[\langle\delta^{i}_{K},\varphi\rangle_{L^{2}(\mathbb{P})} =\ \frac{1}{2}\,\sum_{x\sim 0}\mathbb{E}\big{[}(\omega_{0}(0,x)\wedge K )(\varphi(\tau_{x}\omega)-\varphi(\omega))x^{i}\big{]}\] \[\leq\ \|\varphi\|_{H^{1}(\mathbb{P})}\,\bigg{(}\frac{1}{2}\sum_{x \sim 0}\mathbb{E}\big{[}\omega_{0}(0,x)(x^{i})^{2}\big{]}\bigg{)}^{1/2}.\]
Hence, \(\delta^{i}_{K}\in H^{-1}(\mathbb{P})\cap L^{2}(\mathbb{P})\) with \(\|\delta^{i}_{K}\|^{2}_{H^{-1}(\mathbb{P})}\leq\frac{1}{2}\sum_{x\sim 0} \mathbb{E}\big{[}\omega_{0}(0,x)(x^{i})^{2}\big{]}\). Thus, by [24, Lemma 2.4], we obtain that
\[\mathbb{E}\bigg{[}\mathrm{E}_{0,0}^{\omega}\bigg{[}\sup_{0\leq t \leq T}\bigg{(}\int_{0}^{t}\delta^{i}_{K}(\omega)\circ\tau_{s,X_{s}}\,\mathrm{ d}s\bigg{)}^{\!\!2}\bigg{]}\bigg{]}\ \leq\ 24T\,\|\delta^{i}_{K}\|^{2}_{H^{-1}(\mathbb{P})}.\]
Hence,
\[\mathbb{E}\bigg{[}\mathrm{E}_{0,0}^{\omega}\bigg{[}\sup_{0\leq t \leq T}\big{|}X^{i}_{t}\big{|}\bigg{]}\bigg{]}\] \[\leq\ c\,\sqrt{T}\,\bigg{(}\sum_{x\sim 0}\ \mathbb{E}\big{[} \omega_{0}(0,x)(x^{i})^{2}\big{]}\bigg{)}^{1/2}+T\ \mathbb{E}\big{[}|\delta^{i}_{K}(\omega)-\delta^{i}(\omega)|\big{]}.\]
Since \(\delta^{i}\in L^{1}(\mathbb{P})\), \(\mathbb{E}\big{[}|\delta^{i}_{K}(\omega)-\delta^{i}(\omega)|\big{]}\leq 2 \,\mathbb{E}\big{[}|\delta^{i}(\omega)|\big{]}\) and \(\delta^{i}_{K}(\omega)\to\delta^{i}(\omega)\) as \(K\to\infty\) for \(\mathbb{P}\)-a.e. \(\omega\), the assertion (1.7) follows from Lebesgue's dominated convergence theorem. Finally, for any \(t>0\), the tightness of the family \(\{X^{(n)}_{t}:n\in\mathbb{N}\}\) follows immediately from (1.7) by Markov's inequality.
In order to prove the annealed CLT, we follow the common approach for proving the corresponding quenched result by introducing the so-called _harmonic_ coordinates, \(\Phi:\Omega\times\mathbb{R}\times\mathbb{Z}^{d}\to\mathbb{R}^{d}\) and the _corrector_, \(\chi\colon\Omega\times\mathbb{R}\times\mathbb{Z}^{d}\to\mathbb{R}^{d}\). The latter solves component-wise the time-inhomogeneous Poisson equation
\[\partial_{t}u+\mathcal{L}^{\omega}_{t}u\ =\ \mathcal{L}^{\omega}_{t}\Pi,\]
where \(\Pi\) denotes the identity map on \(\mathbb{Z}^{d}\). As a consequence, the harmonic coordinates defined by
\[\Phi(\omega,t,x)\ =\ x-\chi(\omega,t,x) \tag{3.4}\]
are component-wise time-space harmonic. By inspecting the proofs of [1, Theorem 2.5 and Corollary 2.7], the existence of \(\Phi\) and \(\chi\) is ensured if Assumptions 1.1 and 1.5 hold true. Moreover, in view of [1, Proposition 3.3], we also know that, for any \(v\in\mathbb{R}^{d}\), \(\delta>0\), \(t\in(0,\infty)\) and \(K\in(0,\infty)\),
\[\lim_{n\to\infty}\frac{1}{n^{d+2}}\int_{0}^{tn^{2}}\!\!\sum_{x\in B(0,Kn)}\mathds{1}_{\{|v\cdot\chi(\omega,s,x)|\,\geq\,\delta n\}}\,\mathrm{d}s\;=\;0\qquad\text{for }\mathbb{P}\text{-a.e. }\omega. \tag{3.5}\]
Moreover, writing \(\Phi_{t}:=\Phi(\omega,t,X_{t})\), the process \((v\cdot\Phi_{t}:t\geq 0)\) is a \({\rm P}^{\omega}_{0,0}\)-martingale for \(\mathbb{P}\)-a.e. \(\omega\) with \(v\cdot\Phi_{0}=0\), whose quadratic variation process is given by
\[\langle v\cdot\Phi\rangle_{t}\;=\;\int_{0}^{t}\sum_{y}\omega_{0}(0,y)\big{(}v \cdot\Phi(\omega,0,y)\big{)}^{2}\circ\tau_{s,X_{s}}\,{\rm d}s.\]
In particular, by using the Cauchy-Schwarz inequality and the stationarity of the process as seen from the particle, we obtain that, for any \(0\leq s\leq t<\infty\),
\[\mathbb{E}\big{[}{\rm E}^{\omega}_{0,0}\big{[}|v\cdot\Phi_{t}-v \cdot\Phi_{s}|\big{]}\big{]}^{2}\;\leq\;\mathbb{E}\big{[}{\rm E}^{\omega}_{0,0 }\big{[}\langle v\cdot\Phi\rangle_{t}-\langle v\cdot\Phi\rangle_{s}\big{]} \big{]}\;=\;\big{(}v\cdot\Sigma^{2}v\big{)}(t-s), \tag{3.6}\]
where
\[v\cdot\Sigma^{2}v\;:=\;\mathbb{E}\bigg{[}{\sum}_{y\in\mathbb{Z}^{d}}\,\omega_ {0}(0,y)\,\big{(}v\cdot\Phi(\omega,0,y)\big{)}^{2}\bigg{]}.\]
Now, fix some \(t>0\) and \(v\in\mathbb{R}^{d}\). Then, from [1, Proposition 4.4] we deduce that, for \(\mathbb{P}\)-a.e. \(\omega\), the law of the diffusively rescaled random variable \(n^{-1}v\cdot\Phi_{tn^{2}}\) under \({\rm P}^{\omega}_{0,0}\) converges weakly to the centered Gaussian distribution \(\mathcal{N}(0,t(v\cdot\Sigma^{2}v))\). Hence, by applying Lebesgue's dominated convergence theorem, we obtain that the law of \(n^{-1}v\cdot\Phi_{tn^{2}}\) under \({\rm P}^{*}_{0,0}\) also converges weakly to \(\mathcal{N}(0,t(v\cdot\Sigma^{2}v))\). Therefore, by Slutsky's theorem, it suffices to show that, for any \(\delta>0\)
\[\lim_{n\to\infty}\mathbb{E}\Big{[}{\rm P}^{\omega}_{0,0}\Big{[} \Big{|}n^{-1}v\cdot\chi_{tn^{2}}\Big{|}>\delta\Big{]}\Big{]}\;=\;0 \tag{3.7}\]
in order to conclude that the law of \(v\cdot X_{t}^{(n)}\) under \({\mathbb{P}}^{*}_{0,0}\) converges to \(\mathcal{N}(0,t(v\cdot\Sigma^{2}v))\), which, by an application of the Cramér-Wold device, see e.g. [20, Theorem 3.10.6], completes the proof of Proposition 1.7.
In order to show (3.7), fix some \(\delta>0\) and \(\varepsilon>0\). Further, set \(K:=3C_{1}\sqrt{t}/\varepsilon\) and choose some \(\tau\in(0,t\wedge(\delta\varepsilon)^{2}/(16C_{6}^{2}))\). Then,
\[\mathbb{E}\Big{[}{\rm P}^{\omega}_{0,0}\Big{[}|n^{-1}v\cdot \chi_{tn^{2}}|>\delta\Big{]}\Big{]}\] \[\qquad\leq\;\mathbb{E}\Big{[}{\rm P}^{\omega}_{0,0}\bigg{[}\! \max_{0\leq s\leq tn^{2}}|X_{s}|>Kn\bigg{]}\bigg{]}\,+\,\mathbb{E}\Big{[}{\rm P }^{\omega}_{0,0}\Big{[}\Big{|}n^{-1}v\cdot\chi_{tn^{2}}\big{|}>\delta,T_{Kn}> tn^{2}\Big{]}\Big{]}\] \[\qquad\stackrel{{\eqref{eq:Cramer-Wold-device}}}{{\leq}} \frac{C_{1}\sqrt{t}}{K}\,+\,\mathbb{E}\Big{[}{\rm P}^{\omega}_{0,0}\Big{[} \Big{|}n^{-1}v\cdot\chi_{tn^{2}}\big{|}>\delta,T_{Kn}>tn^{2}\Big{]}\Big{]}, \tag{3.8}\]
where we recall that \(T_{Kn}=\inf\{t\geq 0:X_{t}\not\in B(0,Kn)\}\) denotes the first exit time from the ball \(B(0,Kn)\). Since \(\tau\in(0,t)\), we can bound the second term in (3.8) by
\[\mathbb{E}\Big{[}{\rm P}^{\omega}_{0,0}\Big{[}|n^{-1}v\cdot \chi_{tn^{2}}|>\delta,T_{Kn}>tn^{2}\Big{]}\Big{]}\] \[\qquad\leq\;\frac{2}{\delta\tau n}\int_{t-\tau}^{t}\mathbb{E} \Big{[}{\rm E}^{\omega}_{0,0}\Big{[}|v\cdot\chi_{tn^{2}}-v\cdot\chi_{sn^{2}}| \Big{]}\Big{]}\,{\rm d}s\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad+\; \frac{1}{\tau}\int_{t-\tau}^{t}\mathbb{E}\Big{[}{\rm P}^{\omega}_{0,0}\Big{[} |v\cdot\chi_{sn^{2}}|>\delta n/2,\,T_{Kn}>sn^{2}\Big{]}\Big{]}\,{\rm d}s. \tag{3.9}\]
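The estimate (3.9) can be obtained as follows: for every \(s\in(t-\tau,t)\) we have \(\{T_{Kn}>tn^{2}\}\subset\{T_{Kn}>sn^{2}\}\) and
\[\big{\{}|n^{-1}v\cdot\chi_{tn^{2}}|>\delta\big{\}}\;\subset\;\big{\{}|v\cdot\chi_{tn^{2}}-v\cdot\chi_{sn^{2}}|>\delta n/2\big{\}}\,\cup\,\big{\{}|v\cdot\chi_{sn^{2}}|>\delta n/2\big{\}},\]
so that, after applying Markov's inequality to the probability of the first event on the right-hand side and averaging over \(s\in(t-\tau,t)\), the right-hand side of (3.9) emerges.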
Since, for any \(0\leq s\leq t<\infty\), an application of the Markov property, (1.5) and the stationarity of the process as seen from the particle yields that
\[\mathbb{E}\Big{[}\mathrm{E}_{0,0}^{\omega}\Big{[}\big{|}v\cdot X_{t}-v\cdot X_{s}\big{|}\Big{]}\Big{]}\;=\;\mathbb{E}\Big{[}\mathrm{E}_{0,0}^{\omega}\Big{[}\big{|}v\cdot X_{t-s}\big{|}\Big{]}\Big{]}\;\leq\;c\,|v|\,\sqrt{t-s}.\]
where
\[J_{1}(\delta,n) :=\ \left(1-\frac{1}{n^{d}}\sum_{z\in\mathbb{Z}^{d}}f_{\delta}(z/n) \right)n^{d+1}\nabla_{e_{i}}^{2}\bar{p}_{0,tn^{2}}(0,[yn]),\] \[J_{2}(\delta,n) :=\ \sum_{z\in\mathbb{Z}^{d}}f_{\delta}(z/n)\,\Big{(}n\nabla_{e_{i}}^{2} \bar{p}_{0,tn^{2}}(0,[yn])-n\nabla_{e_{i}}^{2}\bar{p}_{0,tn^{2}}(0,z)\Big{)},\] \[J_{3}(\delta,n) :=\ \sum_{z\in\mathbb{Z}^{d}}\Bigl{(}n\nabla_{-e_{i}}f_{\delta}(z/n) -\bigl{(}-\partial_{i}f_{\delta}\bigr{)}(z/n)\Bigr{)}\,\bar{p}_{0,tn^{2}}(0,z),\] \[J_{4}(\delta,n) :=\ \mathbb{E}\Bigl{[}\mathbb{E}_{0,0}^{\omega}\Bigl{[}(-\partial _{i}f_{\delta})\bigl{(}X_{t}^{(n)}\bigr{)}\Bigr{]}\Bigr{]}\,-\,\int_{\mathbb{ R}^{d}}(-\partial_{i}f_{\delta})(z)\,k_{t}^{\Sigma}(z)\,\mathrm{d}z,\] \[J_{5}(\delta) :=\ \int_{\mathbb{R}^{d}}f_{\delta}(z)\,\Bigl{(}\bigl{(}\partial_{i }k_{t}^{\Sigma}\bigr{)}(z)-\bigl{(}\partial_{i}k_{t}^{\Sigma}\bigr{)}(y) \Bigr{)}\,\mathrm{d}z.\]
Thus, in order to prove (3.10) it suffices to show that
\[\limsup_{\delta\downarrow 0}\limsup_{n\to\infty}|J_{i}(\delta,n)|\ =\ 0\quad\forall\,i\in\{1,\ldots,4\}\qquad\text{and}\qquad\limsup_{\delta \downarrow 0}|J_{5}(\delta)|\ =\ 0.\]
First, from Theorem 1.6-(i) it follows that
\[\bigl{|}n^{d+1}\nabla_{e_{i}}^{2}\bar{p}_{0,tn^{2}}(0,[yn])\bigr{|}\ \leq\ t^{-(d+1)/2},\qquad\forall\,n\in\mathbb{N}. \tag{3.11}\]
Therefore, the convergence of \(|J_{1}(\delta,n)|\) to zero as \(n\) tends to infinity is an immediate consequence of the approximation of the integral \(1=\int_{\mathbb{R}^{d}}f_{\delta}(z)\,\mathrm{d}z\) by a \(d\)-dimensional Riemann sum. In order to handle the summand involving \(|J_{2}|\), notice that an application of Theorem 1.6-(iii) yields
\[\max_{(0,x),(0,x^{\prime})\in E^{d}}\sup_{z,z^{\prime}\in\mathbb{Z}^{d}}|n^{d+ 2}\nabla_{x}^{2}\nabla_{x^{\prime}}^{2}\bar{p}_{0,tn^{2}}(z,z^{\prime})|\ \leq\ C_{4}\,t^{-(d+2)/2}\qquad\forall\,n\in\mathbb{N}. \tag{3.12}\]
This estimate implies that, for any \(z\in nC(y,\delta)\cap\mathbb{Z}^{d}\),
\[n^{d+1}\bigl{|}\nabla_{e_{i}}^{2}\bar{p}_{0,tn^{2}}(0,[yn])-\nabla_{e_{i}}^{2} \bar{p}_{0,tn^{2}}(0,z)\bigr{|}\stackrel{{\eqref{eq:3.12}}}{{\leq}} C_{4}\,t^{-(d+2)/2}\,\biggl{|}\frac{[yn]-z}{n}\biggr{|}_{\infty}.\]
Hence,
\[|J_{2}(\delta,n)|\ \leq\ C_{4}t^{-(d+2)/2}\,\biggl{(}\delta+\frac{2}{n} \biggr{)}\,\frac{1}{n^{d}}\sum_{z\in\mathbb{Z}^{d}}f_{\delta}(z/n).\]
This estimate clearly implies that \(\limsup_{\delta\downarrow 0}\limsup_{n\to\infty}|J_{2}(\delta,n)|=0\). Next, we consider the term \(|J_{3}|\). Since \(f_{\delta}\) is assumed to be twice continuously differentiable, we get that \(n\bigl{|}n\nabla_{-e_{i}}f_{\delta}(z/n)-\bigl{(}-\partial_{i}f_{\delta}\bigr{)} (z/n)\bigr{|}\leq\sup_{x\in\mathbb{R}^{d}}|(\partial_{ii}^{2}f_{\delta})(x)|<\infty\) for any \(z\in\mathbb{R}^{d}\). Since \(\sum_{z\in\mathbb{Z}^{d}}\bar{p}_{0,tn^{2}}(0,z)=1\), we therefore obtain that, for any \(\delta>0\),
\[|J_{3}(\delta,n)|\ \leq\ \frac{1}{n}\,\sup_{x\in\mathbb{R}^{d}}|(\partial_{ii}^{2} f_{\delta})(x)|\ \underset{n\to\infty}{\longrightarrow}\ 0.\]
Further, from the defining properties of the function \(f_{\delta}\) it follows that \(-\partial_{i}f_{\delta}\) is bounded and continuous. Hence, the convergence \(\lim_{n\to\infty}|J_{4}(\delta,n)|=0\) for any
\(\delta>0\) is an immediate consequence of the annealed CLT as stated in Proposition 1.7. Finally, since \(x\mapsto k_{t}^{\Sigma}(x)\) is a smooth function that vanishes at infinity, we obtain that
\[\delta^{-1}\,\big{|}\big{(}\partial_{i}k_{t}^{\Sigma}\big{)}(z)-\big{(}\partial_ {i}k_{t}^{\Sigma}\big{)}(y)\big{|}\;\leq\;\sup_{x\in\mathbb{R}^{d}}\big{|} \big{(}\partial_{ii}^{2}k_{t}^{\Sigma}\big{)}(x)\big{|}\;<\;\infty\qquad\forall \,z\in C(y,\delta),\]
which implies that \(\lim_{\delta\downarrow 0}|J_{5}(\delta)|=0\). This completes the proof of (3.10).
Let us now address the proof of (1.16). By following literally the covering argument as in [2, Proof of Theorem 3.1 - Step 2], we conclude from (3.10) that, for any \(0<T_{0}<T_{1}<\infty\), \(K\subset\mathbb{R}^{d}\) compact and \(i\in\{1,\ldots,d\}\),
\[\lim_{n\to\infty}\sup_{y\in K}\sup_{t\in[T_{0},T_{1}]}\big{|}n^{d+1}\,\nabla_ {e_{i}}^{2}\bar{p}_{0,tn^{2}}(0,[yn])-\big{(}\partial_{i}k_{t}^{\Sigma}\big{)} (y)\big{|}\;=\;0. \tag{3.13}\]
Finally, in view of (3.11), it holds that, for any \(i\in\{1,\ldots,d\}\),
\[\lim_{t\to\infty}\sup_{y\in K}\bigl{|}n^{d+1}\nabla_{e_{i}}^{2}\bar{p}_{0,tn^ {2}}(0,[yn])\big{|}\;=\;0\qquad\text{and}\qquad\lim_{t\to\infty}\sup_{y\in K} \bigl{|}\big{(}\partial_{i}k_{t}^{\Sigma}\big{)}(y)\big{|}\;=\;0.\]
Hence, (3.13) can be extended to hold uniformly on \([T_{0},\infty)\) for any \(T_{0}>0\) which is the desired conclusion.
Using the local CLT we can deduce near-diagonal lower bounds for the annealed heat kernel and its first derivative. Let us stress that the usual arguments to obtain on-diagonal lower bounds (see for instance [25, Proposition 4.3.4]) cannot be used in the time-dependent case, since the proof crucially uses the symmetry of the heat kernel. In particular, this shows that our upper bound is of the correct order, at least near the diagonal. Notice that a local CLT and a corresponding lower bound for the annealed second derivative are still open.
**Corollary 3.1**.: _Suppose that Assumptions 1.1 and 1.5 hold true. Then,_
1. _there exists_ \(n_{1}\in\mathbb{N}\) _and_ \(C_{15}>0\) _such that for all_ \(t\geq n_{1}\) _and_ \(y,y^{\prime}\in\mathbb{Z}^{d}\) _with_ \(|y^{\prime}-y|\leq\sqrt{t}\)__ \[\bar{p}_{0,t}(y,y^{\prime})\;\geq\;C_{15}\,t^{-d/2}.\] (3.14)
2. _for any_ \(\varepsilon\in(0,1)\) _there exists_ \(n_{2}\equiv n_{2}(\varepsilon)\in\mathbb{N}\) _and_ \(C_{16}\equiv C_{16}(\varepsilon)>0\) _such that for all_ \(t\geq n_{2}\) _and_ \(y,y^{\prime}\in\mathbb{Z}^{d}\) _with_ \(\varepsilon\sqrt{t}\leq|y-y^{\prime}|\leq\sqrt{t}\)__ \[\big{|}\nabla_{x}^{2}\,\bar{p}_{0,t}(y,y^{\prime})\big{|}\;\geq\;C_{16}\,t^{-( d+1)/2},\qquad\forall\,(0,x)\in E^{d},\] (3.15) \[\big{|}\nabla_{x}^{1}\,\bar{p}_{0,t}(y,y^{\prime})\big{|}\;\geq\;C_{16}\,t^{-( d+1)/2},\qquad\forall\,(0,x)\in E^{d}.\] (3.16)
Proof.: Again, by (1.5) and Assumption 1.1-(i) it suffices to prove the above assertions for \(y=0\). The proof of Corollary 3.1-(i) follows from Theorem 1.9 by exactly the same arguments as given in [3, Lemma 5.3].
Let us now address the proof of (ii). For this purpose, set \(n=\lfloor\sqrt{t}\rfloor\) and fix some unit vector \(e_{i}\sim 0\), \(i\in\{1,\ldots,d\}\) (the case of \(-e_{i}\) follows analogously). Then, for any \(t\geq 1\) and \(\varepsilon\sqrt{t}\leq|y^{\prime}|\leq\sqrt{t}\) we have that \(t/n^{2}\in[1,4]\) and \(|y^{\prime}|/n\in[\varepsilon,2]\). In particular, there exists \(C_{16}\equiv C_{16}(\varepsilon)>0\) such that \(|(\partial_{i}k_{s}^{\Sigma})(x)|\geq 2C_{16}\) for any
\(i\in\{1,\ldots,d\}\), \(s\in[1,4]\) and \(x\in\mathbb{R}^{d}\) such that \(\varepsilon\leq|x|\leq 2\). Moreover, Theorem 1.9 implies that there exists \(n(\varepsilon)\in\mathbb{N}\) such that
\[\sup_{|x|\leq 1}\sup_{s\geq 1}\Big{|}n^{d+1}\nabla_{e_{i}}^{2}\bar{p}_{0,sn^{2} }(0,[xn])-\big{(}\partial_{i}k_{s}^{\Sigma}\big{)}(x)\Big{|}\ \leq\ C_{16},\qquad\forall\,n\geq n( \varepsilon).\]
Hence, by setting \(n_{2}\equiv n_{2}(\varepsilon):=n(\varepsilon)^{2}\) we obtain that
\[\big{|}\nabla_{e_{i}}^{2}\bar{p}_{0,t}(0,y^{\prime})\big{|}\] \[\qquad\geq\ n^{-(d+1)}\bigg{(}\big{|}\big{(}\partial_{i}k_{t/n^{2}}^{ \Sigma}\big{)}(y^{\prime}/n)\big{|}\,-\,\sup_{|x|\leq 1}\sup_{s\geq 1}\Big{|}n^{d+1} \nabla_{e_{i}}^{2}\bar{p}_{0,sn^{2}}(0,[xn])-\big{(}\partial_{i}k_{s}^{\Sigma} \big{)}(x)\Big{|}\bigg{)}\] \[\qquad\geq\ C_{16}\,t^{-(d+1)/2},\]
for all \(t\geq n_{2}\) and \(y^{\prime}\in\mathbb{Z}^{d}\) such that \(\varepsilon\sqrt{t}\leq|y^{\prime}|\leq\sqrt{t}\). Finally, the proof of (3.16) follows directly from (3.15) together with the fact that \(\nabla_{x}^{2}\bar{p}_{0,t}(0,y^{\prime})=\nabla_{-x}^{1}\bar{p}_{0,t}(0,y^{ \prime})\) for all \((0,x)\in E^{d}\).
**Corollary 3.2**.: _Suppose that Assumptions 1.1 and 1.5 hold true. Additionally, assume that, for any \(x,y\in\mathbb{Z}^{d}\),_
\[\mathbb{P}\big{[}\omega_{t}(x,y)=\omega_{-t}(x,y)\quad\forall\,t\in\mathbb{R} \big{]}\ =\ 1. \tag{3.17}\]
_Then, for any \(t>0\) and \(y\in\mathbb{R}^{d}\),_
\[\lim_{n\to\infty}\mathbb{E}\Big{[}\big{|}n^{d}p_{0,tn^{2}}^{\omega}(0,[yn])-k_ {t}^{\Sigma}(y)\big{|}^{2}\Big{]}\ =\ 0. \tag{3.18}\]
Proof.: The proof comprises three steps.
_Step 1._ We will first show that, for any \(t>0\),
\[\lim_{n\to\infty}\frac{1}{n^{d}}\sum_{z\in\mathbb{Z}^{d}}\mathbb{V}\mathrm{ar} \Big{[}n^{d}p_{0,tn^{2}}^{\omega}(0,z)\Big{]}\ =\ 0. \tag{3.19}\]
So, fix some \(t>0\). Then, by applying the Chapman-Kolmogorov equation, it follows that, for any \(n\in\mathbb{N}\),
\[\bar{p}_{0,2tn^{2}}(0,0)\ =\ \sum_{z\in\mathbb{Z}^{d}}\bigg{(}\bar{p}_{0,tn^{2}} (0,z)\,\bar{p}_{tn^{2},2tn^{2}}(z,0)\,+\,\mathbb{C}\mathrm{ov}\Big{[}p_{0,tn^ {2}}^{\omega}(0,z);p_{tn^{2},2tn^{2}}^{\omega}(z,0)\Big{]}\bigg{)}.\]
By exploiting the stationarity of the law \(\mathbb{P}\) with respect to time-space shift implied by Assumption 1.1-(i) and using (1.5), we get that \(\bar{p}_{tn^{2},2tn^{2}}(z,0)=\bar{p}_{0,tn^{2}}(0,-z)\). Further, by additionally using Lemma A.2, the above covariance can be rewritten as
\[\mathbb{C}\mathrm{ov}\Big{[}p_{0,tn^{2}}^{\omega}(0,z);p_{tn^{2}, 2tn^{2}}^{\omega}(z,0)\Big{]}\] \[\qquad\stackrel{{\eqref{eq:1.5}}}{{=}}\mathbb{C} \mathrm{ov}\Big{[}p_{-tn^{2},0}^{\omega}(-z,0);p_{0,tn^{2}}^{\omega}(0,-z) \Big{]}\stackrel{{\eqref{eq:2.4}}}{{=}}\mathbb{C}\mathrm{ov} \Big{[}p_{0,tn^{2}}^{\tilde{\omega}}(0,-z);p_{0,tn^{2}}^{\omega}(0,-z)\Big{]}.\]
Notice that, under the additional assumption (3.17), we have that
\[\mathbb{C}\mathrm{ov}\Big{[}p_{0,tn^{2}}^{\tilde{\omega}}(0,-z);p_{0,tn^{2}}^{ \omega}(0,-z)\Big{]}\ =\ \mathbb{V}\mathrm{ar}\Big{[}p_{0,tn^{2}}^{\omega}(0,z)\Big{]} \qquad\forall\,z\in\mathbb{Z}^{d}.\]
Thus, for any \(n\in\mathbb{N}\) and \(k\in\mathbb{N}\), the left-hand side of (3.19) can be bounded from above by
\[\frac{1}{n^{d}}\sum_{z\in\mathbb{Z}^{d}}\mathbb{V}\mathrm{ar}\Big{[}n^{d}p_{0,tn ^{2}}^{\omega}(0,z)\Big{]}\ \leq\ I_{1}(n)+I_{2}(n,k)+I_{3}(n,k),\]
where
\[I_{1}(n) :=\ \big{|}n^{d}\bar{p}_{0,2tn^{2}}(0,0)-k_{2t}^{\Sigma}(0)\big{|}\] \[I_{2}(n,k) :=\ \sum_{z\in\mathbb{Z}^{d}\setminus B(0,kn)}n^{d}\bar{p}_{0,tn^{2} }(0,z)\,\bar{p}_{0,tn^{2}}(0,-z)\] \[I_{3}(n,k) :=\ \frac{1}{n^{d}}\sum_{z\in B(0,kn)}\big{|}n^{d}\bar{p}_{0,tn^{2} }(0,z)\,n^{d}\bar{p}_{0,tn^{2}}(0,-z)-k_{2t}^{\Sigma}(0)\big{|}\]
Thus, (3.19) follows once we have shown that the terms \(I_{1}\), \(I_{2}\) and \(I_{3}\) vanish when taking first the limit \(n\to\infty\) and then \(k\to\infty\).
Clearly, Theorem 1.9-(i) implies that \(\lim_{n\to\infty}I_{1}(n)=0\). Moreover, by using Lemma 2.3, the Markov inequality and Proposition 1.4,
\[I_{2}(n,k)\ \leq\ c\,t^{-d/2}\ \mathbb{E}\big{[}\mathrm{P}_{0,0}^{\omega} \big{[}|X_{tn^{2}}|\geq kn\big{]}\big{]}\ \leq\ c\,\frac{t^{-(d-1)/2}}{k},\]
which implies that \(\limsup_{k\to\infty}\limsup_{n\to\infty}I_{2}(n,k)=0\). Thus, it remains to consider the term \(I_{3}\). But, by an elementary computation that involves multiple applications of Theorem 1.9-(i), we obtain that
\[\limsup_{n\to\infty}I_{3}(n,k)\ \leq\ \Bigg{|}\int_{\mathbb{R}^{d}}1\!\!1_{\{|x| \leq k\}}\,k_{t}^{\Sigma}(x)\,k_{t}^{\Sigma}(0-x)\,\mathrm{d}x\,-\,k_{2t}^{ \Sigma}(0)\Bigg{|}.\]
Therefore, by using again the Chapman-Kolmogorov equation as well as the monotone convergence theorem which yields
\[\lim_{k\to\infty}\int_{\mathbb{R}^{d}}1\!\!1_{\{|x|>k\}}\,k_{t}^{\Sigma}(x)\,k _{t}^{\Sigma}(0-x)\,\mathrm{d}x\ =\ 0,\]
we conclude that \(\limsup_{k\to\infty}\limsup_{n\to\infty}I_{3}(n,k)=0\).
_Step 2._ Fix some \(t>0\) and \(y\in\mathbb{R}^{d}\). For any \(\delta>0\), \(n\in\mathbb{N}\) and \(z\in B([yn],\delta n)\) let \(\gamma_{z}\) be a shortest path in the graph metric between \([yn]\) and \(z\), that is, \(\gamma_{z}=(y_{0},\ldots,y_{k})\) with \(y_{0}=[yn]\), \(y_{k}=z\) and \(\{y_{i-1},y_{i}\}\in E^{d}\) for all \(i\in\{1,\ldots,k\}\) and \(k\leq c_{1}\delta n\) for some \(c_{1}\in[1,\infty)\) that is independent of \(\delta\) and \(n\). Then, by applying the Minkowski inequality, we get
\[\mathbb{V}\mathrm{ar}\Big{[}n^{d}\,p_{0,tn^{2}}^{\omega}(0,[yn]) \Big{]}^{1/2}\] \[\qquad\leq\ \mathbb{V}\mathrm{ar}\Big{[}n^{d}\,p_{0,tn^{2}}^{ \omega}(0,z)\Big{]}^{1/2}+\sum_{i=1}^{k}\max_{(0,x)\in E^{d}}\mathbb{V} \mathrm{ar}\Big{[}n^{d}\,\nabla_{x}^{2}\,p_{0,tn^{2}}^{\omega}(y_{i-1},y_{i}) \Big{]}^{1/2}.\]
By using Theorem 1.6-(i), we get that, for any \((0,x)\in E^{d}\),
\[\mathbb{V}\mathrm{ar}\Big{[}n^{d}\,\nabla_{x}^{2}p_{0,tn^{2}}^{\omega}(y_{i-1},y_ {i})\Big{]}^{1/2}\ \leq\ 2\,\mathbb{E}\Big{[}|n^{d}\,\nabla_{x}^{2}\,p_{0,tn^{2}}^{\omega}(y_{i-1},y_{ i})|^{2}\Big{]}^{1/2}\stackrel{{\eqref{eq:2011}}}{{\leq}}\frac{2C_{3}}{n\,t^{(d +1)/2}}.\]
By combining the estimates above, we obtain that, for any \(\delta>0\),
\[\mathbb{V}\mathrm{ar}\Big{[}n^{d}\,p_{0,tn^{2}}^{\omega}(0,[yn])\Big{]}\ \leq\ \frac{2}{|B([yn],\delta n)|}\,\sum_{z\in B([yn],\delta n)}\mathbb{V} \mathrm{ar}\Big{[}n^{d}\,p_{0,tn^{2}}^{\omega}(0,z)\Big{]}\,+\,\frac{2c\delta ^{2}}{t^{d+1}}.\]
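In this step we used that the path \(\gamma_{z}\) consists of at most \(c_{1}\delta n\) edges, each of which contributes at most \(2C_{3}/(n\,t^{(d+1)/2})\) by the previous display, together with the elementary inequality \((a+b)^{2}\leq 2a^{2}+2b^{2}\); more precisely, for every \(z\in B([yn],\delta n)\),
\[\mathbb{V}\mathrm{ar}\Big{[}n^{d}\,p_{0,tn^{2}}^{\omega}(0,[yn])\Big{]}\;\leq\;2\,\mathbb{V}\mathrm{ar}\Big{[}n^{d}\,p_{0,tn^{2}}^{\omega}(0,z)\Big{]}\;+\;2\,\Big{(}c_{1}\delta n\cdot\frac{2C_{3}}{n\,t^{(d+1)/2}}\Big{)}^{2},\]
and the claimed bound follows by averaging over \(z\in B([yn],\delta n)\) (one may take \(c=4c_{1}^{2}C_{3}^{2}\)).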
Thus, by applying (3.19), we conclude that the right-hand side of the above estimate vanishes when we first let \(n\to\infty\) and then \(\delta\downarrow 0\). This proves that
\[\lim_{n\to\infty}\mathbb{V}\mathrm{ar}\Big{[}n^{d}p_{0,tn^{2}}^{\omega}(0,[yn]) \Big{]}\ =\ 0. \tag{3.20}\]
_Step 3_. Finally, for any \(t>0\) and \(y\in\mathbb{R}^{d}\), we rewrite the left-hand side of (3.18) as
\[\mathbb{E}\Big{[}\big{|}n^{d}p_{0,tn^{2}}^{\omega}(0,[yn])-k_{t}^ {\Sigma}(y)\big{|}^{2}\Big{]}\] \[\qquad=\ \mathbb{V}\mathrm{ar}\Big{[}n^{d}p_{0,tn^{2}}^{\omega}(0,[yn ])\Big{]}\,+\,\big{|}n^{d}\,\bar{p}_{0,tn^{2}}(0,[yn])-k_{t}^{\Sigma}(y)\big{|} ^{2}. \tag{3.21}\]
Now, by applying Theorem 1.9-(i) and (3.20), we immediately obtain that the right-hand side of (3.21) vanishes as \(n\) tends to infinity. This concludes the proof.
_Remark 3.3_.: Assumption 1.1-(i) and (3.17) imply that, for any \(x,y\in\mathbb{Z}^{d}\),
\[\mathbb{P}\big{[}\omega_{t}(x,y)=\omega_{0}(x,y)\quad\forall\,t\in\mathbb{R} \big{]}\ =\ 1.\]
Therefore, Corollary 3.2 applies to the case of time-independent conductances only.
**Corollary 3.4**.: _Suppose that Assumptions 1.1 and 1.5 hold true._
1. _If_ \(d\geq 3\) _and Assumption_ 2.9 _is satisfied for some_ \(\alpha>d\) _and_ \(\beta\geq\alpha/2\)_, then for any compact subset_ \(K\subset\mathbb{R}^{d}\setminus\{0\}\)__ \[\lim_{n\to\infty}\sup_{y\in K}\left|n^{d-2}\,\int_{0}^{\infty}\bar{p}_{0,t}(0,[yn ])\,\mathrm{d}t-\int_{0}^{\infty}k_{t}^{\Sigma}(y)\,\mathrm{d}t\right|\ =\ 0.\] (3.22) _In particular, there exists_ \(C_{17}>0\) _such that, for any_ \(y,y^{\prime}\in\mathbb{Z}^{d}\) _with_ \(y\neq y^{\prime}\)_,_ \[\int_{0}^{\infty}\bar{p}_{0,t}(y,y^{\prime})\,\mathrm{d}t\ \geq\ C_{17}\,|y-y^{\prime}|^{-d+2}.\] (3.23)
2. _If_ \(d\geq 2\) _and Assumption_ 2.9 _is satisfied for some_ \(\alpha\geq 2(d-1)\) _and_ \(\beta>d+\alpha/2\)_, then for any compact subset_ \(K\subset\mathbb{R}^{d}\setminus\{0\}\) _and_ \(i\in\{1,\ldots,d\}\)__ \[\lim_{n\to\infty}\sup_{y\in K}\left|n^{d-1}\,\int_{0}^{\infty}\nabla_{e_{i}}^{1 }\,\bar{p}_{0,t}(0,[yn])\,\mathrm{d}t-\int_{0}^{\infty}\big{(}\partial_{i}k_{t }^{\Sigma}\big{)}(y)\,\mathrm{d}t\right|\ =\ 0.\] (3.24) _Moreover, there exists_ \(C_{18}\in(0,\infty)\) _such that, for any_ \((0,x)\in E^{d}\) _and_ \(y,y^{\prime}\in\mathbb{Z}^{d}\) _with_ \(y\neq y^{\prime}\)_,_ \[\left|\int_{0}^{\infty}\nabla_{x}^{1}\,\bar{p}_{0,t}(y,y^{\prime})\,\mathrm{d} t\right|\ \geq\ C_{18}\,|y-y^{\prime}|^{-d+1}.\] (3.25)
Proof.: (i) Fix some compact subset \(K\subset\mathbb{R}^{d}\setminus\{0\}\). To lighten notation, we set
\[\bar{g}(y,y^{\prime})\;:=\;\int_{0}^{\infty}\bar{p}_{0,t}(y,y^{\prime})\,{\rm d }t\qquad\text{and}\qquad g^{\Sigma}(y)\;:=\;\int_{0}^{\infty}k_{t}^{\Sigma}(y) \,{\rm d}t.\]
Fix some \(y\in K\). Then, for any \(\delta>0\) and \(n\in\mathbb{N}\), the left-hand side of (3.22) is bounded from above by
\[\sup_{y\in K}\left|n^{d-2}\,\bar{g}(0,[yn])-g^{\Sigma}(y)\right|\;\leq\;I_{1}( \delta,n)+I_{2}(\delta,n)+I_{3}(\delta), \tag{3.26}\]
where
\[I_{1}(\delta,n) \;:=\;\int_{\delta}^{\infty}\sup_{y\in K}\left|n^{d}\bar{p}_{0, tn^{2}}(0,[yn])-k_{t}^{\Sigma}(y)\right|{\rm d}t,\] \[I_{2}(\delta,n) \;:=\;\sup_{y\in K}\,n^{d-2}\int_{0}^{\delta n^{2}}\bar{p}_{0,t} (0,[yn])\,{\rm d}t,\] \[I_{3}(\delta) \;:=\;\sup_{y\in K}\,\int_{0}^{\delta}k_{t}^{\Sigma}(y)\,{\rm d}t.\]
Thus, it suffices to show that the right-hand side of (3.26) vanishes when we first take the limit \(n\to\infty\) and then \(\delta\downarrow 0\).
First, by Theorem 1.9, the function \(f_{n}(t):=\sup_{y\in K}|n^{d}\bar{p}_{0,tn^{2}}(0,[yn])-k_{t}^{\Sigma}(y)|\) converges pointwise to zero as \(n\) tends to infinity. Moreover, in view of Lemma 2.3 and the properties of \(k^{\Sigma}\), there exists \(c_{1}<\infty\) such that \(f_{n}(t)\leq c_{1}t^{-d/2}\) for any \(n\in\mathbb{N}\). Since for any \(d\geq 3\) and \(\delta>0\) the function \(t\mapsto t^{-d/2}\) is integrable over \([\delta,\infty)\), an application of Lebesgue's dominated convergence theorem yields that \(\lim_{n\to\infty}I_{1}(\delta,n)=0\).
Next, consider the term \(I_{2}\). Since \(K\) is a compact subset of \(\mathbb{R}^{d}\setminus\{0\}\) and the map \(y\mapsto|y|\) is continuous, there exist \(0<k_{*}\leq k_{**}<\infty\) such that \(\min_{y\in K}|y|\geq k_{*}\) and \(\max_{y\in K}|y|\leq k_{**}\). Then, for any fixed \(\delta>0\) there exists \(n(\delta,k_{*},k_{**})\in\mathbb{N}\) such that for any \(n\geq n(\delta,k_{*},k_{**})\) we have that \(|[yn]/n|\geq k_{*}/2\) and \(|[yn]|\leq\delta n^{2}\) for any \(y\in K\). In particular, by applying Lemma 2.12 with \(p=1\), \(\alpha>d\) and \(\beta\geq\alpha/2\) we obtain that, for any \(y\in K\),
\[n^{d-2}\int_{0}^{\delta n^{2}}\bar{p}_{0,t}(0,[yn])\,{\rm d}t\] \[\stackrel{{\eqref{eq:2.28}}}{{\leq}}C_{10}\,n^{d-2} \bigg{(}\int_{0}^{[[yn]]}k_{\alpha}(t,[yn])\,{\rm d}t+\int_{|[yn]|}^{\delta n ^{2}}k_{\alpha}(t,[yn])\,{\rm d}t\bigg{)}\] \[\;\leq\;c\,\bigg{(}n^{d/2-\alpha/2-1}\,k_{*}^{-d/2-\alpha/2+1}\,+ \,\delta^{-d/2+\alpha/2+1}\,k_{*}^{-\alpha}\bigg{)}.\]
Hence,
\[\limsup_{\delta\downarrow 0}\limsup_{n\to\infty}I_{2}(\delta,n)\;=\;0.\]
Finally, let us address the term \(I_{3}\). Since \(\sup_{t\geq 0}\sup_{y\in K}k_{t}^{\Sigma}(y)\leq c(k_{*})<\infty\), we get that \(I_{3}(\delta)\leq\delta\,c(k_{*})\), and \(\limsup_{\delta\downarrow 0}I_{3}(\delta)=0\) follows immediately. This completes the proof of (3.22).
It remains to show the claimed lower bound for the annealed Green's kernel. In view of (1.5) and Assumption 1.1-(i) it suffices to prove (3.23) for \(y=0\). First, note that there exists \(c_{1}\in(0,\infty)\) such that \(g^{\Sigma}(x)\geq 2c_{1}|x|^{2-d}\) for all \(x\in\mathbb{R}^{d}\setminus\{0\}\). Moreover, (3.22) implies that there exists \(n_{0}\in\mathbb{N}\) such that, for any \(n\geq n_{0}\),
\[\sup_{1/2\leq|x|\leq 2}\bigl{|}n^{d-2}\bar{g}(0,[xn])-g^{\Sigma}(x)\bigr{|}\; \leq\;c_{1}\]
Hence, for any \(y^{\prime}\in\mathbb{Z}^{d}\) such that \(n:=|y^{\prime}|\geq n_{0}\), we obtain that
\[\bar{g}(0,y^{\prime})\;\geq\;n^{2-d}\left(g^{\Sigma}(y^{\prime}/n)-\sup _{1/2\leq|x|\leq 2}\bigl{|}n^{d-2}\bar{g}(0,[xn])-g^{\Sigma}(x)\bigr{|}\right)\; \geq\;c_{1}|y^{\prime}|^{2-d}.\]
Thus, by setting \(C_{17}:=\min\{|y^{\prime}|^{d-2}\bar{g}(0,y^{\prime}):y^{\prime}\in\mathbb{Z}^ {d}\text{ s.th. }|y^{\prime}|<n_{0}\}\wedge c_{1}\), the assertion (3.23) follows.
(ii) The proof of (ii) relies on the same arguments as in (i) except for the fact that in the proof of (3.24) we apply Theorem 1.6-(i) instead of Lemma 2.3 and Proposition 2.13 instead of Lemma 2.12. Moreover, we use that there exists \(c(k_{*},k_{**})<\infty\) such that \(\sup_{t\geq 0}\sup_{y\in K}\bigl{(}\partial_{i}k_{t}^{\Sigma}\bigr{)}(y)\leq c (k_{*},k_{**})\) for any \(i\in\{1,\ldots,d\}\).
## Appendix A Forward and backward equation for the semigroup
Let us briefly recall the construction of the time-inhomogeneous Markov process \(X\) starting at time \(s\in\mathbb{R}\) in \(x\in\mathbb{Z}^{d}\), cf. [1, Section 4]. Let \((E_{n}:n\in\mathbb{N})\) be a sequence of independent \(\operatorname{Exp}(1)\)-distributed random variables. Further, set \(\pi_{t}^{\omega}(x,y):=\omega_{t}(x,y)/\mu_{t}^{\omega}(x)\mathds{1}_{(x,y) \in E^{d}}\), where \(\mu_{t}^{\omega}(x):=\sum_{y:(x,y)\in E^{d}}\omega_{t}(x,y)\) for any \(t\in\mathbb{R}\), \(x\in\mathbb{Z}^{d}\). We specify both the sequence of jump times, \((J_{n}:n\in\mathbb{N}_{0})\) and positions, \((Y_{n}:n\in\mathbb{N}_{0})\), inductively. For this purpose, set \(J_{0}=s\) and \(Y_{0}=x\). Suppose that, for any \(n\geq 1\), we have already constructed the random variables \((J_{0},Y_{0},\ldots,J_{n-1},Y_{n-1})\). Then, \(J_{n}\) is given by
\[J_{n}\;=\;J_{n-1}\,+\,\inf\biggl{\{}t\geq 0\;:\;\int_{J_{n-1}}^{J_{n-1}+t}\mu_{ u}^{\omega}(Y_{n-1})\,\mathrm{d}u\geq E_{n}\biggr{\}},\]
and at the jump time \(J_{n}\) the distribution of \(Y_{n}\) is given by \(\pi_{J_{n}}^{\omega}(Y_{n-1},\cdot)\). Since, under Assumption 1.1, \(\sup_{n\in\mathbb{N}_{0}}J_{n}=\infty\) the Markov process \(X\) is given by
\[X_{t}\;=\;Y_{n}\quad\text{on}\quad[J_{n},J_{n+1})\qquad\forall\,n\in\mathbb{N }_{0}.\]
Note that, under \(\mathrm{P}_{s,x}^{\omega}\), \(J_{0}=s\) and \(Y_{0}=x\) almost surely, the conditional law of \(J_{n}\) given \((J_{0},Y_{0},\ldots,J_{n-1},Y_{n-1})\) (also called the survival distribution with time-dependent hazard rate \(\mu_{t}^{\omega}(Y_{n-1})\)) is
\[\mu_{t}^{\omega}(Y_{n-1})\,\mathrm{e}^{-\int_{J_{n-1}}^{t}\mu_{u}^{\omega}(Y_{n-1})\,\mathrm{d}u}\,\mathds{1}_{t>J_{n-1}}\,\mathrm{d}t,\]
and the conditional law of \(Y_{n}\) given \((J_{0},Y_{0},\ldots,J_{n-1},Y_{n-1},J_{n})\) is \(\pi_{J_{n}}^{\omega}(Y_{n-1},\cdot)\).
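The construction above is, in effect, an algorithm. The following R sketch (ours, not part of the paper) simulates the process for a user-supplied conductance function \(\omega_{t}(x,y)\) on nearest-neighbour edges of \(\mathbb{Z}^{d}\); the function names and the numerical root-finding step are illustrative choices, and each jump time is obtained by solving the integral condition defining \(J_{n}\).

```r
simulate_X <- function(omega, s, x0, t_end, d = length(x0)) {
  nbrs <- function(x) {                                     # nearest neighbours of x in Z^d
    out <- list()
    for (i in seq_len(d)) for (e in c(-1, 1)) {
      y <- x; y[i] <- y[i] + e; out[[length(out) + 1]] <- y
    }
    out
  }
  mu <- function(t, x) sum(vapply(nbrs(x), function(y) omega(t, x, y), numeric(1)))
  t <- s; x <- x0
  path <- list(list(time = t, pos = x))
  while (t < t_end) {
    E <- rexp(1)                                            # the Exp(1) clock E_n
    f <- function(u) integrate(function(r) vapply(r, mu, numeric(1), x = x), t, u)$value - E
    if (f(t_end) < 0) break                                 # no further jump before t_end
    t <- uniroot(f, c(t, t_end))$root                       # next jump time J_n
    w <- vapply(nbrs(x), function(y) omega(t, x, y), numeric(1))
    x <- nbrs(x)[[sample(length(w), 1, prob = w / sum(w))]] # Y_n ~ pi_{J_n}(Y_{n-1}, .)
    path[[length(path) + 1]] <- list(time = t, pos = x)
  }
  path
}
# e.g. constant unit conductances give the simple random walk with jump rate 2d:
# simulate_X(function(t, x, y) 1, s = 0, x0 = c(0, 0), t_end = 5)
```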
Note that for the Markov process \(X\) as constructed above the strong Markov property holds true. Thus, an application of the strong Markov property yields that \((P^{\omega}_{s,t}:t\geq s)\) satisfies the integrated backward equation, that is, for \(\mathbb{P}\)-a.e. \(\omega\),
\[(P^{\omega}_{s,t}f)(x)\ =\ \mathrm{e}^{-\int_{s}^{t}\mu^{\omega}_{u}(x)\, \mathrm{d}u}f(x)\,+\,\int_{s}^{t}\mathrm{e}^{-\int_{s}^{r}\mu^{\omega}_{u}(x) \,\mathrm{d}u}\sum_{y:(x,y)\in E^{d}}\!\!\!\omega_{r}(x,y)\,(P^{\omega}_{r,t}f) (y)\,\mathrm{d}r\] (A.1)
for any \(f\in\ell^{\infty}(\mathbb{Z}^{d})\), \(-\infty<s<t<\infty\) and \(x\in\mathbb{Z}^{d}\).
**Proposition A.1**.: _For \(\mathbb{P}\)-a.e. \(\omega\), every \(x,y\in\mathbb{Z}^{d}\) and \(f\in\ell^{\infty}(\mathbb{Z}^{d})\) the following hold._
1. _For every_ \(t\in\mathbb{R}\)_, the map_ \((-\infty,t)\ni s\mapsto p^{\omega}_{s,t}(x,y)\) _is absolutely continuous and hence differentiable for almost every_ \(s\in(-\infty,t)\)_. In particular,_ \(\lim_{s\uparrow t}p^{\omega}_{s,t}(x,y)=p^{\omega}_{t,t}(x,y)=1\!\!1_{y}(x)\)_._
2. _For every_ \(s\in\mathbb{R}\)_, the map_ \((s,\infty)\ni t\mapsto p^{\omega}_{s,t}(x,y)\) _is continuous. Hence, it holds that_ \(\lim_{t\downarrow s}p^{\omega}_{s,t}(x,y)=p^{\omega}_{s,s}(x,y)=1\!\!1_{y}(x)\)_._
3. (Backward equation) _It holds that for every_ \(t\in\mathbb{R}\)_,_ \[-\partial_{s}\,(P^{\omega}_{s,t}f)(x)\ =\ \big{(}\mathcal{L}^{\omega}_{s}(P^{\omega}_{s,t}f)\big{)}(x),\qquad\text{for a.e. }s\in(-\infty,t).\] (A.2) _In particular, for any function_ \(g\) _with finite support and_ \(t\in\mathbb{R}\)_,_ \[\partial_{s}\big{\langle}g,P^{\omega}_{s,t}f\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}\ =\ \mathcal{E}^{\omega}_{s}\big{(}g,P^{\omega}_{s,t}f\big{)},\qquad\text{for a.e. }s\in(-\infty,t).\] (A.3)
Proof.: (i) By (1.2) the map \(t\mapsto\mu^{\omega}_{t}(x)\) is \(\mathbb{P}\)-a.s. locally integrable for every \(x\in\mathbb{Z}^{d}\). Hence, the absolute continuity of the Lebesgue integral implies that, for every \(\varepsilon>0\), there exists \(\delta\equiv\delta(x)>0\) such that
\[\int_{D}\mu^{\omega}_{u}(x)\,\mathrm{d}u\ <\ \varepsilon\qquad\forall\,D\in \mathcal{B}(\mathbb{R})\text{ with Lebesgue measure less than }\delta.\]
Thus, it remains to observe that, for each \(f\in\ell^{\infty}(\mathbb{Z}^{d})\) and \(t\in\mathbb{R}\), using the integrated backward equation as well as the Cauchy-Schwarz inequality,
\[\sum_{i=1}^{n}\big{|}(P^{\omega}_{s_{i},t}f)(x)-(P^{\omega}_{r_{i },t}f)(x)\big{|}\\ \leq\ 2\|f\|_{\infty}\,\sum_{i=1}^{n}\!\Big{(}1-\mathrm{e}^{-\int _{r_{i}}^{s_{i}}\mu^{\omega}_{u}(x)\,\mathrm{d}u}\Big{)}\ \leq\ 2\|f\|_{\infty}\left(\int_{D}\mu^{\omega}_{u}(x)\, \mathrm{d}u\right)\ <\ 2\varepsilon\|f\|_{\infty}\]
for any union \(D=\bigcup_{i=1}^{n}(r_{i},s_{i})\) of pairwise disjoint intervals \((r_{i},s_{i})\subset(-\infty,t]\) of total length less than \(\delta\).
(ii) This can be deduced from the absolute continuity of the Lebesgue integral. Indeed, \(\mathbb{P}\)-a.s., for any \(f\in\ell^{\infty}(\mathbb{Z}^{d})\) and \(r,t\in(s,\infty)\) with \(r<t\),
\[\big{|}(P^{\omega}_{s,r}f)(x)-(P^{\omega}_{s,t}f)(x)\big{|} =\ \big{|}\big{(}P^{\omega}_{s,r}(f-P^{\omega}_{r,t}f)\big{)}(x) \big{|}\\ \leq\ 2\|f\|_{\infty}\,\sum_{y\in\mathbb{Z}^{d}}p^{\omega}_{s,r}(x, y)\Big{(}1-\mathrm{e}^{-\int_{r}^{t}\mu^{\omega}_{u}(y)\,\mathrm{d}u}\Big{)}.\]
Thus, by applying Lebesgue's dominated convergence theorem, it follows that
\[\lim_{t\downarrow r}|(P^{\omega}_{s,r}f)(x)-(P^{\omega}_{s,t}f)(x)|\ =\ 0.\]
(iii) Note that the right-hand side of (A.1) can be rewritten as
\[\operatorname{e}^{-\int_{s}^{t}\mu^{\omega}_{u}(x)\,\mathrm{d}u}f(x)\,+\, \operatorname{e}^{\int_{0}^{s}\mu^{\omega}_{u}(x)\,\mathrm{d}u}\,\int_{s}^{t} \operatorname{e}^{-\int_{0}^{r}\mu^{\omega}_{u}(x)\,\mathrm{d}u}\,\sum_{y:(x, y)\in E^{d}}\omega_{r}(x,y)(P^{\omega}_{r,t}f)(y)\,\mathrm{d}r.\]
Since \(t\mapsto\mu^{\omega}_{t}(x)\) is locally integrable, the differential form of the backward equation in weak sense follows from [13, Theorem 6.3.6] together with an application of the chain and product rule.
**Lemma A.2**.: _Define \(\tilde{\omega}_{t}(e):=\omega_{-t}(e)\) for any \(t\in\mathbb{R}\) and \(e\in E^{d}\). Then,_
\[p^{\omega}_{s,t}(x;y)\ =\ p^{\tilde{\omega}}_{-t,-s}(y;x),\qquad\forall\,x,y \in\mathbb{Z}^{d},\quad s\leq t.\] (A.4)
Proof.: Write \(B_{n}:=B(0,n)\) and \(T_{n}:=\inf\{t\geq 0:|X_{t}|>n\}\) with \(\inf\emptyset:=\infty\). We denote by \(p^{\omega,B_{n}}_{s,t}(x,y):=\mathrm{P}^{\omega}_{s,x}\big{[}X_{t}=y,\,t<T_{n} \big{]}\) the heat kernel associated with the process \(X\) killed upon exiting \(B_{n}\), and we write \((P^{\omega,B_{n}}_{s,t}:t\geq s)\) for the corresponding transition semigroup. Recall that the associated time-dependent generator, still denoted by \(\mathcal{L}^{\omega}_{t}\), is acting on functions with Dirichlet boundary condition. By similar arguments as in Proposition A.1 one can establish a backward equation for \((P^{\omega,B_{n}}_{s,t}:t\geq s)\), which gives, for any \(s\leq t\),
\[\partial_{u}\big{\langle}P^{\tilde{\omega},B_{n}}_{-u,-s}g,P^{ \omega,B_{n}}_{u,t}f\big{\rangle}_{\ell^{2}(B_{n})}\] \[\qquad=\ \big{\langle}\mathcal{L}^{\tilde{\omega}}_{-u}P^{ \tilde{\omega},B_{n}}_{-u,-s}g,P^{\omega,B_{n}}_{u,t}f\big{\rangle}_{\ell^{2} (B_{n})}-\big{\langle}P^{\tilde{\omega},B_{n}}_{-u,-s}g,\mathcal{L}^{\omega}_{ u}P^{\omega,B_{n}}_{u,t}f\big{\rangle}_{\ell^{2}(B_{n})}\ =\ 0,\]
where we used in the last step that \(\mathcal{L}^{\tilde{\omega}}_{-u}=\mathcal{L}^{\omega}_{u}\). By integration over \([s,t]\) we get
\[\big{\langle}P^{\tilde{\omega},B_{n}}_{-t,-s}g,f\big{\rangle}_{\ell^{2}(B_{n} )}-\big{\langle}g,P^{\omega,B_{n}}_{s,t}f\big{\rangle}_{\ell^{2}(B_{n})}\ =\ 0,\]
and by choosing \(f=\mathbbm{1}_{\{y\}}\) and \(g=\mathbbm{1}_{\{x\}}\) we obtain \(p^{\tilde{\omega},B_{n}}_{-t,-s}(y;x)=p^{\omega,B_{n}}_{s,t}(x;y)\). Finally, since, by a similar reasoning as in the proof of Lemma 2.3, \(\lim_{n\to\infty}p^{\omega,B_{n}}_{s,t}(x;y)=p^{\omega}_{s,t}(x;y)\) for all \(x,y\in\mathbb{Z}^{d}\), \(t\geq s\) and \(\omega\in\Omega\), the result follows by taking the limit \(n\to\infty\).
**Proposition A.3** (Forward equation).: _For \(\mathbb{P}\)-a.e. \(\omega\), every \(x\in\mathbb{Z}^{d}\), \(s\in\mathbb{R}\) and finitely supported \(f:\mathbb{Z}^{d}\to\mathbb{R}\), the map \(t\mapsto(P^{\omega}_{s,t}f)(x)\) is differentiable at almost every \(t\in(s,\infty)\) and_
\[\partial_{t}(P^{\omega}_{s,t}f)(x)\ =\ \big{(}P^{\omega}_{s,t}(\mathcal{L}^{ \omega}_{t}f)\big{)}(x),\qquad\text{for a.e. }t\in(s,\infty).\] (A.5)
_In particular, for \(\mathbb{P}\)-a.e. \(\omega\), the function \((t,x)\mapsto u(t,x)=p^{\omega}_{0,t}(0,x)\) solves_
\[\partial_{t}u(t,x)\ =\ (\mathcal{L}^{\omega}_{t}u(t,\cdot))(x),\qquad\forall\,x \in\mathbb{Z}^{d}\text{ and a.e. }t\in(0,\infty).\]
Proof.: This follows from the backward equation in Proposition A.1 and Lemma A.2. Indeed, let \(\tilde{\omega}\) be defined as in Lemma A.2, then we have for any \(f,g\in\ell^{2}(\mathbb{Z}^{d})\),
\[\big{\langle}P^{\omega}_{s,t}f,g\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}\ =\ \big{\langle}f,(P^{\omega}_{s,t})^{*}g\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}\ \stackrel{{\text{(A.4)}}}{{=}}\ \big{\langle}f,P^{\tilde{\omega}}_{-t,-s}g\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}.\] Differentiating the right-hand side in \(t\) and applying the backward equation (A.2) in the environment \(\tilde{\omega}\) (using that \(\mathcal{L}^{\tilde{\omega}}_{-t}=\mathcal{L}^{\omega}_{t}\) and that \(\mathcal{L}^{\omega}_{t}\) is self-adjoint on \(\ell^{2}(\mathbb{Z}^{d})\)) yields \(\partial_{t}\big{\langle}P^{\omega}_{s,t}f,g\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}=\big{\langle}P^{\omega}_{s,t}(\mathcal{L}^{\omega}_{t}f),g\big{\rangle}_{\ell^{2}(\mathbb{Z}^{d})}\) for a.e. \(t\in(s,\infty)\), which is (A.5). The second assertion follows from (A.5) with \(s=0\) and \(f=\mathds{1}_{\{x\}}\), using the symmetry \(\omega_{t}(x,y)=\omega_{t}(y,x)\).
## Appendix B Technical estimates
Proof of Lemma 2.8.: Fix some \(t>0\) and \(y\in\mathbb{Z}^{d}\), and set \(\mathbb{H}:=\{z\in\mathbb{Z}^{d}:|z|\leq|y-z|\}\). First, we claim that
\[k_{\alpha}(t/2,y-z)\ \leq\ 2^{d/2+\alpha/2}\,k_{\alpha}(t,y)\qquad\forall\,z \in\mathbb{H}.\] (B.1)
Indeed, in case that either \(t/2<|y|/2\) or \(t/2\geq|y-z|\), the estimate (B.1) follows directly from the definition of the kernel \(k_{\alpha}\), cf. (2.23), together with the fact that \(|y-z|\geq|y|/2\) for any \(z\in\mathbb{H}\). On the other hand, if \(|y|/2\leq t/2<|y-z|\) we get that
\[k_{\alpha}(t/2,y-z) \ =\ |y-z|^{-d/2-\alpha/2}\] \[\ \leq\ k_{\alpha}(t,y)\,\big{(}1\lor t\big{)}^{d/2}\big{(}1\lor|y |\big{)}^{\alpha/2}\,|y-z|^{-d/2-\alpha/2}\ \leq\ 2^{d/2+\alpha/2}\,k_{\alpha}(t,y),\]
where we used in the last step that \(0<t/2<|y-z|\) implies that \(|y-z|\geq 1\).
Let us now address the proof of (2.24). By symmetry, we obtain that
\[\sum_{z\in\mathbb{Z}^{d}}k_{\alpha}(t/2,z)\,k_{\alpha}(t/2,y-z)\ \leq\ 2\,\sum_{z\in\mathbb{H}}k_{\alpha}(t/2,z)\,k_{\alpha}(t/2,y-z).\]
Thus, in view of (B.1), the assertion of Lemma 2.8 will follow, once we have shown that for any \(\alpha>d\) there exists \(c<\infty\) such that, for any \(t>0\),
\[\sum_{z\in\mathbb{Z}^{d}}k_{\alpha}(t,z)\ =\ \sum_{|z|\leq t}\big{(}1\lor t\big{)}^ {-d/2}\,\bigg{(}1\vee\frac{|z|}{\sqrt{t}}\bigg{)}^{-\alpha}+\sum_{|z|>t}|z|^{- d/2-\alpha/2}\ \leq\ c.\] (B.2)
Note that for any \(t<1\) the first sum consists of a single element, namely \(z=0\), and is therefore bounded from above by \(1\). On the other hand, for any \(t\geq 1\), we can rewrite the first sum on the right-hand side of (B.2) as
\[\sum_{|z|\leq\sqrt{t}}t^{-d/2}+\sum_{\sqrt{t}<|z|\leq t}t^{-d/2}\,\bigg{(}\frac{|z|}{\sqrt{t}}\bigg{)}^{-\alpha}\;\leq\;c\,+\,c\,t^{(\alpha-d)/2}\sum_{\sqrt{t}<|z|\leq t}|z|^{-\alpha}\;\leq\;c,\]
since the number of lattice points with \(|z|\leq\sqrt{t}\) is of order \(t^{d/2}\) and \(\alpha>d\). Likewise, the second sum on the right-hand side of (B.2) is bounded by a constant, since \(d/2+\alpha/2>d\). This proves (B.2) and completes the proof.
Acknowledgment.: This work was initiated while Jean-Dominique Deuschel was visiting RIMS in Kyoto. The research of Takashi Kumagai is supported by JSPS KAKENHI Grant Number JP22H00099 and by the Alexander von Humboldt Foundation.
|
2302.14230 | Optimal Priors for the Discounting Parameter of the Normalized Power
Prior | The power prior is a popular class of informative priors for incorporating
information from historical data. It involves raising the likelihood for the
historical data to a power, which acts as discounting parameter. When the
discounting parameter is modelled as random, the normalized power prior is
recommended. In this work, we prove that the marginal posterior for the
discounting parameter for generalized linear models converges to a point mass
at zero if there is any discrepancy between the historical and current data,
and that it does not converge to a point mass at one when they are fully
compatible. In addition, we explore the construction of optimal priors for the
discounting parameter in a normalized power prior. In particular, we are
interested in achieving the dual objectives of encouraging borrowing when the
historical and current data are compatible and limiting borrowing when they are
in conflict. We propose intuitive procedures for eliciting the shape parameters
of a beta prior for the discounting parameter based on two minimization
criteria, the Kullback-Leibler divergence and the mean squared error. Based on
the proposed criteria, the optimal priors derived are often quite different
from commonly used priors such as the uniform prior. | Yueqi Shen, Luiz M. Carvalho, Matthew A. Psioda, Joseph G. Ibrahim | 2023-02-28T01:28:18Z | http://arxiv.org/abs/2302.14230v2 | # Optimal Priors for the Discounting Parameter of the Normalized Power Prior
###### Abstract
The power prior is a popular class of informative priors for incorporating information from historical data. It involves raising the likelihood for the historical data to a power, which acts as discounting parameter. When the discounting parameter is modelled as random, the normalized power prior is recommended. In this work, we prove that the marginal posterior for the discounting parameter for generalized linear models converges to a point mass at zero if there is any discrepancy between the historical and current data, and that it does not converge to a point mass at one when they are fully compatible. In addition, we explore the construction of optimal priors for the discounting parameter in a normalized power prior. In particular, we are interested in achieving the dual objectives of encouraging borrowing when the historical and current data are compatible and limiting borrowing when they are in conflict. We propose intuitive procedures for eliciting the shape parameters of a beta prior for the discounting parameter based on two minimization criteria, the Kullback-Leibler divergence and the mean squared error. Based on the proposed criteria, the optimal priors derived are often quite different from commonly used priors such as the uniform prior.
_Keywords--_ Bayesian analysis; Clinical trial; Normalized power prior; Power prior.
## 1 Introduction
The power prior (Ibrahim and Chen, 2000) is a popular class of informative priors that allow the incorporation of historical data through a tempering of the likelihood. It is constructed by raising the historical data likelihood to a power \(a_{0}\), where \(0\leq a_{0}\leq 1\). The discounting parameter
can be fixed or modelled as random. When it is modelled as random and estimated jointly with other parameters of interest, the normalized power prior (Duan et al., 2006) is recommended as it appropriately accounts for the normalizing constant necessary for forming the correct joint prior distribution (Neuenschwander et al., 2009). Many extensions of the power prior and the normalized power prior have been developed. Banbeta et al. (2019) develop the dependent and robust dependent normalized power priors which allow dependent discounting parameters for multiple historical datasets. When the historical data model contains only a subset of covariates currently of interest and the historical information may not be equally informative for all parameters in the current analysis, Boonstra and Barbaro (2020) propose an extension of the power prior that adaptively combines a prior based upon the historical information with a variance-reducing prior that shrinks parameter values toward zero.
The power prior and the normalized power prior have been shown to have several desirable properties. Ibrahim et al. (2003) show that the power prior defines an optimal class of priors in the sense that it minimizes a convex combination of Kullback-Leibler (KL) divergences between a distribution based on no incorporation of historical data and a distribution based on completely pooling the historical and current data. Ye et al. (2022) prove that the normalized power prior minimizes the expected weighted KL divergence similar to the one in Ibrahim et al. (2003) with respect to the marginal distribution of the discounting parameter. They also prove that if the prior on \(a_{0}\) is non-decreasing and if the difference between the sufficient statistics of the historical and current data is negligible from a practical standpoint, the marginal posterior mode of \(a_{0}\) is close to one. Carvalho and Ibrahim (2021) show that the normalized power prior is always well-defined when the initial prior is proper, and that, viewed as a function of the discounting parameter, the normalizing constant is a smooth and strictly convex function. Neelon and O'Malley (2010) show through simulations that for large datasets, the normalized power prior may result in more downweighting of the historical data than desired. Han et al. (2022) point out that the normalizing constant might be infinite for \(a_{0}\) values close to zero with conventionally used improper priors on \(a_{0}\), in which case the optimal \(a_{0}\) value might be lower than the suggested value. Despite the aforementioned research, there has not been any theoretical investigation of the asymptotic properties of the normalized power prior when the historical and current datasets are discrepant.
Many empirical Bayes-type approaches have been developed to adaptively determine the discounting parameter. For example, Gravestock and Held (Gravestock et al., 2017; Gravestock and Held, 2019) propose to set \(a_{0}\) to the value that maximizes the marginal likelihood. Liu (2018) proposes choosing \(a_{0}\) based on the p-value for testing the compatibility of the current and historical data. Bennett et al. (2021) propose using an equivalence probability weight and a weight based on tail area probabilities to assess the degree of agreement between the historical and current control data for cases with binary outcomes. Pan et al. (2017) propose the calibrated power prior, where \(a_{0}\) is defined as a function of a congruence measure between the historical and current data. The function which links \(a_{0}\) and the congruence measure is prespecified and calibrated through simulation. While these empirical Bayes approaches shed light on the choice of \(a_{0}\), there has not been any fully Bayesian approach based on an optimal prior on \(a_{0}\).
In this work, we first explore the asymptotic properties of the normalized power prior when the historical and current data are fully compatible (i.e., the sufficient statistics of the two datasets are equal) or incompatible (i.e., the sufficient statistics of the two datasets have some non-zero difference). We prove that for generalized linear models (GLMs) utilizing a normalized power prior, the marginal posterior distribution of \(a_{0}\) converges to a point mass at zero if there is any discrepancy between the historical and current data. When the historical and current data are fully compatible, the asymptotic distribution of the marginal posterior of \(a_{0}\) is derived for GLMs; we note that it
does not concentrate around one. Secondly, we propose a novel fully Bayesian approach to elicit the shape parameters of the beta prior on \(a_{0}\) based on two optimality criteria, Kullback-Leibler (KL) divergence and mean squared error (MSE). For the first criterion, we propose as optimal the beta prior whose shape parameters result in a minimized weighted average of KL divergences between the marginal posterior for \(a_{0}\) and user-specified target distributions based on hypothetical scenarios where there is no discrepancy and where there is a maximum tolerable discrepancy. This class of priors on \(a_{0}\) based on the KL criterion is optimal in the sense that it is the best possible beta prior at balancing the dual objectives of encouraging borrowing when the historical and current data are compatible and limiting borrowing when they are in conflict. For the second criterion, we propose as optimal the beta prior whose shape parameters result in a minimized weighted average of the MSEs based on the posterior mean of the parameter of interest when its hypothetical true value is equal to its estimate using the historical data, or when it differs from its estimate by the maximum tolerable amount. We study the properties of the proposed approaches _via_ simulations for the _i.i.d._ normal and Bernoulli cases as well as for the normal linear model. Two real-world case studies of clinical trials with binary outcomes and covariates demonstrate the performance of the optimal priors compared to conventionally used priors on \(a_{0}\), such as a uniform prior.
## 2 Asymptotic Properties of the Normalized Power Prior
Let \(D\) denote the current data and \(D_{0}\) denote the historical data. Let \(\theta\) denote the model parameters and \(L(\theta|D)\) denote a general likelihood function. The power prior (Ibrahim and Chen, 2000) is formulated as
\[\pi(\theta|D_{0},a_{0})\propto L(\theta|D_{0})^{a_{0}}\pi_{0}( \theta),\]
where \(0\leq a_{0}\leq 1\) is the discounting parameter which discounts the historical data likelihood, and \(\pi_{0}(\theta)\) is the initial prior for \(\theta\). The discounting parameter \(a_{0}\) can be fixed or modelled as random. Modelling \(a_{0}\) as random allows researchers to account for uncertainty when discounting historical data and to adaptively learn the appropriate level of borrowing. Duan et al. (2006) propose the _normalized power prior_, given by
\[\pi(\theta,a_{0}|D_{0})=\pi(\theta|D_{0},a_{0})\pi(a_{0})=\frac{L (\theta|D_{0})^{a_{0}}\pi_{0}(\theta)}{c(a_{0})}\pi(a_{0}), \tag{1}\]
where \(c(a_{0})=\int L(\theta|D_{0})^{a_{0}}\pi_{0}(\theta)d\theta\) is the normalizing constant. The normalized power prior is thus composed of a conditional prior for \(\theta\) given \(a_{0}\) and a marginal prior for \(a_{0}\).
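As a small, conjugate illustration (a sketch under assumptions not made above: Bernoulli observations and a beta initial prior \(\pi_{0}\)), the normalizing constant \(c(a_{0})\) is a Beta function, so the conditional prior \(\pi(\theta|D_{0},a_{0})\) is itself a beta density and the joint prior (1) can be evaluated directly. Here \(s_{0}\) denotes the number of historical successes; all names in the code are ours.

```r
# Normalized power prior in the conjugate Bernoulli/beta case (illustrative sketch).
# pi0 = beta(nu1, nu2); c(a0) = Beta(a0*s0 + nu1, a0*(n0 - s0) + nu2).
npp_conditional <- function(theta, a0, s0, n0, nu1 = 1, nu2 = 1) {
  # L(theta | D0)^a0 * pi0(theta) / c(a0) is again a beta density:
  dbeta(theta, a0 * s0 + nu1, a0 * (n0 - s0) + nu2)
}
npp_joint <- function(theta, a0, s0, n0, alpha0 = 1, beta0 = 1, nu1 = 1, nu2 = 1) {
  npp_conditional(theta, a0, s0, n0, nu1, nu2) * dbeta(a0, alpha0, beta0)   # times pi(a0)
}
# e.g. 12 successes out of 30 historical observations, a0 fixed at 0.5:
# curve(npp_conditional(x, a0 = 0.5, s0 = 12, n0 = 30), from = 0, to = 1)
```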
Ideally, the posterior distribution of \(a_{0}\) with the normalized power prior would asymptotically concentrate around zero when the historical and current data are in conflict, and around one when they are compatible. In this section, we study the asymptotic properties of the normalized power prior for the exponential family of distributions as well as GLMs. Specifically, we are interested in exploring the asymptotic behaviour of the posterior distribution of \(a_{0}\) when the historical and current data are incompatible and when they are compatible, respectively.
### Exponential Family
First, we study the asymptotic properties of the normalized power prior for the exponential family of distributions. The density of a random variable \(Y\) in the one-parameter exponential family has the form
\[p(y|\theta)=q(y)\exp\left(y\theta-b(\theta)\right), \tag{2}\]
where \(\theta\) is the canonical parameter and \(q(\cdot)\) and \(b(\cdot)\) are known functions. Suppose \(D=(y_{1},\ldots,y_{n})\) is a sample of \(n\)_i.i.d._ observations from an exponential family distribution in the form of (2). The likelihood is then given by
\[L(\theta|D)=Q(D)\exp\left(\sum_{i=1}^{n}y_{i}\theta-nb(\theta)\right),\]
where \(Q(D)=\prod_{i=1}^{n}q(y_{i})\). Suppose \(D_{0}=(y_{01},\ldots,y_{0n_{0}})\) is a sample of \(n_{0}\)_i.i.d._ observations from the same exponential family. The likelihood for the historical data raised to the power \(a_{0}\) is
\[[L(\theta|D_{0})]^{a_{0}}=Q(D_{0})^{a_{0}}\exp\left(a_{0}\left[\sum_{i=1}^{n_{ 0}}y_{0i}\theta-n_{0}b(\theta)\right]\right),\]
where \(Q(D_{0})=\prod_{i=1}^{n_{0}}q(y_{0i})\). Using the normalized power prior defined in (1), the joint posterior of \(\theta\) and \(a_{0}\) is given by
\[\pi(\theta,a_{0}|D,D_{0})\propto L(\theta|D)\pi(\theta,a_{0}|D_{0})=L(\theta| D)\frac{L(\theta|D_{0})^{a_{0}}\pi_{0}(\theta)}{c(a_{0})}\pi(a_{0}).\]
The marginal posterior of \(a_{0}\) is given by
\[\pi(a_{0}|D,D_{0})=\int\pi(\theta,a_{0}|D,D_{0})d\theta\propto\int L(\theta|D )\frac{L(\theta|D_{0})^{a_{0}}\pi_{0}(\theta)}{c(a_{0})}\pi(a_{0})d\theta. \tag{3}\]
With these calculations in place, the question now arises as to what prior should be given to \(a_{0}\). One commonly used class of priors on \(a_{0}\) is the beta distribution (Ibrahim and Chen, 2000). Let \(\alpha_{0}\) and \(\beta_{0}\) denote the shape parameters of the beta distribution. We first prove that the marginal posterior of \(a_{0}\) (3) with \(\pi(a_{0})=\text{beta}(\alpha_{0},\beta_{0})\) converges to a point mass at zero for a fixed, non-zero discrepancy between \(\bar{y}\) and \(\bar{y}_{0}\).
**Theorem 2.1**.: _Suppose \(y_{1},\ldots,y_{n}\) and \(y_{01},\ldots,y_{0n_{0}}\) are independent observations from the same exponential family distribution (2). Suppose also that the difference in the estimates of the canonical parameter \(\theta\) is fixed and equal to \(\delta\), i.e., \(|\dot{b}^{-1}(\bar{y})-\dot{b}^{-1}(\bar{y}_{0})|=\delta\), and \(\frac{n_{0}}{n}=r\), where \(\delta>0\) and \(r>0\) are constants, and \(\dot{b}(\cdot)=\partial_{\theta}b(\cdot)\). Then, the marginal posterior of \(a_{0}\) using the normalized power prior (3) with a \(\text{beta}(\alpha_{0},\beta_{0})\) prior on \(a_{0}\) converges to a point mass at \(0\). That is, \(\lim\limits_{n\to\infty}\frac{\int_{0}^{\epsilon}\pi(a_{0}|D,D_{0},\alpha_{0},\beta_{0})da_{0}}{\int_{0}^{1}\pi(a_{0}|D,D_{0},\alpha_{0},\beta_{0})da_{0}}=1\) for any \(\epsilon>0\)._
Proof.: See section A.2.
Theorem 2.1 asserts that the normalized power prior is sensitive to any discrepancy between the sufficient statistics in large samples, as the mass of the marginal distribution of \(a_{0}\) will concentrate near zero as the sample size increases for any fixed difference \(\delta\). The natural question to then ask is whether Theorem 2.1 has a sort of converse in that the posterior should concentrate around one under compatibility. We derive the asymptotic marginal posterior distribution of \(a_{0}\) when \(\bar{y}=\bar{y}_{0}\) and show that it does not converge to a point mass at one.
**Corollary 2.1**.: _Suppose \(y_{1},\ldots,y_{n}\) and \(y_{01},\ldots,y_{0n_{0}}\) are independent observations from the same exponential family distribution (2). Suppose \(\bar{y}=\bar{y}_{0}\) and \(\frac{n_{0}}{n}=r\) where \(r>0\) is a constant. The marginal posterior of \(a_{0}\) using the normalized power prior, as specified in (3), converges to_
\[\tilde{\pi}(a_{0}|D,D_{0})=\frac{\sqrt{\frac{ra_{0}}{ra_{0}+1}}\pi(a_{0})}{\int _{0}^{1}\sqrt{\frac{ra_{0}}{ra_{0}+1}}\pi(a_{0})da_{0}}\]
_as \(n\to\infty\)._
Proof.: See section A.3.
Corollary 2.1 shows that the normalized power prior fails to fully utilize the historical data when the means of the historical data and the current data are equal for a generic, non-degenerate prior on \(a_{0}\). However, if \(\pi(a_{0})\) is chosen to be concentrated near one, then the marginal posterior of \(a_{0}\) may be concentrated near one.
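To make the two regimes concrete, consider the normal case with known variance \(\sigma^{2}\) and, purely as an illustration, a flat initial prior on \(\mu\) (an assumption not needed for the results above). Completing the square in both integrals of (3) gives, up to a constant,

\[\pi(a_{0}|D,D_{0})\;\propto\;\pi(a_{0})\,\sqrt{\frac{a_{0}n_{0}}{n+a_{0}n_{0}}}\,\exp\left\{-\frac{n\,a_{0}n_{0}\,(\bar{y}-\bar{y}_{0})^{2}}{2\sigma^{2}(n+a_{0}n_{0})}\right\}.\]

When \(\bar{y}=\bar{y}_{0}\) and \(n_{0}=rn\), the exponential factor equals one and the expression reduces to \(\pi(a_{0})\sqrt{ra_{0}/(ra_{0}+1)}\), the limit in Corollary 2.1. For a fixed difference \(|\bar{y}-\bar{y}_{0}|=\delta>0\), the exponent tends to \(-\infty\) linearly in \(n\) for every fixed \(a_{0}>0\), which is the mechanism behind Theorem 2.1.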
### Generalized Linear Models
The ability to deal with non _i.i.d._ data and incorporate covariates is crucial to the applicability of the normalized power prior; we thus now extend these results to generalized linear models (GLMs). We first define the GLM with a canonical link and fixed dispersion parameter. Let \(y_{i}\) denote the response variable and \(x_{i}\) denote a \(p\)-dimensional vector of covariates for subject \(i=1,\ldots,n\). Let \(\beta=(\beta_{1},\ldots,\beta_{p})^{\prime}\) be a \(p\)-dimensional vector of regression coefficients. The GLM with a canonical link is given by
\[p(y_{i}|x_{i},\beta,\phi)=q(y_{i},\phi)\exp\{\phi^{-1}[y_{i}x_{i}^{\prime}\beta -b(x_{i}^{\prime}\beta)]\}. \tag{4}\]
Without loss of generality, we assume \(\phi=1\). Let \(D=\{(y_{i},x_{i}),i=1,\ldots,n\}\equiv(n,Y_{n\times 1},X_{n\times p})\) where \(Y=(y_{1},\ldots,y_{n})^{\prime}\) and \(X=(x_{1},\ldots,x_{n})^{\prime}\). Assuming the \(y_{i}\)'s are (conditionally) independent, the likelihood is given by
\[L(\beta|D)=Q(Y)\exp\left(\sum_{i=1}^{n}y_{i}x_{i}^{\prime}\beta-\sum_{i=1}^{n} b(x_{i}^{\prime}\beta)\right),\]
where \(Q(Y)=\prod_{i=1}^{n}q(y_{i},1)\). Let \(\hat{\beta}\) denote the posterior mode of \(\beta\) obtained by solving \(\partial_{\beta}\log L(\beta|D)=0\). Let \(D_{0}=\{(y_{0i},x_{0i}),i=1,\ldots,n_{0}\}\equiv(n_{0},Y_{0n_{0}\times 1},X_{0n_{0}\times p})\) where \(Y_{0}=(y_{01},\ldots,y_{0n_{0}})^{\prime}\) and \(X_{0}=(x_{01},\ldots,x_{0n_{0}})^{\prime}\). Assuming the \(y_{0i}\)'s are (conditionally) independent, the historical data likelihood raised to the power \(a_{0}\) is given by
\[[L(\beta|D_{0})]^{a_{0}}=Q(Y_{0})^{a_{0}}\exp\left(a_{0}\left[\sum_{i=1}^{n_{0}}y_{0i}x_{0i}^{\prime}\beta-\sum_{i=1}^{n_{0}}b(x_{0i}^{\prime}\beta)\right]\right),\]
where \(Q(Y_{0})=\prod_{i=1}^{n_{0}}q(y_{0i},1)\). Let \(c^{*}(a_{0})=\int L(\beta|D_{0})^{a_{0}}\pi_{0}(\beta)d\beta\). Using the normalized power prior defined in (1), the joint posterior of \(\beta\) and \(a_{0}\) is given by
\[\pi(\beta,a_{0}|D,D_{0})\propto L(\beta|D)\pi(\beta,a_{0}|D_{0})=L(\beta|D) \frac{L(\beta|D_{0})^{a_{0}}\pi_{0}(\beta)}{c^{*}(a_{0})}\pi(a_{0}).\]
Let \(\hat{\beta}_{0}\) denote the posterior mode of \(\beta\) obtained by solving \(\partial_{\beta}\log\left[\frac{L(\beta|D_{0})^{a_{0}}\pi_{0}(\beta)}{c^{*}(a_{0})}\right]=0.\) The marginal posterior of \(a_{0}\) is given by
\[\pi(a_{0}|D,D_{0})=\int\pi(\beta,a_{0}|D,D_{0})d\beta\propto\int L(\beta|D) \frac{L(\beta|D_{0})^{a_{0}}\pi_{0}(\beta)}{c^{*}(a_{0})}\pi(a_{0})d\beta. \tag{5}\]
Now we extend Theorem 2.1 to GLMs.
**Theorem 2.2**.: _Suppose \(X\) is \(n\times p\) of rank \(p\) and \(X_{0}\) is \(n_{0}\times p\) of rank \(p\). Suppose \(\hat{\beta}-\hat{\beta}_{0}=\delta\) where \(\delta\neq 0\) is a constant vector, and \(\frac{n_{0}}{n}=r\) where \(r>0\) is a constant scalar. Assume \(n\left[\frac{\partial^{2}\log[L(\beta|D)]}{\partial\beta_{i}\partial\beta_{j}}\right]^{-1}\) and \(n_{0}a_{0}\left[\frac{\partial^{2}\log[L(\beta|D_{0})^{a_{0}}\pi_{0}(\beta)]}{\partial\beta_{i}\partial\beta_{j}}\right]^{-1}\) do not depend on \(n\) and \(a_{0}\). Then, the marginal posterior of \(a_{0}\) using the normalized power prior (5) with a \(\mathrm{beta}(\alpha_{0},\beta_{0})\) prior on \(a_{0}\) converges to a point mass at zero. That is, \(\lim\limits_{n\to\infty}\frac{\int_{0}^{\epsilon}\pi(a_{0}|D,D_{0},\alpha_{0},\beta_{0})da_{0}}{\int_{0}^{1}\pi(a_{0}|D,D_{0},\alpha_{0},\beta_{0})da_{0}}=1\) for any \(\epsilon>0\)._
Proof.: See section A.4.
Theorem 2.2 asserts that the normalized power prior is sensitive to discrepancies in the historical and current data in the presence of covariates. The mass of the marginal distribution of \(a_{0}\) will concentrate near zero as the sample size increases for any fixed discrepancy between the historical and current data, assuming \(\frac{1}{n}X^{\prime}X\) and \(\frac{1}{n_{0}}X^{\prime}_{0}X_{0}\) are fixed, i.e., \(n\left[\frac{\partial^{2}\log[L(\beta|D)]}{\partial\beta_{i}\partial\beta_{j}} \right]^{-1}\) and \(n_{0}a_{0}\left[\frac{\partial^{2}\log[L(\beta|D_{0})^{a_{0}}\pi_{0}(\beta)]}{ \partial\beta_{i}\partial\beta_{j}}\right]^{-1}\) do not depend on \(n\) and \(a_{0}\). Next, we derive the asymptotic marginal posterior distribution of \(a_{0}\) when the sufficient statistics and covariate (design) matrices of the historical and current data equal.
**Corollary 2.2**.: _Suppose \(X\) is \(n\times p\) of rank \(p\) and \(X_{0}\) is \(n_{0}\times p\) of rank \(p\). Let \(Y=(y_{1},\ldots,y_{n})^{\prime}\) and \(Y_{0}=(y_{01},\ldots,y_{0n_{0}})^{\prime}\). Consider the GLM in (4). If \(n=n_{0}\), \(X=X_{0}\), and \(X^{\prime}Y=X^{\prime}_{0}Y_{0}\), then the marginal posterior of \(a_{0}\) using the normalized power prior, as specified in (5), converges to_
\[\tilde{\pi}(a_{0}|X,Y,X_{0},Y_{0})=\frac{\left(\frac{a_{0}}{a_{0}+1}\right)^{ \frac{p}{2}}\pi(a_{0})}{\int_{0}^{1}\left(\frac{a_{0}}{a_{0}+1}\right)^{\frac{p }{2}}\pi(a_{0})da_{0}},\]
_as \(n\to\infty\)._
Proof.: See section A.5.
Corollary 2.2 states that the marginal posterior of \(a_{0}\) using the normalized power prior does not converge to a point mass at one when the sufficient statistics and the covariates of the historical and current data are equal. We also observe that as \(p\) approaches infinity, the marginal posterior of \(a_{0}\) specified above converges to a point mass at one. The form of the asymptotic marginal posterior of \(a_{0}\) suggests that the normalized power prior may be sensitive to overfitting when the historical and current datasets are compatible.
In Theorem 2.3 we also relax the previous result by deriving the asymptotic marginal posterior distribution of \(a_{0}\) assuming only that the sufficient statistics of the historical and current data are equal. This means that the covariate matrices need not be equal so long as the sufficient statistics \(X^{\prime}Y\) and \(X^{\prime}_{0}Y_{0}\) are, increasing the applicability of the result.
**Theorem 2.3**.: _Suppose \(X\) is \(n\times p\) of rank \(p\) and \(X_{0}\) is \(n_{0}\times p\) of rank \(p\). Let \(Y=(y_{1},\ldots,y_{n})^{\prime}\) and \(Y_{0}=(y_{01},\ldots,y_{0n_{0}})^{\prime}\). Consider the GLM in (4), where \(\frac{n_{0}}{n}=r\) and \(r>0\) is a constant. If \(X^{\prime}Y=X^{\prime}_{0}Y_{0}\) and \(X\neq X_{0}\), then the marginal posterior of \(a_{0}\) using the normalized power prior, as specified in (5), is asymptotically proportional to_
\[\pi(a_{0})\cdot\frac{|\hat{\Sigma}_{g}|^{1/2}}{|\hat{\Sigma}_{k}|^{1/2}}\exp \left\{-n[g_{n}(\hat{\beta})-k_{n}(\tilde{\beta})]\right\},\]
_where the definitions of \(g_{n}(\beta)\), \(k_{n}(\beta)\) and \(\frac{|\hat{\Sigma}_{g}|^{1/2}}{|\hat{\Sigma}_{k}|^{1/2}}\) can be found in section A.5 of the appendix._
Proof.: See section A.6.
Corollary 2.2 and Theorem 2.3 show that, for GLMs, the marginal posterior of \(a_{0}\) using the normalized power prior does not converge to a point mass at one when the sufficient statistics of the historical and current data are equal. From Theorems 2.1-2.3, we conclude that, asymptotically, the normalized power prior is sensitive to discrepancies between the historical and current data, but cannot fully utilize the historical information when there are no discrepancies.
## 3 Optimal Beta Priors for \(a_{0}\)
### Kullback-Leibler Divergence Criterion
In this section, we propose a prior based on minimizing the KL divergence of the marginal posterior of \(a_{0}\) to two reference distributions. This prior is optimal in the sense that it is the best possible beta prior at balancing the dual objectives of encouraging borrowing when the historical and current data are compatible and limiting borrowing when they are in conflict.
Let \(\bar{y}_{0}\) denote the mean of the historical data and \(\bar{y}\) denote the mean of the hypothetical current data. Let \(\pi_{1}(a_{0})\equiv\text{beta}(c,1)\) (\(c\gg 1\) is fixed) and \(\pi_{2}(a_{0})\equiv\text{beta}(1,c)\). The distributions \(\pi_{1}(a_{0})\) and \(\pi_{2}(a_{0})\) represent two ideal scenarios, where \(\pi_{1}(a_{0})\) is concentrated near one and \(\pi_{2}(a_{0})\) is concentrated near zero. The KL-based approach computes the hyperparameters (\(\alpha_{0}\) and \(\beta_{0}\)) for the beta prior on \(a_{0}\) that will minimize a convex combination of two KL divergences; one is the KL divergence between \(\pi_{1}(a_{0})\) and the marginal posterior of \(a_{0}\) when \(\bar{y}=\bar{y}_{0}\), while the other is the KL divergence between \(\pi_{2}(a_{0})\) and the marginal posterior of \(a_{0}\) when there is a user-specified difference between \(\bar{y}\) and \(\bar{y}_{0}\).
Let \(d=\bar{y}-\bar{y}_{0}\), representing the difference between the means of the hypothetical current data and the historical data. Our approach is centered on a user-specified **maximum tolerable difference** (MTD), \(d_{\text{MTD}}\). Let \(\pi^{*}(a_{0})\) denote the marginal posterior of \(a_{0}\) when \(d=0\). Let \(\pi_{\text{MTD}}(a_{0})\) denote the marginal posterior of \(a_{0}\) when \(d=d_{\text{MTD}}\). For \(d=0\), we want \(\pi^{*}(a_{0})\) to resemble \(\pi_{1}(a_{0})\) and for \(d=d_{\text{MTD}}\), we want \(\pi_{\text{MTD}}(a_{0})\) to resemble \(\pi_{2}(a_{0})\). The distributions \(\pi_{1}(a_{0})\) and \(\pi_{2}(a_{0})\) have been chosen to correspond to cases with substantial and little borrowing, respectively. Therefore, our objective is to solve for \(\alpha_{0}>0\) and \(\beta_{0}>0\) to minimize
\[K(\alpha_{0},\beta_{0})=wKL(\pi^{*}(a_{0}),\pi_{1}(a_{0}))+(1-w)KL(\pi_{\text {MTD}}(a_{0}),\pi_{2}(a_{0})).\]
Here \(0<w<1\) is a scalar and \(KL(p,q)\) for distributions \(P\) and \(Q\) with P as reference is defined as
\[KL(p,q)=\int\log\left(\frac{p(x)}{q(x)}\right)dP(x)=E_{p}[\log(p)]-E_{p}[\log (q)].\]
The scalar \(w\) weights the two competing objectives. For \(w>0.5\), the objective to encourage borrowing is given more weight, and for \(w<0.5\), the objective to limit borrowing is given more weight.
Below we demonstrate the simulation results using this method for the _i.i.d._ normal case, the _i.i.d._ Bernoulli case and the normal linear model. We compare the marginal posterior of \(a_{0}\) using the KL-based optimal prior with that using the uniform prior. For all simulations in this section, we choose \(w=0.5\) so that the two competing objectives are given equal weight. We choose \(c=10\) so that \(\pi_{1}(a_{0})\) and \(\pi_{2}(a_{0})\) represent cases with substantial and little borrowing, respectively.
#### 3.1.1 Normal _i.i.d._ Case
We assume \(y_{1},\ldots,y_{n}\) and \(y_{01},\ldots,y_{0n_{0}}\) are _i.i.d._ observations from N(\(\mu\), \(\sigma^{2}\)) where \(\sigma^{2}=1\). We choose \(\bar{y}_{0}=1.5\) and \(n=n_{0}=30\). The objective function \(K(\cdot,\cdot)\) is computed using numerical integration and optimization is performed using the optim() function in (base) R (R Core Team, 2022).
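The sketch below (ours, not the authors' code) shows one way this optimization can be set up, under the additional simplification of a flat initial prior on \(\mu\) so that the marginal posterior of \(a_{0}\) has the closed form shown in the illustration after Corollary 2.1. The log-scale parameterization and the default values are illustrative, and the code is not tuned for numerical robustness.

```r
# KL-criterion optimization for the normal i.i.d. case (illustrative sketch).
marg_post_a0 <- function(a0, d, shape1, shape2, n = 30, n0 = 30, sigma2 = 1) {
  dbeta(a0, shape1, shape2) * sqrt(a0 * n0 / (n + a0 * n0)) *
    exp(-n * a0 * n0 * d^2 / (2 * sigma2 * (n + a0 * n0)))    # d = ybar - ybar0
}
kl_to_beta <- function(p_un, s1, s2) {              # KL(p || beta(s1, s2)), p unnormalized
  Z <- integrate(p_un, 0, 1)$value
  integrate(function(a) {
    p <- p_un(a) / Z
    ifelse(p > 0, p * (log(p) - dbeta(a, s1, s2, log = TRUE)), 0)
  }, 0, 1)$value
}
K_obj <- function(par, d_MTD, w = 0.5, conc = 10) { # par = log of the beta shape parameters
  s <- exp(par)
  w * kl_to_beta(function(a) marg_post_a0(a, 0, s[1], s[2]), conc, 1) +
    (1 - w) * kl_to_beta(function(a) marg_post_a0(a, d_MTD, s[1], s[2]), 1, conc)
}
fit <- optim(c(0, 0), K_obj, d_MTD = 1)             # optimal (alpha0, beta0) = exp(fit$par)
```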
In Figure 1, the first figure of each row plots the historical and current data likelihoods if the hypothetical degree of conflict is equal to \(d_{\text{MTD}}\). For each row of the figure below, the maximum tolerable difference \(d_{\text{MTD}}\) is chosen to be 0.5, 1 and 1.5, and the corresponding optimal prior is derived for each value of \(d_{\text{MTD}}\). For each optimal prior, we vary the observed sample mean, denoted
by \(\bar{y}_{\rm obs}\), to evaluate the posterior based on the optimal prior for different observed current data. We use \(d_{\rm obs}=\bar{y}_{\rm obs}-\bar{y}_{0}\) to represent the difference between the means of the observed current data and the historical data. For columns 2-4, \(d_{\rm obs}\) is chosen to be 0, 1 and 1.5, respectively. Note that the values of \(d_{\rm MTD}\) and \(d_{\rm obs}\) are relative to the choices of \(\sigma^{2}\), \(n\) and \(n_{0}\). For example, for larger \(n\), \(d_{\rm MTD}\) would need to be decreased to produce a similar plot to Figure 1.
From columns 2-4, we observe that when \(d_{\rm MTD}=0.5\), very little conflict is tolerated, and the resulting optimal prior does not strongly encourage either borrowing substantially or borrowing little. As \(d_{\rm MTD}\) becomes larger, larger conflict is allowed and the optimal prior shifts more towards \(\pi_{1}(a_{0})\). We also observe that when \(d_{\rm MTD}=1\) (the optimal hyperparameters are \(\alpha_{0}=1\) and \(\beta_{0}=0.4\)) and \(d_{\rm MTD}=1.5\) (the optimal hyperparameters are \(\alpha_{0}=2.6\) and \(\beta_{0}=0.5\)), the marginal posterior of \(a_{0}\) with the optimal prior more closely mimics the target distribution when \(d_{\rm obs}=0\), i.e., the observed current and historical data are fully compatible. As \(d_{\rm obs}\) increases, the marginal posterior shifts toward zero. This behaviour is highly desirable as it achieves both goals of encouraging borrowing when the datasets are compatible and limiting borrowing when they are incompatible.
We can compare the marginal posterior of \(a_{0}\) using the optimal prior with that using a uniform prior in Figure 1. We observe that while the marginal posterior on \(a_{0}\) with the uniform prior is very responsive to conflict, it does not concentrate around one even when the datasets are fully compatible. We conclude that when \(d_{\rm MTD}\) is chosen to be reasonably large, the optimal prior on \(a_{0}\) achieves a marginal posterior that is close to the target distribution when the datasets are fully compatible, while remaining responsive to conflict in the data.
Table 1 shows the posterior mean and variance of the mean parameter \(\mu\) for various combinations of \(d_{\rm MTD}\) and \(d_{\rm obs}\) values corresponding to the scenarios in Figure 1. The posterior mean and variance of \(\mu\) with a normalized power prior are computed using the R package _BayesPPD_ (Shen et al., 2022). Again, \(\bar{y}_{0}\) is fixed at 1.5. Since \(\bar{y}_{\rm obs}\geq\bar{y}_{0}\), within each row, the posterior mean of \(\mu\) is always smaller than \(\bar{y}_{\rm obs}\) due to the incorporation of \(\bar{y}_{0}\). We can also compare the results by column. Note that for fixed \(d_{\rm obs}\) (or equivalently \(\bar{y}_{\rm obs}\)), if more historical information is borrowed, the posterior mean of \(\mu\) will be smaller. When \(d_{\rm obs}=0\), the posterior mean stays constant while the variance decreases as \(d_{\rm MTD}\) increases. If the maximum tolerable difference is large, more historical information is borrowed, leading to reduced variance. When \(d_{\rm obs}=0.5\), the posterior mean of \(\mu\) decreases as more borrowing occurs when \(d_{\rm MTD}\) increases. When \(d_{\rm obs}=1\) or 1.5, the posterior mean of \(\mu\) first increases and then decreases as \(d_{\rm MTD}\) increases. This is a result of two competing phenomena interacting; as \(d_{\rm MTD}\) increases, the optimal prior gravitates towards encouraging borrowing; however, since \(d_{\rm obs}\) is very large, the marginal posterior of \(a_{0}\) moves toward zero even though the prior moves toward one. In conclusion, we argue that the posterior estimates of \(\mu\) with the optimal prior respond in a desirable fashion to changes in the data.
| | \(d_{\rm obs}=0\) | \(d_{\rm obs}=0.5\) | \(d_{\rm obs}=1\) | \(d_{\rm obs}=1.5\) |
| --- | --- | --- | --- | --- |
| \(d_{\rm MTD}=0.5\) | 1.5 (0.022) | 1.85 (0.026) | 2.32 (0.036) | 2.87 (0.036) |
| \(d_{\rm MTD}=1\) | 1.5 (0.019) | 1.82 (0.026) | 2.36 (0.042) | 2.93 (0.035) |
| \(d_{\rm MTD}=1.5\) | 1.5 (0.018) | 1.78 (0.020) | 2.19 (0.040) | 2.84 (0.039) |

Table 1: Posterior mean (variance) of \(\mu\) for the normal _i.i.d._ case
Figure 1: Simulation results for the normal _i.i.d._ case, where \(\sigma^{2}=1\), \(\bar{y}_{0}=1.5\) and \(n=n_{0}=30\). The first figure of each row plots the historical (black solid line) and current (black dashed line) data likelihoods if the hypothetical degree of conflict is equal to \(d_{\rm MTD}\). For each row of the figure, the maximum tolerable difference \(d_{\rm MTD}\) is chosen to be 0.5, 1 and 1.5, and the corresponding optimal prior (pink dotted line) is derived for each value of \(d_{\rm MTD}\). For each optimal prior, we vary \(d_{\rm obs}=\bar{y}_{\rm obs}-\bar{y}_{0}\) to evaluate the performance of the optimal prior for different observed data. For columns 2-4, \(d_{\rm obs}\) is chosen to be 0, 1 and 1.5, respectively. The black and blue curves correspond to \(\pi_{1}(a_{0})\equiv{\rm beta}(10,1)\) and \(\pi_{2}(a_{0})\equiv{\rm beta}(1,10)\), respectively. The purple dashed line represents the marginal posterior of \(a_{0}\) with the optimal prior for a given \(d_{\rm obs}\). The grey dashed line plots the marginal posterior of \(a_{0}\) with the uniform prior.
#### 3.1.2 Bernoulli Model
For the Bernoulli model, we assume \(y_{1},\ldots,y_{n}\) and \(y_{01},\ldots,y_{0n_{0}}\) are _i.i.d._ observations from a Bernoulli distribution with mean \(\mu\). Again, we choose \(n=n_{0}=30\) and optimization is performed analogously to the normal case.
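For the Bernoulli case the integrals over \(\mu\) in (3) are Beta functions, so the marginal posterior of \(a_{0}\) can be evaluated exactly. The short sketch below is our own illustration, assuming a beta\((\nu_{1},\nu_{2})\) initial prior on the success probability; it returns the marginal posterior up to a normalizing constant and can be plugged into the same KL objective as in the normal sketch above.

```r
# Marginal posterior of a0 for the Bernoulli case (illustrative sketch, up to a constant).
# s and s0 are the numbers of successes in the current and historical data.
marg_post_a0_bern <- function(a0, s, n, s0, n0, shape1, shape2, nu1 = 1, nu2 = 1) {
  dbeta(a0, shape1, shape2) *
    exp(lbeta(s + a0 * s0 + nu1, (n - s) + a0 * (n0 - s0) + nu2) -
        lbeta(a0 * s0 + nu1, a0 * (n0 - s0) + nu2))
}
# e.g. n = n0 = 30, historical mean 0.3 (s0 = 9), observed current mean 0.7 (s = 21):
# curve(marg_post_a0_bern(x, s = 21, n = 30, s0 = 9, n0 = 30, shape1 = 1, shape2 = 1), 0, 1)
```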
For each row of Figure 2 below, the maximum tolerable difference \(d_{\rm MTD}\) is chosen to be 0.2, 0.4 and 0.6, and the corresponding optimal prior is derived for each value of \(d_{\rm MTD}\). For each optimal prior, we vary the observed \(\bar{y}_{\rm obs}\) to evaluate the performance of the optimal prior for different observed data. For columns 2-4, \(d_{\rm obs}=\bar{y}_{\rm obs}-\bar{y}_{0}\) is chosen to be 0, 0.4 and 0.6, respectively. Values of \(\bar{y}_{0}\) and \(\bar{y}_{\rm obs}\) are chosen so that the variance stays constant for different values of \(d_{\rm MTD}\) or \(d_{\rm obs}\).
The optimal marginal prior and posterior of \(a_{0}\) for Bernoulli data are similar to those of the normal model. We observe that when the datasets are perfectly compatible, i.e., \(d_{\rm obs}=0\), the marginal posterior of \(a_{0}\) with the optimal prior concentrates around one when \(d_{\rm MTD}\) is relatively large. When \(d_{\rm obs}\) increases to 0.4 or 0.6, the marginal posterior of \(a_{0}\) concentrates around zero when \(d_{\rm MTD}\) is relatively large. The optimal prior becomes increasingly concentrated near one as \(d_{\rm MTD}\) increases. Compared to the marginal posterior with the uniform prior, the optimal prior on \(a_{0}\) achieves a marginal posterior that closely mimics the target distribution when the datasets are fully compatible, while remaining responsive to conflict in the data.
#### 3.1.3 Normal Linear Model
Suppose \(y_{01},\ldots,y_{0n_{0}}\) are independent observations from the historical data where \(y_{0i}\sim N(\beta_{0}+\beta_{1}x_{0i},\sigma^{2})\) for the \(i\)-th observation and \(x_{0i}\) is a single covariate. Also suppose \(y_{1},\ldots,y_{n}\) are independent observations from the current data where \(y_{j}\sim N(\beta_{0}+\beta_{1}x_{j}+d_{\rm MTD},\sigma^{2})\) for the \(j\)-th observation and \(x_{j}\) is a single covariate. We vary \(d_{\rm MTD}\) to represent different degrees of departure of the intercept of the simulated current data to the intercept of the historical data. We choose \(\beta_{0}=1.5\), \(\beta_{1}=-1\), \(\sigma^{2}=1\) and \(n=n_{0}=30\). We choose \(d_{\rm MTD}=0.1,0.5\), and 1 and \(d_{\rm obs}=0,0.5\), and 1. The objective function \(K\) is computed using Monte Carlo integration and optimization is performed using the optim() function in R.
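For the normal linear model with known \(\sigma^{2}\), the integrals over \(\beta\) in (5) are Gaussian, so the marginal posterior of \(a_{0}\) can also be evaluated by one-dimensional quadrature. The sketch below is our own illustration under a flat initial prior on \(\beta\); it returns the log marginal posterior of \(a_{0}\) up to an additive constant, and the prior shape parameters are named shape1 and shape2 to avoid clashing with the regression coefficients.

```r
# Log marginal posterior of a0 for the normal linear model (illustrative sketch).
log_marg_post_a0_lm <- function(a0, X, y, X0, y0, sigma2 = 1, shape1 = 1, shape2 = 1) {
  XtX   <- crossprod(X);  Xty   <- crossprod(X, y)
  X0tX0 <- crossprod(X0); X0ty0 <- crossprod(X0, y0)
  V <- XtX + a0 * X0tX0                            # combined precision matrix (up to 1/sigma2)
  m <- Xty + a0 * X0ty0
  rss_pool <- sum(y^2) + a0 * sum(y0^2) - drop(crossprod(m, solve(V, m)))
  rss_hist <- a0 * (sum(y0^2) - drop(crossprod(X0ty0, solve(X0tX0, X0ty0))))
  p <- ncol(X)
  dbeta(a0, shape1, shape2, log = TRUE) +          # log pi(a0)
    0.5 * (p * log(a0) +                           # contribution of c*(a0) and the V term
           as.numeric(determinant(X0tX0)$modulus) -
           as.numeric(determinant(V)$modulus)) -
    (rss_pool - rss_hist) / (2 * sigma2)
}
# e.g. evaluate on a grid of a0 values:
# lp <- sapply(seq(0.01, 1, length.out = 200), log_marg_post_a0_lm,
#              X = X, y = y, X0 = X0, y0 = y0)
```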
Figure 3 shows the optimal prior and optimal posterior for \(a_{0}\) as well as the posterior of \(a_{0}\) with the uniform prior for various \(d_{\rm MTD}\) and \(d_{\rm obs}\) values. We observe that when the datasets are perfectly compatible, i.e., \(d_{\rm obs}=0\), the marginal posterior of \(a_{0}\) with the optimal prior concentrates around one when \(d_{\rm MTD}\) is relatively large. When \(d_{\rm obs}\) increases to 1, the marginal posterior of \(a_{0}\) concentrates around zero. The optimal prior becomes increasingly concentrated near one as \(d_{\rm MTD}\) increases. Compared to the marginal posterior with the uniform prior, the optimal prior for \(a_{0}\) achieves a marginal posterior that closely mimics the target distribution when the datasets are fully compatible, while remaining responsive to conflict in the data.
### Mean Squared Error Criterion
In this section, we derive the optimal prior for \(a_{0}\) based on minimizing the MSE. This prior is optimal in the sense that it minimizes the weighted average of the MSEs of the posterior mean of the parameter of interest when its hypothetical true value is equal to its estimate using the historical data, or when it differs from its estimate by the maximum tolerable amount. Suppose \(y_{1},\ldots,y_{n}\) and \(y_{01},\ldots,y_{0n_{0}}\) are observations from a distribution with mean parameter \(\mu\). Let \(\mu^{*}\) denote the true value of \(\mu\). Let \(\bar{\mu}\) denote the posterior mean of \(\mu\) using the normalized power prior.
Figure 2: Simulation results for the Bernoulli _i.i.d._ case, where \(n=n_{0}=30\). The first figure of each row plots the historical (black solid line) and current (black dashed line) data likelihoods if the hypothetical degree of conflict is equal to \(d_{\rm MTD}\). For each row of the figure, the maximum tolerable difference \(d_{\rm MTD}\) is chosen to be 0.2, 0.4 and 0.6, and the corresponding optimal prior (pink dotted line) is derived for each value of \(d_{\rm MTD}\). For each optimal prior, we vary \(d_{\rm obs}=\bar{y}_{\rm obs}-\bar{y}_{0}\) to evaluate the performance of the optimal prior for different observed data. For columns 2-4, \(d_{\rm obs}\) is chosen to be 0, 0.4 and 0.6, respectively. The black and blue curves correspond to \(\pi_{1}(a_{0})\equiv{\rm beta}(10,1)\) and \(\pi_{2}(a_{0})\equiv{\rm beta}(1,10)\), respectively. The purple dashed line represents the marginal posterior of \(a_{0}\) with the optimal prior for a given \(d_{\rm obs}\). The grey dashed line plots the marginal posterior of \(a_{0}\) with the uniform prior.
Figure 3: Simulation results for the normal linear model with one covariate where \(\beta_{0}=1.5\), \(\beta_{1}=-1\), \(\sigma^{2}=1\) and \(n=n_{0}=30\). The first figure of each row shows the historical (black solid line) and current (black dashed line) data likelihoods as a function of the intercept if the hypothetical degree of conflict is equal to \(d_{\rm MTD}\). For each row of the figure, the maximum tolerable difference \(d_{\rm MTD}\) is chosen to be 0.1, 0.5 and 1, and the corresponding optimal prior (pink dotted line) is derived for each value of \(d_{\rm MTD}\). For each optimal prior, we vary \(d_{\rm obs}\) to represent different degrees of departure of the intercept of current data to the intercept of historical data. For columns 2-4, \(d_{\rm obs}\) is chosen to be 0, 0.5 and 1, respectively. The black and blue curves correspond to \(\pi_{1}(a_{0})\equiv{\rm beta}(10,1)\) and \(\pi_{2}(a_{0})\equiv{\rm beta}(1,10)\), respectively. The purple dashed line represents the marginal posterior of \(a_{0}\) with the optimal prior for a given \(d_{\rm obs}\). The grey dashed line plots the marginal posterior of \(a_{0}\) with the uniform prior.
Then, the MSE of \(\bar{\mu}\) is
\[\text{MSE}(\mu^{*})=\int\left[\bar{\mu}(y)-\mu^{*}\right]^{2}p(y|\mu^{*})dy.\]
In the regression setting, \(\mu\) is replaced by the regression coefficients \(\beta\).
Let \(\bar{y}_{0}\) denote the mean of the historical data. We aim to find the hyperparameters, \(\alpha_{0}\) and \(\beta_{0}\), for the beta prior for \(a_{0}\) that will minimize
\[w\text{MSE}(\mu^{*}=\bar{y}_{0})+(1-w)\text{MSE}(\mu^{*}=\bar{y}_{0}+d_{\text{ MTD}}),\]
where \(d_{\text{MTD}}\) is the maximum tolerable difference. Again, we use \(d_{\text{obs}}=\bar{y}_{\text{obs}}-\bar{y}_{0}\) to represent the difference between the means of the observed current data and the historical data.
#### 3.2.1 Normal _i.i.d._ Case
We demonstrate the use of this criterion for the normal _i.i.d._ case. Suppose \(y_{1},\ldots,y_{n}\) and \(y_{01},\ldots,y_{0n_{0}}\) are _i.i.d._ observations from N(\(\mu\), \(\sigma^{2}\)) where \(\sigma^{2}=1\) and \(n=n_{0}=30\). In this example, we fix \(\mu^{*}\) and \(\bar{y}\) at 1.5, and define \(d_{\text{MTD}}=\bar{y}_{0}-\bar{y}\) and \(d_{\text{obs}}=\bar{y}_{0}-\bar{y}_{\text{obs}}\). The posterior mean of \(\mu\) is computed using Monte Carlo integration and optimization is performed using a grid search. The optimal prior, optimal posterior, and the posterior using the uniform prior for \(a_{0}\) are plotted in Figure 4. When \(d_{\text{MTD}}=0.5\), the optimal prior is unimodal with mode around 0.3. When \(d_{\text{MTD}}=1\), the optimal prior is concentrated near zero. When \(d_{\text{MTD}}=1.5\), the optimal prior is U-shaped, favouring either strong or weak borrowing. When \(d_{\text{MTD}}\) is small, the algorithm cannot distinguish between the two competing scenarios in the objective function and the resulting optimal prior concentrates around 0.5. When \(d_{\text{MTD}}\) is large, the optimal prior will favour the two scenarios equally. For columns 2-4, \(d_{\text{obs}}\) is chosen to be 0, 1 and 1.5. The marginal posterior using the optimal prior concentrates more around zero as \(d_{\text{obs}}\) increases for a given \(d_{\text{MTD}}\). Comparing Figures 4 and 1, we observe that the optimal prior derived using the MSE criterion is more conservative than that derived using the KL criterion, in the sense that it tends to discourage borrowing.
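The sketch below (ours, not the authors' code) spells out one way to carry out this computation for the normal case, assuming known \(\sigma^{2}\) and a flat initial prior on \(\mu\), and following the generic formulation of Section 3.2 with \(\bar{y}_{0}\) held fixed; the Monte Carlo size and the grid bounds are illustrative defaults.

```r
# MSE-criterion grid search for the normal i.i.d. case (illustrative sketch).
post_mean_mu <- function(ybar, ybar0, shape1, shape2, n = 30, n0 = 30, sigma2 = 1) {
  w_a0 <- function(a0)                             # unnormalized marginal posterior of a0
    dbeta(a0, shape1, shape2) * sqrt(a0 * n0 / (n + a0 * n0)) *
      exp(-n * a0 * n0 * (ybar - ybar0)^2 / (2 * sigma2 * (n + a0 * n0)))
  num <- integrate(function(a0)
    w_a0(a0) * (n * ybar + a0 * n0 * ybar0) / (n + a0 * n0), 0, 1)$value
  num / integrate(w_a0, 0, 1)$value                # posterior mean E[mu | D, D0]
}
mse_hat <- function(mu_true, ybar0, shape1, shape2, n = 30, sigma2 = 1, B = 200) {
  ybars <- rnorm(B, mu_true, sqrt(sigma2 / n))     # sampling distribution of the current mean
  mean((vapply(ybars, post_mean_mu, numeric(1), ybar0 = ybar0,
               shape1 = shape1, shape2 = shape2) - mu_true)^2)
}
obj <- function(shape1, shape2, ybar0 = 1.5, d_MTD = 1, w = 0.5) {
  w * mse_hat(ybar0, ybar0, shape1, shape2) +
    (1 - w) * mse_hat(ybar0 + d_MTD, ybar0, shape1, shape2)
}
grid <- expand.grid(shape1 = seq(0.5, 6, 0.5), shape2 = seq(0.5, 6, 0.5))
grid$K <- mapply(obj, grid$shape1, grid$shape2)    # brute force; slow but transparent
grid[which.min(grid$K), ]
```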
Table 2 shows the MSE for the optimal prior, beta\((1,1)\) (uniform) and beta\((2,2)\) as well as the percent reduction of MSE of the optimal prior compared to the uniform prior. We can see that the percent reduction of MSE increases as \(d_{\text{MTD}}\) increases. Table 3 displays the decomposition of MSE into bias squared and variance for the three choices of priors. When \(d_{\text{MTD}}=0.5\) or 1, the prior discourages borrowing which results in smaller bias and larger variance. When \(d_{\text{MTD}}=1.5\), the model can distinguish easily between the two contrasting objectives, leading to smaller bias and smaller variance. Additional simulation results for varying choices of \(n\) and \(n_{0}\) are provided in section B of the appendix.
## 4 Case Studies
We now illustrate the proposed methodologies by analysing two clinical trial case studies. First, we study an important application in a pediatric trial where historical data on adults are available. This constitutes a situation of increased importance due to the difficulty in enrolling pediatric patients in clinical trials (U.S. Food and Drug Administration, 2016). Then, we study a classical problem in the analysis of clinical trials: using information from a previous study. This is illustrated with data on trials of interferon treatment for melanoma.
Figure 4: Simulation results for the normal _i.i.d._ case when minimizing a convex combination of MSEs when \(n=n_{0}=30\). The first figure of each row shows the historical (black solid line) and current (black dashed line) data likelihoods if the hypothetical degree of conflict is equal to \(d_{\rm MTD}\). The mean of the hypothetical current data is fixed at 1.5. For each row of the figure, the maximum tolerable difference \(d_{\rm MTD}\) is chosen to be 0.5, 1 and 1.5, and the corresponding optimal prior (pink dotted line) is derived for each value of \(d_{\rm MTD}\). For each optimal prior, we vary \(d_{\rm obs}=\bar{y}_{0}-\bar{y}_{\rm obs}\). For columns 2-4, \(d_{\rm obs}\) is chosen to be 0, 1 and 1.5, respectively. The purple dashed line represents the marginal posterior of \(a_{0}\) with the optimal prior for a given \(d_{\rm obs}\). The grey dashed line plots the marginal posterior of \(a_{0}\) with the uniform prior.
| | Optimal Prior | Beta\((1,1)\) | Beta\((2,2)\) | Percent Reduction of MSE, Optimal Prior vs. beta\((1,1)\) |
| --- | --- | --- | --- | --- |
| \(d_{\text{MTD}}=0.5\) | 0.054 | 0.057 | 0.057 | 5% |
| \(d_{\text{MTD}}=1\) | 0.063 | 0.069 | 0.079 | 9% |
| \(d_{\text{MTD}}=1.5\) | 0.052 | 0.059 | 0.067 | 12% |

Table 2: MSE for different prior choices and percent reduction of MSE of the optimal prior compared to the uniform prior
| | Optimal Prior | Beta\((1,1)\) | Beta\((2,2)\) |
| --- | --- | --- | --- |
| Bias\({}^{2}\) | | | |
| \(d_{\text{MTD}}=0.5\) | 0.011 | 0.015 | 0.018 |
| \(d_{\text{MTD}}=1\) | 0.005 | 0.012 | 0.025 |
| \(d_{\text{MTD}}=1.5\) | 0.003 | 0.006 | 0.015 |
| Variance | | | |
| \(d_{\text{MTD}}=0.5\) | 0.043 | 0.042 | 0.039 |
| \(d_{\text{MTD}}=1\) | 0.058 | 0.057 | 0.054 |
| \(d_{\text{MTD}}=1.5\) | 0.049 | 0.053 | 0.052 |

Table 3: Bias and variance decomposition for different prior choices
### Pediatric Lupus Trial
Enrolling patients for pediatric trials is often difficult due to the small number of available patients, parental concern regarding safety and technical limitations (Psioda and Xue, 2020). For many pediatric trials, additional information must be incorporated for any possibility of establishing efficacy (Psioda and Xue, 2020). The use of Bayesian methods is natural for extrapolating adult data in pediatric trials through the use of informative priors, and is demonstrated in FDA guidance on complex innovative designs (U.S. Food and Drug Administration, 2019).
Belimumab (Benlysta) is a biologic for the treatment of adults with active, autoantibody-positive systemic lupus erythematosus (SLE). It was proposed that the indication for Belimumab can be expanded to include the treatment of children (Psioda and Xue, 2020). The clinical trial PLUTO (Brunner et al., 2020) has been conducted to examine the effect of Belimumab on children 5 to 17 years of age with active, seropositive SLE who are receiving standard therapy. The PLUTO study has a small sample size due to the rarity of childhood-onset SLE. There have been two previous phase 3 trials, BLISS-52 and BLISS-76 (Furie et al., 2011; Navarra et al., 2011), which established efficacy of belimumab plus standard therapy for adults. The FDA review of the PLUTO trial submission used data from the adult trials to inform the approval decision (Psioda and Xue, 2020). All three trials employ the same composite primary outcome, the SLE Responder Index (SRI-4).
We conduct a Bayesian analysis of the PLUTO study incorporating information from the adult studies, BLISS-52 and BLISS-76, using a normalized power prior. We derive the optimal priors on \(a_{0}\) based on the KL criterion and the MSE criterion.
Our parameter of interest is the treatment effect of Belimumab for children, denoted by \(\beta\). The total sample size of the pooled adult data (BLISS-52 and BLISS-76) is \(n_{0}=1125\) and the treatment effect is \(0.481\). We choose \(d_{\mathrm{MTD}}=0.481\) to be the maximum tolerable difference. The pediatric data has a sample size of 92 and the estimated treatment effect is \(0.371\). We use the asymptotic normal approximation to the logistic regression model with one covariate (the treatment indicator). We choose \(w=0.5\) and \(n=100\) (sample size of the simulated current dataset). For the KL criterion, the objective function \(K\) is computed using Monte Carlo integration and optimization is performed using the optim() function in R. For the MSE criterion, the posterior mean of \(\beta\) is computed using the R package _hdbayes_ (Alt, 2022) and optimization is performed using a grid search where the values of \(\alpha_{0}\) and \(\beta_{0}\) range from \(0.5\) to \(6\) with an increment of \(0.5\). The optimal priors derived using the KL criterion and MSE criterion are displayed in Figure 5. The optimal prior derived using the KL criterion is beta\((5.5,5.5)\), which is a unimodal and symmetric distribution centred at \(0.5\). The optimal prior derived using the MSE criterion is beta\((2,5)\), which is a unimodal distribution with mode around \(0.2\). Table 4 provides the posterior mean, standard deviation and \(95\%\) credible interval for \(\beta\) using the optimal priors and several other beta priors for comparison. We observe that while the posterior means remain the same, the posterior standard deviation is lowest with the optimal prior derived using the KL criterion. In this case, the optimal prior using the KL criterion borrowed the most historical information out of the five prior choices being considered.
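As a sketch of how the normal approximation enters (our reading of the analysis pipeline, not the authors' code), each logistic regression is reduced to an estimated log odds ratio and standard error, which then play the roles of a sample mean and its variance in the normal-case formulas; the data objects in the commented example are hypothetical placeholders.

```r
# Normal approximation to the logistic regression treatment effect (illustrative sketch).
# `y` is a 0/1 outcome vector and `trt` a 0/1 treatment indicator.
lor_and_se <- function(y, trt) {
  fit <- glm(y ~ trt, family = binomial)
  c(est = unname(coef(fit)["trt"]), se = unname(sqrt(vcov(fit)["trt", "trt"])))
}
# adult <- lor_and_se(y_adult, trt_adult)   # pooled BLISS-52 / BLISS-76 data (historical)
# ped   <- lor_and_se(y_ped, trt_ped)       # PLUTO data (current)
# (ped["est"], ped["se"]^2) and (adult["est"], adult["se"]^2) then substitute for
# (ybar, sigma^2/n) and (ybar0, sigma^2/n0) in the normal-case formulas.
```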
### Melanoma Trial
Interferon Alpha-2b (IFN) is an adjuvant chemotherapy for deep primary or regionally metastatic melanoma. IFN was used in two phase 3 randomized controlled clinical trials, E1684 and E1690 (Kirkwood et al., 1996). In this example, we choose overall survival (indicator for death) as the primary outcome. We conduct a Bayesian analysis of the E1690 trial incorporating information from the E1684 trial, using a normalized power prior. We include three covariates in the analysis,
\begin{table}
\begin{tabular}{l c c c} & Mean & SD & 95\% Credible Interval \\ \hline beta\((5.5,5.5)^{1}\) & 0.47 & 0.16 & (0.15, 0.79) \\ beta\((2,5)^{2}\) & 0.47 & 0.21 & (0.04, 0.89) \\ beta\((1,1)\) & 0.47 & 0.18 & (0.12, 0.83) \\ beta\((2,2)\) & 0.47 & 0.17 & (0.13, 0.79) \\ beta\((0.5,0.5)\) & 0.47 & 0.17 & (0.12, 0.81) \\ \end{tabular} \({}^{1}\) Optimal by the KL criterion
\({}^{2}\) Optimal by the MSE criterion
\end{table}
Table 4: Pediatric lupus trial: posterior mean, standard deviation, and 95% credible interval for \(\beta\)
Figure 5: After combining studies BLISS-52 and BLISS-76 for adults, the total sample size is \(n_{0}=1125\) and log odds ratio for treatment vs. control group is 0.481. We choose \(d_{\rm MTD}=0.481\) to be the maximum tolerable difference. The pediatric data has a sample size of \(n=92\). The actual observed log odds ratio is 0.371. The figure on the left displays the optimal prior (pink dotted line) and posterior (purple dashed line) derived using the KL criterion. The figure on the right displays the optimal prior for \(a_{0}\) and the posterior derived using the MSE criterion. The posterior of \(a_{0}\) using the uniform prior (grey dashed line) is also shown.
the treatment indicator, sex and the logarithm of age. As before, we obtain the optimal priors for \(a_{0}\) based on both the KL criterion and the MSE criterion.
Our parameter of interest is the treatment effect of IFN, denoted by \(\beta\). The total sample size of the E1684 trial is \(n_{0}=285\) and the treatment effect is \(-0.423\). We choose \(d_{\rm MTD}=0.423\) to be the maximum tolerable difference. The E1690 trial has a sample size of 427 and the treatment effect is 0.098. We use the asymptotic normal approximation to the logistic regression model with three covariates. We choose \(w=0.5\), \(d_{\rm MTD}=0.423\) and \(n=400\) (sample size of the simulated current dataset). For the KL criterion, the objective function \(K\) is computed using Monte Carlo integration and optimization is performed using the optim() function in R. For the MSE criterion, the posterior mean of \(\beta\) is computed using the R package _hdbayes_ (Alt, 2022) and optimization is performed using a grid search where the values of \(\alpha_{0}\) and \(\beta_{0}\) range from 0.5 to 6 with an increment of 0.5. The optimal priors derived using the KL criterion and MSE criterion are displayed in Figure 6. The optimal prior derived using the KL criterion is beta\((0.7,1.5)\), which has most of its mass near zero. For the MSE criterion, the optimal prior derived is beta\((5.5,3)\), which is unimodal with mode around 0.7. This is likely due to the fact that \(d_{\rm MTD}\) is small relative to the total sample size of 712; see also the simulations in section B of the appendix. Because the observed difference is larger than \(d_{\rm MTD}\), the marginal posterior of \(a_{0}\) has mode around 0.4, which discourages borrowing more strongly than the prior does. Table 5 provides the posterior mean, standard deviation and 95% credible interval for \(\beta\) using the optimal priors and several other beta priors for comparison. The posterior mean is the largest for the optimal prior derived by the KL criterion and the beta\((0.5,0.5)\) prior, indicating that the least historical information is borrowed. The optimal prior derived using the MSE criterion borrows the most, resulting in the smallest posterior mean and smallest variance.
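As an illustration of the normal approximation used here, the sketch below fits logistic regressions to simulated historical and current datasets and combines them for a fixed \(a_{0}\) by scaling the historical Fisher information by \(a_{0}\); it is not the trial analysis itself, and the simulated covariates and effect sizes are arbitrary.

```python
# Illustration only (simulated data): the asymptotic normal approximation
# behind the power prior scales the historical precision by a0,
# i.e. Sigma_0(a0) = Sigma_0 / a0, and combines it with the current fit.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def simulate_logit(n, beta):
    # covariates: intercept, treatment indicator, sex, log(age)
    X = np.column_stack([np.ones(n),
                         rng.integers(0, 2, n),
                         rng.integers(0, 2, n),
                         np.log(rng.uniform(20, 80, n))])
    p = 1 / (1 + np.exp(-X @ beta))
    return rng.binomial(1, p), X

beta_true = np.array([-0.5, -0.4, 0.1, 0.2])
y0, X0 = simulate_logit(285, beta_true)   # "historical" (E1684-sized)
y1, X1 = simulate_logit(427, beta_true)   # "current" (E1690-sized)

fit0 = sm.Logit(y0, X0).fit(disp=0)
fit1 = sm.Logit(y1, X1).fit(disp=0)

a0 = 0.5                                   # example discounting value
prec0 = a0 * np.linalg.inv(fit0.cov_params())   # discounted historical information
prec1 = np.linalg.inv(fit1.cov_params())
post_cov = np.linalg.inv(prec0 + prec1)
post_mean = post_cov @ (prec0 @ fit0.params + prec1 @ fit1.params)
print("approx. posterior mean of the treatment effect given a0:", post_mean[1])
```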
## 5 Discussion
In this paper, we have explored the asymptotic properties of the normalized power prior when the historical and current data are compatible and when they are incompatible. We have proposed two criteria based on which the optimal hyperparameters of the prior for \(a_{0}\) can be derived. While the exact values of the hyperparameters can be obtained using our objective functions, we suggest the following rules of thumb for estimating the optimal prior given different choices of the maximum tolerable difference. When the KL criterion is used, a beta distribution centered around 0.5, such as the beta\((2,2)\), is optimal for small values of the maximum tolerable difference (i.e., when the current and historical data likelihoods
\begin{table}
\begin{tabular}{l l l l} & Mean & SD & 95\% Credible Interval \\ \hline beta\((0.7,1.5)^{1}\) & 0.05 & 0.19 & (-0.33, 0.43) \\ beta\((5.5,3)^{2}\) & -0.01 & 0.17 & (-0.35, 0.33) \\ beta\((1,1)\) & 0.04 & 0.19 & (-0.32, 0.42) \\ beta\((2,2)\) & 0.03 & 0.18 & (-0.34, 0.40) \\ beta\((0.5,0.5)\) & 0.05 & 0.19 & (-0.33, 0.41) \\ \end{tabular} \({}^{1}\) Optimal by the KL criterion
\({}^{2}\) Optimal by the MSE criterion
\end{table}
Table 5: Melanoma trial: posterior mean, standard deviation and 95% credible interval for \(\beta\)
Figure 6: The total sample size of the E1684 trial is \(n_{0}=285\) and log odds ratio for treatment vs. control group is \(-0.423\). We choose \(d_{\rm MTD}=0.423\) to be the maximum tolerable difference. The E1690 trial has sample size \(n=427\). The observed log odds ratio is \(0.098\). The figure on the left displays the optimal prior (pink dotted line) and posterior (purple dashed line) derived using the KL criterion. The figure on the right displays the optimal prior for \(a_{0}\) and the posterior derived using the MSE criterion. The posterior of \(a_{0}\) using the uniform prior (grey dashed line) is also shown.
substantially overlap), while a beta distribution with mean close to \(1\), such as the beta\((2,0.5)\), should be used for large values of the maximum tolerable difference. When the MSE criterion is used, a beta distribution with mean less than \(0.5\), such as the beta\((3,6)\), is optimal for small values of the maximum tolerable difference, while a beta distribution with modes at zero and one, for example a beta\((0.5,0.5)\), should be used for large values. The MSE criterion is more conservative than the KL criterion, in the sense that it tends to discourage borrowing. Potential future work includes extending our method to survival and longitudinal outcomes, as well as accommodating dependent discounting parameters when multiple historical datasets are available.
## Appendix A Proofs from Section 2
### Technical conditions
We start our presentation by stating technical conditions under which the limiting theorems presented in section 2 hold. Then, we state an important result (Theorem A.1) from Chen (1985) which supports many of the proofs herein. In what follows, we will follow Chen (1985) in establishing the necessary conditions for the limiting posterior density to be normal. Let the parameter space of interest, \(\Theta\), be a \(p\)-dimensional Euclidean space, and let \(B_{r}(a)=\{\theta\in\Theta:|\theta-a|\leq r\}\) be a neighbourhood of size \(r\) of the point \(a\in\Theta\). Also, write \(L_{n}(\theta):=\sum_{i=1}^{n}\log f(x_{i}\mid\theta)\).
**Theorem A.1**.: _[Bayes Central Limit Theorem (Chen, 1985)] Suppose that for each \(n>N\) with \(N>0\), \(L_{n}\) attains a strict local maximum \(\hat{\theta}_{n}\), i.e., \(L_{n}^{\prime}(\hat{\theta}_{n})=0\), where \(L_{n}^{\prime}(\theta):=\frac{\partial}{\partial\theta}L_{n}(\theta)\), and the Hessian \(L_{n}^{\prime\prime}(\theta):=\frac{\partial^{2}}{\partial\theta^{2}}L_{n}(\theta)\) is negative-definite for all \(\theta\in\Theta\)._
_Moreover, suppose \(\hat{\theta}_{n}\) converges almost surely to \(\theta_{0}\in\Theta\) as \(n\to\infty\) and the prior density \(\pi(\theta)\) is positive and continuous at \(\theta_{0}\). Assume that the following conditions hold:_
* _The largest eigenvalue of_ \(\left[-L_{n}^{\prime\prime}(\hat{\theta}_{n})\right]^{-1}\to 0\) _a.s. as_ \(n\to\infty\)_;_
* _For_ \(\varepsilon>0\) _there exists (a.s.)_ \(N_{\varepsilon}>0\) _and_ \(r>0\) _such that for all_ \(n>\max\left\{N,N_{\varepsilon}\right\}\) _and_ \(\theta\in B_{r}(\hat{\theta}_{n})\)_,_ \(L_{n}^{\prime\prime}(\theta)\) _is well-defined and_ \[I_{p}-A(\varepsilon)\leq L_{n}^{\prime\prime}(\theta)\left[L_{n}^{\prime\prime}(\hat{\theta}_{n})\right]^{-1}\leq I_{p}+A(\varepsilon),\] _where_ \(I_{p}\) _is the_ \(p\)_-dimensional identity matrix and_ \(A(\varepsilon)\) _is a_ \(p\times p\) _positive semidefinite matrix whose largest eigenvalue goes to zero as_ \(\varepsilon\to 0\)_._
* _The sequence of posterior distributions_ \(p_{n}(\theta\mid x)\) _satisfies, as_ \(n\to\infty\)_,_ \[\int_{\Theta\setminus B_{r}(\hat{\theta}_{n})}p_{n}(t\mid x)\,dt\to 0,a.s.,\] _for_ \(r>0\)_, i.e., the sequence of posteriors is_ consistent _at_ \(\hat{\theta}_{n}\)_. Here we have assumed that the support of the posterior distributions is_ \(\Theta\)_, but this could be replaced by a sequence_ \(\Theta_{n}\)_._
_Then we say that the posteriors converge in distribution to a normal with parameters \(\hat{\theta}_{n}\) and \(\left[-L_{n}^{\prime\prime}(\hat{\theta}_{n})\right]^{-1}\). For notational convenience we will (somewhat informally) write_

\[p_{n}(\theta|x)\to N_{p}\left(\hat{\theta}_{n},\left[-L_{n}^{\prime\prime}(\hat{\theta}_{n})\right]^{-1}\right),\]
_as \(n\to\infty\)._
### Proof of Theorem 2.1
Now we move on to present a proof for Theorem 2.1 in section 2, which discusses the concentration of the posterior of \(a_{0}\) at zero as the sample sizes increase in the case when there is some discrepancy between the historical and current data sets.
Proof.: We first utilise Theorem A.1 to rewrite the limiting marginal posterior distribution of \(a_{0}\). Under the regularity conditions, as \(n\to\infty\),
\[L(\theta|D) \to N(\hat{\theta},v),\quad\text{and}\] \[\frac{1}{c(a_{0})}L(\theta|D_{0})^{a_{0}}\pi_{0}(\theta) \to N(\hat{\theta}_{0},v_{0}(a_{0})),\]
where asymptotically \(\hat{\theta}=\dot{b}^{-1}(\bar{y})\), \(\hat{\theta}_{0}=\dot{b}^{-1}(\bar{y}_{0})\), \(v=(n\ddot{b}(\hat{\theta}))^{-1}\), and \(v_{0}(a_{0})=(a_{0}n_{0}\ddot{b}(\hat{\theta}_{0}))^{-1}\). For simplicity of notation, let \(v_{0}=v_{0}(a_{0})\), \(\ddot{b}^{-1}=\ddot{b}^{-1}(\bar{y})\) and \(\dot{b}_{0}^{-1}=\ddot{b}^{-1}(\bar{y}_{0})\). Then the kernel of the marginal posterior of \(a_{0}\) becomes
\[\pi^{*}(a_{0}|D_{0},D,\alpha_{0},\beta_{0})\equiv \int L(\theta|D)\frac{L(\theta|D_{0})^{a_{0}}\pi_{0}(\theta)}{c(a_ {0})}a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\,d\theta,\] \[\to a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\int N(\hat{\theta},v )N(\hat{\theta}_{0},v_{0}(a_{0}))d\theta,\] \[\propto a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}v_{0}^{-\frac{1}{2}} \int\exp\left\{-\frac{1}{2v}(\theta-\hat{\theta})^{2}\right\}\exp\left\{- \frac{1}{2v_{0}}(\theta-\hat{\theta}_{0})^{2}\right\}d\theta\] \[\propto a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\left(\frac{v+v_{0}}{v }\right)^{-\frac{1}{2}}\exp\left\{-\frac{1}{2}\left[\frac{v\hat{\theta}_{0}^{2 }-v_{0}\hat{\theta}^{2}-2v\hat{\theta}\hat{\theta}_{0}}{(v_{0}+v)v}\right] \right\},\] \[= a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\left(\frac{v+v_{0}}{ v}\right)^{-\frac{1}{2}}\exp\left\{\frac{v_{0}\hat{\theta}^{2}-v(\delta^{2}-\hat{ \theta}^{2})}{2(v_{0}+v)v}\right\}\ (\text{since}\ |\hat{\theta}-\hat{\theta}_{0}|= \delta),\] \[= a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\left(\frac{v+v_{0}}{ v}\right)^{-\frac{1}{2}}\exp\left\{\frac{\hat{\theta}^{2}}{2v}-\frac{\delta^{2}}{2(v +v_{0})}\right\},\] \[= a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\left(\frac{v+v_{0}}{ v}\right)^{-\frac{1}{2}}\exp\left\{\frac{\hat{\theta}^{2}}{2v}-\frac{na_{0}r\delta^{2} }{2(\dot{b}_{0}^{-1}+a_{0}r\dot{b}^{-1})}\right\}.\]
Then the marginal posterior of \(a_{0}\) becomes
\[\pi(a_{0}|D_{0},D,\alpha_{0},\beta_{0})= \frac{\pi^{*}(a_{0}|D_{0},D,\alpha_{0},\beta_{0})}{\int\pi^{*}(a_ {0}|D_{0},D,\alpha_{0},\beta_{0})\,da_{0}}, \tag{6}\] \[\to \frac{a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}[\ddot{b}^{-1}+ (a_{0}r\ddot{b}_{0})^{-1}]^{-\frac{1}{2}}\exp\left\{-\frac{na_{0}r\delta^{2}}{ 2(\dot{b}_{0}^{-1}+a_{0}r\dot{b}^{-1})}\right\}}{\int a_{0}^{\alpha_{0}-1}(1-a _{0})^{\beta_{0}-1}[\ddot{b}^{-1}+(a_{0}r\ddot{b}_{0})^{-1}]^{-\frac{1}{2}} \exp\left\{-\frac{na_{0}r\delta^{2}}{2(\dot{b}_{0}^{-1}+a_{0}r\dot{b}^{-1})} \right\}\,da_{0}},\] (7) \[= \frac{a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\left[\frac{a_{0}r }{1+a_{0}r\frac{\dot{b}^{-1}}{\dot{b}_{0}^{-1}}}\right]^{\frac{1}{2}}\exp \left\{-\frac{na_{0}r\delta^{2}}{2\dot{b}_{0}^{-1}\left(1+a_{0}r\frac{\dot{b} ^{-1}}{\dot{b}_{0}^{-1}}\right)}\right\}da_{0}}. \tag{8}\]
Let \(h(a_{0})=\frac{a_{0}r\delta^{2}}{2\bar{b}_{0}^{-1}\left(1+a_{0}r\frac{\bar{b}^{-1 }}{\bar{b}_{0}^{-1}}\right)}\) and \(f(a_{0})=\left[\frac{a_{0}r}{1+a_{0}r\frac{\bar{b}^{-1}}{\bar{b}_{0}^{-1}}} \right]^{\frac{1}{2}}\). Then the denominator is
\[A=\int_{0}^{1}a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}f(a_{0})\exp\left\{-nh (a_{0})\right\}da_{0}.\]
Let \(A=A_{1}+A_{2}\) where
\[A_{1}=\int_{0}^{\epsilon}a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}f(a_{0}) \exp\left\{-nh(a_{0})\right\}da_{0}\]
and
\[A_{2}=\int_{\epsilon}^{1}a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}f(a_{0}) \exp\left\{-nh(a_{0})\right\}da_{0}.\]
We want to show \(\lim\limits_{n\to\infty}\frac{A_{2}}{A_{1}}=0\).
First, we can see that
\[h^{\prime}(a_{0})=\frac{r\delta^{2}}{2\bar{b}_{0}^{-1}}\left(\frac{a_{0}}{1+a _{0}r\frac{\bar{b}^{-1}}{\bar{b}_{0}^{-1}}}\right)^{\prime}=\frac{r\delta^{2} }{2\bar{b}_{0}^{-1}}\left(\frac{1+a_{0}r\frac{\bar{b}^{-1}}{\bar{b}_{0}^{-1}} -a_{0}r\frac{\bar{b}^{-1}}{\bar{b}_{0}^{-1}}}{(1+a_{0}r\frac{\bar{b}^{-1}}{ \bar{b}_{0}^{-1}})^{2}}\right)=\frac{r\delta^{2}}{2\bar{b}_{0}^{-1}}\left(1+a _{0}r\frac{\bar{b}^{-1}}{\bar{b}_{0}^{-1}}\right)^{-2}>0.\]
Then \(\inf_{x\in[\epsilon,1]}h(x)=h(\epsilon).\) We can also see that \(h^{\prime}(a_{0})\) is continuous since \(1+a_{0}r\frac{\bar{b}^{-1}}{\bar{b}_{0}^{-1}}\) is nonzero on \((0,1)\).
We then observe that
\[f^{\prime}(a_{0})=\frac{1}{2}\left[\frac{a_{0}r}{1+a_{0}r\frac{\bar{b}^{-1}}{ \bar{b}_{0}^{-1}}}\right]^{-\frac{1}{2}}\frac{r}{(1+a_{0}r\frac{\bar{b}^{-1}}{ \bar{b}_{0}^{-1}})^{2}}>0.\]
Thus \(\sup_{x\in[\epsilon,1]}f(x)=f(1)\).
Now we are ready to find the upper bound of \(A_{2}\). Since, for any \(a_{0}\in[\epsilon,1]\), \(f(a_{0})\leq f(1)\) and \(\exp(-nh(a_{0}))\leq\exp(-nh(\epsilon))\), we have
\[A_{2} \leq f(1)\exp(-nh(\epsilon))\int_{\epsilon}^{1}a_{0}^{\alpha_{0}- 1}(1-a_{0})^{\beta_{0}-1}da_{0}\] \[\leq f(1)\exp(-nh(\epsilon))\int_{0}^{1}a_{0}^{\alpha_{0}-1}(1-a_ {0})^{\beta_{0}-1}da_{0}\] \[=f(1)\exp(-nh(\epsilon))\frac{\Gamma(\alpha_{0})\Gamma(\beta_{0}) }{\Gamma(\alpha_{0}+\beta_{0})}=C_{1}\exp(-nh(\epsilon)),\]
where \(C_{1}>0\) is an integration constant. Now we find the lower bound of \(A_{1}\). We know that
\[A_{1}\geq\int_{\frac{\epsilon}{2}}^{\epsilon}a_{0}^{\alpha_{0}-1}(1-a_{0})^{ \beta_{0}-1}f(a_{0})\exp\left\{-nh(a_{0})\right\}da_{0}.\]
Further, \(a_{0}^{\alpha_{0}-1}\geq\min((\frac{\epsilon}{2})^{\alpha_{0}-1},\epsilon^{\alpha_{0}-1})\), corresponding to \(\alpha_{0}\geq 1\) and \(\alpha_{0}<1\), respectively. Similarly, \((1-a_{0})^{\beta_{0}-1}\geq\min((1-\epsilon)^{\beta_{0}-1},(1-\frac{\epsilon}{2})^{\beta_{0}-1})\), corresponding to \(\beta_{0}\geq 1\) and \(\beta_{0}<1\), respectively.
Since \(h^{\prime\prime}(a_{0})<0\), \(\sup_{x\in[\frac{\epsilon}{2},\epsilon]}h^{\prime}(x)=h^{\prime}(\frac{\epsilon}{2})\). In addition, \(\inf_{x\in[\frac{\epsilon}{2},\epsilon]}f(x)=f(\frac{\epsilon}{2})\). Then we have
\[A_{1} \geq\int_{\frac{\epsilon}{2}}^{\epsilon}a_{0}^{\alpha_{0}-1}(1-a _{0})^{\beta_{0}-1}f(a_{0})\frac{1}{h^{\prime}(a_{0})}\exp\left\{-nh(a_{0}) \right\}h^{\prime}(a_{0})\,da_{0},\] \[=\int_{\frac{\epsilon}{2}}^{\epsilon}a_{0}^{\alpha_{0}-1}(1-a_{0} )^{\beta_{0}-1}f(a_{0})\frac{1}{h^{\prime}(a_{0})}\exp\left\{-nh(a_{0}) \right\}\,dh(a_{0}),\] \[\geq f(\epsilon/2)\times\min((\epsilon/2)^{\alpha_{0}-1},\epsilon ^{\alpha_{0}-1})\times\min((1-\epsilon)^{\beta_{0}-1},(1-\epsilon/2)^{\beta_{ 0}-1})\times(h^{\prime}(\epsilon/2))^{-1}\] \[\times\int_{\frac{\epsilon}{2}}^{\epsilon}\exp(-nh(a_{0}))\,dh(a _{0}),\] \[=C_{2}\int_{\frac{\epsilon}{2}}^{\epsilon}\exp(-nh(a_{0}))\,dh(a _{0}),\] \[=C_{2}\frac{1}{n}[\exp(-nh(\epsilon/2))-\exp(-nh(\epsilon))],\]
where \(C_{2}>0\) is again an integration constant. Therefore,
\[0\leq\frac{A_{2}}{A_{1}}\leq\frac{C_{1}\exp(-nh(\epsilon))}{C_{2}\frac{1}{n}[ \exp(-nh(\epsilon/2))-\exp(-nh(\epsilon))]}=\frac{C_{1}n}{C_{2}[\exp(-n[h( \epsilon/2)-h(\epsilon)])-1]}.\]
Thus, \(\lim_{n\to\infty}\frac{A_{2}}{A_{1}}=0\) by L'Hopital's rule. Since \(\frac{A_{2}}{A_{1}}\geq\frac{A_{2}}{A}\), \(\lim_{n\to\infty}\frac{A_{2}}{A}=0\). Hence, \(\lim_{n\to\infty}\frac{A_{1}}{A}=1\).
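A quick numerical illustration of this concentration result, assuming a normal model with known unit variance, a flat initial prior, and arbitrary parameter values: the posterior mass of \(a_{0}\) above any fixed \(\epsilon\) shrinks as the sample sizes grow while \(\delta\) stays fixed.

```python
# Numerical check (illustration only): P(a0 > eps | data) vanishes as n grows
# when ybar - ybar0 = delta is held fixed and nonzero.
import numpy as np

def tail_mass(n, r, delta, alpha0=1.0, beta0=1.0, eps=0.2):
    a0 = np.linspace(1e-4, 1 - 1e-4, 4000)
    n0 = r * n
    var = 1.0 / n + 1.0 / (a0 * n0)          # Var(ybar - ybar0 | a0)
    log_k = ((alpha0 - 1) * np.log(a0) + (beta0 - 1) * np.log1p(-a0)
             - 0.5 * np.log(var) - 0.5 * delta ** 2 / var)
    k = np.exp(log_k - log_k.max())
    k /= k.sum()                              # normalize on the grid
    return k[a0 > eps].sum()

for n in [50, 200, 1000, 5000]:
    print(n, round(tail_mass(n, r=1.0, delta=0.3), 4))
```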
### Proof for Corollary 2.1
Proof.: The result follows by setting \(\delta=0\) and \(\ddot{b}^{-1}=\ddot{b}_{0}^{-1}\) in (8).
### Proof for Theorem 2.2
Proof.: By Theorem A.1, we know that
\[L(\beta|D)\to N(\hat{\beta},\Sigma(\hat{\beta})),\]
where \(\Sigma(\beta)=-\left[\frac{\partial^{2}\log[L(\beta|D)]}{\partial\beta_{i} \partial\beta_{j}}\right]^{-1}\), and also
\[\frac{1}{c^{*}(a_{0})}L(\beta|D_{0})^{a_{0}}\pi_{0}(\beta)\to N(\hat{\beta}_{0 },\Sigma_{0}(a_{0},\hat{\beta})),\]
where \(\Sigma_{0}(a_{0},\beta)=-\left[\frac{\partial^{2}\log[L(\beta|D_{0})^{a_{0}} \pi_{0}(\beta)]}{\partial\beta_{i}\partial\beta_{j}}\right]^{-1}\). For simplicity of notation, let \(\Sigma=\Sigma(\hat{\beta})\) and \(\Sigma_{0}=\Sigma_{0}(a_{0},\hat{\beta})\). Then the marginal posterior of \(a_{0}\) becomes
\[\pi(a_{0}|D_{0},D,\alpha_{0},\beta_{0})\propto\pi^{*}(a_{0}|D_{0},D,\alpha_{0},\beta_{0}) \equiv\int L(\beta|D)\frac{L(\beta|D_{0})^{a_{0}}\pi_{0}(\beta)}{c^{* }(a_{0})}a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}d\beta,\]
\[\pi^{*}(a_{0}|D_{0},D,\alpha_{0},\beta_{0}) \to a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\int N(\hat{\beta}, \Sigma)N(\hat{\beta}_{0},\Sigma_{0})d\beta\] \[\propto a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\int\exp\left\{- \frac{1}{2}(\beta-\hat{\beta})^{\prime}\Sigma^{-1}(\beta-\hat{\beta})\right\}\] \[\det(\Sigma_{0})^{-\frac{1}{2}}\exp\left\{-\frac{1}{2}(\beta- \hat{\beta}_{0})^{\prime}\Sigma_{0}^{-1}(\beta-\hat{\beta}_{0})\right\}d\beta\] (Assuming that \[\hat{\beta}-\hat{\beta}_{0}=\delta\] ) \[\propto a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\det(\Sigma_{0})^{- \frac{1}{2}}\det(\Sigma_{n})^{\frac{1}{2}}\] \[\exp\left\{\frac{1}{2}\left[\hat{\beta}^{\prime}\Sigma^{-1}\hat{ \beta}-\delta^{\prime}(\Sigma_{0}^{-1}-\Sigma_{0}^{-1}\Sigma_{n}\Sigma_{0}^{ -1})\delta\right]\right\},\]
where \(\Sigma_{n}=(\Sigma^{-1}+\Sigma_{0}^{-1})^{-1}\). Then
\[\pi(a_{0}|D_{0},D,\alpha_{0},\beta_{0}) \tag{9}\] \[\propto \frac{a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\det(\Sigma_{0}) ^{-\frac{1}{2}}\det(\Sigma_{n})^{\frac{1}{2}}\exp\big{\{}-\frac{1}{2}\delta^{ \prime}(\Sigma_{0}^{-1}-\Sigma_{0}^{-1}\Sigma_{n}\Sigma_{0}^{-1})\delta\big{\}} }{\int a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\det(\Sigma_{0})^{-\frac{1} {2}}\det(\Sigma_{n})^{\frac{1}{2}}\exp\big{\{}-\frac{1}{2}\delta^{\prime}( \Sigma_{0}^{-1}-\Sigma_{0}^{-1}\Sigma_{n}\Sigma_{0}^{-1})\delta\big{\}}da_{0}}. \tag{10}\]
We want to show that, if \(\Sigma\) and \(\Sigma_{0}\) are \(p\times p\) positive definite matrices,
\[\lim_{n\to\infty}\frac{\int_{0}^{\epsilon}a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\det(\Sigma_{0})^{-\frac{1}{2}}\det(\Sigma_{n})^{\frac{1}{2}}\exp\big{\{}-\frac{1}{2}\delta^{\prime}(\Sigma_{0}^{-1}-\Sigma_{0}^{-1}\Sigma_{n}\Sigma_{0}^{-1})\delta\big{\}}\,da_{0}}{\int_{0}^{1}a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\det(\Sigma_{0})^{-\frac{1}{2}}\det(\Sigma_{n})^{\frac{1}{2}}\exp\big{\{}-\frac{1}{2}\delta^{\prime}(\Sigma_{0}^{-1}-\Sigma_{0}^{-1}\Sigma_{n}\Sigma_{0}^{-1})\delta\big{\}}\,da_{0}}=1,\]
for \(\delta\neq 0\) and \(\epsilon>0\).
We can write \(\Sigma=n^{-1}P\) and \(\Sigma_{0}=(nra_{0})^{-1}P_{0}\)(Fahrmeir and Kaufmann, 1985), where \(P\) and \(P_{0}\) are positive definite and independent of \(a_{0}\) and \(n\). Then \(\Sigma_{n}=(\Sigma^{-1}+\Sigma_{0}^{-1})^{-1}=n^{-1}(P^{-1}+ra_{0}P_{0}^{-1})^{ -1}\). Now,
\[I-\Sigma_{n}\Sigma_{0}^{-1}=I-(\Sigma^{-1}+\Sigma_{0}^{-1})^{-1}\Sigma_{0}^{-1 }=(\Sigma^{-1}+\Sigma_{0}^{-1})^{-1}\Sigma^{-1},\]
and
\[\Sigma_{0}^{-1}(I-\Sigma_{n}\Sigma_{0}^{-1})= \Sigma_{0}^{-1}\Sigma_{n}\Sigma^{-1},\] \[= nra_{0}P_{0}^{-1}n^{-1}(P^{-1}+ra_{0}P_{0}^{-1})^{-1}nP^{-1},\] \[= nra_{0}P_{0}^{-1}(P^{-1}+ra_{0}P_{0}^{-1})^{-1}P^{-1},\] \[= nra_{0}(P_{0}+a_{0}rP)^{-1},\] \[= nra_{0}P^{-1}[P_{0}P^{-1}+a_{0}rI]^{-1},\] \[= na_{0}P^{-1}[r^{-1}P_{0}P^{-1}+a_{0}I]^{-1}.\]
In addition,
\[\det(\Sigma_{0})^{-\frac{1}{2}}\det(\Sigma_{n})^{\frac{1}{2}}= \det((nra_{0})^{-1}P_{0})^{-\frac{1}{2}}\det(n^{-1}(P^{-1}+ra_{0}P_{ 0}^{-1})^{-1})^{\frac{1}{2}}\] \[= \det((ra_{0})^{-1}P_{0})^{-\frac{1}{2}}\det(P^{-1}+ra_{0}P_{0}^{-1 })^{-\frac{1}{2}}\] \[= \det((ra_{0})^{-1}P_{0}(P^{-1}+ra_{0}P_{0}^{-1}))^{-\frac{1}{2}}\] \[= \det(a_{0}^{-1}(r^{-1}P_{0}P^{-1}+a_{0}I))^{-\frac{1}{2}}\] \[= a_{0}^{\frac{p}{2}}\det(a_{0}I-(-r^{-1}P_{0}P^{-1}))^{-\frac{1} {2}}.\]
Let
\[h(a_{0})=\frac{1}{2n}\delta^{T}(\Sigma_{0}^{-1}-\Sigma_{0}^{-1}\Sigma_{n} \Sigma_{0}^{-1})\delta=\frac{1}{2}\delta^{T}a_{0}P^{-1}[r^{-1}P_{0}P^{-1}+a_{0} I]^{-1}\delta.\]
and
\[f(a_{0})=\det(\Sigma_{0})^{-\frac{1}{2}}\det(\Sigma_{n})^{\frac{1}{2}}=a_{0}^{ \frac{p}{2}}\det(a_{0}I-(-r^{-1}P_{0}P^{-1}))^{-\frac{1}{2}}.\]
Then the denominator is
\[A=\int_{0}^{1}a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}f(a_{0})\exp\big{\{}- nh(a_{0})\big{\}}da_{0}.\]
First, we show \(h(a_{0})\) is differentiable.
**Lemma A.2**.: _Let \(A\) and \(B\) be positive definite matrices of the same dimension. Then, the eigenvalues of \(AB\) are positive._
Proof.: By the spectral decomposition, \(A=U\Lambda U^{T}\), where \(\Lambda=diag(\lambda_{1},\ldots,\lambda_{p})\) and \(\lambda_{1},\ldots,\lambda_{p}\) are the eigenvalues of \(A\). Then \(A^{\frac{1}{2}}=U\Lambda^{\frac{1}{2}}U^{T}\) is symmetric, so for any \(v\neq 0\), \(v^{T}A^{\frac{1}{2}}BA^{\frac{1}{2}}v=(A^{\frac{1}{2}}v)^{T}B(A^{\frac{1}{2}}v)>0\). So \(A^{\frac{1}{2}}BA^{\frac{1}{2}}\) is positive definite. Since \(A^{\frac{1}{2}}(A^{\frac{1}{2}}BA^{\frac{1}{2}})A^{-\frac{1}{2}}=AB\), \(A^{\frac{1}{2}}BA^{\frac{1}{2}}\) and \(AB\) are similar. Then they have the same eigenvalues and the eigenvalues of \(AB\) are positive.
Let \(B=a_{0}I-(-r^{-1}P_{0}P^{-1})\). Then
\[h(a_{0})=\frac{1}{2}a_{0}\delta^{T}P^{-1}B^{-1}\delta=\frac{\frac{1}{2}a_{0} \delta^{T}P^{-1}adj(B)\delta}{det(B)},\]
where \(adj(B)\) is the cofactor matrix of \(B\). The entries of \(adj(B)\) are polynomials in \(a_{0}\), so \(\frac{1}{2}\delta^{T}a_{0}P^{-1}adj(B)\delta\) is a polynomial in \(a_{0}\) and thus differentiable. Then we show that \(\det(B)^{-1}\) is differentiable on \((0,1)\). Since \(\det(B)\) is a polynomial of \(a_{0}\), it suffices to show that it is nonzero on \((0,1)\). Note that \(\det(B)\) is the characteristic polynomial of \(-r^{-1}P_{0}P^{-1}\). Since \(P_{0}\) and \(P^{-1}\) are positive definite, \(-r^{-1}P_{0}P^{-1}\) has negative eigenvalues by Lemma A.2. So \(\det(B)\) is nonzero on \((0,1)\). Thus, we have shown \(h(a_{0})\) is differentiable.
We then proceed to show that \(h^{\prime}(a_{0})>0\).
Let \(E=P_{0}+a_{0}rP\). Then \(h(a_{0})=\frac{1}{2}a_{0}r\delta^{T}E^{-1}\delta\). Therefore,
\[h^{\prime}(a_{0})=\frac{1}{2}r\delta^{T}E^{-1}\delta+a_{0}r\frac{1}{2}\delta^ {T}(E^{-1})^{\prime}\delta.\]
We know that \((E^{-1})^{\prime}=-E^{-1}E^{\prime}E^{-1}=-rE^{-1}PE^{-1}\). Substituting this into the expression above gives
\[h^{\prime}(a_{0})=\frac{1}{2}r\delta^{T}E^{-1}\delta-\frac{1}{2}a_{0}r^{2}\delta^{T}E^{-1}PE^{-1}\delta=\frac{1}{2}r\delta^{T}E^{-1}(E-a_{0}rP)E^{-1}\delta=\frac{1}{2}r\delta^{T}E^{-1}P_{0}E^{-1}\delta.\]
Since \(P_{0}\) is positive definite and \(E\) is symmetric, \(\delta^{T}E^{-1}P_{0}E^{-1}\delta=(E^{-1}\delta)^{T}P_{0}(E^{-1}\delta)>0\). So \(h^{\prime}(a_{0})>0\).
We also show \(h^{\prime}(a_{0})\) is continuous. It suffices to show that \(\det(E)\) is nonzero on \([0,1]\). Since \(E=rBP\) where \(P\) is full rank, \(\det(E)=c\det(B)\) where \(c\neq 0\). Since \(\det(B)\) is nonzero, \(\det(E)\) is also nonzero.
Next, we will show that \(f(a_{0})=a_{0}^{\frac{p}{2}}\det(a_{0}I-(-r^{-1}P_{0}P^{-1}))^{-\frac{1}{2}}=a _{0}^{\frac{p}{2}}\det(B)^{-\frac{1}{2}}\) is continuous on \([0,1]\). We have previously proven that \(\det(B)\) is nonzero on \([0,1]\). Then \(f(a_{0})\) is continuous on \([0,1]\), and it will attain its minima and maxima on the closed interval. Let \(t_{1}=\max_{[\epsilon,1]}(f(a_{0}))\) and \(t_{2}=\min_{[\frac{\epsilon}{2},\epsilon]}(f(a_{0}))\). Since \(a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}\) is continuous on \([\frac{\epsilon}{2},\epsilon]\), denote its minimum by \(t_{3}\).
We write \(A=A_{1}+A_{2}\) where
\[A_{1} =\int_{0}^{\epsilon}a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}f (a_{0})\exp(-nh(a_{0}))da_{0}\quad\text{and}\] \[A_{2} =\int_{\epsilon}^{1}a_{0}^{\alpha_{0}-1}(1-a_{0})^{\beta_{0}-1}f (a_{0})\exp(-nh(a_{0}))da_{0}.\]
Now we want to show that \(\lim_{n\to\infty}\frac{A_{2}}{A_{1}}=0\).
First, we find the upper bound of \(A_{2}\). Since \(h(a_{0})\) is monotone increasing, \(\exp(-nh(a_{0}))\leq\exp(-nh(\epsilon))\). Since \(f(a_{0})\leq t_{1}\), we have
\[A_{2} \leq t_{1}\exp(-nh(\epsilon))\int_{\epsilon}^{1}a_{0}^{\alpha_{0} -1}(1-a_{0})^{\beta_{0}-1}da_{0}\] \[\leq t_{1}\exp(-nh(\epsilon))\int_{0}^{1}a_{0}^{\alpha_{0}-1}(1-a _{0})^{\beta_{0}-1}da_{0}\] \[=t_{1}\exp(-nh(\epsilon))\frac{\Gamma(\alpha_{0})\Gamma(\beta_{0})} {\Gamma(\alpha_{0}+\beta_{0})}\] \[=C_{1}\exp(-nh(\epsilon)).\]
Next, we find the lower bound of \(A_{1}\). We have previously shown that \(h^{\prime}(a_{0})\) is continuous on \((0,1)\). Then \(h^{\prime}(a_{0})\) attains its maximum on \([\frac{\epsilon}{2},\epsilon]\). Let \(t_{4}=\max_{[\frac{\epsilon}{2},\epsilon]}(h^{\prime}(a_{0}))\). We can write
\[A_{1} \geq\int_{\frac{\epsilon}{2}}^{\epsilon}a_{0}^{\alpha_{0}-1}(1-a_ {0})^{\beta_{0}-1}f(a_{0})\exp(-nh(a_{0}))da_{0},\] \[\geq\int_{\frac{\epsilon}{2}}^{\epsilon}a_{0}^{\alpha_{0}-1}(1-a_ {0})^{\beta_{0}-1}\frac{f(a_{0})}{h^{\prime}(a_{0})}\exp(-nh(a_{0}))h^{\prime }(a_{0})da_{0},\] \[=\int_{\frac{\epsilon}{2}}^{\epsilon}a_{0}^{\alpha_{0}-1}(1-a_{0} )^{\beta_{0}-1}\frac{f(a_{0})}{h^{\prime}(a_{0})}\exp(-nh(a_{0}))dh(a_{0}),\] \[\geq\frac{t_{2}t_{3}}{t_{4}}\int_{\frac{\epsilon}{2}}^{\epsilon} \exp(-nh(a_{0}))dh(a_{0}),\] \[=\frac{t_{2}t_{3}}{t_{4}}\frac{1}{n}[\exp(-nh(\epsilon/2)-\exp(- nh(\epsilon))],\] \[=C_{2}\frac{1}{n}[\exp(-nh(\epsilon/2)-\exp(-nh(\epsilon))].\]
Therefore,
\[0\leq\frac{A_{2}}{A_{1}}\leq\frac{C_{1}\exp(-nh(\epsilon))}{C_{2}\frac{1}{n}[ \exp(-nh(\epsilon/2))-\exp(-nh(\epsilon))]}=\frac{C_{1}n}{C_{2}[\exp(-n[h( \epsilon/2)-h(\epsilon)])-1]},\]
and \(\lim_{n\to\infty}\frac{A_{2}}{A_{1}}=0\) by L'Hopital's rule. Since \(\frac{A_{2}}{A_{1}}\geq\frac{A_{2}}{A}\), \(\lim_{n\to\infty}\frac{A_{2}}{A}=0\). Then \(\lim_{n\to\infty}\frac{A_{1}}{A}=1\).
### Proof of Corollary 2.2
Proof.: Based on the assumptions, we have \(\Sigma=a_{0}\Sigma_{0}\). The result follows if we plug \(\delta=0\) into (10).
### Proof for Theorem 2.3
Proof.: The Laplace approximation for multiple parameters has the form
\[\int\exp(-nf(\beta))d\beta\approx\exp(-nf(\hat{\beta}))\left(\frac{2\pi}{n} \right)^{p/2}|\hat{\Sigma}|^{1/2},\]
where \(\hat{\beta}\) maximizes \(-f(\beta)\), and \(\hat{\Sigma}_{p\times p}=\left[\frac{\partial^{2}f(\hat{\beta})}{\partial\beta_{j}\partial\beta_{k}}\right]^{-1}\).
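As a quick one-dimensional sanity check of this approximation (illustration only, with an arbitrary smooth \(f\)):

```python
# Numerical check of the one-dimensional Laplace approximation
# integral(exp(-n f(b)) db) ~ exp(-n f(b_hat)) * sqrt(2*pi / (n * f''(b_hat))).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

f = lambda b: 0.5 * (b - 1.0) ** 2 + 0.1 * b ** 4   # arbitrary smooth, convex f

def laplace(n):
    b_hat = minimize_scalar(f).x
    h = 1e-4
    f2 = (f(b_hat + h) - 2 * f(b_hat) + f(b_hat - h)) / h ** 2   # f''(b_hat)
    return np.exp(-n * f(b_hat)) * np.sqrt(2 * np.pi / (n * f2))

def exact(n):
    val, _ = quad(lambda b: np.exp(-n * f(b)), -10, 10)
    return val

for n in [5, 20, 100]:
    print(n, exact(n), laplace(n), exact(n) / laplace(n))   # ratio tends to 1
```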
When \(X^{\prime}Y=X_{0}^{\prime}Y_{0}\) and \(X\neq X_{0}\),
\[\pi(a_{0}|D,D_{0}) \propto\int L(D|\beta)\frac{L(D_{0}|\beta)^{a_{0}}\pi_{0}(\beta)} {c(a_{0})}\pi(a_{0})d\beta,\] \[=\int L(D|\beta)\frac{\exp\left(a_{0}\left[\sum_{i=1}^{n}y_{i}x_{ i}^{\prime}\beta-\sum_{i=1}^{n}b(x_{0i}^{\prime}\beta)\right]\right)}{\int\exp \left(a_{0}\left[\sum_{i=1}^{n}y_{i}x_{i}^{\prime}\beta-\sum_{i=1}^{n}b(x_{0i} ^{\prime}\beta)\right]\right)d\beta}\pi(a_{0})d\beta,\] \[=\pi(a_{0})\frac{\int L(D|\beta)L^{*}(D_{0}|\beta,a_{0})d\beta}{ \int L^{*}(D_{0}|\beta,a_{0})d\beta},\] \[=\pi(a_{0})\frac{c_{1}(a_{0})}{c_{2}(a_{0})}.\]
Define
\[g_{n}(\beta) =-\frac{1}{n}[l(D|\beta)+a_{0}l^{*}(D_{0}|\beta,a_{0})]\] \[=-\frac{1}{n}\{\log(Q(Y))+\sum_{i=1}^{n}y_{i}x_{i}^{\prime}\beta- \sum_{i=1}^{n}b(x_{i}^{\prime}\beta)+a_{0}[\sum_{i=1}^{n}y_{i}x_{i}^{\prime} \beta-\sum_{i=1}^{n}b(x_{0i}^{\prime}\beta)]\}\] \[=-\frac{1}{n}\{\log(Q(Y))+(a_{0}+1)\sum_{i=1}^{n}y_{i}x_{i}^{ \prime}\beta-\sum_{i=1}^{n}b(x_{i}^{\prime}\beta)-a_{0}\sum_{i=1}^{n}b(x_{0i} ^{\prime}\beta)\}.\]
Then we have
\[c_{1}(a_{0})\approx\exp(-ng_{n}(\hat{\beta}))\left(\frac{2\pi}{n} \right)^{p/2}|\hat{\Sigma}_{g}|^{1/2},\]
where \(\hat{\beta}\) maximizes \(-g_{n}(\beta)\). Similarly, define
\[k_{n}(\beta) =-\frac{1}{n}a_{0}l^{*}(y|x_{0},\beta),\] \[=-\frac{1}{n}\left\{a_{0}\sum_{i=1}^{n}y_{i}x_{i}^{\prime}\beta-a _{0}\sum_{i=1}^{n}b(x_{0i}^{\prime}\beta)\right\}.\]
Then we have
\[c_{2}(a_{0})\approx\exp(-nk_{n}(\tilde{\beta}))\left(\frac{2\pi}{n}\right)^{p/2}|\tilde{\Sigma}_{k}|^{1/2},\]

where \(\tilde{\beta}\) maximizes \(-k_{n}(\beta)\).
We compute the gradients of \(g_{n}(\beta)\) and \(k_{n}(\beta)\) and get
\[\nabla g_{n}(\beta) =-\frac{1}{n}\{(a_{0}+1)\sum_{i=1}^{n}y_{i}x_{i}-\sum_{i=1}^{n}\dot{b}(x_{i}^{\prime}\beta)x_{i}-a_{0}\sum_{i=1}^{n}\dot{b}(x_{0i}^{\prime}\beta)x_{0i}\},\] \[\nabla k_{n}(\beta) =-\frac{1}{n}\{a_{0}\sum_{i=1}^{n}y_{i}x_{i}-a_{0}\sum_{i=1}^{n}\dot{b}(x_{0i}^{\prime}\beta)x_{0i}\},\] \[\nabla g_{n}(\hat{\beta}) =0 \Rightarrow\sum_{i=1}^{n}\dot{b}(x_{i}^{\prime}\hat{\beta})x_{i}+a_{0}\sum_{i=1}^{n}\dot{b}(x_{0i}^{\prime}\hat{\beta})x_{0i}=(a_{0}+1)\sum_{i=1}^{n}y_{i}x_{i},\] \[\nabla k_{n}(\tilde{\beta}) =0 \Rightarrow\sum_{i=1}^{n}\dot{b}(x_{0i}^{\prime}\tilde{\beta})x_{0i}=\sum_{i=1}^{n}y_{i}x_{i}.\]
We can see that asymptotically, \(\hat{\beta}\neq\tilde{\beta}\). Then we have
\[\frac{c_{1}(a_{0})}{c_{2}(a_{0})}=\frac{|\hat{\Sigma}_{g}|^{1/2}}{|\hat{\Sigma }_{k}|^{1/2}}\exp\{-n[g_{n}(\hat{\beta})-k_{n}(\tilde{\beta})]\}, \tag{11}\]
where
\[\hat{\Sigma}_{g} =\left[\frac{1}{n}\sum_{i=1}^{n}\ddot{b}(x_{i}^{\prime}\hat{ \beta})x_{i}x_{i}^{\prime}+\frac{a_{0}}{n}\sum_{i=1}^{n_{0}}\ddot{b}(x_{0i}^{ \prime}\hat{\beta})x_{0i}x_{0i}^{\prime}\right]^{-1},\] \[\tilde{\Sigma}_{k} =\left[\frac{a_{0}}{n}\sum_{i=1}^{n_{0}}\ddot{b}(x_{0i}^{\prime} \tilde{\beta})x_{0i}x_{0i}^{\prime}\right]^{-1},\] \[\frac{|\hat{\Sigma}_{g}|^{1/2}}{|\tilde{\Sigma}_{k}|^{1/2}} =\frac{|a_{0}\sum_{i=1}^{n_{0}}\ddot{b}(x_{0i}^{\prime}\tilde{ \beta})x_{0i}x_{0i}^{\prime}|^{1/2}}{|\sum_{i=1}^{n}\ddot{b}(x_{i}^{\prime} \hat{\beta})x_{i}x_{i}^{\prime}+a_{0}\sum_{i=1}^{n_{0}}\ddot{b}(x_{0i}^{\prime }\hat{\beta})x_{0i}x_{0i}^{\prime}|^{1/2}}.\]
The marginal posterior of \(a_{0}\) is then proportional to (11) multiplied by \(\pi(a_{0})\).
## Appendix B Additional Simulations for MSE Criterion
Figures 7 and 8 show the MSE as a function of the prior mean of \(a_{0}\) for increasing ratios of \(n/n_{0}\) when the total sample size is fixed. We observe that as \(n/n_{0}\) increases, the model increasingly benefits from borrowing more (i.e., the MSE is reduced), but this trend is less prominent when the total sample size is larger.

The total sample size of the PLUTO trials in section 4.1 is about twice the total sample size of the melanoma trials in section 4.2. The total sample size of the melanoma trials is not large enough for the model to criticize borrowing at the maximum tolerable difference that we chose. Therefore, the optimal prior derived using the MSE criterion encourages borrowing for the melanoma trial.
Figure 7: MSE as a function of prior mean of \(a_{0}\) for increasing ratios of \(n/n_{0}\) when the total sample size is fixed for the normal _i.i.d._ case.
Figure 8: MSE as a function of prior mean of \(a_{0}\) for increasing ratios of \(n/n_{0}\) when the total sample size is double the total sample size in Figure 7 for the normal _i.i.d._ case. |
# Do large language models solve verbal analogies like children do?

Claire E. Stevenson, Mathilde ter Veen, Rochelle Choenni, Han L. J. van der Maas, Ekaterina Shutova
###### Abstract
Analogy-making lies at the heart of human cognition. Adults solve analogies such as _Horse belongs to stable like chicken belongs to...?_ by mapping relations (_kept in_) and answering _chicken coop_. In contrast, children often use association, e.g., answering _egg_. This paper investigates whether large language models (LLMs) solve verbal analogies in A:B::C:? form using associations, similar to what children do. We use verbal analogies extracted from an online adaptive learning environment, where 14,006 7-12 year-olds from the Netherlands solved 622 analogies in Dutch. The six tested Dutch monolingual and multilingual LLMs performed around the same level as children, with M-GPT performing worst, around the 7-year-old level, and XLM-V and GPT-3 the best, slightly above the 11-year-old level. However, when we control for associative processes this picture changes and each model's performance level drops 1-2 years. Further experiments demonstrate that associative processes often underlie correctly solved analogies. We conclude that the LLMs we tested indeed tend to solve verbal analogies by association with C like children do.
## 1 Introduction
Analogy-making, using what you know about one thing to infer knowledge about a new, somehow related instance, lies at the heart of human intelligence and creativity and forms the core of educational practice Gentner (1988); Hofstadter (1997); Holyoak (2012). Given how important analogical reasoning is to learning and generalization, much research has focused on how this seemingly unique human ability emerges, develops, and can be improved Goswami (1991); Sternberg and Nigro (1980); Stevenson and Hickendorff (2018) as well as emulated in machines Gentner and Forbus (2011); Mitchell (2021). Recently, large language models (LLMs), such as GPT-3 Brown et al. (2020), have demonstrated surprisingly good performance in verbal analogy solving (e.g., _table is to legs as tree is to...? chair, leaves, branches or roots?_) Lu et al. (2022); Webb et al. (2023). The question then arises how LLMs solve these analogies. Is the process similar to adult humans using relational mapping? Or perhaps more similar to the associative processes children tend to use?
Earlier work shows that language models largely rely on semantic similarity between analogy terms to solve analogies Rogers et al. (2020); Ushio et al. (2021), which would indicate an associative process. In this paper we investigate whether six LLMs use similar associative processes to solve a set of Dutch children's verbal analogies. First, we examine how LLM performance compares to children and find that the best models, GPT-3 and XLM-V Liang et al. (2023) perform around the same level as 11-year-olds. Second, we examine whether LLM performance is influenced by the same item characteristics that affect children's analogy solving, where results showed that GPT-3's performance follows a similar pattern to children's, whereas XLM-V's performance is more stable across item characteristics. Third, through a series of prompting experiments we show that these LLMs appear to use associative solving processes, similar to children, where performance drops 1-2 years in age level when we control for associative processes.
This paper contributes to the study of analogical reasoning in LLMs in three ways: (1) it is the first to directly compare LLM analogy solving performance to that of children; (2) we use experiments to tap into the _process_ of analogy-making and compare this to human processes; and (3) we use Dutch rather than English language items and examine performance in mono- and multilingual
LLMs.
## 2 Theoretical Background
### The Analogical Reasoning Process
Although there are different cognitive models of analogical reasoning, varying in the order of processing steps and whether these occur sequentially or in parallel, there is a general consensus on which processes are involved. Taking the example of "body is to feet as tree is to...?" (or more abstractly, A:B::C:?), the basic analogy information processing steps are generally considered to be: (1) encoding the relevant information about the base (A:B) and target (C) domains; (2) searching and retrieving relationships and similarities between the analogy elements in the base domain, A and B (e.g., "stands on" for body and feet); (3) aligning the base and target domains ("body and tree are things that stand") and mapping the most likely relationship between A and B to the target domain, C, to come up with D; and (4) evaluating the validity of the predicted solution Gentner and Hoyos (2017); Sternberg (1977); Thibaut and French (2016).
### Factors Affecting People's Verbal
Analogy Solving
This analogy solving process of encoding, finding relationships, alignment and mapping is consistently found in people from about 12 years and up Thibaut and French (2016). When adults make mistakes there are three main factors that to lead to errors: (1) more difficult types of relations to be mapped (e.g., causal versus categorical) (2) a large conceptual distance between analogy base and target domains, and (3) salient distractors amongst the multiple-choice options Jones et al. (2022).
Type of RelationJones et al. (2022) grouped analogical relations into three types: categorical, causal and compositional. They found that adults perform better on categorical analogies (e.g., tarantula:spider::bee:insect) than causal (e.g., fracture:cast::incision:scar) or compositional (e.g., fingernail:finger::knee:leg) analogies. Children's performance follows a similar pattern, assuming sufficient domain knowledge is in place (e.g., Sternberg and Nigro (1980); Goswami and Brown (1990); Alexander and Kulikowski (1991).
Conceptual Distance Between Base and Target DomainsThe greater the distance between an analogy base and target domain the more difficult the analogy is for adults and children to solve Jones et al. (2022); Thibaut and French (2016). For example, bowl:dish::spoon:silverware is easier for people to solve than wrench:tool::sad:mood.
Distractor SaliencePeople are sometimes lured to choose a distracting incorrect response in multiple choice verbal analogies, and are most easily distracted by answer options that have a strong semantic association with the C term Kucwaj et al. (2022). Jones et al. (2022) defines distractor salience as the relation between C:D relative to each of the C:D', where D' represents each distractor option. Distractor salience is high, when the semantic similarity between C and one of the incorrect answers D' is greater than the semantic similarity between C and the correct answer D. High distractor salience leads to lower performance in adults Ichien et al. (2020); Jones et al. (2022) and this is even more apparent in children Richland et al. (2006); Thibaut and French (2016).
### Analogical Reasoning Development
Children's verbal analogical reasoning improves with age, where a gradual shift occurs around 4-8 years of age from reasoning based on surface similarities and associations to reasoning based on (abstract) relations Gentner (1988); Stevenson and Hickendorff (2018); Gentile et al. (1977). For example, if we ask a four-year-old "Horse belongs to stable like chicken belongs to...?" they may use association and reply "egg", relying on the strong connection between the words chicken and egg to solve the problem. In contrast, older children and adults will likely give the intended relational response "chicken coop", using the underlying relation structure to solve the analogy.
Two main factors that seem to affect the transition from associative to relational reasoning are increased domain knowledge Goswami and Brown (1990); Gentner (1988); Alexander and Kulikowski (1991) and improved executive functions (working memory and inhibition control; Doumas et al. (2018); Thibaut and French (2016).
Children tend to fail in analogy solving if they are unfamiliar with the elements or relations in the analogy Gentner and Hoyos (2017); Goswami and Brown (1990); Goddu et al. (2020). If children are shown to possess the required domain knowledge and are provided clear instructions on how to solve the task then they can successfully solve verbal
analogies (in the form of pictures) as early as 3-years-old (Goswami, 1991; Goddu et al., 2020).
However, even when children can solve these analogies, evidence from scene analogy problems (Richland et al., 2006) and eye-tracking studies (Thibaut and French, 2016) shows that children up to 8 years-old tend to focus first on the C term when solving analogies, sometimes ignoring A and B altogether (Thibaut and French, 2016). This appears to be related to limited working memory capacity (Richland et al., 2006; Stevenson et al., 2013; Stevenson, 2017) and limits in inhibition- and executive control (Thibaut and French, 2016; Doumas et al., 2018). Performance improves when interventions are used that support children's processing capacities (Stevenson and Hickendorff, 2018) and when children are forced to focus first on the A:B pair (Glady et al., 2017).
So what does the analogy solving process look like in children? According to the current literature, children up to about 8 years old, tend to initially rely on associative reasoning to solve analogies, specifically focusing on the C term and selecting its nearest associate. Because executive control processes improve with age, they are increasingly able to inhibit associative responses and take more abstract or complex relations into account when choosing their solution. When instructions are clear, required domain knowledge is present and the cognitive load is low, this can be as early as 3-years-old. However, associative responses remain the fallback strategy when relations are unfamiliar or processing capacities are overtaxed until about 8-years-old (Stevenson and Hickendorff, 2018).
### Verbal Analogy Solving in LLMs
The extent to which LLMs can solve analogies is a subject of debate. Most of this work has focused on comparing models in terms of overall accuracy on benchmarks such as the Bigger Analogy Test Set (BATS; Mikolov et al., 2013) and verbal analogies from the Scholastic Assessment Test (SAT; Turney et al., 2003) and investigating the types of relations they can solve (e.g., syntactic versus semantic). More importantly, when LLMs demonstrate analogy solving abilities (e.g., Webb et al. (2023), it is unclear which processes underlie the achieved solutions. In this study we take the unique perspective of relating LLM verbal analogy solving to human processing and development; to get a better understanding of the what and why of LLM verbal analogy solving.
Word embeddingsA decade ago, Mikolov et al. (2013) published their seminal paper showing that pre-trained word embeddings (e.g., Word2Vec Mikolov et al., 2013) could be used to solve verbal analogies in the form of A:B::C:? using vector arithmetic, the most famous example being: \(embed(king)-embed(man)+embed(woman)\approx embed(queen)\), where \(embed\) represents the word embedding obtained from the pre-trained neural network. This milestone was tempered by Gladkova et al. (2016), who made clear that this method was limited in the breadth of relations that it could process. For example, the capitol-country relation was solved quite successfully, but others such as animal-sound and part-whole, were solved less successfully.
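The vector-offset (3CosAdd) recipe behind these results is easy to state in code. The sketch below uses random placeholder vectors in place of real pretrained embeddings (with which the returned word becomes meaningful); the Dutch vocabulary is chosen only to echo the running examples.

```python
# Minimal sketch of analogy solving by vector arithmetic ("king - man + woman").
# The toy vectors below stand in for real Word2Vec/FastText embeddings.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["koning", "man", "vrouw", "koningin", "paard", "stal", "kip", "hok", "ei"]
embed = {w: rng.normal(size=50) for w in vocab}   # placeholder embeddings

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def solve_analogy(a, b, c, embed):
    """Return the vocabulary word whose vector is closest to b - a + c."""
    target = embed[b] - embed[a] + embed[c]
    candidates = [w for w in embed if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(embed[w], target))

# "man is to koning as vrouw is to ...?" -> with real embeddings: "koningin"
print(solve_analogy("man", "koning", "vrouw", embed))
```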
Transformer language modelsWith the rise of the Transformer architecture, featuring language models such as BERT (Devlin et al., 2018), verbal analogy solving by language models remained a challenge. Earlier work transferred the verbal analogy datasets, such as the BATS to the sentence level, and showed that BERT-based models and GPT-2 (Radford et al., 2019) performed at a similar level to GloVe (Pennington et al., 2014), a word embedding model, on analogies containing relations such as capitol-country and male-female pairs (Zhu and de Melo, 2020). More recently, Czinczoll et al. (2022) developed a dataset containing scientific and metaphor analogies. Here there was a clear advantage of transformer models over analogy solving with word embeddings, where GPT-2, BERT and M-BERT outperformed GloVe on the analogy items containing metaphors such as career:mountain::success:ascent. Yet, the general conclusion remained that verbal analogy solving is still a challenge for LLMs.
People versus LLMs in analogy solvingRecent research has shown that LLMs can solve verbal analogies with similar accuracy to people. For example, Ushio et al. (2021) showed that LLMs such as GPT-2 and RoBERTa generally perform well on analogies designed for 4th to 10th graders (9-16 year-olds). Also, Webb et al. (2023) concluded that GPT-3 generally performs around the same level as adults on four verbal analogy datasets.
Item factors affecting LLM verbal analogy solvingThere has been some research on the effect of _relationship type_ on LLM's verbal analogy solving performance. Ushio et al. (2021) showed that fine-tuned RoBERTa models performed slightly better on categorical relations (hypernymns) than compositional ones (meronyms). And Webb et al. (2023) found that categorical relations in the SAT verbal analogies were easier for GPT-3 than compositional (function) relations and also that categorical relations were easier than both compositional and causal relations on the same items as those administered in Jones et al. (2022). Similarly, Linford et al. (2022) found that categorical relations were easier for BERT models than causal relations, although performance on both was far lower than for human adults.
Similarly to people, LLMs have more difficulty when the _conceptual distance_ between the domains in the analogy are far rather than near. For example, the LLMs in Czinczoll et al. (2022) performed better on the BATS analogies than on their SCAN dataset comprising scientific and metaphor based analogies, where the semantic distance between the base and target domains where greater for the SCAN dataset. In addition the scientific analogies were solved better by LLMs than those based on metaphors, which was explained by there being a clearer correspondence between base and target domains in scientific analogies. Also, Webb et al. (2023), used the items from Jones et al. (2022) to investigate whether, like in people, a near conceptual distance between the base and target domains made analogies easier to solve for GPT-3 than far analogies; this was indeed the case. Interestingly, humans outperformed GPT-3 on the far analogies. Similarly, Linford et al. 2022 found causal relations more difficult than categorical relations for the BERT and BART models they tested.
There is little research on the effect of _distractor salience_ on LLM analogy solving. However, Ushio et al. (2021) did conduct an experiment to evaluate to what degree the entire context of the analogy was needed for LLMs to solve analogies, by masking the head or tail of the candidate analogy pair. They found that RoBERTa and BERT only dropped 10 to 15 percentage points in accuracy, still achieving accuracies of 30% or higher on the SAT analogies. On the one hand, this means that items may be constructed in a way that makes the correct solution the most likely choice. On the other hand, this could also indicate that associative processes, such as semantic similarity, may be a shortcut for LLM analogy solving (which we revisit in our experiments described below). Either way, we can expect that salient distractors, i.e. multiple-choice options that are semantically more similar to the analogy terms than the correct response, will have a greater chance of being "selected" by the LLMs.
## 3 Research Questions
In this pre-registered study (see [https://osf.io/g9c4j](https://osf.io/g9c4j)) we examine how six LLMs' (Dutch monolingual and multilingual models: RobBERT Delobelle et al. (2020), BERTje Liu et al. (2019), XLM-V Liang et al. (2023), D-GPT-2 Radford et al. (2019), M-GPT Shliazhko et al. (2022), and GPT-3 Brown et al. (2020)) solve verbal analogies extracted from a children's online adaptive learning environment.1
Footnote 1: We also tested GPT-4’s ability to solve these analogies. GPT-4 was able to solve >70% with only the C term (so without the A:B relation) and we concluded that data contamination from previously testing these same analogies a number of times in GPT-3 and ChatGPT-3.5-turbo was the likely cause.
**RQ1: How well do LLMs perform compared to children ages 7-12 in verbal analogy solving?** Based on the literature, we expect recent LLMs to solve the analogies with similar accuracy to older children (12-year-olds) as this is similar to adult performance (hypothesis 1a; Webb et al., 2023; Ushio et al., 2021). We also expect that recent larger models, also trained on more data (e.g., XLM-V and GPT-3 versus older BERT and GPT models) will perform better than older, smaller models trained on less data as this provides more domain knowledge to draw upon (hypothesis 1b)2.
Footnote 2: The remaining RQs were investigated with the best performing BERT model
**RQ2: Which item characteristics influence children's and LLM performance on verbal analogies?** We expect the pattern of results found in adults also to be found in children and in LLMs. First, we expect performance on categorical relations to be better than compositional and causal relations for both children (Sternberg and Nigro, 1980; hypothesis 2a1) and LLMs (Webb et al., 2023; hypothesis 2a2). Second, we expect analogies with a near conceptual distance between A:B to be easier than far analogies for children
(Thibaut and French, 2016; hypothesis 2b1) and LLMs (Czinczoll et al., 2022; Webb et al., 2023; hypothesis 2b2). Third, we expect higher distractor salience to lead to far more errors in children, because distractors that are semantically more similar to the C term than the correct answer are the most likely choice when solving an analogy based on association with C (Thibaut and French, 2016; hypothesis 2c1). We expect association to be the main mechanism by which LLMs solve verbal analogies and that they, similar to children, choose answer options with a strong semantic association to C (Ushio et al., 2021b; hypothesis 2c2).
**RQ3: Do LLMs solve verbal analogies using associative processes or analogical reasoning?** We investigate this through a series of experiments comparing LLM performance on alternative formations of the verbal analogies, where we control for associative reasoning.
## 4 Methods
LLM data and code are publicly available on [https://osf.io/zsh2v/](https://osf.io/zsh2v/). Children's data is available upon request.
### Prowise Learn's Verbal Analogies Game
Prowise Learn is an online adaptive learning environment for elementary school children. Over 2,000 elementary schools and >200,000 children in the Netherlands use Prowise Learn to practice their reading, writing and arithmetic skills both in- and outside of school.
The verbal analogies game is one game among a series of reading and writing games in the "Language Sea" game space. Figure 1 contains a screenshot of the verbal analogies game, where an analogy in the form of "A:B::C:?" is presented in written form and the children must choose among five answer options, all five of which are semantically associated with C.
Prowise Learn games are adaptive, so that children solve items that are neither too difficult nor too easy, presenting children with items that they have a 65-85% chance of solving correctly, using response time to improve estimations of their ability (Klinkenberg et al., 2011). Each time a child solves an item his/her ability score on the game is updated according to an algorithm similar to the adaptive ELO rating system used for chess players (for details see Klinkenberg et al., 2011). At the same time the item's difficulty level is adapted according to the same algorithm. In this way item difficulty is on the same scale as the children's ability, and, as such item difficulties can be used to study children's abilities (see van der Ven et al., 2015; de Bree et al., 2017; Gierasimczuk et al., 2013, for examples in the math, language and logical reasoning domains).
The ELO adaptive item presentation algorithm is based on the one-parameter logistic function from item response theory where we estimate the probability a child will solve an item correctly given the child's ability score \(\theta\) and the item's difficulty level \(\beta\) as shown in Equation 1.
\[P(X=1|\theta,\beta)=\frac{e^{(\theta-\beta)}}{1+e^{(\theta-\beta)}} \tag{1}\]
If a child's ability score is higher than the item's difficulty then they will have a higher probability of solving the item correctly; when \(\theta\) and \(\beta\) are equal then the probability of solving the item correctly is.5.
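In code, Equation 1 and an ELO-style update look as follows; the step size and the exact update rule used by Prowise Learn (which also incorporates response times) are not reproduced here, so the update function is purely illustrative.

```python
# Equation 1: probability that a child with ability theta solves an item of
# difficulty beta. The rating update below is illustrative only; the step
# size k and the actual Prowise Learn update rule are assumptions.
import math

def p_correct(theta, beta):
    return math.exp(theta - beta) / (1 + math.exp(theta - beta))

def elo_update(theta, beta, correct, k=0.3):
    """Move ability and difficulty in opposite directions by k times the
    prediction error (hypothetical step size)."""
    err = correct - p_correct(theta, beta)
    return theta + k * err, beta - k * err

print(p_correct(0.0, 0.0))     # 0.5 when ability equals difficulty
print(elo_update(0.0, 0.0, 1))
```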
Data Collection with ChildrenFor this study, we extracted information on 14,006 7-12 year-old's (M = 10.73, SD = 1.15 years) performance on 872 verbal analogies from the Prowise Learn database. We applied three selection criteria when extracting the children's data (on June 19, 2021): (1) children solved at least 20 items in order to ensure we had stable ability estimates, (2) children had last played the game on or after September 1st 2020, the start of the school year and 4 months after the launch of the game, when item difficulty estimates were verified to have small standard errors3 and (3) children were ages 7-12 to avoid
Figure 1: Language Sea example analogy ”lawyer : defending :: teacher : educating”
confounds in performance (i.e., younger children most likely did not have sufficient reading abilities and older children had most likely repeated a grade).
Item SelectionThe game contained three types of verbal reasoning problems; verbal analogies was one of them. From the initial set of 872 verbal analogies, we removed items that: (1) contained words/tokens in the analogy or in two or more response options that were missing from the XLM-V vocabulary (removed 250) 4 and (2) were judged by two independent raters to contain errors (e.g., multiple correct solutions, requiring domain knowledge that children are most likely unfamiliar with; 10 items removed5). This resulted in 622 for data analysis.
Footnote 4: We decided to deviate from our pre-registration to only remove items that XLM-V processed in a way that was incompatible with our probing method (250) rather than all items that at least one LLM could not process (312) so as to have the largest possible item set to conduct our analyses on.
Footnote 5: We only checked items with >1.5 SD item difficulty ratings for errors.
Items for Experiment 4In our last experiment to investigate whether the LLMs indeed chose associated distractors above the correct response we adapted a selection of 34 items from the original item set. First, they were transposed to the form A:C::B:D (e.g., key:keyboard::page:book transposed to key:page::keyboard:book). Second, since the original distractors were related to the C term, we selected new distractors related to the B term using Small World of Words (De Devne and Storms, 2008), a project where native speakers list words that immediately come to mind when prompted with a stimulus word. The first and second author examined the listed associations and selected four distractor options that made sense given the context of the analogy, but could not be considered correct solutions (e.g., in the original form "paper, reading cover and title" were the distractors, in the transposed version "computer, typing, pc and piano" were selected as distractors).
Information extracted per itemThe following information was extracted per item: question text, answer options, item difficulty rating, standard error of the item difficulty rating, type of analogy relation, number of times the item was solved, proportion of times each response option was selected.
### Item characteristics
Relation TypeRelationship type is defined as how the A and B term are related. This relationship must be applied to the C term to find D. Table 1 provides an overview of the types of relations in the Prowise Learn analogies game6. For analyses related to RQ2 we selected 302 items that fall into the following three categories defined by Jones et al. (2022):
Footnote 6: These labels were chosen and annotated by the Prowise Learn item developers, Education or Psychology graduates with experience in item development.
* **Categorical**: one of the A:B terms defines the category and the other word is an example of this category. For example "yellow" is part of the category "color".
* **Causal**: one of the A:B terms is the cause and the other is the effect. For example "stumbling" will result in "falling".
* **Compositional**: one of the A:B terms is part of the other term. For example: "leaf" is part of a "tree".
Conceptual Distance Between Base and Target DomainsWe used three vector-based language models7 to compute the semantic distance (cosine distance) between the A:B and the C:D pair. We categorized the distances of these three vector models as near (distance ranging from 0-.35), middle (.36-.64) or far distance (.65-1.0). Then we determined the semantic distance between base and target (near or far) per item and per model and used the most frequent category (near or far) as the category for analysis. Items in the middle category were dropped for this analysis (217 items removed).
Footnote 7: Word2Vec trained by CLIPS on different Dutch corpora (Tulkens et al., 2016), Word2Vec trained by the Nordic Language Processing Laboratory on the CoNLL17 corpus (Kutzov et al., 2017), and FastText trained on Common Crawl and Wikipedia (Grave et al., 2018).
Distractor SalienceDistractor salience was measured by the cosine similarity between C and D minus the cosine similarity between C and each incorrect answer D'. The distractor salience is high when the similarity between C and D' is higher than the similarity between C and the correct answer (Jones et al., 2022). We used the same three vector-based models mentioned in Section 4.2 to get the vector representations of the
words and compute the cosine distances between C and each of the five D's. Then we determined distractor salience (high or low) per item for each vector model and used the most frequent category (high or low) as the category for analysis.
### Prompting with LLMs
Pretrained Language ModelsWe studied how six different pretrained LLMs solved the same set of verbal analogies as the children. Three of the LLMs are monolingual and 3 multilingual, all of which rely on the Transformer architecture Vaswani et al. (2017).
Three of our models are BERT-based masked language models: **BERTje**De Vries et al. (2019) and **RobBERT**Delobelle et al. (2020) that are both pretrained on Dutch data only, and RobBERT's multilingual variant **XLM-V**Liang et al. (2023) that was trained on 116 languages.8 Similar to their English counterparts, RobBERT is the robustly optimized version of BERTje9 that uses SentencePiece instead of WordPiece for tokenization and only uses Masked Language Modelling as a pretraining task. Moreover, identical to BERT Kenton and Toutanova (2019), all three models contain 12 layers with 12 attention heads each.
Footnote 8: Given that we only include single-token words, we found XLM-V to be more suitable than mBERT or XLM-R as it suffers less from overtokenization in Dutch and thus covers more of our test words.
The other three models are autoregressive Transformer-decoder based language models. The Dutch version of GPT-2 which we refer to as **D-GPT-2**de Vries and Nissim (2021) exploits the pretrained GPT-2 model Radford et al. (2019) and only retrains the lexical embeddings in order to account for a Dutch vocabulary. Whereas, **M-GPT**Shliazhko et al. (2022) is the multilingual version of GPT-2 that was pretrained from scratch on 60 different languages. We also use **GPT-3**Brown et al. (2020), specifically the 'text-davinci-003' engine.
Elicitation and Scoring MethodsWe wanted to mimic the way the children solved the analogy items in the best way possible. This was especially important because we are investigating whether an associative response is more likely in the presence of a correct response. Therefore, we prompted GPT-3 with the full analogy and asked it to choose from the five response options. For example, "tripping is to falling as picking up is to? Choose clean, junk, mess, room, or thrift store." The response options were presented in random order.
This method was not possible for the BERT-based models and led to unexpected output for earlier GPT models. Therefore, we used the masked language model approach and fed the models 'A is to B, as C is to D', replacing D with each possible multiple-choice solution. The D option with the
\begin{table}
\begin{tabular}{l l l l} \hline \hline Prowise Learn relations & N & relations\({}^{*}\) & example \\ \hline action-result & 36 & causal & parasol : shadow :: sun : warmth \\ classification & 51 & categorical & lego : toys :: sock : clothes \\ part-whole & 51 & compositional & gate : city :: door : house \\ problem-solution & 6 & causal & noisy : earplug :: illness : medicine \\ share characteristic & 25 & compositional & giant : mountain :: dwarf : mouse \\ same category & 28 & categorical & lion : tiger :: dog : wolf \\ lacks aspect & 18 & & naked : clothing :: bald : hair \\ belong together & 38 & & evening : dinner :: midday : lunch \\ item-characteristic & 45 & compositional & skyscraper : high :: lead : heavy \\ degree\(|\)strength & 24 & & warm : hot : strange :: crazy \\ object-function & 34 & compositional & pan : cooking :: pen : writing \\ object-location & 46 & & bracelect : arm :: necklace : neck \\ cause-effect & 11 & causal & falling : broken :: heating : hot \\ synonyms & 41 & & crying : whining :: sending : delivering \\ opposites & 51 & & remembering: forgetting :: heating : cooling \\ indicates & 27 & & mushrooms : autumn :: cold : winter \\ actor-action & 29 & & dog : wagging :: cat : purring \\ \hline \hline \end{tabular}
\end{table}
Table 1: \({}^{*}\) Mapping of relations in verbal analogies game to those examined in Jones et al. (2022).
highest probability for the completion was considered the selected response.
To account for variation in query quality, we designed 5 slightly different prompt templates in Dutch, all similar to 'A is to B, as C is to D', to retrieve analogy completions. This means that we administered each item five times (once for each template). We report the mode of the scores using multiple templates, reducing the effect of context and taking into account that the best template differed for the models (see Section 4.3).
Please note that due to tokenization issues some analogy terms or response options could not be processed by the models using our prompting method. We report accuracies for items where all of the analogy terms and at least two distractor response options could be processed. We also report the number of items the accuracy score was based on.
Word embeddingsWe also included two word embedding models, i.e. Word2Vec [12] and FastText [13], to establish a baseline by completing the analogy with vector arithmetic using the formula C - A + B where the selected D is the highest ranked outcome of the five answer options [14].
## 5 Results RQ1: How well do LLMs perform compared to children?
Figure 2 shows performance per model on the 622 items. Here we see that see that 7-year-olds outperform the baseline performance of vector models, but that the six LLMs we tested outperform 7-year-olds. We also see that the best performing models (RobBERT, XLM-V and GPT-3) have around the same level of accuracy as 11-year-olds, confirming hypotheses 1a and 1b.
Given these results we proceed with answering our RQs using the best performing GPT-based model, GPT-3, and the best BERT-based model, XLM-V.
## 6 Results RQ2: Which item factors influence analogy solving?
For RQ2, we tested the effects of solver (children, GPT-3, and XLM-V) and/or item characteristics on accuracy using logistic regression.
Relation TypeWe expected to find categorical relations to be the easiest followed by compositional and then causal relations for both children (H2a) and LLMs (H2b). As can be seen in Figure 3, GPT-3's results follow this pattern, where categorical relations were solved better than compositional relations (\(z=2.45,p=.014\)) and causal were solved worse (\(z=-2.61,p=.009\)). In children, causal relations are more difficult that compositional relations (\(z=-3.46,p<.001\)), but there are no reliable differences in performance on categorical versus compositional relations (\(z=0.79,p=.429\)). In contrast, XLM-V performed similarly on all three relation types (categorical versus compositional: \(z=0.79,p=.429\), causal versus compositional: \(z=0.79,p=.429\)). Interestingly, XLM-V performed better than children and GPT-3 on causal relations. This is surprising as BERT-based LLMs seem to have difficulty with causal relations (e.g., Bhagavatula et al., 2019). However, it is no surprise that children, like adults,
Figure 3: GPT-3 follows the adult pattern, where categorical relations are easiest, followed by compositional relations and causal are most difficult. XLM-V solves items of all three relations equally well.
Figure 2: At what age level does each LLM perform?
find these relations the most difficult Jones et al., 2022.
Near vs Far Distance between Base and Target DomainsAs can be seen in Figure 4, items with a near semantic distance between the base and target domains were easier for both children and LLMs than those with a far semantic distance, confirming hypothesis H2b (\(z=3.55,p<.001\)).
Distractor SalienceItems with lower distractor salience were easier to solve than those with high distractor salience for both children and LLMs (see Figure 5; \(z=5.66,p<.001\)), confirming H2c.
## 7 Results RQ3: Do LLMs use associative or analogical processes to solve analogies?
We conducted a series of experiments using alternative formulations of the analogies to investigate how LLMs solve the verbal analogies, explicitly testing and controlling for associative responses such as those seen in younger children.
### Experiment 1: C:?
In experiment 1, we prompt the LLMs with only the C term, e.g., "C is to [MASK]". If these are solved by association as we expect, then LLMs should still be able to solve a substantial portion of analogies purely by association with C (Ushio et al., 2021; Poliak et al., 2018); hypothesis 3a). This was indeed the case, where GPT-3 solved 33% of the items with only the C term and XLM-V solved 28% with only the C term.
For both GPT-3 and XLM-V, the items that were solved with C:? as a prompt were in all three relation categories and a near or far conceptual distance to the A:B pair was irrelevant. However, distractor salience had a clear effect, where those with low distractor salience were more often solved correctly with only C:? than those with highly salient distractors (GPT-3: \(z=5.52,p<.001\), XLM-V: \(z=6.84,p<.001\)).
### Experiment 2: A:B::C:? for selected items
We then removed the items that each model solved with only the C term and reevaluated their performance compared to children on the remaining items (see Figure 6). In both cases, the models
\begin{table}
\begin{tabular}{l|c c c c c c} \hline \hline Template & RoBERT & Bertje & XLM-V & D-GPT-2 & M-GPT & GPT-3 \\ \hline N items & 697 & 672 & 639 & 382 & 115 & 872 \\ \hline A staat tot B, zoals C staat tot D & 0.53 & 0.47 & 0.55 & **0.43** & **0.36** & **0.55** \\ A hoot bij B, zoals C hoot bij D & 0.53 & 0.46 & 0.56 & 0.42 & 0.33 & 0.54 \\ A is vergelijkbaar met B, zoals C vergelijkbaar is met D & 0.49 & 0.46 & 0.51 & 0.40 & **0.36** & 0.53 \\ A is tot B, zoals C is tot D & **0.55** & **0.48** & **0.59** & 0.38 & 0.32 & **0.55** \\ A hoot bij B op dezelfde manier dat C hoot bij D & 0.51 & 0.44 & 0.56 & 0.40 & 0.31 & 0.50 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Overview of accuracy score for each template for each LLM. Highest accuracy is marked in **bold**.
Figure 4: Near analogies are easier to solve than far analogies.
Figure 5: Analogies with low distractor salience are easier to solve than those with high distractor salience.
performance dropped to below that of the average child, and to around the 10 year-old-level.
### Experiment 3: A:C::B:?
Here we present the same verbal analogy items, but transposed to the form of "A is to C as B is to?". By switching the B and C terms this precludes the analogy from being solved purely by association as all of the distractor options are related to the original C term and not B. We expected this to result in better LLM performance than in experiment 2 because the full analogy is available and there are no distractors that complete the analogy by association with B (Ushio et al. (2021b); hypothesis 3b).
GPT-3 performed at nearly the same level on the transposed analogies as the original analogies (original.55, transposed.50) and XLM-V's performance was only slightly lower (original.56, transposed.49). This was expected (H3b) given that the analogies involved the same words, only in a different order, and the distractors were designed to be related to the C-term, not the B-term.
In both cases, performance was somewhat better than in experiment 2, where associative responses were ruled out (see Table 3). It is possible that the LLMs were forced to extract the relation between A and C, but it is also possible that LLM performance solely improved because distractors were not related to the B-term. To control for this we conducted one last experiment.
### Experiment 4: A:C::B:? for selected items
Here we again present verbal analogy items in the form of "A is to C as B is to?". But, now we selected items that were solved correctly in both A:B::C:? and A:C::B:? forms, but incorrectly in C:? form. This led to a selection of analogies that were robustly solved correctly by both LLMs, but not solved purely by association (as C:?). We then changed the distractor options to be associated to B. And removed items that could be solved as B:? without the A:C relation, leaving us with 34 items. We stipulated that if performance again degraded then there is further evidence that LLMs rely on association rather than relational mapping to solve these verbal analogies.
As can be seen in Table 3, our results showed that performance dropped substantially, albeit still above chance level (20%).
### RQ4: Do the LLMs make the same errors children do?
We also conducted an exploratory analysis to see if LLMs made the same mistakes children did, i.e., choosing the same erroneous distractor option. We selected items that were answered incorrectly by GPT-3 and XLM-V and then compared their most likely chosen distractor to the distractor that most children chose. GPT-3 chose the same distractor as most children 39% of the time, whereas for XLM-V the most probable incorrect response was the same as most children in 33% of cases. For example, in the analogy "pencil is to pencil case, as sweater is to?", children and LLMs most often chose "pants" (wrong answer) instead of "closet" (right answer). But, on other items such as "stem is to plant as snout is to?", children and GPT-3 chose "paws" (wrong answer), whereas XLM-V's most likely response was "mouse" (right answer). There were not many cases in which both LLMs chose the same erroneous distractor, highlighting the differences in the two language models.
Figure 6: Experiment 2. Model performance drops a year if we filter out items solved correctly with C:?. Children’s accuracy also decreased, but to a lesser degree (11-year-olds -.06 accuracy versus -.12 for both XLM-V and GPT-3).
\begin{table}
\begin{tabular}{l c c c c} \hline Exp & Problem & N items & XLM-V & GPT-3 \\ \hline
0 & A:B::C:? & 622 & 0.56 & 0.55 \\
1 & C:? & 622 & 0.28 & 0.33 \\
2 & A:B::C:? & 237 & 0.44 & 0.42 \\
3 & A:C::B:? & 622 & 0.49 & 0.50 \\
4 & A:C::B:? & 34 & 0.29 & 0.38 \\ \hline \end{tabular}
\end{table}
Table 3: LLM performance on the four experiments.
Discussion
The main goal of this paper was to investigate whether LLMs tend to use association to solve verbal analogies, similar to what children do, instead of relational mapping like adults. Direct comparison of performance between the children and LLMs showed that the best performing LLMs, XLM-V and GPT-3, scored at around the 11 year-old level, an age when we expect children to have transitioned to adult-like analogical reasoning (Thibaut and French, 2016). However, the more we controlled for associative responses, the greater the models' performance degraded. Our experiments provide evidence that association rather than relational reasoning likely underlies the process by which LLMs solve verbal analogies. LLMs are, in essence, next (or missing) word prediction machines (McCoy et al., 2023). Thus, it is no surprise that our research shows that these LLMs tend to respond with the most associated next word (i.e., the model's statistically most likely next word). This finding falls in line with the more general research that questions whether reasoning actually occurs or that LLMs are more like Searle's "Chinese Rooms" that pass on the right output because this is what they were trained to do (McCoy et al., 2023; Zecevic et al., 2023; Wu et al., 2023).
However, previous research concluded that analogical reasoning has likely emerged in GPT-3 (Webb et al., 2023). The question then arises why LLM performance on this children's verbal analogy task is not as strong as their performance on other verbal analogy tasks and benchmarks where they seem to outperform even adult college students (e.g., Webb et al., 2023). We see three main explanations. First, the items were in Dutch and not English which means that the models may have less training on the analogy terms. We tried to account for this by removing items where XLM-V could not process part of the analogy given the way we probed the model. But, explicit testing of domain knowledge both in children and LLMs would be advisable for future research. Second, our items contained four distractor options rather than the traditional one distractor used by Webb et al. (2023) and Jones et al. (2022). This reduces the chance of choosing the correct option from 50% to 20%, making our test more robust (Mitchell, 2023). Third and finally, our items were designed to lure children into associative reasoning. All of the distractors were associated with the C-term. The other tests were designed for adults and may not have provided distractors that LLMs would be "lured to choose".
Although our findings point towards LLMs choosing associative solutions similar to what children choose to solve the verbal analogies, does this mean that they solve the analogies like children do? We examined how relation type, conceptual distance between the analogy base and target and the salience of the distractors influenced the LLMs and children's performance. GPT-3's performance was affected in the same way as children's on all accounts. As previous research with people and GPT-3 shows, causal relations were more difficult than compositional ones and categorical items were easiest (Jones et al., 2022; Webb et al., 2023). And both XLM-V and GPT-3, like children, performed better on near analogies than far ones and solved items with low distractor salience better than those with highly salient distractors. When the LLMs made errors, their most likely response was the same as what most children were lured by about \(1/3\) of the time. In sum, there is clearly some overlap between children and LLMs in which solutions were chosen, but further research still needs to uncover which mechanisms underlie why this is the case.
### Conclusion
The most important take-away point from our study is that LLMs may solve analogies as well as 11 year-olds, but to ascertain whether analogical reasoning is emerging in these systems we must take the _mechanisms_ by which they obtain these solutions into account. In contrast to Webb et al. (2023), our findings provide evidence against emerging analogical reasoning based on relational mapping and point towards associative processes, perhaps similar to those in children.
|
2309.04351 | MFO Report: The dry ten Martini problem for Sturmian dynamical systems | This extended Oberwolfach report (to appear in the proceedings of the MFO
Workshop 2335: Aspects of Aperiodic Order) announces the full solution to the
Dry Ten Martini Problem for Sturmian Hamiltonians. Specifically, we show that
all spectral gaps of Sturmian Hamiltonians (as predicted by the gap labeling
theorem) are open for all nonzero couplings and all irrational rotations. We
present here the proof strategy. | Ram Band, Siegfried Beckus, Raphael Loewy | 2023-09-08T14:22:30Z | http://arxiv.org/abs/2309.04351v1 | # Myo report: The dry ten Martini problem for Sturmian dynamical systems
###### Abstract.
This extended Oberwolfach report (to appear in [BCDS]) announces the full solution to the Dry Ten Martini Problem for Sturmian Hamiltonians [1]. Specifically, we show that all spectral gaps of Sturmian Hamiltonians (as predicted by the gap labeling theorem) are open for all nonzero couplings and all irrational rotations. We present here the proof strategy.
Key words and phrases:A connected component \(g:=(a,b)\) in \(\mathbb{R}\setminus\sigma(H_{\alpha,V})\) is called a _spectral gap_, i.e. \(a,b\in\sigma(H_{\alpha,V})\) and \((a,b)\cap\sigma(H_{\alpha,V})=\emptyset\). By (IDS1) and (IDS2), we have \(N_{\alpha,V}(E)=N_{\alpha,V}(E^{\prime})\) for all \(E,E^{\prime}\in[a,b]\). The value \(N_{\alpha,V}(E)\) for some (any) \(E\in g\) is called the _gap label_ of \(g\). The gap labeling theorem
[2] prescribes all the possible labels that the spectral gaps of \(H_{\alpha,V}\) may have. Specifically, for Sturmian Hamiltonians, we have
\[\{N_{\alpha,V}(E)\,|\,E\in\mathbb{R}\setminus\sigma(H_{\alpha,V})\}\subseteq\{ \{n\alpha\}\,|\,n\in\mathbb{Z}\}\cup\{1\},\]
where \(\{n\alpha\}\) is the fractional part of \(n\alpha\) as above. One may ask whether the latter inclusion is an equality - this is the Dry Ten Martini Problem for Sturmian dynamical systems. A complete solution of this problem appears in [1] (see also [1] for the first announcement).
Figure 1. Kohmoto butterfly for \(V=2\): The vertical axis represents the frequency values \(\alpha\). For each \(\alpha\) the spectrum of \(H_{\alpha,V}\) is plotted horizontally. A few spectra \(H_{\frac{p}{q},V}\) are colored for \(\frac{p}{q}\in\{\frac{0}{1},\frac{1}{1},\frac{1}{2},\frac{2}{3},\frac{3}{5}, \frac{5}{8}\}\) representing the first periodic approximations of the Fibonacci Hamiltonian. According to Definition 3, \(A\)-type bands (\(B\)-type bands) are colored blue (red).
**Theorem 1** (Sturmian Dry TMP).: _For all \(\alpha\in[0,1]\setminus\mathbb{Q}\) and all \(V\neq 0\), all possible spectral gaps of \(H_{\alpha,V}\) are open, i.e._
\[\{N_{\alpha,V}(E)\,|\,E\in\mathbb{R}\setminus\sigma(H_{\alpha,V})\}=\big{\{}\{n \alpha\}\,|\,n\in\mathbb{Z}\big{\}}\cup\{1\}.\]
Let us shortly discuss the progress that has been done so far towards a resolution of the Sturmian Dry TMP. For large couplings, \(V>4\), the previous theorem was proven by Raymond [14] (see also [1] for a review). For the Fibonacci Hamiltonian (\(\alpha\) equals to the golden mean \(\varphi:=\frac{\sqrt{5}-1}{2}\)), Damanik and Gorodetski proved that all gaps are open for sufficiently small values of the coupling constant \(V\), see [1, Theorem 1.5]. This (small coupling result) was extended by Mei in [13, Theorem 1.5] to all \(\alpha\in[0,1]\setminus\mathbb{Q}\) with eventually periodic continued fraction expansion. Later, Damanik, Gorodetski and Yessen provided a complete solution for the Fibonacci Hamiltonian in [1, Theorem 1.3]. One might extend this result to all irrational \(\alpha\)'s with eventually periodic continued fraction expansion, confer [1, Section 7]. In the latter work it was also conjectured that the Sturmian Dry TMP is true for all irrational \(\alpha\in[0,1]\) and couplings \(V\neq 0\).
The rest of the note presents the main strategy of the proof of Theorem 1 in five steps.
### Step I: Spectra of periodic approximations
Let \(\alpha\in[0,1]\setminus\mathbb{Q}\). Then the continued fraction expansion of \(\alpha\) is given by the unique sequence \(a_{k}\in\mathbb{N}\) for \(k\geq 0\) satisfying
\[\alpha=a_{0}+\frac{1}{a_{1}+\frac{1}{a_{2}+\frac{1}{\ddots}}}=:[a_{0},a_{1},a_ {2},\ldots].\]
For \(k\geq 0\), we also consider the Diophantine approximations \(\alpha_{k}:=[0,a_{1},\ldots,a_{k}]=\frac{p_{k}}{q_{k}}\) (where \(p_{k},q_{k}\in\mathbb{N}\) are coprime). We refer to these as finite continued fraction expansions.
Since \(\alpha_{k}\in\mathbb{Q}\), the operator \(H_{\alpha_{k},V}\) is periodic. Thus, the spectrum \(\sigma(H_{\alpha_{k},V})\) is a finite union of exactly \(q_{k}\) intervals by the Floquet-Bloch theory. These intervals of \(\sigma(H_{\alpha_{k},V})\) are called _spectral bands_. Moreover, we have the following result [1, 1, 14].
**Proposition 2** (Spectral approximations).: _Let \(V>0\), \(\alpha\in[0,1]\setminus\mathbb{Q}\) and \(\alpha_{k}=[0,a_{1},\ldots,a_{k}]\) for \(k\in\mathbb{N}\). Define the compact sets_
\[\Sigma_{k,V}:=\sigma(H_{\alpha_{k},V})\cup\sigma(H_{\alpha_{k+1},V})\subseteq \mathbb{R},\qquad k\in\mathbb{N}.\]
_Then \(\Sigma_{k+1,V}\subseteq\Sigma_{k,V}\) and_
\[\lim_{k\to\infty}\Sigma_{k,V}=\bigcap_{k\in\mathbb{N}}\Sigma_{k,V}=\sigma(H_{ \alpha,V})\]
_where the limit is taken with respect to the Hausdorff metric on the compact subsets of \(\mathbb{R}\)._
We refer the reader to Figure 1, which demonstrates the first few periodic approximations of the Fibonacci Hamiltonian (\(\alpha=\varphi=[0,1,1,1,1,\ldots]\)).
**Remark**.: _A more general result is proven in [14]. Here, we provide a simplified version, which is sufficient to present our proof strategy._
**Step II: Combinatorial structure of the spectral approximations.** The monotone convergence in Proposition 2 allows to classify the spectral bands into two types.
**Definition 3**.: _A spectral band \(I(V)\) in \(\sigma(H_{\alpha_{k},V})\) is called_
* _of_ type A _if it is strictly contained in a spectral band of_ \(\sigma(H_{\alpha_{k-1},V})\)_;_
* _of_ type B _if it is not of type A and it is strictly contained in a spectral band of_ \(\sigma(H_{\alpha_{k-2},V})\)_;_
**Remark**.: _We note that the analogous notions of types \(A\) and \(B\) appearing in [1, 2] are different than those in Definition 3, even though both are equivalent. The different definition there is essential to prove the next theorem._
To state the next result, we define an order relation on spectral bands. Specifically, \([a,b]\prec[c,d]\) holds if \(a<c\) and \(b<d\). With this at hand, we have the following (see Figure 2 (b)).
**Theorem 4**.: _Let \(V>0\), \(\alpha\in[0,1]\setminus\mathbb{Q}\) and \(\alpha_{k}=[0,a_{1},\ldots,a_{k}]\) for \(k\in\mathbb{N}\). Then the following assertions hold._
* _Each spectral band_ \(I(V)\) _in_ \(\sigma(H_{\alpha_{k},V})\) _is either of type A or of type B._
* _Let_ \(I(V)\) _be a spectral band in_ \(\sigma(H_{\alpha_{k},V})\) _and_ \[p:=\begin{cases}a_{k+1}-1,&\text{if $I(V)$ is of type $A$},\\ a_{k+1},&\text{if $I(V)$ is of type $B$}.\end{cases}\] _Then there exist_
* \(p\) _spectral bands_ \(J^{(1)},\ldots,J^{(p)}\) _of type A in_ \(\sigma(H_{\alpha_{k+1},V})\)_, and_
* \(p+1\) _spectral bands_ \(K^{(1)},\ldots,K^{(p+1)}\) _of type B in_ \(\sigma(H_{\alpha_{k+2},V})\) _that are all strictly contained in_ \(I(V)\) _and they interlace_ \[K^{(j)}\prec J^{(j)}\prec K^{(j+1)},\qquad 1\leq j\leq p.\]
**Remark**.: _We point out that in the interlacing property spectral bands may overlap (as in Figure 2 (b)) for small couplings \(V\), which is not the case in [10]. These overlaps are the source of the difficulty in proving the statement for small couplings. This issue is resolved in [1] by combining trace maps together with a new viewpoint - applying an interlacing theorem to matrix eigenvalues of the periodic approximations. Another crucial ingredient is changing the perspective from considering a single approximation \((\alpha_{k})_{k\in\mathbb{N}}\) to the analysis over the whole space of all finite continued fraction expansions simultaneously using a two-level induction._
Figure 2. (a) The root of the tree \(T\) and two adjacent vertices. (b) The interval \(I(V)\) of \(\sigma(H_{\alpha_{k},V})\) and the spectral bands of \(\sigma(H_{\alpha_{k+1},V})\) and \(\sigma(H_{\alpha_{k+2},V})\) that it includes.
**Step III: The spectrum as the boundary of an infinite tree.** Theorem 4 allows us to construct a (directed) tree \(T\) whose vertices represent the spectral bands. The rules that uniquely define \(T\) in terms of \([0,a_{1},a_{2},a_{3},\ldots]\) are sketched in Figure 2.
Specifically, each vertex has a type (A or B) according to the spectral band it represents, which is well-defined by Theorem 4 (a). The root of this tree and the two vertices to which the root is connected are sketched in Figure 2 (a). Then, for each vertex at level \(k\) (which is a spectral band \(I(V)\) of \(H_{\alpha_{k},V}\)), we connect it with a directed edge to each of the vertices (spectral bands) \(J^{(1)},\ldots,J^{(p)}\) of type A and \(K^{(1)},\ldots,K^{(p+1)}\) of type B, see Theorem 4. We emphasize that the combinatorial data in Theorem 4 - leading to the construction of the tree - depends only on the infinite continued fraction expansion of \(\alpha\) and is independent of \(V>0\). An example of such a tree is provided in Figure 3.
We denote by \(\gamma:=(I_{-1},I_{0},I_{1},I_{2},I_{3},\ldots)\) an infinite path that starts at the root \(I_{-1}=\mathbb{R}\), where \(I_{m}\) are vertices in \(T\) satisfying \(I_{m}\to I_{m+1}\) (meaning \(I_{m+1}\) is strictly included in \(I_{m}\)) for all \(m\geq-1\). Then the boundary of the tree \(\partial T\) is the set of all such infinite paths. We have the following correspondence between \(\partial T\) and \(\sigma(H_{\alpha,V})\).
**Proposition 5**.: _Let \(V>0\) and \(\alpha\in[0,1]\setminus\mathbb{Q}\). Then there exists a surjective map_
\[E_{V}:\partial T\to\sigma(H_{\alpha,V}),\quad\gamma\mapsto E_{V}(\gamma).\]
**Proof outline.** We start by making the following observation. Even though the combinatorial structure of the tree does not depend on \(V>0\), each vertex \(I\) in its own represents a particular spectral band \(I(V)\) depending on \(V\). We now explicitly construct the map \(E_{V}\).
Let \(V>0\) and \(\gamma=(I_{-1},I_{0},I_{1},I_{2},\ldots)\in\partial T\). Since \(\gamma\) is a path, we have \(I_{m}\to I_{m+1}\) for all \(m\geq-1\). By definition of the tree \(T\), this implies \(I_{m+1}(V)\subseteq I_{m}(V)\) for all \(m\geq-1\). Thus, the intersection \(\bigcap_{m\geq-1}I_{m}(V)\) is non-empty. By Proposition 2, this intersection is contained in \(\sigma(H_{\alpha,V})\). Actually, this intersection contains exactly one point: assume by contradiction that there are two different points \(E_{1}\neq E_{2}\) in \(\bigcap_{m\geq-1}I_{m}(V)\). Since we have an intersection of intervals, we conclude
\[\operatorname{Leb}\bigl{(}\sigma(H_{\alpha,V})\bigr{)}\geq|E_{1}-E_{2}|>0,\]
where \(\operatorname{Leb}\) denotes the Lebesgue measure. This contradicts \(\operatorname{Leb}\bigl{(}\sigma(H_{\alpha,V})\bigr{)}=0\) proven in [1]. Thus, there is a unique \(E_{V}(\gamma)\in\sigma(H_{\alpha,V})\) such that \(\cap_{m\geq-1}I_{m}(V)=\{E_{V}(\gamma)\}\). This defines our map \(E_{V}\).
Finally, one can conclude the surjectivity of \(E_{V}:\partial T\to\sigma(H_{\alpha,V})\) by the monotone convergence of the spectra, see Proposition 2. \(\Box\)
We note that classifying spectral bands into a few types and using a tree encoding to identify open gaps was also used recently in [1, Theorem 1.11] for the period doubling Hamiltonian.
**Step IV: The IDS as a function of the tree boundary \(\partial T\).** We start by observing that in the definition of the IDS,
\[N_{\alpha,V}(E):=\lim_{n\to\infty}\frac{\sharp\bigl{\{}\lambda\in\sigma\bigl{(} H_{\alpha,V}|_{[1,n]}\bigr{)}\,\big{|}\,\lambda\leq E\bigr{\}}}{n},\qquad E \in\mathbb{R},\]
the Dirichlet boundary conditions in \(H_{\alpha,V}|_{[1,n]}\) can be replaced by periodic boundary conditions. This change might modify the numerator, but will not affect the limit. Thus, counting eigenvalues below \(E\) may be replaced with counting spectral bands of \(H_{\alpha_{k},V}\) that lie to the left of \(E\). Together with some combinatorial information proven in [10], one concludes the following.
**Proposition 6**.: _Let \(V>0\), \(\alpha\in[0,1]\setminus\mathbb{Q}\) and \(\alpha_{k}=[0,a_{1},\ldots,a_{k}]=\frac{p_{k}}{q_{k}}\). For each \(\gamma\in\partial T\), there are \(\pi_{k}(\gamma)\in\{0,\ldots,a_{k+1}\}\) for \(k\geq-1,\) such that_
\[N_{\alpha,V}\big{(}E_{V}(\gamma)\big{)}=-\alpha+\sum_{k=-1}^{\infty}(-1)^{k}\pi _{k}(\gamma)(q_{k}\alpha-p_{k}).\]
By the right hand side of the previous equation, it is apparent that the value \(N_{\alpha,V}\big{(}E_{V}(\gamma)\big{)}\) depends only on \(\gamma\) and is independent of \(V>0\). Interestingly, the values \(\pi_{k}(\gamma)\) for \(k\geq-1\) are explicitly described by local properties of \(\gamma\) at level \(k\) of the tree, see [11, 1].
### Step V: Identifying a gap label within the tree
With Proposition 6 at hand and knowing \(\pi_{k}(\gamma)\) explicitly, one can inductively prove the following statement.
**Proposition 7**.: _Let \(V>0\), \(\alpha\in[0,1]\setminus\mathbb{Q}\) and \(n\in\mathbb{Z}\). Then there exist two different paths \(\gamma,\eta\in\partial T\) such that_
\[N_{\alpha,V}\big{(}E_{V}(\gamma)\big{)}=N_{\alpha,V}\big{(}E_{V}(\eta)\big{)} =\{n\alpha\}.\]
Recall that the IDS is monotonically increasing, continuous and it is only constant within spectral gaps, confer (IDS1) and (IDS2). Thus, if there are two different \(E_{1},E_{2}\in\sigma(H_{\alpha,V})\) satisfying
\[N_{\alpha,V}\big{(}E_{1}\big{)}=N_{\alpha,V}\big{(}E_{2}\big{)}=\{n\alpha\},\]
then \((E_{1},E_{2})\) is an open spectral gap of \(\sigma(H_{\alpha,V})\) with gap label \(\{n\alpha\}\). In light of this, Theorem 1 follows from Proposition 7, once we prove that \(E_{V}:\partial T\to\sigma(H_{\alpha,V})\) is injective.
**Theorem 8**.: _For all \(V>0\), the map \(E_{V}:\partial T\to\sigma(H_{\alpha,V})\) is injective._
For \(V>4\), Theorem 8 was proven by Raymond [11]. The proof of the statement for \(V>0\) can be found in [1]. The extra difficulty in extending the result from \(V>4\) to \(V>0\) comes from possible overlaps of spectral bands, which do not belong to the same path (see also
Figure 3. An example of a tree for the continued fraction expansion \([0,1,2,4,\ldots]\) is sketched. A specific infinite path starting at the root and approaching a point in the spectrum is indicated in brown (see Proposition 5).
remark after Theorem 4). Nevertheless, it is possible to control these overlaps uniformly over the whole space of all finite continued fraction expansions.
**Proof of Theorem 1.** As was already mentioned above Theorem 1 follows for \(V>0\) by combining Proposition 7 and Theorem 8. In order to extend this result to \(V<0\), one observes that the spectrum \(H_{\alpha_{k},V}\) is antisymmetric with respect to \(V\mapsto-V\). This yields now \(\sigma(H_{\alpha,-V})=-\sigma(H_{\alpha,V})\) and \(N_{\alpha,-V}(-E)=1-N_{\alpha,V}(E)\), which completes the proof of Theorem 1. \(\Box\)
|
2309.14142 | ADMM-Tracking Gradient for Distributed Optimization over Asynchronous
and Unreliable Networks | In this paper, we propose a novel distributed algorithm for consensus
optimization over networks and a robust extension tailored to deal with
asynchronous agents and packet losses. Indeed, to robustly achieve dynamic
consensus on the solution estimates and the global descent direction, we embed
in our algorithms a distributed implementation of the Alternating Direction
Method of Multipliers (ADMM). Such a mechanism is suitably interlaced with a
local proportional action steering each agent estimate to the solution of the
original consensus optimization problem. First, in the case of ideal networks,
by using tools from system theory, we prove the linear convergence of the
scheme with strongly convex costs. Then, by exploiting the averaging theory, we
extend such a first result to prove that the robust extension of our method
preserves linear convergence in the case of asynchronous agents and packet
losses. Further, by using the notion of Input-to-State Stability, we also
guarantee the robustness of the schemes with respect to additional, generic
errors affecting the agents' updates. Finally, some numerical simulations
confirm our theoretical findings and compare our algorithms with other
distributed schemes in terms of speed and robustness. | Guido Carnevale, Nicola Bastianello, Giuseppe Notarstefano, Ruggero Carli | 2023-09-25T13:55:58Z | http://arxiv.org/abs/2309.14142v2 | # ADMM-Tracking Gradient
###### Abstract
In this paper, we propose (i) a novel distributed algorithm for consensus optimization over networks and (ii) a robust extension tailored to deal with asynchronous agents and packet losses. The key idea is to achieve dynamic consensus on (i) the agents' average and (ii) the global descent direction by iteratively solving an online auxiliary optimization problem through a distributed implementation of the Alternating Direction Method of Multipliers (ADMM). Such a mechanism is suitably interfaced with a local proportional action steering each agent estimate to the solution of the original consensus optimization problem. First, in the case of ideal networks, by using tools from system theory, we prove the linear convergence of the scheme with strongly convex costs. Then, by exploiting the averaging theory, we extend such a first result to prove that the robust extension of our method preserves linear convergence in the case of asynchronous agents and packet losses. Further, by using the notion of Input-to-State Stability, we also guarantee the robustness of the schemes with respect to additional, generic errors affecting the agents' updates. Finally, some numerical simulations confirm our theoretical findings and show that the proposed methods outperform the existing state-of-the-art distributed methods for consensus optimization.
## I Introduction
Multi-agent systems have become ubiquitous in many engineering domains, ranging from robotics to power grids, from traffic networks to the Internet of Things. Recent technological advances have indeed led to the widespread adoption of devices equipped with computational and communication resources. These devices (or agents) then form interconnected systems capable of cooperatively performing a variety of tasks, e.g. learning, resource allocation, and exploration. Many of these tasks can be formulated as the _consensus (or cost-coupled) optimization_ problem
\[\min_{x\in\mathbb{R}^{c}}\sum_{i=1}^{N}f_{i}(x),\]
with \(f_{i}\) the local costs of the \(N\) agents. This motivated a wide interest in algorithms to enable multi-agent systems to collaborate towards a solution of consensus problems [1, 2, 3].
Different approaches to designing distributed algorithms for consensus optimization have been explored in the literature, with the main ones being decentralized (sub)gradient, Gradient Tracking (GT), and alternating direction method of multipliers (ADMM) [1]. The class of decentralized (sub)gradient methods has the drawback that exact convergence is only achieved using diminishing stepsizes [2]. Gradient Tracking algorithms were then introduced to guarantee exact convergence with fixed step-sizes, see [4, 5, 6], and have proved their versatility in different setups, see _e.g._[7, 8, 9, 10, 11, 12, 13]. Besides these gradient-based methods, ADMM, popularized by [14], has received wide attention and has proven especially suited to distributed setups [15, 16, 17, 18].
However, the successful deployment of distributed algorithms designed according to these approaches is hindered by the practical challenges that multi-agent systems raise. First of all, the agents cooperating may have highly heterogeneous computational resources, implying that they may perform local updates at different rates. One option is to enforce synchronization of the agents so that they update in lockstep. But besides the technical challenge of synchronization, it would result in faster agents sitting idle before they can initiate a new update [19]. For these reasons, we are interested in _asynchronous_ distributed algorithms. A second challenge is that of _unreliable communications_ between the agents, which may be subject to packet losses or quantization errors, especially when wireless transmission is employed [20]. It is therefore important to characterize the resilience of distributed algorithms to imperfect communications. Finally, in applications such as learning, the large size of the data set stored by an agent implies that evaluating the local gradient may be too computationally expensive. For this reason, _stochastic gradients_ are often employed to approximate the local gradient over a subset of the data, which introduces additive errors [21, 22, 23].
Let us now turn back to the distributed algorithms discussed above; among them, ADMM has arguably proven to be the most robust. In particular, it guarantees exact, almost sure convergence in the presence of asynchrony and packet losses [16, 18, 24, 25], and can reach an approximate solution in the presence of additive communication errors [26, 27, 28, 29]. The drawback of ADMM is that the agents need to solve a local minimization problem to update their state, which in general is
more computationally expensive than the gradient evaluations of GT. Gradient Tracking methods also exhibit convergence in the presence of asynchrony and packet losses [7, 8, 30]. However, as highlighted in [31, 32, 33, 34, 35], they are not robust to additive errors _e.g._ due to unreliable communications. Indeed, GT algorithms are designed to reconstruct the global gradient \(\nabla\sum_{i=1}^{N}f_{i}\) using local gradients \(\nabla f_{i}\), which they accomplish employing a dynamic average consensus block (see [36, 37]). The average consensus, though, introduces a _marginally stable dynamics_[34], which causes the lack of robustness to errors.
The objective of this paper then is to propose a novel gradient tracking algorithm that is provably robust to asynchrony and unreliable communications. Our starting point is the insight that the dynamic average consensus block in GT algorithms can be _replaced by a different, robust consensus protocol_, as done with the ratio (or push-sum) consensus in [30, 38]. Specifically, our algorithm uses an ADMM-based dynamic consensus protocol, which was shown to be robust to both asynchrony and unreliable communications [29], differently from ratio consensus. The protocol is derived by formulating dynamic consensus as an _online, quadratic problem_ and applying the robust distributed ADMM to it [29]. Since the consensus problem is quadratic, the agents' updates have a closed form and no minimization needs to be solved, avoiding the larger computational footprint of ADMM for general problems. Our contribution then brings together the best of both Gradient Tracking, with its lightweight computations, and ADMM, with its robustness.
In the following, we summarize our contributions. We propose a novel distributed gradient tracking method that employs an ADMM-based dynamic consensus protocol. The algorithm then inherits the robustness properties of ADMM to both asynchrony and unreliable communications. We analyze the convergence of the proposed algorithm - for synchronous and reliable networks - by interpreting it as a _singularly perturbed_ system given by the interconnection between (i) a slow subsystem given by an approximation of the gradient descent and a consensus step, and (ii) a fast one related to the states of the ADMM-based consensus protocol. With these tools, we are able to show linear convergence to the optimal solution of the consensus optimization problem. Then, this result allows us to analyze a robust version of the algorithm tailored for networks with asynchrony and packet losses. In this stochastic setup, we apply _averaging theory_ to the singularly perturbed system that represents our algorithm, and by studying the associated averaged system, we are able to show that the linear convergence of the scheme is preserved. Moreover, we are further able to prove that the proposed schemes exhibit Input-to-State Stability (ISS) with respect to additive errors, due, e.g., to unreliable communications or the use of stochastic gradients. Finally, we showcase the performance of the algorithms in numerical simulations for quadratic and logistic regression problems over random networks. We compare its performance with a standard and a robust version of the GT, which lack the same robustness properties. Preliminary results related to this paper appeared in [39]. However, the preliminary work [39] does not consider the scenario over asynchronous and unreliable networks and does not analyze the algorithm robustness. Moreover, in [39], the convergence proof in the case of ideal networks is omitted.
The paper is organized as follows. Section II introduces the problem setup. In Section III, we design the novel distributed algorithm and provide its convergence properties. Section IV is devoted to analyzing the algorithm. In Section V, we present a robust version of the algorithm to address the scenario with imperfect networks, while Section VI is devoted to analyzing this algorithm. Section VII characterizes the robustness of the proposed algorithms. Finally, Section VIII compares our methods and Gradient Tracking.
_Notation:_ The identity matrix in \(\mathbb{R}^{m\times m}\) is \(I_{m}\)., while \(0_{m}\) is the all-zero matrix in \(\mathbb{R}^{m\times m}\). The vector of \(N\) ones is denoted by \(1_{N}\), while \(\mathbf{1}_{N,n}:=1_{N}\otimes I_{n}\) with \(\otimes\) being the Kronecker product. For a finite set \(S\), we denote by \(|S|\) its cardinality. The symbol \(\textsc{col}(v_{i})_{i\in\{1,\ldots,N\}}\) denotes the vertical concatenation of the column vectors \(v_{1},\ldots,v_{N}\). We denote as \(\text{blkdiag}(M_{1},\ldots,M_{N})\in\mathbb{R}^{\sum_{i=1}^{N}n_{i}}\) the block diagonal matrix whose \(i\)-th block is given by \(M_{i}\in\mathbb{R}^{n_{i}\times n_{i}}\).
## II Problem Description and Preliminaries
### _Problem Setup_
We consider a network of \(N\) agents that aim to solve
\[\min_{x\in\mathbb{R}^{n}}\sum_{i=1}^{N}f_{i}(x), \tag{1}\]
where \(x\in\mathbb{R}^{n}\) is the (common) decision variable, while \(f_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is the objective function of agent \(i\in\{1,\ldots,N\}\). In the following, we will also use the function \(f:\mathbb{R}^{n}\rightarrow\mathbb{R}\) defined as \(f(x):=\sum_{i=1}^{N}f_{i}(x)\). Our goal is to design an algorithm to solve (1) in a distributed way, namely with update laws implementable over a network of agents using only (i) local information and (ii) neighboring communication. Indeed, we consider a network of agents communicating according to an undirected graph \(\mathcal{G}=(\{1,\ldots,N\},\mathcal{E})\), with \(\mathcal{E}\subset\{1,\ldots,N\}\times\{1,\ldots,N\}\) such that \(i\) and \(j\) can exchange information only if \((i,j)\in\mathcal{E}\). The set of neighbors of \(i\) is \(\mathcal{N}_{i}:=\{j\in\{1,\ldots,N\}\mid(i,j)\in\mathcal{E}\}\), while its degree is \(d_{i}:=|\mathcal{N}_{i}|\) and \(\mathbf{d}:=\sum_{i=1}^{N}d_{i}\). Notice that it holds \(\mathbf{d}=2|\mathcal{E}|\).
The following assumptions formalize the considered setup.
**Assumption II.1**.: _[Graph] The graph \(\mathcal{G}\) is connected. _[Objective functions] The objective function \(f\) is \(\mathbf{c}\)-strongly convex, while the gradients \(\nabla f_{i}(\cdot)\) are \(L\)-Lipschitz continuous for all \(i\in\{1,\ldots,N\}\). _[_]_
We notice that Assumption II.2 ensures that problem (1) has a unique minimizer and we denote it as \(x^{\star}\in\mathbb{R}^{n}\).
### _Fundamentals of singularly perturbed systems_
In the next, we will propose two novel distributed algorithms and, to assess their convergence property, we will resort to a system-theoretic perspective based on _singular perturbation_ theory. To this end, we provide below a generic convergence result extending [40, Theorem II.5] for the time-varying case. Indeed, we will also consider the case in which the agents are asynchronous and their communication is subject to packet
losses and, thus, the result [40, Theorem II.5] does not suit our purposes since it only considers time-invariant dynamics.
**Theorem II.3** (Global exponential stability for time-varying singularly perturbed systems).: _Consider the system_
\[x^{t+1} =x^{t}+\gamma f(x^{t},z^{t},t) \tag{2a}\] \[z^{t+1} =g(z^{t},x^{t},t), \tag{2b}\]
_with \(x^{t}\in\mathcal{D}\subseteq\mathbb{R}^{n}\), \(z^{t}\in\mathbb{R}^{m}\), \(f:\mathcal{D}\times\mathbb{R}^{m}\times\mathbb{N}\rightarrow\mathbb{R}^{n}\), \(g:\mathbb{R}^{m}\times\mathbb{R}^{n}\times\mathbb{N}\rightarrow\mathbb{R}^{m}\), \(\gamma>0\). Assume that \(f\) and \(g\) are Lipschitz continuous uniformly in \(t\) with respect to \(x^{t}\) and \(z^{t}\) with Lipschitz constants \(L_{f},L_{g}>0\), respectively. Assume that there exists \(z^{\text{eq}}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{m}\) such that for any \(x\in\mathbb{R}^{n}\)_
\[0 =\gamma f(0,z^{\text{eq}}(0),t)\] \[z^{\text{eq}}(x) =g(z^{\text{eq}}(x),x,t),\]
_for all \(t\in\mathbb{N}\) and with \(z^{\text{eq}}\) being Lipschitz continuous with Lipschitz constant \(L_{z^{\text{eq}}}>0\). Let_
\[x^{t+1}=x^{t}+\gamma f(x^{t},z^{\text{eq}}(x^{t}),t) \tag{3}\]
_be the reduced system and_
\[\tilde{z}^{t+1}=g(\tilde{z}^{t}+z^{\text{eq}}(x),x,t)-z^{\text{eq}}(x) \tag{4}\]
_be the boundary layer system with \(\tilde{z}^{t}\in\mathbb{R}^{m}\)._
_Assume that there exist a continuous function \(U:\mathbb{R}^{m}\times\mathbb{N}\rightarrow\mathbb{R}\) and \(\bar{\gamma}_{1}>0\) such that, for any \(\gamma\in(0,\bar{\gamma}_{1})\), there exist \(b_{1},b_{2},b_{3},b_{4}>0\) such that_
\[b_{1}\left\|\tilde{z}\right\|^{2}\leq U(\tilde{z},t)\leq b_{2} \left\|\tilde{z}\right\|^{2} \tag{5a}\] \[U(g(\tilde{z}+z^{\text{eq}}(x),x,t)-z^{\text{eq}}(x),t+1)-U( \tilde{z},t)\] \[\leq-b_{3}\left\|\tilde{z}\right\|^{2}\] (5b) \[\left|U(\tilde{z}_{1},t)-U(\tilde{z}_{2},t)\right|\leq b_{4} \left\|\tilde{z}_{1}-\tilde{z}_{2}\right\|\left\|\tilde{z}_{1}\right\|\] \[\quad+b_{4}\left\|\tilde{z}_{1}-\tilde{z}_{2}\right\|\left\| \tilde{z}_{2}\right\|, \tag{5c}\]
_for any \(\tilde{z},\tilde{z}_{1},\tilde{z}_{2}\in\mathbb{R}^{m}\), \(x\in\mathbb{R}^{n}\), \(t\in N\). Further, assume there exist a continuous function \(W:\mathcal{D}\times\mathbb{N}\rightarrow\mathbb{R}\) and \(\bar{\gamma}_{2}>0\) such that, for any \(\gamma\in(0,\bar{\gamma}_{2})\), there exist \(c_{1},c_{2},c_{3},c_{4}>0\) such that_
\[c_{1}\left\|x\right\|^{2}\leq W(x,t)\leq c_{2}\left\|x\right\|^{2} \tag{6a}\] \[W(x+\gamma f(x,z^{\text{eq}}(x),t),t)-W(x,t)\leq-c_{3}\gamma \left\|x\right\|^{2}\] (6b) \[\left|W(x_{1},t)-W(x_{2},t)\right|\leq c_{4}\left\|x_{1}-x_{2} \right\|\left\|x_{1}\right\|\] \[\quad+c_{4}\left\|x_{1}-x_{2}\right\|\left\|x_{2}\right\|, \tag{6c}\]
_for any \(x,x_{1},x_{2},x_{3}\in\mathcal{D}\), \(t\in\mathbb{N}\)._
_Then, there exist \(\bar{\gamma}\in(0,\min\{\bar{\gamma}_{1},\bar{\gamma}_{2}\})\), \(\kappa_{1}>0\), and \(\kappa_{2}>0\) such that, for all \(\gamma\in(0,\bar{\gamma})\), it holds_
\[\left\|\left[\begin{matrix}x^{t}-x^{\star}\\ z^{t}-z^{\text{eq}}(x^{t})\end{matrix}\right]\right\|\leq\kappa_{1}\left\| \left[\begin{matrix}x^{0}\\ z^{0}-z^{\text{eq}}(x^{0})\end{matrix}\right]\right\|e^{-\kappa_{2}t},\]
_for any \((x^{0},z^{0})\in\mathcal{D}\times\mathbb{R}^{m}\)._
The proof of Theorem II.3 is given in Appendix B.
## III ADMM-Tracking Gradient: Algorithm Design and Convergence Properties
Let \(x_{i}^{t}\in\mathbb{R}^{n}\) be the estimate of the solution to problem (1) maintained by agent \(i\) at iteration \(t\in\mathbb{N}\). We follow a control-oriented design for the update of \(x_{i}^{t}\). Then, let \(u_{i}^{t}\in\mathbb{R}^{n}\) be the \(i\)-th control input and consider the single integrator dynamics
\[x_{i}^{t+1}=x_{i}^{t}+u_{i}^{t}. \tag{7}\]
The control law determining \(u_{i}^{t}\) should have the twofold purpose of removing (i) the consensus error with respect to the other agents' estimates, and (ii) the optimality error related to problem (1). Hence, one may design \(u_{i}^{t}\) as a proportional action with respect to the above errors, namely
\[u_{i}^{t}=\left(\frac{1}{N}\sum_{j=1}^{N}x_{j}^{t}-x_{i}^{t}\right)-\frac{ \delta}{N}\sum_{j=1}^{N}\nabla f_{j}(x_{j}^{t}), \tag{8}\]
where \(\delta>0\) is the stepsize. However, in a distributed setting, agent \(i\) cannot access \(\frac{1}{N}\sum_{j=1}^{N}x_{j}^{t}\) and \(\frac{1}{N}\sum_{j=1}^{N}\nabla f_{j}(x_{j}^{t})\). Therefore, we modify the control law (8) by employing two auxiliary variables \(y_{i}^{t},s_{i}^{t}\in\mathbb{R}^{n}\) aimed at reconstructing \(\frac{1}{N}\sum_{j=1}^{N}x_{j}^{t}\) and \(\frac{1}{N}\sum_{j=1}^{N}\nabla f_{j}(x_{j}^{t})\), respectively, obtaining
\[u_{i}^{t}=(y_{i}^{t}-x_{i}^{t})-s_{i}^{t}. \tag{9}\]
We note that, if \(y_{i}^{t}=\frac{1}{N}\sum_{j=1}^{N}x_{j}^{t}\) and \(s_{i}^{t}=\frac{1}{N}\sum_{j=1}^{N}\nabla f_{j}(x_{j}^{t})\), then the desired update (8) is recovered. Then, inspired by [29], for each iteration \(t\geq 0\), we turn these two consensus-oriented goals into the online optimization problem
\[\min_{\begin{subarray}{c}(y_{1},\ldots,y_{N})\in\mathbb{R}^{Nn}\\ (s_{1},\ldots,s_{N})\in\mathbb{R}^{Nn}\end{subarray}} \sum_{i=1}^{N}\ell_{i}^{t}(y_{i},s_{i})\] (10) s.t.: \[\left[\begin{matrix}y_{i}\\ s_{i}\end{matrix}\right]=\left[\begin{matrix}y_{j}\\ s_{j}\end{matrix}\right]\ \forall(i,j)\in\mathcal{E},\]
where, for all \(i\in\{1,\ldots,N\}\), \(\ell_{i}^{t}:\mathbb{R}^{n}\times\mathbb{R}^{n}\rightarrow\mathbb{R}\) reads as
\[\ell_{i}^{t}(y_{i},s_{i})=\frac{1}{2}\left\|y_{i}-x_{i}^{t}\right\|^{2}+\frac{1}{ 2}\left\|s_{i}-\nabla f_{i}(x_{i}^{t})\right\|^{2}.\]
Indeed, if the graph \(\mathcal{G}\) is connected, then the (unique) optimal solution of problem (10), say it \((y_{i}^{t},s_{\star}^{t})\in\mathbb{R}^{2Nn}\), reads as \((y_{i}^{t},s_{\star}^{t})=(\mathbf{1}_{N,n}\frac{1}{N}\sum_{j=1}^{N}x_{j}^{t}, \mathbf{1}_{N,n}\frac{1}{N}\sum_{j=1}^{N}\nabla f_{j}(x_{j}^{t}))\)[29]. From this observation, we design the updates of \(y_{i}^{t}\) and \(s_{i}^{t}\) by resorting to the distributed ADMM proposed in [18] (see Appendix A for a description of this method in a generic framework). Hence, each agent \(i\) maintains an additional variable \(z_{ij}^{t}\in\mathbb{R}^{2n}\) for each neighbor \(j\in\mathcal{N}_{i}\) and implements
with \(\rho>0\) and \(\alpha\in(0,1)\), Being \(\ell_{i}^{t}\) quadratic, the above updates are equivalent to the closed form
\[\begin{bmatrix}y_{i}^{t}\\ s_{i}^{t}\end{bmatrix} =\frac{1}{1+\rho d_{i}}\left(\begin{bmatrix}x_{i}^{t}\\ \nabla f_{i}(x_{i}^{t})\end{bmatrix}+\sum_{j\in\mathcal{N}_{i}}z_{ij}^{t}\right) \tag{11a}\] \[z_{ij}^{t+1} =(1-\alpha)z_{ij}^{t}+\alpha m_{ji}^{t}, \tag{11b}\]
in which we also introduced \(m_{ji}^{t}\in\mathbb{R}^{2n}\) to denote the message from agent \(j\) needed by agent \(i\) to perform (11b), namely
\[m_{ji}^{t}:=-z_{ji}^{t}+2\rho\begin{bmatrix}y_{i}^{t}\\ s_{j}^{t}\end{bmatrix}, \tag{12}\]
for all \(i\in\{1,\ldots,N\}\) and \(j\in\mathcal{N}_{i}\). Moreover, by resorting to the concepts of singular perturbations, we modify (7) adding a tuning gain \(\gamma>0\) as follows
\[x_{i}^{t+1}=x_{i}^{t}+\gamma u_{i}^{t}. \tag{13}\]
The parameter \(\gamma\) allows us to arbitrarily reduce the speed of variations of \(x_{i}^{t}\) and, as we will detail in the analysis, allows for a proper interconnection with (11). The whole distributed protocol arising by combining (9), (13), and (11) is reported in Algorithm 1 and we name it ADMM-Tracking Gradient.
```
Initialization:\(x_{i}^{0}\in\mathbb{R}^{n},z_{i}^{0}\in\mathbb{R}^{2nd_{i}}\). for\(t=0,1,\ldots\)do \(\begin{bmatrix}y_{i}^{t}\\ s_{i}^{t}\end{bmatrix}=\frac{1}{1+\rho d_{i}}\left(\begin{bmatrix}x_{i}^{t}\\ \nabla f_{i}(x_{i}^{t})\end{bmatrix}+\sum_{j\in\mathcal{N}_{i}}z_{ij}^{t}\right)\) \(x_{i}^{t+1}=x_{i}^{t}+\gamma(y_{i}^{t}-x_{i}^{t})-\gamma\delta s_{i}^{t}\) for\(j\in\mathcal{N}_{i}\)do \(m_{ij}^{t}=-z_{ij}^{t}+2\rho\begin{bmatrix}y_{i}^{t}\\ s_{i}^{t}\end{bmatrix}\) transmit \(m_{ij}^{t}\) to \(j\) and receive \(m_{ji}^{t}\) to \(j\) \(z_{ij}^{t+1}=(1-\alpha)z_{ij}^{t}+\alpha m_{ji}^{t}\)
```
**Algorithm 1** ADMM-Tracking Gradient (Agent \(i\))
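For illustration only, the following Python sketch simulates one synchronous round of Algorithm 1 over all agents; it is not part of the original formulation, and the graph structure `neighbors`, the gradient oracles `grad_f`, and the parameter values are placeholder assumptions.

```python
import numpy as np

def admm_tracking_gradient_step(x, z, neighbors, grad_f, rho, alpha, gamma, delta):
    """One synchronous round of Algorithm 1 (sketch).

    x[i]      : current estimate of agent i (array of shape (n,))
    z[(i,j)]  : auxiliary variable kept by agent i for neighbor j (shape (2n,))
    neighbors : dict mapping i to the list of its neighbors
    grad_f    : list of local gradient oracles, grad_f[i](x_i) -> (n,)
    """
    y, s, msgs = {}, {}, {}
    # Local averaging step (11a): reconstruct y_i and s_i
    for i, Ni in neighbors.items():
        d_i = len(Ni)
        stacked = np.concatenate([x[i], grad_f[i](x[i])])
        ys = (stacked + sum(z[(i, j)] for j in Ni)) / (1.0 + rho * d_i)
        n = x[i].size
        y[i], s[i] = ys[:n], ys[n:]
        # Messages (12) to be transmitted to each neighbor
        for j in Ni:
            msgs[(i, j)] = -z[(i, j)] + 2.0 * rho * ys
    # State update (13) with control law (9) and z-update (11b)
    x_new, z_new = {}, {}
    for i, Ni in neighbors.items():
        x_new[i] = x[i] + gamma * (y[i] - x[i]) - gamma * delta * s[i]
        for j in Ni:
            z_new[(i, j)] = (1.0 - alpha) * z[(i, j)] + alpha * msgs[(j, i)]
    return x_new, z_new
```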
We remark that Algorithm 1 can be implemented in a fully-distributed fashion since it only requires neighboring communication and local variables. The next theorem states the convergence features of Algorithm 1.
**Theorem III.1**.: _Consider Algorithm 1 and let Assumptions II.1 and II.2 hold. Then, there exist \(\bar{\gamma},\bar{\delta},c_{1},c_{2}>0\) such that, for any \(\rho>0\), \(\alpha\in(0,1)\), \(\gamma\in(0,\bar{\gamma})\), \(\delta\in(0,\bar{\delta})\), \((x_{i}^{0},z_{i}^{0})\in\mathbb{R}^{n}\times\mathbb{R}^{2nd_{i}}\), for all \(i\in\{1,\ldots,N\}\), it holds_
\[\left\|x_{i}^{t}-x^{\star}\right\|\leq c_{1}\exp(-c_{2}t).\qed\]
Theorem III.1 is proved in Section IV-D. In detail, with tools from system theory, we will show global exponential stability of \((\mathbf{1}_{N,n}x^{\star},z^{\star})\) for the aggregate form of ADMM-Tracking Gradient, where the quantity \(z^{\star}\in\mathbb{R}^{2nd}\) will be specified later.
## IV Convergence Analysis of ADMM-Tracking Gradient
Here, we provide the analysis of Algorithm 1. First, in Section IV-A, we interpret the aggregate form of ADMM-Tracking Gradient as a _singularly perturbed_ system. Then, in Section IV-B and IV-C, respectively, we separately study the identified subsystems by relying on Lyapunov theory. Finally, in Section IV-D, we use the results collected in the previous steps to prove Theorem III.1.
### _ADMM-Tracking Gradient as a Singularly Perturbed System_
We start by providing a more compact form for the local update of ADMM-Tracking Gradient. To this end, let \(z_{i}^{t}\) be the vector stacking all the variables \(z_{ij}^{t}\) of agent \(i\), i.e., \(z_{i}^{t}:=\textsc{col}(z_{ij}^{t})_{j\in\mathcal{N}_{i}}\in\mathbb{R}^{2nd_{i}}\), and let \(z_{\mathcal{N}_{i}}^{t}:=\textsc{col}(z_{ji}^{t})_{j\in\mathcal{N}_{i}}\in\mathbb{R}^{2nd_{i}}\) be the vector stacking the variables \(z_{ji}^{t}\) maintained by the neighbors \(j\in\mathcal{N}_{i}\). Let \(h_{i}^{x}:\mathbb{R}^{n}\times\mathbb{R}^{2nd_{i}}\rightarrow\mathbb{R}^{n}\), \(h_{i}^{\nabla}:\mathbb{R}^{n}\times\mathbb{R}^{2nd_{i}}\rightarrow\mathbb{R}^{n}\), and \(h_{\mathcal{N}_{i}}:\mathbb{R}^{nd_{i}}\times\mathbb{R}^{2nd_{i}}\to\mathbb{R}^{2nd_{i}}\) be the functions described by
\[h_{i}^{x}(x_{i},z_{i}) =\frac{x_{i}+\begin{bmatrix}I_{n}&0_{n}\end{bmatrix}\sum_{j\in\mathcal{N}_{i}}z_{ij}}{1+\rho d_{i}}\] \[h_{i}^{\nabla}(x_{i},z_{i}) =\frac{\nabla f_{i}(x_{i})+\begin{bmatrix}0_{n}&I_{n}\end{bmatrix}\sum_{j\in\mathcal{N}_{i}}z_{ij}}{1+\rho d_{i}} \tag{14}\] \[h_{\mathcal{N}_{i}}(x_{\mathcal{N}_{i}},z_{\mathcal{N}_{i}}) =\textsc{col}\left(\frac{\textsc{col}(x_{j},\nabla f_{j}(x_{j}))+\sum_{k\in\mathcal{N}_{j}}z_{jk}}{1+\rho d_{j}}\right)_{j\in\mathcal{N}_{i}}.\]
Then, Algorithm 1 can be compactly described as
\[x_{i}^{t+1} =x_{i}^{t}+\gamma\left(h_{i}^{x}(x_{i}^{t},z_{i}^{t})-x_{i}^{t} \right)-\gamma\delta h_{i}^{\nabla}(x_{i}^{t},z_{i}^{t}) \tag{15a}\] \[z_{i}^{t+1} =(1-\alpha)z_{i}^{t}+\alpha(-z_{\mathcal{N}_{i}}^{t}+2\rho h_{ \mathcal{N}_{i}}(x_{\mathcal{N}_{i}}^{t},z_{\mathcal{N}_{i}}^{t})). \tag{15b}\]
Now, let us provide the aggregate formulation of (15). To this end, let us introduce the permutation matrix \(P\in\mathbb{R}^{2nd\times 2nd}\) that swaps the \(ij\)-th element with the \(ji\)-th one, and the matrices \(A_{x}\in\mathbb{R}^{2nd\times Nn}\), \(A_{\nabla}\in\mathbb{R}^{2nd\times Nn}\), \(A\in\mathbb{R}^{2nd\times 2Nn}\), \(H\in\mathbb{R}^{Nn\times Nn}\), and \(\mathcal{H}\in\mathbb{R}^{2Nn\times 2Nn}\) defined as
\[A_{x}:=\text{blkdiag}\left(\begin{bmatrix}\mathbf{1}_{d_{1},n}\\ 0_{d_{1}n}\end{bmatrix},\ldots,\begin{bmatrix}\mathbf{1}_{d_{N},n}\\ 0_{d_{N}n}\end{bmatrix}\right),\quad A_{\nabla}:=\text{blkdiag}\left(\begin{bmatrix}0_{d_{1}n}\\ \mathbf{1}_{d_{1},n}\end{bmatrix},\ldots,\begin{bmatrix}0_{d_{N}n}\\ \mathbf{1}_{d_{N},n}\end{bmatrix}\right),\] \[A:=\text{blkdiag}\left(\mathbf{1}_{d_{1},2n},\ldots,\mathbf{1}_{d_{N},2n}\right),\quad H:=\text{blkdiag}\left(\tfrac{1}{1+\rho d_{1}}I_{n},\ldots,\tfrac{1}{1+\rho d_{N}}I_{n}\right),\] \[\mathcal{H}:=\text{blkdiag}\left(\tfrac{1}{1+\rho d_{1}}I_{2n},\ldots,\tfrac{1}{1+\rho d_{N}}I_{2n}\right).\]
Then, we define the stacking vectors \(x^{t}:=\textsc{col}(x_{1}^{t},\ldots,x_{N}^{t})\in\mathbb{R}^{Nn}\) and \(z^{t}:=\textsc{col}(z_{1}^{t},\ldots,z_{N}^{t})\in\mathbb{R}^{2nd}\). The aggregate formulation of (15) reads as
\[x^{t+1} =x^{t}+\gamma\left(H\left(x^{t}+A_{x}^{\top}z^{t}\right)-x^{t}\right)\] \[\quad-\gamma\delta H\left(G(x^{t})+A_{\nabla}^{\top}z^{t}\right) \tag{16a}\] \[z^{t+1} =z^{t}-\alpha(I+P-2\rho PA\mathcal{H}A^{\top})z^{t}\] \[\quad+2\alpha\rho PA\mathcal{H}v(x^{t}) \tag{16b}\]
where we introduced the operators \(G:\mathbb{R}^{Nn}\rightarrow\mathbb{R}^{Nn}\) and \(v:\mathbb{R}^{Nn}\rightarrow\mathbb{R}^{2Nn}\) that, given any \(x\in\mathbb{R}^{Nn}\) with \(x_{i}\in\mathbb{R}^{n}\) for all \(i\in\{1,\ldots,N\}\), are defined as \(G(x):=\textsc{col}(\nabla f_{1}(x_{1}),\ldots,\nabla f_{N}(x_{N}))\) and \(v(x):=\textsc{col}(x_{1},\nabla f_{1}(x_{1}),\ldots,x_{N},\nabla f_{N}(x_{N}))\). Fig. 1 reports a block diagram graphically describing (16).
From the discussion in [18], we deduce that the matrix \((I+P-2\rho PA\mathcal{H}A^{\top})\) has some eigenvalues at \(0\), all of which are semi-simple. Next, we introduce a decomposition to express \(z^{t}\) according to a basis of the subspace corresponding to the kernel of \((I+P-2\rho PA\mathcal{H}A^{\top})\). To this end, let \(b\in\mathbb{N}\) be the dimension of the subspace \(\mathcal{S}\) spanned by the eigenvectors of \((I+P-2\rho PA\mathcal{H}A^{\top})\) associated with \(0\), \(B\in\mathbb{R}^{2nd\times b}\) be the matrix whose columns represent an orthonormal basis of \(\mathcal{S}\), and \(M\in\mathbb{R}^{2nd\times n_{d}}\) be the matrix such that \(M^{\top}M=I\) and \(B^{\top}M=0\), with \(n_{d}:=2n\mathbf{d}-b\). The next lemma highlights some useful properties of the matrices \(B\) and \(M\).
**Lemma IV.1**.: _Consider the matrices \(B\) and \(M\). Then, it holds_
\[A_{x}^{\top}B =0,\quad A_{\nabla}^{\top}B=0 \tag{17a}\] \[B^{\top}PA =0\] (17b) \[B^{\top}(I+P-2\rho PA\mathcal{H}A^{\top}) =0. \tag{17c}\]
Proof.: By invoking [18, Lemma 2], we claim that
\[\ker(I+P-2\rho PA\mathcal{H}A^{\top})\subset\ker(A^{\top}). \tag{18}\]
Further, by construction of \(A_{x}\) and \(A_{\nabla}\), it holds \(\ker(A^{\top})\subseteq\ker(A_{x}^{\top})\) and \(\ker(A^{\top})\subseteq\ker(A_{\nabla}^{\top})\). Thus, the proof of (17a) follows. In order to prove (17b), we note that (18) implies \((I+P)B=0\). Hence, since \(P=P^{\top}\), it holds
\[B^{\top}(I+P)=0. \tag{19}\]
Further, the result (18) also implies
\[B^{\top}A=0. \tag{20}\]
Moreover, by construction, it holds
\[\ker(A^{\top}P)\subseteq\ker(A^{\top}). \tag{21}\]
The result (17b) follows by combining (20), (21) and the fact that \(P=P^{\top}\). Finally, as for (17c), it follows by combining (17b) and (19).
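As a purely numerical aside (not needed by the analysis), the bases \(B\) and \(M\) can be computed from a singular value decomposition of \(K:=I+P-2\rho PA\mathcal{H}A^{\top}\); the sketch below assumes \(K\) is available as a NumPy array.

```python
import numpy as np

def kernel_decomposition(K, tol=1e-10):
    """Return (B, M): B spans ker(K), M spans its orthogonal complement.

    K is the (2nd x 2nd) matrix I + P - 2*rho*P*A*H*A^T (assumed given).
    """
    # Right singular vectors associated with (numerically) zero singular values
    _, sing, Vt = np.linalg.svd(K)
    null_mask = sing <= tol * max(K.shape) * sing.max()
    B = Vt[null_mask].T    # orthonormal basis of ker(K)
    M = Vt[~null_mask].T   # orthonormal complement: M^T M = I, B^T M = 0
    return B, M
```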
Now, let us introduce the novel coordinates \(\bar{z}\in\mathbb{R}^{b}\) and \(z_{\perp}\in\mathbb{R}^{n_{d}}\) defined as
\[\bar{z}:=B^{\top}z,\quad z_{\perp}:=M^{\top}z. \tag{22}\]
Then, by invoking Lemma IV.1, we use the coordinates (22) to equivalently rewrite system (16) as
\[x^{t+1} =x^{t}+\gamma\left(H\left(x^{t}+A_{x}^{\top}Mz_{\perp}^{t}\right) -x^{t}\right)\] \[\quad\quad-\gamma\delta H\left(G(x^{t})+A_{\nabla}^{\top}Mz_{ \perp}^{t}\right) \tag{23a}\] \[\bar{z}^{t+1} =\bar{z}^{t}\] (23b) \[z_{\perp}^{t+1} =z_{\perp}^{t}-\alpha M^{\top}(I+P-2\rho PA\mathcal{H}A^{\top}) Mz_{\perp}^{t}\] \[\quad+2\alpha\rho M^{\top}PA\mathcal{H}v(x^{t}). \tag{23c}\]
Notably, the variable \(\bar{z}^{t}\) does not affect the other updates of (23) (and it holds \(\bar{z}^{t}\equiv\bar{z}^{0}\) for all \(t\geq 0\)). Thus, by ignoring the variable \(\bar{z}^{t}\) and introducing the error coordinate \(\tilde{x}^{t}:=x^{t}-\mathbf{1}_{N,n}x^{\star}\), system (23) is equivalent to
\[\tilde{x}^{t+1} =\tilde{x}^{t}+\gamma\left(H\left(x^{t}+A_{x}^{\top}Mz_{\perp}^{t}\right)-x^{t}\right)\] \[\quad-\gamma\delta H\left(G(x^{t})+A_{\nabla}^{\top}Mz_{\perp}^{t}\right) \tag{24a}\] \[z_{\perp}^{t+1} =z_{\perp}^{t}-\alpha Fz_{\perp}^{t}+2\alpha\rho M^{\top}PA\mathcal{H}v(x^{t}), \tag{24b}\]
where, with a slight abuse of notation, we used \(x^{t}=\tilde{x}^{t}+\mathbf{1}_{N,n}x^{\star}\) and introduced the matrix \(F\in\mathbb{R}^{n_{d}\times n_{d}}\) defined as
\[F:=M^{\top}(I+P-2\rho PA\mathcal{H}A^{\top})M. \tag{25}\]
Next, we provide a crucial property about the matrix \(I-\alpha F\).
**Lemma IV.2**.: _Consider the matrix \(F\) as defined in (25). Then, the matrix \(I-\alpha F\) is Schur for any \(\alpha\in(0,1)\)._
Proof.: By using (17c) and (25), it holds
\[\begin{bmatrix}B^{\top}\\ M^{\top}\end{bmatrix}(I-\alpha(I+P-2\rho PA\mathcal{H}A^{\top}))\begin{bmatrix} B&M\end{bmatrix}\] \[=\begin{bmatrix}I&0\\ 0&I-\alpha F\end{bmatrix}.\]
The result [29, Lemma 2] ensures that all the eigenvalues of \(I-\alpha(I+P-2\rho PA\mathcal{H}A^{\top})\) are inside the circle on the complex plane with center \(1-\alpha\) and radius \(\alpha\). Since we isolated the eigenvalues equal to \(1\), we deduce that the matrix \(I-\alpha F\) is Schur for any \(\alpha\in(0,1)\).
We interpret (24) as a singularly perturbed system in the form of (2), i.e., the interconnection between the slow subsystem (24a) and the fast one (24b). Indeed, the fast scheme (24b) has an equilibrium parametrized in the slow state through the function \(z_{\perp}^{\text{eq}}:\mathbb{R}^{Nn}\to\mathbb{R}^{n_{d}}\) defined as
\[z_{\perp}^{\text{eq}}(\tilde{x}^{t}):=2\rho F^{-1}M^{\top}PA\mathcal{H}v( \tilde{x}^{t}+\mathbf{1}_{N,n}x^{\star}). \tag{26}\]
Moreover, given any \(\tilde{x}\in\mathbb{R}^{Nn}\) and the corresponding \(x=\tilde{x}+\mathbf{1}_{N,n}x^{\star}\), it is possible to show that
\[HA_{x}^{\top}Mz_{\perp}^{\text{eq}}(\tilde{x}) =\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N}x-Hx \tag{27a}\] \[HA_{\nabla}^{\top}Mz_{\perp}^{\text{eq}}(\tilde{x}) =\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N}G(x)-HG(x). \tag{27b}\]
### _Boundary Layer System_
In this section, we will study the so-called boundary layer system associated to (24). Therefore, we consider an arbitrarily fixed \(\tilde{x}^{t}\equiv\tilde{x}\in\mathbb{R}^{Nn}\) for all \(t\in\mathbb{N}\) and accordingly
Figure 1: Block diagram representing (16).
rewrite (24b) using the error coordinate \(\tilde{z}_{\perp}^{t}:=z_{\perp}^{t}-z_{\perp}^{\text{eq}}(\tilde{x})\). Hence, the definition of \(z_{\perp}^{\text{eq}}\) (cf. (26)) leads to
\[\tilde{z}_{\perp}^{t+1}=\tilde{z}_{\perp}^{t}-\alpha F\tilde{z}_{\perp}^{t}. \tag{28}\]
The next lemma provides a function \(U\) satisfying (5) for (28).
**Lemma IV.3**.: _Consider (28). Then, there exists \(U:\mathbb{R}^{n_{d}}\to\mathbb{R}\) such that the conditions (5) are satisfied._
Proof.: We recall that the matrix \(I-\alpha F\) is Schur for any \(\alpha\in(0,1)\) (cf. Lemma IV.2). Then, we arbitrarily choose \(Q_{\tilde{z}_{\perp}}\in\mathbb{R}^{n_{d}\times n_{d}}\), \(Q_{\tilde{z}_{\perp}}=Q_{\tilde{z}_{\perp}}^{\top}>0\) and, in view of the Schur property of \(I-\alpha F\), we claim that there exists \(S_{\tilde{z}_{\perp}}\in\mathbb{R}^{n_{d}\times n_{d}}\), \(S_{\tilde{z}_{\perp}}=S_{\tilde{z}_{\perp}}^{\top}>0\) such that
\[(I-\alpha F)^{\top}S_{\tilde{z}_{\perp}}(I-\alpha F)-S_{\tilde{z}_{\perp}}=- Q_{\tilde{z}_{\perp}}. \tag{29}\]
We then choose the candidate Lyapunov function \(U(\tilde{z}_{\perp})=\tilde{z}_{\perp}^{\top}S_{\tilde{z}_{\perp}}\tilde{z}_{\perp}\). Conditions (5a) and (5c) are satisfied since the chosen \(U\) is a quadratic positive definite function, while (5b) follows by applying (29).
### _Reduced System_
Now, we study the so-called reduced system associated to (24), i.e., the system obtained by considering \(z_{\perp}^{t}=z_{\perp}^{\text{eq}}(\tilde{x}^{t})\) into (24a) for all \(t\geq 0\). By using (27), such a system reads as
\[\tilde{x}^{t+1} =\tilde{x}^{t}-\gamma\left(I-\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N}\right)\tilde{x}^{t}\] \[\quad-\gamma\delta\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{ N}G(\tilde{x}^{t}+\mathbf{1}_{N,n}x^{\star}). \tag{30}\]
The next lemma provides a Lyapunov function for system (30) satisfying the conditions required in (6).
**Lemma IV.4**.: _Consider (30). Then, there exists \(W:\mathbb{R}^{Nn}\to\mathbb{R}\) such that, for any \(\gamma\in(0,1)\) and \(\delta\in(0,\min\{2\mathbf{c}/(\gamma NL^{2}),2\})\), \(W\) satisfies (6)._
Proof.: We start by introducing an additional change of variables to isolate (i) the optimality error and (ii) the consensus error related to the vector \(\tilde{x}^{t}\). To this end, let \(R\in\mathbb{R}^{Nn\times(N-1)n}\) be the matrix such that its columns span the space orthogonal to the one of \(\mathbf{1}_{N,n}\), i.e., such that \(R^{\top}\mathbf{1}_{N,n}=0\) and \(R^{\top}R=I_{(N-1)n}\). Then, let us introduce the new coordinates \(\mu\in\mathbb{R}^{n}\) and \(x_{\perp}\in\mathbb{R}^{(N-1)n}\) defined as
\[\mu:=\frac{\mathbf{1}_{N,n}^{\top}}{N}\tilde{x},\quad x_{\perp}:=R^{\top} \tilde{x}. \tag{31}\]
By using the coordinates (31), \(\mathbf{1}_{N,n}\in\ker(I_{Nn}-\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top} }{N})\), and \(R^{\top}\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N}=0\), we rewrite (30) as the cascade system
\[\mu^{t+1} =\mu^{t}-\gamma\delta\frac{\mathbf{1}_{N,n}^{\top}}{N}G(\mathbf{1}_{N,n}\mu^{t}+Rx_{\perp}^{t}+\mathbf{1}_{N,n}x^{\star}) \tag{32a}\] \[x_{\perp}^{t+1} =(1-\gamma)x_{\perp}^{t}. \tag{32b}\]
For the sake of compactness, let us introduce \(\tilde{G}:\mathbb{R}^{n}\times\mathbb{R}^{(N-1)n}\to\mathbb{R}^{n}\) defined as
\[\tilde{G}\left(\mu,x_{\perp}\right) :=-\frac{\mathbf{1}_{N,n}^{\top}}{N}G(\mathbf{1}_{N,n}\mu+Rx_{ \perp}+\mathbf{1}_{N,n}x^{\star})\] \[\quad+\frac{\mathbf{1}_{N,n}^{\top}}{N}G(\mathbf{1}_{N,n}\mu+ \mathbf{1}_{N,n}x^{\star}). \tag{33}\]
By using this notation and adding and subtracting \(\frac{\gamma\delta}{N}\nabla f(\mu^{t}+x^{\star})\) in (32a), we compactly rewrite system (32) as
\[\mu^{t+1} =\mu^{t}-\frac{\gamma\delta}{N}\nabla f(\mu^{t}+x^{\star})+ \gamma\delta\tilde{G}\left(\mu^{t},x_{\perp}^{t}\right) \tag{34a}\] \[x_{\perp}^{t+1} =(1-\gamma)x_{\perp}^{t}. \tag{34b}\]
Once this formulation is available, we consider the candidate Lyapunov function \(W:\mathbb{R}^{Nn}\to\mathbb{R}\) defined as
\[W(\tilde{x}) :=\frac{1}{2}\tilde{x}^{\top}\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N^{2}}\tilde{x}+\frac{m}{2}\tilde{x}^{\top}RR^{\top}\tilde{x}\] \[=\underbrace{\frac{1}{2}\left\|\mu\right\|^{2}}_{:=W_{\mu}(\mu)}+ \underbrace{\frac{m}{2}\left\|x_{\perp}\right\|^{2}}_{:=W_{x_{\perp}}(x_{ \perp})}, \tag{35}\]
where \(m>0\) will be fixed later. Being \(W\) quadratic and positive definite, the conditions (6a) and (6c) are satisfied. To check condition (6b), we write \(\Delta W_{\mu}(\mu^{t}):=W_{\mu}(\mu^{t+1})-W_{\mu}(\mu^{t})\) along (34a), thus obtaining
\[\Delta W_{\mu}(\mu^{t}) =-\frac{\gamma\delta}{N}(\mu^{t})^{\top}\nabla f(\mu^{t}+x^{\star})+ \gamma\delta(\mu^{t})^{\top}\tilde{G}(\mu^{t},x_{\perp}^{t})\] \[\quad+\gamma^{2}\delta^{2}\nabla f(\mu^{t}+x^{\star})^{\top} \tilde{G}(\mu^{t},x_{\perp}^{t})\] \[\quad+\frac{\gamma^{2}\delta^{2}}{2}\left\|\nabla f(\mu^{t}+x^{ \star})\right\|^{2}+\frac{\gamma^{2}\delta^{2}}{2}\left\|\tilde{G}(\mu^{t},x_{ \perp}^{t})\right\|^{2}. \tag{36}\]
Since \(x^{\star}\) is the (unique) solution of problem (1), it holds \(\nabla f(x^{\star})=0\) which allows us to write
\[-(\mu^{t})^{\top}\nabla f(\mu^{t}+x^{\star}) =-(\mu^{t})^{\top}\left(\nabla f(\mu^{t}+x^{\star})-\nabla f(x^{\star})\right)\] \[\stackrel{{(a)}}{{\leq}}-\mathbf{c}\left\|\mu^{t}\right\|^{2}, \tag{37}\]
where in \((a)\) we use the c-strong convexity of \(f\) (cf. Assumption II.2). Then, since \(\nabla f(x^{\star})=0\) and the gradients \(\nabla f_{i}\) are Lipschitz continuous (cf. Assumption II.2), we get
\[\left\|\nabla f(\mu^{t}+x^{\star})\right\| =\left\|\nabla f(\mu^{t}+x^{\star})-\nabla f(x^{\star})\right\|\] \[\stackrel{{(a)}}{{\leq}}L\left\|\mu^{t}\right\|. \tag{38}\]
Then, by using the Lipschitz continuity of the gradients \(\nabla f_{i}\) (cf. Assumption II.2) and \(\left\|R\right\|=1\), we write the bound
\[\left\|\tilde{G}(\mu^{t},x_{\perp}^{t})\right\|\leq\frac{L}{\sqrt{N}}\left\|x_ {\perp}^{t}\right\|. \tag{39}\]
Hence, by using (37), (38), and (39), we bound (36) as
\[\Delta W_{\mu}(\mu^{t}) \leq-\frac{\gamma\delta}{N}\mathbf{c}\left\|\mu^{t}\right\|^{2}+\gamma\delta k_{1}L\left\|\mu^{t}\right\|\left\|x_{\perp}^{t}\right\|\] \[\quad+\gamma^{2}\delta^{2}k_{1}^{2}\left\|\mu^{t}\right\|\left\|x_{\perp}^{t}\right\|+\gamma^{2}\delta^{2}\frac{L^{2}}{2}\left\|\mu^{t}\right\|^{2}\] \[\quad+\gamma^{2}\delta^{2}\frac{k_{1}^{2}}{2}\left\|x_{\perp}^{t}\right\|^{2}. \tag{40}\]
Moreover, along (34b) it holds
\[\Delta W_{x_{\perp}}(x_{\perp}^{t}):=W_{x_{\perp}}(x_{\perp}^{t+1})-W_{x_{\perp}}(x_{\perp}^{t})=-\gamma m\left(1-\frac{\gamma}{2}\right)\left\|x_{\perp}^{t}\right\|^{2}. \tag{41}\]
Then, by defining \(\Delta W(\tilde{x}^{t}):=\Delta W_{\mu}(\mu^{t})+\Delta W_{x_{\perp}}(x_{\perp}^{t})\) and using (40) and (41), we get
\[\Delta W(\tilde{x}^{t})\leq-\gamma\begin{bmatrix}\left\|\mu^{t}\right\|\\ \left\|x_{\perp}^{t}\right\|\end{bmatrix}^{\top}Q\begin{bmatrix}\left\|\mu^{t}\right\|\\ \left\|x_{\perp}^{t}\right\|\end{bmatrix}, \tag{42}\]
where we introduced the matrix \(Q=Q^{\top}\in\mathbb{R}^{2\times 2}\) given by
\[Q:=\begin{bmatrix}\delta\mathbf{c}/N-\gamma\delta^{2}L^{2}/2&- \delta(k_{1}L+\gamma\delta k_{1}^{2})/2\\ -\delta(k_{1}L+\gamma\delta k_{1}^{2})/2&m(1-\gamma/2)-\gamma\delta^{2}k_{1}^ {2}/2\end{bmatrix}\]
By the Sylvester criterion, it holds \(Q>0\) if and only if
\[\begin{cases}\delta\mathbf{c}/N-\gamma\delta^{2}L^{2}/2>0\\ m>\bar{m}(\gamma,\delta),\end{cases} \tag{43}\]
where \(\bar{m}(\gamma,\delta)\) is defined as
\[\bar{m}(\gamma,\delta):=\frac{\gamma^{2}\delta k_{1}^{2}/2(\mathbf{c}/N-\gamma \delta L^{2}/2)+\delta(k_{1}L+\delta\gamma k_{1}^{2})^{2}/4}{\gamma(\mathbf{c}/ N-\gamma\delta L^{2}/2)(1-\gamma/2)}.\]
Hence, we fix \(\gamma\in(0,1)\), then pick \(\delta\in(0,\min\{\frac{2\mathbf{c}}{\gamma NL^{2}},2\})\), and, finally, choose \(m>\bar{m}(\gamma,\delta)\). In this way, both conditions in (43) are satisfied. Thus, we use \(q>0\) to denote the smallest eigenvalue of \(Q\) and bound (42) as
\[\Delta W(\tilde{x}^{t}) \leq-\gamma q(\left\|\mu^{t}\right\|^{2}+\left\|x_{\perp}^{t}\right\|^{2})\] \[\leq-\gamma q(\tilde{x}^{t})^{\top}\left(\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N^{2}}+RR^{\top}\right)\tilde{x}^{t},\]
which ensures that also condition (6b) is satisfied since \(\frac{\mathbf{1}_{N,n}\mathbf{\mathbbm{1}}_{N,n}^{\top}}{N^{2}}+RR^{\top}>0\).
### _Proof of Theorem iii.1_
The proof of Theorem III.1 is based on the exploitation of Theorem II.3. In order to apply such a result we need to (i) provide a function \(U\) satisfying the conditions (5) when applied to system (28), (ii) provide a function \(W\) satisfying the conditions (6) when system (30) is considered, and (iii) guarantee the Lipschitz continuity of the dynamics of (24) and of \(z_{\perp}^{\text{eq}}(\cdot)\) (cf. (26)). As for points (i) and (ii), we invoke Lemma IV.3 and Lemma IV.4, respectively, to claim that, for any \(\alpha\in(0,1)\), \(\gamma\in(0,1)\), and \(\delta\in(0,\min\{2\mathbf{c}/(\gamma NL^{2}),2\})\), these points are satisfied. Finally, the Lipschitz continuity of the gradients of the objective functions (cf. Assumption II.2) allows us to claim that point (iii) is satisfied too. Therefore, we can apply Theorem II.3 which allows us to guarantee that there exists \(\bar{\gamma}>0\) such that, for any \(\gamma\in(0,\bar{\gamma})\), the point \((0,z_{\perp}^{\text{eq}}(0))\) is globally exponentially stable for system (24). The proof follows by turning back to the original coordinates \((x,z)\).
## V Asynchronous and Lossy Networks
In this section, we study the case with imperfect networks. Specifically, we consider the case in which the agents are (possibly) asynchronous and communicate with (possible) packet losses. More formally, for all \(i\in\{1,\dots,N\}\), we introduce a stochastic process \(\lambda_{i}^{t}\in\{0,1\}\) modeling the fact that agent \(i\) is active or not in the following sense
\[\lambda_{i}^{t} =1\implies\text{agent $i$ updates and transmits variables at $t$}\] \[\lambda_{i}^{t} =0\implies\text{agent $i$ does not update and transmit variables at $t$}.\]
Further, for each pair \((i,j)\in\mathcal{E}\), let \(\beta_{ij}^{t}\in\{0,1\}\) be the stochastic process modeling the packet losses in the sense
\[\beta_{ij}^{t} =1\implies i\text{ receives message $m_{ji}^{t}$ from $j$ at $t$}\] \[\beta_{ij}^{t} =0\implies i\text{ does not receive message $m_{ji}^{t}$ from $j$ at $t$},\]
in which \(m_{ij}^{t}\) has the same meaning as in (12).
**Remark V.1**.: _We note that each process \(\beta_{ij}^{t}\) depends on the process \(\lambda_{j}^{t}\). Indeed, the fact that agent \(j\) is active at iteration \(t\) is a necessary (but not sufficient) condition to ensure that agent \(i\) successfully receives a message from agent \(j\). \(\square\)_
In light of the above remark, we also introduce the stochastic process \(\psi_{ij}^{t}:=\lambda_{i}^{t}\beta_{ij}^{t}\) for each pair \((i,j)\in\mathcal{E}\). Now, in order to determine the class of considered network imperfections, we provide the following definition.
**Definition V.2**.: _Let \(p^{t}\) be a stochastic process. Then, \(p^{t}\) is said to be mean-ergodic if it holds_
\[\lim_{T\to\infty}\frac{1}{T}\sum_{\tau=1}^{T}p^{\tau}=\mathbb{E}[p^{t}].\]
**Assumption V.3** (Network Imperfections).: _For all \(i\in\{1,\dots,N\}\) and \(j\in\mathcal{N}_{i}\), the stochastic processes \(\lambda_{i}^{t}\) and \(\psi_{ij}^{t}\) are ergodic in mean. Further, it holds \(\mathbb{E}[\psi_{ij}^{t}]>\epsilon_{\psi}\) for all \((i,j)\in\mathcal{E}\), for some \(\epsilon_{\psi}\in(0,1)\). Finally, for all \(i\in\{1,\dots,N\}\), it holds \(\mathbb{E}[\lambda_{i}^{t}]>\epsilon_{\lambda}\) for some \(\epsilon_{\lambda}\in(0,1)\). \(\square\)_
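As an illustrative sketch of Assumption V.3 (ours, not part of the paper), Bernoulli activation and delivery processes, with delivery conditioned on the sender being active as in Remark V.1, are mean-ergodic and yield \(\mathbb{E}[\psi_{ij}^{t}]>0\):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_processes(T, p_active=0.7, p_deliver=0.8):
    """Simulate lambda_i^t, beta_ij^t, and psi_ij^t = lambda_i^t * beta_ij^t."""
    lam_i = rng.random(T) < p_active               # receiver activation
    lam_j = rng.random(T) < p_active               # sender activation
    beta_ij = lam_j & (rng.random(T) < p_deliver)  # delivery requires j active
    psi_ij = lam_i & beta_ij
    # Mean ergodicity: empirical averages approach E[lambda_i] and E[psi_ij]
    return lam_i.mean(), psi_ij.mean()

print(simulate_processes(100_000))  # roughly (0.7, 0.7 * 0.7 * 0.8)
```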
### _Robust ADMM-Tracking Gradient: Algorithm Description and Convergence Properties_
To address the challenging framework described above, we propose a slightly different version of Algorithm 1 that we call Robust ADMM-Tracking Gradient and report in Algorithm 2.
```
Initialization: \(x_{i}^{0}\in\mathbb{R}^{n}\), \(z_{ij}^{0}\in\mathbb{R}^{2n}\) \(\forall j\in\mathcal{N}_{i}\).
for \(t=0,1,\ldots\) do
  if active then
    \(\begin{bmatrix}y_{i}^{t}\\ s_{i}^{t}\end{bmatrix}=\frac{1}{1+\rho d_{i}}\left(\begin{bmatrix}x_{i}^{t}\\ \nabla f_{i}(x_{i}^{t})\end{bmatrix}+\sum_{j\in\mathcal{N}_{i}}z_{ij}^{t}\right)\)
    \(x_{i}^{t+1}=x_{i}^{t}+\gamma(y_{i}^{t}-x_{i}^{t})-\gamma\delta s_{i}^{t}\)
    for \(j\in\mathcal{N}_{i}\) do
      \(m_{ij}^{t}=-z_{ij}^{t}+2\rho\begin{bmatrix}y_{i}^{t}\\ s_{i}^{t}\end{bmatrix}\)
      transmit \(m_{ij}^{t}\) to \(j\) and receive \(m_{ji}^{t}\) from \(j\)
      if \(m_{ji}^{t}\) is received then
        \(z_{ij}^{t+1}=(1-\alpha)z_{ij}^{t}+\alpha m_{ji}^{t}\)
```
**Algorithm 2** Robust ADMM-Tracking Gradient (Agent \(i\))
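A minimal Python sketch of one round of Algorithm 2 (our own illustration; the activation flags `active`, the delivery flags `received`, and the remaining quantities are assumed to be provided externally):

```python
import numpy as np

def robust_admm_tg_step(x, z, neighbors, grad_f, active, received,
                        rho, alpha, gamma, delta):
    """One round of Algorithm 2: inactive agents and lost messages skip updates.

    active[i]       : True if agent i is active at this iteration
    received[(i,j)] : True if agent i receives the message from neighbor j
    """
    msgs = {}
    x_new = {i: xi.copy() for i, xi in x.items()}
    z_new = dict(z)
    for i, Ni in neighbors.items():
        if not active[i]:
            continue  # inactive agents keep x_i and all z_ij unchanged
        d_i = len(Ni)
        stacked = np.concatenate([x[i], grad_f[i](x[i])])
        ys = (stacked + sum(z[(i, j)] for j in Ni)) / (1.0 + rho * d_i)
        n = x[i].size
        y_i, s_i = ys[:n], ys[n:]
        x_new[i] = x[i] + gamma * (y_i - x[i]) - gamma * delta * s_i
        for j in Ni:
            msgs[(i, j)] = -z[(i, j)] + 2.0 * rho * ys
    for i, Ni in neighbors.items():
        for j in Ni:
            # update z_ij only if i is active and the message from j arrived
            if active[i] and received.get((i, j), False) and (j, i) in msgs:
                z_new[(i, j)] = (1.0 - alpha) * z[(i, j)] + alpha * msgs[(j, i)]
    return x_new, z_new
```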
In Algorithm 2, we note that each variable \(z_{ij}^{t}\) is updated only if, at iteration \(t\), the message from agent \(j\) has been received by agent \(i\). In other words, the update (11b) is performed only if \(\beta_{ij}^{t}=1\). Further, as one may expect from the above discussion, the additional condition to perform such an update is that agent \(i\) is active at iteration \(t\). The next theorem assesses the convergence properties of Algorithm 2.
**Theorem V.4**.: _Consider Algorithm 2 and let Assumptions II.1, II.2, and V.3 hold. Then, there exist \(\bar{\gamma}_{\textsf{R}},\bar{\alpha}_{\textsf{R}},a_{3},a_{4}>0\) such that, for any \(\rho>0\), \(\alpha\in(0,\bar{\alpha}_{\textsf{R}})\), \(\gamma\in(0,\bar{\gamma}_{\textsf{R}})\), \(\delta\in(0,4\mathbf{c}/(NL^{2}))\), \((x_{i}^{0},z_{i}^{0})\in\mathbb{R}^{n}\times\mathbb{R}^{2nd_{i}}\), for all \(i\in\{1,\ldots,N\}\), it holds_
\[\left\|x_{i}^{t}-x^{\star}\right\|\leq a_{3}\exp(-a_{4}t).\qed\]
The proof of Theorem V.4 is provided in Section VI-D. In more detail, the proof relies on (i) a singular perturbation interpretation to handle the interaction between the variables \(x_{i}\) and \(z_{i}\), and (ii) averaging theory to overcome the network imperfections formalized in Assumption V.3.
**Remark V.5**.: _We remark that Assumption V.3 models the scenario of time-varying networks (with or without packet losses) as a special case. Indeed, one may interpret \(\beta_{ij}^{t}\) as modeling whether the edge \((i,j)\) exists at iteration \(t\). From this perspective, Assumption V.3, combined with Assumption II.1, guarantees that the time-varying graph has some connectivity-like property in terms of expected value. Therefore, Theorem V.4 guarantees the linear convergence of Robust ADMM-Tracking Gradient also in the case of time-varying networks. \(\qed\)_
## VI Convergence analysis of Robust ADMM-Tracking Gradient
Here, we provide the analysis of Robust ADMM-Tracking Gradient. First, in Section VI-A, we interpret the aggregate form of Robust ADMM-Tracking Gradient as a _singularly perturbed_ system. Then, in Section VI-B and VI-C, respectively, we separately study the identified subsystems by relying on Lyapunov and averaging theory. Finally, in Section VI-D, we use the results collected in the previous steps to prove Theorem V.4.
### _Robust ADMM-Tracking Gradient as a Singularly Perturbed System_
By using the functions \(h_{i}^{x}\), \(h_{i}^{\nabla}\), and \(h_{\mathcal{N}_{i}}\) (cf. (14)) and the stochastic processes \(\lambda_{i}^{t}\) and \(\psi_{ij}^{t}\), we compactly rewrite the local update of Robust ADMM-Tracking Gradient as
\[x_{i}^{t+1} =(1-\lambda_{i}^{t})x_{i}^{t}+\lambda_{i}^{t}x_{i}^{t}+\lambda_{ i}^{t}\gamma\big{(}\big{(}h_{i}^{x}(x_{i}^{t},\begin{bmatrix}I&0\end{bmatrix}z_{i}^{t} )\big{)}-x_{i}^{t}\big{)}\] \[\quad-\lambda_{i}^{t}\gamma\delta h_{i}^{\nabla}(x_{i}^{t}, \begin{bmatrix}0&I\end{bmatrix}z_{i}^{t}) \tag{44a}\] \[z_{i}^{t+1} =(I-\Psi_{i}^{t})z_{i}^{t}+\Psi_{i}^{t}(1-\alpha)z_{i}^{t}\] \[\quad+\Psi_{i}^{t}\alpha(-z_{\mathcal{N}_{i}}^{t}+2\rho h_{ \mathcal{N}_{i}}(x_{\mathcal{N}_{i}}^{t},z_{\mathcal{N}_{i}}^{t})), \tag{44b}\]
where we introduced the matrix \(\Psi_{i}^{t}\in\mathbb{R}^{2nd_{i}\times 2nd_{i}}\) defined as
\[\Psi_{i}^{t}=\text{blkdiag}\left(\text{col}(\psi_{ij}^{t})_{j\in\mathcal{N}_{i }}\otimes I_{2n}\right).\]
For the sake of readability, we algebraically rearrange (44) as
\[x_{i}^{t+1} =x_{i}^{t}+\gamma\lambda_{i}^{t}\big{(}\big{(}h_{i}^{x}(x_{i}^{t },\begin{bmatrix}I&0\end{bmatrix}z_{i}^{t})\big{)}-x_{i}^{t}\big{)}\] \[\quad-\gamma\delta\lambda_{i}^{t}h_{i}^{\nabla}(x_{i}^{t}, \begin{bmatrix}0&I\end{bmatrix}z_{i}^{t}) \tag{45a}\] \[z_{i}^{t+1} =z_{i}^{t}+\alpha\Psi_{i}^{t}(-z_{i}^{t}-z_{\mathcal{N}_{i}}^{t} +2\rho h_{\mathcal{N}_{i}}(x_{\mathcal{N}_{i}}^{t},z_{\mathcal{N}_{i}}^{t})). \tag{45b}\]
The local update (45) leads to the aggregate form
\[x^{t+1} =x^{t}+\gamma\Lambda^{t}\left(H\left(x^{t}+A_{x}^{\top}z^{t}\right)-x^{t}\right)\] \[\quad-\gamma\delta\Lambda^{t}H\left(G(x^{t})+A_{\nabla}^{\top}z^{t}\right) \tag{46a}\] \[z^{t+1} =z^{t}-\alpha\Psi^{t}\left((I\!+\!P)z^{t}\!-\!2\rho PA\mathcal{H}(A^{\top}z^{t}\!+\!v(x^{t}))\right)\!, \tag{46b}\]
where \(x^{t}\), \(z^{t}\), \(H\), \(A_{x}\), \(A_{\nabla}\), \(P\), and \(v\) have the same meaning as in (16) and we further introduced the matrices \(\Lambda^{t}:=\text{blkdiag}(\lambda_{1}^{t}I_{n},\ldots,\lambda_{N}^{t}I_{n})\in\mathbb{R}^{Nn\times Nn}\) and \(\Psi^{t}:=\text{blkdiag}(\Psi_{1}^{t},\ldots,\Psi_{N}^{t})\in\mathbb{R}^{2n\mathbf{d}\times 2n\mathbf{d}}\). Fig. 2 reports a block diagram graphically describing (46).
By using the change of variables (22) and Lemma IV.1, we rewrite (46) as
\[x^{t+1} =x^{t}+\gamma\Lambda^{t}\left(H\left(x^{t}+A_{x}^{\top}Mz_{\perp}^{t}\right)-x^{t}\right)\] \[\quad-\gamma\delta\Lambda^{t}H\left(G(x^{t})+A_{\nabla}^{\top}Mz_{\perp}^{t}\right) \tag{47a}\] \[\bar{z}^{t+1} =\bar{z}^{t}-\alpha B^{\top}\Psi^{t}\left(I+P-2\rho PA\mathcal{H}A^{\top}\right)Mz_{\perp}^{t}\] \[\quad+\alpha 2\rho B^{\top}\Psi^{t}PA\mathcal{H}v(x^{t})\] (47b) \[z_{\perp}^{t+1} =z_{\perp}^{t}-\alpha M^{\top}\Psi^{t}\left(I+P-2\rho PA\mathcal{H}A^{\top}\right)Mz_{\perp}^{t}\] \[\quad+\alpha 2\rho M^{\top}\Psi^{t}PA\mathcal{H}v(x^{t}). \tag{47c}\]
As in Section IV, we observe that \(\bar{z}\) does not affect the other states of system (47) and, thus, we will ignore it in our analysis. Moreover, by using \(\tilde{x}^{t}=x^{t}-\mathbf{1}_{N,n}x^{\star}\) to transform \(x^{t}\), system (47) is equivalent to
\[\tilde{x}^{t+1} =\tilde{x}^{t}+\gamma\Lambda^{t}\left(H\left(x^{t}+A_{x}^{\top}Mz_{\perp}^{t}\right)-x^{t}\right)\] \[\quad-\gamma\delta\Lambda^{t}H\left(G(x^{t})+A_{\nabla}^{\top}Mz_{\perp}^{t}\right) \tag{48a}\] \[z_{\perp}^{t+1} =z_{\perp}^{t}-\alpha M^{\top}\Psi^{t}\left(I+P-2\rho PA\mathcal{H}A^{\top}\right)Mz_{\perp}^{t}\] \[\quad+\alpha 2\rho M^{\top}\Psi^{t}PA\mathcal{H}v(\tilde{x}^{t}+\mathbf{1}_{N,n}x^{\star}), \tag{48b}\]
where, with a slight abuse of notation, we maintained a hybrid notation with \(x^{t}=\tilde{x}^{t}+\mathbf{1}_{N,n}x^{\star}\). System (48) is a singularly perturbed system. Specifically, (48a) plays the role of the slow subsystem, while (48b) plays the role of the fast one. As in system (24), the subsystem (48b) has an equilibrium \(z_{\perp}^{\text{eq}}(\tilde{x}^{t})\) parametrized in the slow state \(\tilde{x}^{t}\). Indeed, by using \(x=\tilde{x}+\mathbf{1}_{N,n}x^{\star}\), one may check this claim through the following chain of equations
\[(I+P-2\rho PA\mathcal{H}A^{\top})Mz_{\perp}^{\text{eq}}(\tilde{x}^{t})-2\rho PA\mathcal{H}v(x)\] \[\overset{(a)}{=}2\rho\left((I+P-2\rho PA\mathcal{H}A^{\top})(MF^{-1}M^{\top})-I\right)PA\mathcal{H}v(x)\] \[\overset{(b)}{=}2\rho\left(MM^{\top}-I\right)PA\mathcal{H}v(x)\] \[\overset{(c)}{=}-2\rho BB^{\top}PA\mathcal{H}v(x)=0, \tag{49}\]
Figure 2: Block diagram representing (46).
where in \((a)\) we used the definition of \(z_{\perp}^{\text{eq}}\) (cf. (26)), in \((b)\) we used the result \((I+P-2\rho PA\mathcal{H}A^{\top})MF^{-1}M^{\top}=MM^{\top}\), in \((c)\) we used \(MM^{\top}=I-BB^{\top}\), while the last equality follows because, since \(A^{\top}B=0\), it holds \(B^{\top}PA\mathcal{H}=0\).
### _Boundary Layer System: Averaging Analysis_
Here, we study the boundary layer system of (48), i.e., following the same steps of Section IV-B, we study the subsystem (48b) by considering an arbitrarily fixed \(\tilde{x}^{t}=\tilde{x}\in\mathbb{R}^{Nn}\) and the error variable \(\tilde{z}_{\perp}^{t}=z_{\perp}^{t}-z_{\perp}^{\text{eq}}(\tilde{x})\). Therefore, using the result in (49), we get
\[\tilde{z}_{\perp}^{t+1}=\tilde{z}_{\perp}^{t}-\alpha M^{\top}\Psi^{t}\left(I+P -2\rho PA\mathcal{H}A^{\top}\right)M\tilde{z}_{\perp}^{t}. \tag{50}\]
It is worth noting that (50) is a time-varying system in view of the presence of \(\Psi^{t}\). For this reason, we study it by resorting to averaging theory for discrete-time systems (cf., e.g., [41] or [42, Ch. 7]). Specifically, we will assess the convergence properties of (50) by studying its averaged system. To this end, we need to show that the following limit exists
\[g_{\text{av}}(\tilde{z}_{\perp})=\lim_{T\to\infty}\!\!\frac{1}{T}\sum_{\tau=\bar{t}+1}^{\bar{t}+T}M^{\top}\Psi^{\tau}\left(I\!+\!P\!-\!2\rho PA\mathcal{H}A^{\top}\right)M\tilde{z}_{\perp},\]
uniformly in \(\bar{t}\in\mathbb{N}\) and for any \(\tilde{z}_{\perp}\in\mathbb{R}^{n_{d}}\). In this regard, since the components of \(\Psi^{t}\) are generated by a stochastic process ergodic in mean (cf. Assumption V.3), we get
\[g_{\text{av}}(\tilde{z}_{\perp})=M^{\top}E_{\Psi}\left(I+P-2\rho PA\mathcal{H }A^{\top}\right)M\tilde{z}_{\perp}, \tag{51}\]
where we introduced the symbol \(E_{\Psi}:=\mathbb{E}[\Psi^{t}]\). Then, by using (51), the averaged system associated to (50) reads as
\[\tilde{z}_{\perp,\text{av}}^{t+1}=\tilde{z}_{\perp,\text{av}}^{t}-\alpha F_{ \text{av}}\tilde{z}_{\perp,\text{av}}^{t}, \tag{52}\]
where \(\tilde{z}_{\perp,\text{av}}^{t}\in\mathbb{R}^{n_{d}}\) denotes the state of the system, while the matrix \(F_{\text{av}}\in\mathbb{R}^{n_{d}\times n_{d}}\) is defined as \(F_{\text{av}}:=M^{\top}E_{\Psi}\left(I+P-2\rho PA\mathcal{H}A^{\top}\right)M\). The next lemma ensures that the origin is globally exponentially stable for (52).
**Lemma VI.1**.: _Consider (52). Then, the origin is a globally exponentially stable equilibrium point for (52)._
Proof.: Since \(I>E_{\Psi}=E_{\Psi}^{\top}>\epsilon_{\psi}I\) with \(\epsilon_{\psi}\in(0,1)\) (cf. Assumption V.3), then
\[\ker(E_{\Psi}\left(I\!+\!P\!-\!2\rho PA\mathcal{H}A^{\top}\right))\!\equiv\! \ker(I\!+\!P\!-\!2\rho PA\mathcal{H}A^{\top}).\]
Further, we recall that the columns of \(M\) span the subspace orthogonal with respect to \(\ker\left(I+P-2\rho PA\mathcal{H}A^{\top}\right)\). With these arguments and the ones in the proof of Lemma IV.2, we guarantee that the matrix \(I-\alpha F_{\text{av}}\) is Schur for any \(\alpha\in(0,1)\). The proof follows by noting that (52) is an autonomous linear system governed by a Schur state matrix.
### _Reduced System: Averaging Analysis_
In this section, we will study the reduced system associated to (48), i.e., the system obtained by plugging \(z_{\perp}^{t}=z_{\perp}^{\text{eq}}(\tilde{x}^{t})\) for all \(t\geq 0\) into (48a). Then, by enforcing the definition of \(z_{\perp}^{\text{eq}}\) (cf. (26)), the results in (27), \(\mathbf{1}_{N,n}\in\ker\left(\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N} -I\right)\), and \(\mathbf{1}_{N,n}^{\top}R=0\), such a reduced system reads
\[\tilde{x}^{t+1} =\tilde{x}^{t}-\gamma\Lambda^{t}\left(I-\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N}\right)\tilde{x}^{t}\] \[\qquad-\gamma\delta\Lambda^{t}\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N}G(\tilde{x}^{t}+\mathbf{1}_{N,n}x^{\star}). \tag{53}\]
To simplify the notation, we equivalently write system (53) as
\[\tilde{x}^{t+1}=\tilde{x}^{t}+\gamma r(\tilde{x}^{t},t), \tag{54}\]
where we introduced \(r:\mathbb{R}^{Nn}\times\mathbb{N}\to\mathbb{R}^{Nn}\) defined as
\[r(\tilde{x},t) =\Lambda^{t}\left(\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N}-I\right)\tilde{x}\] \[\quad-\delta\Lambda^{t}\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N}G(\tilde{x}+\mathbf{1}_{N,n}x^{\star}). \tag{55}\]
We note that \(r(0,t)=0\) for any \(t\in\mathbb{N}\). As in Section VI-B, since system (53) is time-varying, we study it by resorting to averaging theory. To this end, let \(r_{\text{av}}:\mathbb{R}^{Nn}\to\mathbb{R}^{Nn}\) be
\[r_{\text{av}}(\tilde{x}):=\lim_{T\to\infty}\frac{1}{T}\sum_{\tau=\bar{t}+1}^{ \bar{t}+T}r(\tilde{x},\tau).\]
Since the components of \(\Lambda^{t}\) are generated by stochastic processes ergodic in mean (cf. Assumption V.3) and using the same considerations done to derive (51), such a limit exists uniformly in \(\bar{t}\) and for any \(\tilde{x}\in\mathbb{R}^{Nn}\). Specifically, it reads as
\[r_{\text{av}}(\tilde{x}) =E_{\Lambda}\left(\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{ N}-I\right)\tilde{x}\] \[\quad-\delta E_{\Lambda}\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{ \top}}{N}G(\tilde{x}+\mathbf{1}_{N,n}x^{\star}), \tag{56}\]
where we introduced the symbol \(E_{\Lambda}:=\text{blkdiag}(E_{\lambda_{1}}I,\dots,E_{\lambda_{N}}I)\), with \(E_{\lambda_{i}}:=\mathbb{E}[\lambda_{i}^{t}]\) for all \(i\in\{1,\dots,N\}\). The next lemma provides a suitable function turning out to be crucial to conclude the proof of Theorem V.4.
**Lemma VI.2**.: _Consider \(r_{\text{av}}(\cdot)\) as defined in (56). Then, there exist \(\bar{\delta},\beta_{1},\beta_{2},\beta_{3},\beta_{4}>0\), and \(W_{\text{av}}:\mathbb{R}^{Nn}\to\mathbb{R}\) such that, for any \(\delta\in(0,\bar{\delta})\) and \(\tilde{x}_{\text{av}}\in\mathbb{R}^{Nn}\), it holds_
\[\beta_{1}\left\|\tilde{x}_{\text{av}}\right\|^{2}\leq W_{\text{ av}}(\tilde{x}_{\text{av}})\leq\beta_{2}\left\|\tilde{x}_{\text{av}}\right\|^{2} \tag{57a}\] \[\nabla W_{\text{av}}(\tilde{x}_{\text{av}})^{\top}r_{\text{av}}( \tilde{x}_{\text{av}})\leq-\beta_{3}\left\|\tilde{x}_{\text{av}}\right\|^{2}\] (57b) \[\left\|\nabla W_{\text{av}}(\tilde{x}_{\text{av}})\right\|\leq\beta_{4} \left\|\tilde{x}_{\text{av}}\right\|. \tag{57c}\]
Proof.: Let us rewrite (56) in a block-wise sense. To this end, for all \(i\in\{1,\dots,N\}\), let \(r_{i,\text{av}}:\mathbb{R}^{Nn}\to\mathbb{R}^{n}\) be defined as
\[r_{i,\text{av}}(\tilde{x}_{\text{av}}) =E_{\lambda_{i}}\left(\frac{1}{N}\sum_{j=1}^{N}\tilde{x}_{j,\text{av}}-\tilde{x}_{i,\text{av}}\right)\] \[\quad-\delta E_{\lambda_{i}}\mathbf{1}_{N,n}^{\top}G(\tilde{x}_{\text{av}}+\mathbf{1}_{N,n}x^{\star})/N,\]
where we decomposed \(\tilde{x}_{\text{av}}\) according to \(\tilde{x}_{\text{av}}:=\text{col}(\tilde{x}_{1,\text{av}},\dots,\tilde{x}_{N,\text{av}})\) with \(\tilde{x}_{i,\text{av}}\in\mathbb{R}^{n}\) for all \(i\in\{1,\dots,N\}\).
Then, it holds \(r_{\text{av}}(\tilde{x}):=\textsc{col}(r_{1,\text{av}}(\tilde{x}_{\text{av}}), \ldots,r_{N,\text{av}}(\tilde{x}_{\text{av}}))\). Now, let us consider the function \(W_{i,\text{av}}:\mathbb{R}^{n}\to\mathbb{R}\) defined as
\[W_{i,\text{av}}(\tilde{x}_{i,\text{av}})=\frac{1}{2E_{\lambda_{i}}}\left\|\tilde {x}_{i,\text{av}}\right\|^{2}.\]
Then, it holds
\[\nabla W_{i,\text{av}}(\tilde{x}_{i,\text{av}})^{\top}r_{i,\text{av}}(\tilde{x}_{\text{av}}) =-\left\|\tilde{x}_{i,\text{av}}\right\|^{2}+\tilde{x}_{i,\text{av}}^{\top}\sum_{j=1}^{N}\tilde{x}_{j,\text{av}}/N\] \[\quad-\delta\tilde{x}_{i,\text{av}}^{\top}\mathbf{1}_{N,n}^{\top}G(\tilde{x}_{\text{av}}+\mathbf{1}_{N,n}x^{\star})/N\] \[\stackrel{{(a)}}{{=}}-\left\|\tilde{x}_{i,\text{av}}\right\|^{2}+\tilde{x}_{i,\text{av}}^{\top}\mu_{\text{av}}\] \[\quad-\delta\tilde{x}_{i,\text{av}}^{\top}\mathbf{1}_{N,n}^{\top}G(\tilde{x}_{\text{av}}+\mathbf{1}_{N,n}x^{\star})/N, \tag{58}\]
where in \((a)\) we used \(\mu_{\text{av}}:=\mathbf{1}_{N,n}^{\top}\tilde{x}_{\text{av}}/N\). Let \(W_{\text{av}}(\tilde{x}_{\text{av}}):=\sum_{i=1}^{N}W_{i,\text{av}}(\tilde{x} _{i,\text{av}})\). Being \(\epsilon_{\lambda}<E_{\lambda_{i}}\leq 1\) for some \(\epsilon_{\lambda}\in(0,1)\) (cf. Assumption V.3) for all \(i\in\{1,\ldots,N\}\), \(W_{\text{av}}\) is positive definite and, thus, the conditions (57a) and (57c) are satisfied. In order to check (57b), we use (58) obtaining
\[\nabla W_{\text{av}}(\tilde{x}_{\text{av}})^{\top}r_{\text{av}}(\tilde{x}_{\text{av}}) =-\left\|\tilde{x}_{\text{av}}\right\|^{2}+N\mu_{\text{av}}^{\top}\mu_{\text{av}}-\delta\mu_{\text{av}}^{\top}\mathbf{1}_{N,n}^{\top}G(\tilde{x}_{\text{av}}+\mathbf{1}_{N,n}x^{\star})\] \[\stackrel{{(a)}}{{=}}-\left\|\mathbf{1}_{N,n}\mu_{\text{av}}+Rx_{\perp,\text{av}}\right\|^{2}+N\mu_{\text{av}}^{\top}\mu_{\text{av}}\] \[\quad-\delta\mu_{\text{av}}^{\top}\mathbf{1}_{N,n}^{\top}G(\tilde{x}_{\text{av}}+\mathbf{1}_{N,n}x^{\star})\] \[\stackrel{{(b)}}{{=}}-\left\|x_{\perp,\text{av}}\right\|^{2}-\delta\mu_{\text{av}}^{\top}\mathbf{1}_{N,n}^{\top}G(\tilde{x}_{\text{av}}+\mathbf{1}_{N,n}x^{\star})\] \[\stackrel{{(c)}}{{=}}-\left\|x_{\perp,\text{av}}\right\|^{2}-\delta\mu_{\text{av}}^{\top}\nabla f(\mu_{\text{av}}+x^{\star})\] \[\quad+\delta\mu_{\text{av}}^{\top}N\tilde{G}(\mu_{\text{av}},x_{\perp,\text{av}}), \tag{59}\]
where in \((a)\) we used \(x_{\perp,\text{av}}:=R^{\top}\tilde{x}_{\text{av}}\) and decomposed \(\tilde{x}_{\text{av}}\) according to \(\tilde{x}_{\text{av}}=\mathbf{1}_{N,n}\mu_{\text{av}}+Rx_{\perp,\text{av}}\), in \((b)\) we expanded the square norm and used \(R^{\top}R=I\), \(\mathbf{1}_{N,n}^{\top}\mathbf{1}_{N,n}=NI\), and \(R^{\top}\mathbf{1}_{N,n}=0\), while in \((c)\) we added and subtracted the term \(\delta\mu_{\text{av}}^{\top}\nabla f(\mu_{\text{av}}+x^{\star})\) and used the operator \(\tilde{G}(\cdot,\cdot)\) with the same meaning as in (33). By using the strong convexity of \(f\) (cf. Assumption II.2) and \(\nabla f(x^{\star})=0\), we bound (59) as
\[\nabla W_{\text{av}}(\tilde{x}_{\text{av}})^{\top}r_{\text{av}}(\tilde{x}_{\text{av}}) \leq-\|x_{\perp,\text{av}}\|^{2}-\delta\mathbf{c}\left\|\mu_{\text{av}}\right\|^{2}+\delta\mu_{\text{av}}^{\top}N\tilde{G}(\mu_{\text{av}},x_{\perp,\text{av}})\] \[\stackrel{{(a)}}{{\leq}}-\|x_{\perp,\text{av}}\|^{2}-\delta\mathbf{c}\left\|\mu_{\text{av}}\right\|^{2}+\delta L\sqrt{N}\left\|\mu_{\text{av}}\right\|\|x_{\perp,\text{av}}\|\] \[\stackrel{{(b)}}{{\leq}}-\begin{bmatrix}\left\|\mu_{\text{av}}\right\|\\ \left\|x_{\perp,\text{av}}\right\|\end{bmatrix}^{\top}Q_{\text{av}}(\delta)\begin{bmatrix}\left\|\mu_{\text{av}}\right\|\\ \left\|x_{\perp,\text{av}}\right\|\end{bmatrix}, \tag{60}\]
where in \((a)\) we used the Cauchy-Schwarz inequality and the result (39) to bound the term \(\mu_{\text{av}}^{\top}N\tilde{G}(\mu_{\text{av}},x_{\perp,\text{av}})\), while in \((b)\) we rearranged the inequality in matrix form by introducing \(Q_{\text{av}}(\delta)=Q_{\text{av}}(\delta)^{\top}\in\mathbb{R}^{2\times 2}\) defined as
\[Q_{\text{av}}(\delta)=\begin{bmatrix}\delta\mathbf{c}&-\delta L\sqrt{N}/2\\ -\delta L\sqrt{N}/2&1\end{bmatrix}.\]
By the Sylvester criterion, it holds \(Q_{\text{av}}(\delta)>0\) if and only if \(\delta<\frac{4\mathbf{c}}{L^{2}N}\). Therefore, by choosing any \(\delta\in(0,\frac{4\mathbf{c}}{L^{2}N})\), the inequality (60) leads to
\[\nabla W_{\text{av}}(\tilde{x}_{\text{av}})^{\top}r_{\text{av}}( \tilde{x}_{\text{av}}) \leq-q_{\text{av}}\left(\left\|\mu_{\text{av}}\right\|^{2}+\left\|x _{\perp,\text{av}}\right\|^{2}\right)\] \[=-q_{\text{av}}\tilde{x}_{\text{av}}^{\top}\left(\frac{\mathbf{1} _{N,n}\mathbf{1}_{N,n}^{\top}}{N^{2}}+RR^{\top}\right)\tilde{x}_{\text{av}},\]
where \(q_{\text{av}}>0\) is the smallest eigenvalue of the positive definite matrix \(Q_{\text{av}}(\delta)\). Thus, since \(\frac{\mathbf{1}_{N,n}\mathbf{1}_{N,n}^{\top}}{N^{2}}+RR^{\top}>0\), also (57b) is ensured and the proof is concluded.
### _Proof of Theorem v.4_
The proof of Theorem V.4 is based on a proper exploitation of Theorem II.3. In more detail, to apply Theorem II.3 we first need two results from averaging theory, i.e., [41, Th. 2.2.2] and [42, Prop. 7.3]. Indeed, in order to apply Theorem II.3 we need to (i) provide a function \(U\) satisfying the conditions (5) when applied to system (50), (ii) provide a function \(W\) satisfying the conditions (6) when system (53) is considered, and (iii) guarantee the Lipschitz continuity of the dynamics of (48) and \(z_{\perp}^{\text{eq}}(\cdot)\) (cf. (26)).
Point (i): we recall that Lemma VI.1 guarantees that the origin is a globally exponentially stable equilibrium point of system (52), i.e., the averaged system associated with the boundary layer system (50) of (48). Hence, by combining this result with the Lipschitz continuity of the gradients of the objective functions (cf. Assumption II.2) and the fact that the components of \(\Psi^{t}\) are generated by stochastic processes ergodic in mean (cf. Assumption V.3), we can apply [41, Th. 2.2.2] to claim the existence of \(\bar{\alpha}_{\text{R}}\in(0,1)\) such that, for any \(\alpha\in(0,\bar{\alpha}_{\text{R}})\), the origin is a globally exponentially stable equilibrium point for (50). In turn, the function \(U\) satisfying (5) can be obtained by applying the Converse Lyapunov Theorem [42, Th. 5.8]. Indeed, we notice that both system (50) and its averaged version (52) are independent with respect to the slow state \(\tilde{x}^{t}\). Thus, the uniformity with respect to \(\tilde{x}^{t}\) of the conditions (5) is guaranteed.
Point (ii): by invoking Lemma VI.2, the fact that the components of \(\Lambda^{t}\) are generated by stochastic processes ergodic in mean (cf. Assumption V.3), and the Lipschitz continuity of the gradients of the objective functions (cf. Assumption II.2), we can apply [42, Prop. 7.3], which guarantees the existence of a function \(W\) and of \(\bar{\gamma}_{\text{av}}>0\) such that, for any \(\gamma\in(0,\bar{\gamma}_{\text{av}})\), \(W\) satisfies the conditions (6) when the reduced system (53) is considered.
Point (iii): finally, the Lipschitz continuity of the gradients of the objective functions (cf. Assumption II.2) guarantees that point (iii) is satisfied too. Therefore, we can apply Theorem II.3, which ensures the existence of \(\bar{\gamma}_{\textsf{R}}>0\) such that, for any \(\gamma\in(0,\bar{\gamma}_{\textsf{R}})\), the point \((0,z_{\perp}^{\text{eq}}(0))\) is globally exponentially stable for system (48). The proof follows by turning back to the original coordinates \((x,z)\).
## VII Robustness to Generic Errors

In this section, we characterize the robustness of the proposed schemes against generic errors affecting their updates. Namely, in the presence of asynchronous agents and lossy communication (cf. Section V), we study the local updates
\[x_{i}^{t+1} =x_{i}^{t}+\gamma\lambda_{i}^{t}\left(h_{i}^{x}(x_{i}^{t},\begin{bmatrix} I&0\end{bmatrix}z_{i}^{t})-x_{i}^{t}\right)\] \[\quad-\gamma\delta\lambda_{i}^{t}h_{i}^{\nabla}(x_{i}^{t}, \begin{bmatrix}0&I\end{bmatrix}z_{i}^{t})+e_{i,x}^{t} \tag{61a}\] \[z_{i}^{t+1} =z_{i}^{t}+\alpha\Psi_{i}^{t}(-z_{i}^{t}-z_{\mathcal{N}_{i}}^{t}+2 \rho h_{\mathcal{N}_{i}}(x_{\mathcal{N}_{i}}^{t},z_{\mathcal{N}_{i}}^{t}))+e_{ i,z}^{t}. \tag{61b}\]
Let \(e\!:=\!\textsc{col}(e_{x},e_{z})\!\in\!\mathbb{R}^{(N+2\mathsf{d})n}\) be the vector stacking all the local errors, where \(e_{x}\in\mathbb{R}^{Nn}\) and \(e_{z}\in\mathbb{R}^{2n\mathsf{d}}\) are defined as
\[e_{x}:=\begin{bmatrix}e_{1,x}\\ \vdots\\ e_{N,x}\end{bmatrix},\quad e_{z}:=\begin{bmatrix}e_{1,z}\\ \vdots\\ e_{N,z}\end{bmatrix}. \tag{62}\]
The next result characterizes the robustness of (61) with respect to the errors \(e_{i,x}^{t}\) and \(e_{i,z}^{t}\) in terms of ISS property.
**Theorem VII.1**.: _Consider (61) and let Assumptions II.1, II.2, and V.3 hold. Then, there exist a \(\mathcal{KL}\) function \(\beta(\cdot)\), a \(\mathcal{K}_{\infty}\) function \(\nu(\cdot)\), and some constants \(\bar{\gamma}_{\mathcal{R}},\bar{\alpha}_{\mathcal{R}},\bar{\delta},c^{0}>0\) such that, for any \(\rho>0\), \(\alpha\in(0,\bar{\alpha}_{\mathcal{R}})\), \(\gamma\in(0,\bar{\gamma}_{\mathcal{R}})\), \(\delta\in(0,\bar{\delta})\), \((x_{i}^{0},z_{i}^{0})\in\mathbb{R}^{n}\times\mathbb{R}^{2n\mathsf{d}_{i}}\), for all \(i\in\{1,\ldots,N\}\), it holds_
\[\left\|x_{i}^{t}-x^{\star}\right\|\leq\beta(c^{0},t)+\nu(\left\|e\right\|_{ \infty}),\]
_for any \(e\in\mathcal{L}_{\infty}^{(N+2\mathsf{d})n}\) and \(t\in\mathbb{N}\)._1
Footnote 1: See [43, Chapter 4] for the function classes’ definitions.
Proof.: By using \(e_{x}\) and \(e_{z}\) as given in (62), and the matrices and quantities \(x^{t}\), \(z^{t}\), and \(G(x^{t})\) with the same meaning as in (16) and (46), the aggregate form of (61) reads as
\[x^{t+1} =x^{t}+\gamma\Lambda^{t}\left(H\left(x^{t}+A_{x}^{\top}z^{t}\right)-x^{t}\right)\] \[\quad-\gamma\delta\Lambda^{t}H\left(G(x^{t})+A_{\nabla}^{\top}z^{t}\right)+e_{x}^{t} \tag{63a}\] \[z^{t+1} =z^{t}\!-\!\alpha\Psi^{t}\left((I\!+\!P)z^{t}\!-\!2\rho PA\mathcal{H}(A^{\top}z^{t}\!+\!v(x^{t}))\right)\!+e_{z}^{t}. \tag{63b}\]
As in Sections IV and VI, we rewrite system (63) by (i) using the changes of coordinates (22) and \(\tilde{x}^{t}:=x^{t}-\mathbf{1}_{N,n}x^{\star}\), and (ii) ignoring the evolution of \(\bar{z}^{t}\). Thus, we get
\[\tilde{x}^{t+1} =\tilde{x}^{t}+\gamma\Lambda^{t}\left(H\left(x^{t}+A_{x}^{\top}Mz_{\perp}^{t}\right)-x^{t}\right)\] \[\quad-\gamma\delta\Lambda^{t}H\left(G(x^{t})+A_{\nabla}^{\top}Mz_{\perp}^{t}\right)+e_{x}^{t} \tag{64a}\] \[z_{\perp}^{t+1} =z_{\perp}^{t}-\alpha M^{\top}\Psi^{t}\left(I+P-2\rho PA\mathcal{H}A^{\top}\right)Mz_{\perp}^{t}\] \[\quad+\alpha 2\rho M^{\top}\Psi^{t}PA\mathcal{H}v(x^{t})+M^{\top}e_{z}^{t}, \tag{64b}\]
where, with a slight abuse of notation, we maintained a hybrid notation with \(x^{t}=\tilde{x}^{t}+\mathbf{1}_{N,n}x^{\star}\). We note that system (64) can be viewed as a perturbed version of (48). To take advantage of this interpretation, let \(\xi\in\mathbb{R}^{(N+2\mathsf{d})n}\) be defined as
\[\xi:=\begin{bmatrix}\tilde{x}\\ z_{\perp}-z_{\perp}^{\mathsf{q}\mathsf{q}}(\tilde{x})\end{bmatrix}, \tag{65}\]
which allows us to compactly rewrite system (64) as
\[\xi^{t+1}=f_{\xi}(\xi^{t},t)+\tilde{e}^{t}, \tag{66}\]
where \(f_{\xi}:\mathbb{R}^{(N+2\mathsf{d})n}\times\mathbb{N}\to\mathbb{R}^{(N+2\mathsf{d})n}\) is suitably defined to contain all the terms arising in (64), while \(\tilde{e}^{t}\in\mathbb{R}^{(N+2\mathsf{d})n}\) is defined as \(\tilde{e}^{t}:=\textsc{col}(e_{x}^{t},M^{\top}e_{z}^{t})\). In the proof of Theorem V.4 (cf. Section VI-D), by using Theorem II.3, we have guaranteed the existence of \(\bar{\alpha}_{\mathcal{R}},\bar{\gamma}_{\mathcal{R}}>0\) such that, for any \(\alpha\in(0,\bar{\alpha}_{\mathcal{R}})\) and \(\gamma\in(0,\bar{\gamma}_{\mathcal{R}})\), the origin is a globally exponentially stable equilibrium for the nominal system described by
\[\xi^{t+1}=f_{\xi}(\xi^{t},t). \tag{67}\]
Therefore, by using the Converse Lyapunov Theorem [42, Th. 5.8], there exist some constants \(a_{5},a_{6},a_{7},a_{8}>0\) and a Lyapunov function \(V_{\xi}:\mathbb{R}^{(N+2\mathsf{d})n}\times\mathbb{N}\to\mathbb{R}\) such that
\[a_{5}\left\|\xi\right\|^{2} \leq V_{\xi}(\xi,t)\leq a_{6}\left\|\xi\right\|^{2} \tag{68a}\] \[V_{\xi}(f_{\xi}(\xi,t),t+1)-V_{\xi}(\xi,t) \leq-a_{7}\left\|\xi\right\|^{2} \tag{68b}\] \[|V_{\xi}(\xi_{1},t)-V_{\xi}(\xi_{2},t)| \leq a_{8}\left\|\xi_{1}-\xi_{2}\right\|(\left\|\xi_{1}\right\|+\left\|\xi_{2}\right\|), \tag{68c}\]
for any \(\xi,\xi_{1},\xi_{2}\in\mathbb{R}^{(N+2\mathsf{d})n}\) and \(t\in\mathbb{N}\). Therefore, by evaluating \(\Delta V_{\xi}(\xi^{t},t):=V_{\xi}(\xi^{t+1},t+1)-V_{\xi}(\xi^{t},t)\) along the trajectories of system (66), we get
\[\Delta V_{\xi}(\xi^{t},t) =V_{\xi}(f_{\xi}(\xi^{t},t)+\tilde{e}^{t},t+1)-V_{\xi}(\xi^{t},t)\] \[\stackrel{{(a)}}{{=}}V_{\xi}(f_{\xi}(\xi^{t},t),t+1)-V_{\xi}(\xi^{t},t)\] \[\quad+V_{\xi}(f_{\xi}(\xi^{t},t)+\tilde{e}^{t},t+1)-V_{\xi}(f_{\xi}(\xi^{t},t),t+1)\] \[\stackrel{{(b)}}{{\leq}}-a_{7}\left\|\xi^{t}\right\|^{2}\] \[\quad+V_{\xi}(f_{\xi}(\xi^{t},t)+\tilde{e}^{t},t+1)-V_{\xi}(f_{\xi}(\xi^{t},t),t+1)\] \[\stackrel{{(c)}}{{\leq}}-a_{7}\left\|\xi^{t}\right\|^{2}\] \[\quad+a_{8}\left\|\tilde{e}^{t}\right\|(\left\|f_{\xi}(\xi^{t},t)+\tilde{e}^{t}\right\|\!+\!\left\|f_{\xi}(\xi^{t},t)\right\|), \tag{69}\]
where in \((a)\) we add and subtract the term \(V_{\xi}(f_{\xi}(\xi^{t},t),t+1)\), in \((b)\) we apply (68b), in \((c)\) we apply (68c). Being the gradients of \(f_{i}\) Lipschitz continuous (cf. Assumption II.2), then \(f_{\xi}(\cdot,t)\) is Lipschitz, i.e., there exists \(L_{\xi}>0\) such that
\[\left\|f_{\xi}(\xi_{1},t)-f_{\xi}(\xi_{2},t)\right\|\leq L_{\xi}\left\|\xi_{1}-\xi _{2}\right\|, \tag{70}\]
for any \(\xi_{1},\xi_{2}\in\mathbb{R}^{(N+2\mathsf{d})n}\). By combining (69) and (70), the proof follows by (i) resorting to the notion of ISS for discrete-time systems [44, Def. 1], (ii) using the inequalities \(\|x_{i}^{t}-x^{\star}\|\leq\|\xi^{t}\|\) and \(\|\tilde{e}^{t}\|\leq\|e^{t}\|\), and (iii) setting \(c^{0}=\|\xi^{0}\|\).
**Remark VII.2**.: _The above analysis can be easily adapted to claim the ISS robustness of ADMM-Tracking Gradient in the case of synchronous agents' updates and without packet losses. Indeed, the key element in the proof of Theorem VII.1 is the exploitation of the global exponential stability of \((0,z_{\perp}^{\text{eq}}(0))\) for (48) proved in the proof of Theorem V.4 (cf. Section VI-D). Then, since in the proof of Theorem III.1 (cf. Section IV-D) we prove that \((0,z_{\perp}^{\text{eq}}(0))\) is globally exponentially stable for (24), i.e., for the system that equivalently describes the evolution of ADMM-Tracking Gradient, the same steps of the proof of Theorem VII.1 can be used to assess the ISS of ADMM-Tracking Gradient with respect to the generic errors \(e_{i,x}^{t}\) and \(e_{i,z}^{t}\) considered in (61). \(\square\)_
## VIII Numerical Simulations
In this section, we perform some numerical simulations to compare ADMM-Tracking Gradient with the well-known Gradient Tracking algorithm [4, 5, 6], which is described by
\[x_{i}^{t+1} =\sum_{j\in\mathcal{N}_{i}}w_{ij}x_{j}^{t}-\gamma z_{i}^{t} \tag{73a}\] \[z_{i}^{t+1} =\sum_{j\in\mathcal{N}_{i}}w_{ij}z_{j}^{t}+\nabla f_{i}(x_{i}^{t+1})-\nabla f_{i}(x_{i}^{t}), \tag{73b}\]
for all \(i\in\{1,\ldots,N\}\), where \(\gamma>0\) is a parameter called step-size, \(x_{i}^{t}\in\mathbb{R}^{n}\) is the local solution estimate, \(z_{i}^{t}\in\mathbb{R}^{n}\) is the so-called tracker approximating the unavailable global gradient, while each weight \(w_{ij}\geq 0\) is the \((i,j)\)-entry of a weighted adjacency matrix \(\mathcal{W}\in\mathbb{R}^{N\times N}\) matching the graph \(\mathcal{G}\), i.e., \(w_{ij}>0\) whenever \((j,i)\in\mathcal{E}\) and \(w_{ij}=0\) otherwise. Here, self-loops are also considered, i.e., \((i,i)\in\mathcal{E}\) and, thus, \(w_{ii}>0\) for all \(i\in\{1,\ldots,N\}\). First, we consider a quadratic scenario and, then, we address a logistic regression problem in the case in which some errors affect the update equations of the algorithms. Finally, we test Robust ADMM-Tracking Gradient and compare it with a robust extension of (73) by considering imperfect networks, i.e., asynchronous agents and unreliable communications, and the same logistic regression problem mentioned above. The comparisons are done in terms of the convergence of the optimality error norm \(\|x^{t}-\mathbf{1}_{N,n}x^{\star}\|\). All the simulations are performed by considering networks of \(N=10\) agents and an underlying randomly generated Erdos-Renyi graph with connectivity parameter \(0.1\). In all the simulations, we run our schemes by randomly selecting \(x_{i}^{0}\in\mathbb{R}^{n}\) and \(z_{ij}^{0}\in\mathbb{R}^{2n}\) for all \(i\in\{1,\ldots,N\}\) and \(j\in\mathcal{N}_{i}\). As for Gradient Tracking, we run it by setting the same \(x_{i}^{0}\) used for our schemes, while, as prescribed in [4, 5, 6], we set \(z_{i}^{0}=\nabla f_{i}(x_{i}^{0})\) for all \(i\in\{1,\ldots,N\}\).
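For reference, a compact sketch of the Gradient Tracking iteration (73) in stacked form (the weight matrix `W` and the gradient oracles are assumptions; `W` is typically taken doubly stochastic):

```python
import numpy as np

def gradient_tracking_step(X, Z, W, grad_f, gamma):
    """One iteration of (73) in stacked form.

    X, Z : (N, n) arrays of local estimates and trackers
    W    : (N, N) weight matrix matching the graph (with self-loops)
    """
    G_old = np.stack([grad_f[i](X[i]) for i in range(X.shape[0])])
    X_new = W @ X - gamma * Z
    G_new = np.stack([grad_f[i](X_new[i]) for i in range(X.shape[0])])
    Z_new = W @ Z + G_new - G_old
    return X_new, Z_new
```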
### _Quadratic Scenario_
In this section, we consider a quadratic setup described by
\[\min_{x\in\mathbb{R}^{n}}\ \sum_{i=1}^{N}\left(\tfrac{1}{2}x^{\top}Q_{i}x+r_{i}^ {\top}x\right),\]
where \(Q_{i}\in\mathbb{R}^{n\times n}\) and \(r_{i}\in\mathbb{R}^{n}\). Moreover, it holds \(Q_{i}=Q_{i}^{\top}>0\) for all \(i\in\{1,\ldots,N\}\) and, thus, the problem is strongly convex. We set \(n=2\) and, for all \(i\in\{1,\ldots,N\}\), we randomly generate each matrix \(Q_{i}\) so that all its eigenvalues belong to the interval \([1,5]\), while the components of each vector \(r_{i}\) are randomly generated from the interval \([-10,20]\) with a uniform distribution. Since the quadratic scenario gives rise to algorithm updates with linear form, we choose the parameters of the algorithms as the ones minimizing the largest eigenvalue of the matrices describing the algorithms in error coordinates. Specifically, we choose \(\gamma=0.865\), \(\delta=0.865\), \(\rho=0.3029\), \(\alpha=0.865\) for ADMM-Tracking Gradient, while we set \(\gamma=0.106\) for Gradient Tracking. Fig. 3 reports the simulation results and shows that ADMM-Tracking Gradient outperforms Gradient Tracking in terms of convergence rate.
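A possible way (ours, not necessarily the one used by the authors) to generate random data matching the stated spectra and ranges is:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_quadratic(N=10, n=2, eig_range=(1, 5), r_range=(-10, 20)):
    """Random Q_i = Q_i^T > 0 with eigenvalues in eig_range and r_i in r_range."""
    Q, r = [], []
    for _ in range(N):
        # Random orthogonal matrix via QR, then prescribe the eigenvalues
        V, _ = np.linalg.qr(rng.standard_normal((n, n)))
        lam = rng.uniform(*eig_range, size=n)
        Q.append(V @ np.diag(lam) @ V.T)
        r.append(rng.uniform(*r_range, size=n))
    return Q, r
```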
### _Logistic Regression Scenario with Errors_
In this section, we consider a logistic regression scenario. A network of agents aims to cooperatively train a linear classifier for a set of points in a given feature space. Each agent \(i\) is equipped with \(m_{i}\in\mathbb{N}\) points \(p_{i,1},\ldots,p_{i,m_{i}}\in\mathbb{R}^{n-1}\) with binary labels \(l_{i,k}\in\{-1,1\}\) for all \(k\in\{1,\ldots,m_{i}\}\). The problem consists of building a linear classification model from these points solving the minimization problem described by
\[\min_{w\in\mathbb{R}^{n-1},b\in\mathbb{R}}\ \sum_{i=1}^{N}\sum_{k=1}^{m_{i}}\log\left(1+e^{-l_{i,k}(w^{\top}p_{i,k}+b)}\right)+\frac{C}{2}\left\|\begin{bmatrix}w\\ b\end{bmatrix}\right\|^{2},\]
where \(C>0\) is the so-called regularization parameter. Notice that the presence of the regularization makes the cost function strongly convex. We set \(n=2\) and we randomly generate both the points and labels. We empirically tune the algorithms' parameters by choosing \(\alpha=0.9\), \(\rho=0.9\), and \(\delta=1\) for ADMM-Tracking Gradient, while we set \(\gamma=0.1\) for both ADMM-Tracking Gradient and Gradient Tracking. Moreover, we consider the presence of fixed errors affecting the update equations of the algorithms. In detail, we consider an additive error \(e_{i,x}^{t}=\epsilon 1_{n}\) for all \(t\geq 0\) affecting the update equations (15a), (73a), and (73b), and, analogously, an additive error \(e_{i,z}^{t}=\epsilon 1_{2nd_{i}}\) for all \(t\geq 0\) affecting (15b), where \(\epsilon>0\) denotes the amplitudes of these errors. Fig. 4 compares the algorithms' performance for different values of \(\epsilon\). Differently
Figure 3: Quadratic scenario: comparison between ADMM-Tracking Gradient (ATG) and Gradient Tracking (GT).
from Gradient Tracking and as predicted by Theorem VII.1, we note that ADMM-Tracking Gradient does not diverge and, hence, is robust with respect to these errors.
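For completeness, the local regularized logistic cost and its gradient can be coded as follows; this is a sketch under our notation, where the feature matrix `P_i`, the labels `l_i`, and the even split of the regularization among the \(N\) agents are assumptions.

```python
import numpy as np

def local_logistic(P_i, l_i, C, N):
    """Return f_i and grad f_i for the regularized logistic cost of agent i.

    P_i : (m_i, n-1) feature matrix, l_i : (m_i,) labels in {-1, +1}.
    The regularization C/2 * ||(w, b)||^2 is split evenly among the N agents.
    """
    def f(theta):                      # theta = (w, b) stacked
        w, b = theta[:-1], theta[-1]
        margins = -l_i * (P_i @ w + b)
        return np.sum(np.logaddexp(0.0, margins)) + C / (2 * N) * theta @ theta

    def grad(theta):
        w, b = theta[:-1], theta[-1]
        margins = -l_i * (P_i @ w + b)
        sig = 1.0 / (1.0 + np.exp(-margins))   # derivative of log(1 + e^m)
        coeff = -l_i * sig                     # shape (m_i,)
        gw = P_i.T @ coeff + C / N * w
        gb = np.sum(coeff) + C / N * b
        return np.append(gw, gb)

    return f, grad
```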
### _Logistic Regression Scenario with Asynchronous and Lossy Networks_
In this section, we test the effectiveness of Robust ADMM-Tracking Gradient by considering the same logistic regression problem tackled in Section VIII-B in the case of imperfect networks with asynchronous agents and packet losses (see Section V). For the sake of fairness, we compare Robust ADMM-Tracking Gradient with a robust extension of (73) in which each agent \(i\), at each iteration \(t\), maintains local variables \(x_{ij}^{t},z_{ij}^{t}\in\mathbb{R}^{n}\) for each neighbor \(j\) in order to compensate the (possible) packet losses. Then, when agent \(i\) is active, it updates its variables according to
\[\begin{bmatrix}x_{ij}^{t}\\ z_{ij}^{t}\end{bmatrix}=\begin{cases}\text{col}(x_{j}^{t},z_{j}^{t})&\text{if }\psi_{ij}^{t}=1\\ \text{col}(x_{ij}^{t-1},z_{ij}^{t-1})&\text{if }\psi_{ij}^{t}=0\end{cases} \tag{74a}\] \[x_{i}^{t+1}=\sum_{j\in\mathcal{N}_{i}}w_{ij}x_{ij}^{t}-\gamma z_{i}^{t}\] (74b) \[z_{i}^{t+1}=\sum_{j\in\mathcal{N}_{i}}w_{ij}z_{ij}^{t}+\nabla f_{i}(x_{i}^{t+1})-\nabla f_{i}(x_{i}^{t}). \tag{74c}\]
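The buffering mechanism in (74) can be sketched as follows (our illustration; `x_buf` and `z_buf` store the last received values \(x_{ij}^{t-1}\) and \(z_{ij}^{t-1}\), and `neighbors[i]` is assumed to include the self-loop \(i\)):

```python
import numpy as np

def robust_gt_step(i, x, z, x_buf, z_buf, neighbors, W, grad_f, gamma, psi):
    """One update of the robust Gradient Tracking extension (74) for agent i."""
    for j in neighbors[i]:
        if psi[(i, j)]:  # message from j received: refresh the local buffer
            x_buf[(i, j)], z_buf[(i, j)] = x[j].copy(), z[j].copy()
        # otherwise keep the previously stored x_ij^{t-1}, z_ij^{t-1}
    x_i_new = sum(W[i, j] * x_buf[(i, j)] for j in neighbors[i]) - gamma * z[i]
    z_i_new = (sum(W[i, j] * z_buf[(i, j)] for j in neighbors[i])
               + grad_f[i](x_i_new) - grad_f[i](x[i]))
    return x_i_new, z_i_new
```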
For all \(t\geq 0\), \(i\in\{1,\dots,N\}\), and \(j\in\mathcal{N}_{i}\) we generate each \(\lambda_{i}^{t}\) and \(\beta_{ij}^{t}\) by extracting them from \(\{0,1\}\) according to a binomial distribution with randomly generated probabilities of success which, in turn, are uniformly drawn from the interval \([0.3,1)\). We empirically tune the algorithms' parameters by choosing \(\alpha=0.9\), \(\rho=0.9\), and \(\delta=1\) for Robust ADMM-Tracking Gradient, while we set \(\gamma=0.1\) for both Robust ADMM-Tracking Gradient and the robust Gradient Tracking extension (74). Fig. 5 shows that, differently from (74) and as predicted by Theorem V.4, Robust ADMM-Tracking Gradient preserves its exact, linear convergence in the case of imperfect networks.
## IX Conclusions
In this paper, we proposed a novel distributed algorithm for consensus optimization and a robust extension to deal with imperfect networks. In detail, we designed the algorithm by interpreting the dynamic consensus problem as an additional optimization problem addressed through the ADMM. We interlaced the obtained scheme with suitable local, proportional actions giving rise to the novel distributed algorithm that we named ADMM-Tracking Gradient. In the case of strongly convex problems, we proved the linear convergence of the algorithm through a system theoretical analysis. Then, we considered the case in which the agents' updates are asynchronous and the communication is affected by packet losses. For this scenario, we proposed a robust version of the first scheme and proved that such a method preserves linear convergence. Moreover, by relying on ISS concepts, we demonstrated that our schemes are robust with respect to additional, generic errors. Finally, numerical tests confirmed our theoretical findings.